Exercises of the CV course at TUD
- using W of shape [2,2] and treating the problem as a classification problem, i.e. using softmax_cross_entropy as the loss function (PS: I still don't know how to draw the line y = W * x + b when W has shape [2,2]; see the sketch after this list for one way to derive it. Using a reference I got the following result, but I can't explain why it is not linear in some places.)
- using W of shape [2,1] with mean squared error as the loss function
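One way to get the line in the [2,2] case (a sketch with illustrative variable names, not necessarily what the exercise expects): with a two-class softmax model the decision boundary is the set of points where both logits are equal, so it collapses to a single line (W[0] - W[1]) · x + (b[0] - b[1]) = 0 that can be plotted directly.

```python
import numpy as np
import matplotlib.pyplot as plt

def decision_boundary(W, b, x0_range):
    """Line where a 2-class softmax model W @ x + b has equal logits."""
    w = W[0] - W[1]                       # shape [2]
    c = b[0] - b[1]
    x0 = np.linspace(*x0_range, 100)
    x1 = -(w[0] * x0 + c) / w[1]          # solve w[0]*x0 + w[1]*x1 + c = 0 (assumes w[1] != 0)
    return x0, x1

# Example with arbitrary (made-up) trained parameters:
W = np.array([[1.5, -0.5],
              [0.2,  1.0]])
b = np.array([0.3, -0.1])
x0, x1 = decision_boundary(W, b, (-3.0, 3.0))
plt.plot(x0, x1)
plt.show()
```

Since both logits are affine in x, this boundary is exactly linear; one possible reason a plotted boundary looks non-linear is drawing the argmax over a coarse grid (e.g. with a contour plot), where the grid cells produce a staircase.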
- inverse camera response function (my implementation uses the Mitsunaga and Nayar technique; a sketch of the polynomial fit follows this list)
- my result: (I don't know which part has the problem, but I suspect it is the computation of the inverse camera response function, because even before I visualize the radiance map there is already a lot of black-and-white dot noise.)
- my implementation
- opencv implementation
- exercise description
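A minimal sketch of the Mitsunaga-Nayar polynomial fit (illustrative names, not the code of this repo): the inverse response is modeled as f^-1(M) = c_0 + c_1*M + ... + c_N*M^N with the constraint f^-1(1) = 1, and the coefficients come from a linear least-squares system built from the exposure ratios. The exposure times are assumed to be known here; the full method also iteratively refines the ratios, which is omitted.

```python
import numpy as np

def mitsunaga_nayar_inverse_response(images, exposure_times, degree=3, n_samples=1000):
    """Fit polynomial coefficients c_0..c_N of the inverse response f^-1(M) = sum_n c_n * M^n.

    images:         list of single-channel uint8 images of the same scene
    exposure_times: exposure time of each image (same order), assumed known
    Constraint: f^-1(1) = 1, i.e. sum_n c_n = 1, so c_N is eliminated from the system.
    """
    M = [img.astype(np.float64).ravel() / 255.0 for img in images]   # normalize to [0, 1]
    idx = np.random.default_rng(0).choice(M[0].size, size=min(n_samples, M[0].size), replace=False)

    N = degree
    rows, rhs = [], []
    for q in range(len(images) - 1):
        R = exposure_times[q] / exposure_times[q + 1]                 # exposure ratio R_{q,q+1}
        Mq, Mq1 = M[q][idx], M[q + 1][idx]
        # Per sampled pixel: sum_n c_n * (Mq^n - R * Mq1^n) = 0.
        # Substituting c_N = 1 - sum_{n<N} c_n turns this into a linear system in c_0..c_{N-1}.
        d = Mq ** N - R * Mq1 ** N
        rows.append(np.stack([(Mq ** n - R * Mq1 ** n) - d for n in range(N)], axis=1))
        rhs.append(-d)

    A, y = np.concatenate(rows), np.concatenate(rhs)
    c = np.linalg.lstsq(A, y, rcond=None)[0]
    return np.append(c, 1.0 - c.sum())                                # recover c_N from the constraint
```

The radiance estimate for each pixel is then f^-1(M_{p,q}) / exposure_times[q], averaged over the exposures; one common source of isolated black-and-white dots in the radiance map is pixels that are saturated or nearly black in some exposures and therefore get unreliable values unless they are down-weighted.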
- P1.py
- P2.py
- P3.py
TODO: P4.py
- P1
- P2
- P3
* I made a mistake in the original code (bilateral filter part): I actually ran the opencv function when I wanted to test my own function, so the measured calculation time belongs to opencv rather than to my implementation. Many thanks to Professor Heidrich for the suggestion.
- median filter (with opencv; with my own implementation)
- min filter (naive; with PriorityQueue)
- bilateral filter (with opencv; with my own implementation)
- additional work: guided filter (a sketch of the standard box-filter formulation is at the end of this section)
We compare the different implementations of these filters and measure their running times. The results are as follows:
- my implementation of the median filter takes much more time than opencv; if anyone has an idea how to improve it, please contact me.
- when the kernel size of the median filter is < 70, it is faster to use quicksort rather than a histogram: finding the median from a 256-bin histogram takes about 128 bin scans on average, while sorting costs n*log(n), so for small n sorting is the better choice (see the histogram sketch below).
- my implementation of the min filter with a PriorityQueue is much slower than the naive one. Any ideas why?
- my implementation of the bilateral filter is a little bit faster than opencv; I also don't know why.
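To make the ~128-step figure above concrete, here is a minimal sketch of the histogram-based median search for 8-bit images (illustrative names, not necessarily the repo's code): the window is kept as a 256-bin histogram and the median is found by walking the bins until half of the window's pixels have been counted.

```python
import numpy as np

def median_from_histogram(hist, n_pixels):
    """Median gray value of a window given its 256-bin histogram.

    hist:     256 counts, one per gray level
    n_pixels: number of pixels in the window (kernel_size ** 2)
    """
    half = (n_pixels + 1) // 2
    cum = 0
    for value in range(256):          # on average ~128 iterations, independent of kernel size
        cum += hist[value]
        if cum >= half:
            return value
    return 255

# Example: median of one 5x5 window of a uint8 image.
window = np.random.randint(0, 256, size=(5, 5), dtype=np.uint8)
hist = np.bincount(window.ravel(), minlength=256)
print(median_from_histogram(hist, window.size), int(np.median(window)))  # equal for odd-sized windows
```

The histogram only pays off when it is updated incrementally as the window slides (add the entering column, subtract the leaving one), so its per-pixel cost stays roughly constant, while a per-window sort grows with the kernel size; this is consistent with the crossover observed above.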
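For the additional guided-filter work listed above, here is a minimal sketch of the standard box-filter formulation of the guided filter (He et al.); parameter names are illustrative and this is not necessarily the implementation in this repo.

```python
import cv2
import numpy as np

def guided_filter(I, p, r, eps):
    """Guided filter: smooth p using I as the guidance image.

    I, p: float32 single-channel images in [0, 1]
    r:    box-filter radius (window size is 2*r + 1)
    eps:  regularization controlling how strongly edges in I are preserved
    """
    ksize = (2 * r + 1, 2 * r + 1)
    mean_I  = cv2.boxFilter(I, -1, ksize)
    mean_p  = cv2.boxFilter(p, -1, ksize)
    corr_Ip = cv2.boxFilter(I * p, -1, ksize)
    corr_II = cv2.boxFilter(I * I, -1, ksize)
    cov_Ip  = corr_Ip - mean_I * mean_p            # window covariance of (I, p)
    var_I   = corr_II - mean_I * mean_I            # window variance of I
    a = cov_Ip / (var_I + eps)                     # per-window linear model q = a*I + b
    b = mean_p - a * mean_I
    mean_a = cv2.boxFilter(a, -1, ksize)
    mean_b = cv2.boxFilter(b, -1, ksize)
    return mean_a * I + mean_b

# Self-guided smoothing of a synthetic noisy gradient image.
img = np.tile(np.linspace(0, 1, 256, dtype=np.float32), (256, 1))
img += np.random.normal(0, 0.05, img.shape).astype(np.float32)
out = guided_filter(img, img, r=8, eps=1e-3)
```

Because the whole filter reduces to a handful of box filters, its cost is independent of the radius r, which is one reason it is often compared against the bilateral filter.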