UCSD CSE190a Haili Wang

Wednesday, March 10, 2010

L channel response index

This is an imagesc display of the index table of maximum L channel responses.

The upper middle region looks good. It is a big block of mud in the original image, but the index table shows a texture there that is clearly different from the bees.
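The index table above is built by taking, at each pixel, the index of the filter with the strongest L channel response. A minimal Python sketch of that idea (the project itself is in MATLAB; the filter responses here are made-up toy data):

```python
import numpy as np

# Hypothetical responses of 3 filters on the L channel of a 4x4 patch.
# The "index table" stores, per pixel, which filter responded most strongly.
responses = np.stack([
    np.full((4, 4), 0.2),  # filter 0: weak response everywhere
    np.eye(4),             # filter 1: strong response on the diagonal
    np.zeros((4, 4)),      # filter 2: no response
])

index_table = np.argmax(np.abs(responses), axis=0)  # shape (4, 4)
print(index_table)
```

In MATLAB the equivalent visualization is imagesc(index_table).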

Sunday, March 7, 2010

filter bank result

I think the previous failure of the filter bank approach was caused by the alignment of those windows. In this post I am using a fine 2 by 2 window, overlapped by 7 pixels on each side, to slide through the input image (the actual window size is 16 by 16, i.e. 2 + 7*2).

It rejects the negative objects pretty well, but it also fails to fire on the positive bees. Again, I may need a bigger sample size to get an ideal result.
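The windowing described above can be sketched as follows in Python (a stand-in for the MATLAB code; function name and test image are hypothetical). The window centers step by 2 pixels, and each 2 by 2 core is padded by 7 pixels on every side, giving 16 by 16 overlapping windows:

```python
import numpy as np

def sliding_windows(img, core=2, pad=7):
    """Slide a core-by-core grid over img; each window is padded by
    `pad` pixels on every side, so the actual window is core + 2*pad
    square (2 + 7*2 = 16 here), and adjacent windows overlap."""
    size = core + 2 * pad
    wins = []
    for r in range(0, img.shape[0] - size + 1, core):
        for c in range(0, img.shape[1] - size + 1, core):
            wins.append(img[r:r + size, c:c + size])
    return wins

img = np.arange(32 * 32, dtype=float).reshape(32, 32)
wins = sliding_windows(img)
print(len(wins), wins[0].shape)  # 81 windows of 16x16 on a 32x32 image
```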

Tuesday, March 2, 2010

many many clicks

Since the filter bank approach is taking too long to run and the result is not impressive, I am playing with the old algorithm with a lot more samples. The sample size is 170 (instead of the original 30), and the blue stars are the negative samples. I think the histogram approach is better at determining "what is this" than "what is this not". The older implementation just checked whether the distance to the negative samples was far enough. This time I check whether the window is close enough to a negative sample, and reject the window if it is.

However, the red stars at the upper middle of the image are just a big block of mud. I have 3 negative samples to account for that region, but it just does not work out.

Monday, March 1, 2010

filter bank

In order to display the filter bank properly, scale it as FB*256 (better: use imagesc() with colormap('gray')).

Apply each of these filters to the image and, for each pixel, save the index of the filter with the largest *absolute* response into an index table. After that, compute the histogram of these indices.
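The whole pipeline, filter responses, argmax-of-|response| index table, then a histogram of indices, can be sketched in Python with numpy only (the 2 by 2 example filters and the toy image are hypothetical, not the actual filter bank):

```python
import numpy as np

def conv2_valid(img, k):
    # Minimal 'valid'-mode 2-D correlation with plain numpy.
    kh, kw = k.shape
    H, W = img.shape[0] - kh + 1, img.shape[1] - kw + 1
    out = np.empty((H, W))
    for r in range(H):
        for c in range(W):
            out[r, c] = np.sum(img[r:r + kh, c:c + kw] * k)
    return out

def index_table_and_histogram(img, filters):
    # Stack all responses, keep the index of the largest *absolute*
    # response per pixel, then histogram those indices.
    responses = np.stack([conv2_valid(img, f) for f in filters])
    idx = np.argmax(np.abs(responses), axis=0)
    hist = np.bincount(idx.ravel(), minlength=len(filters))
    return idx, hist

# Hypothetical tiny bank: horizontal edge, vertical edge, flat average.
filters = [
    np.array([[1., 1.], [-1., -1.]]),  # horizontal edge
    np.array([[1., -1.], [1., -1.]]),  # vertical edge
    np.full((2, 2), 0.25),             # flat average
]
img = np.tile([[1., 1., 2., 2.]], (4, 1))  # image with a vertical step
idx, hist = index_table_and_histogram(img, filters)
print(hist)  # vertical-edge filter wins along the step, flat elsewhere
```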

I may have missed something, because the result is completely off. At the learning stage I apply the filters to the user-selected windows only, while at the finding stage I apply the filters to the entire image. Could it be a problem that convolving a small window is different from convolving the entire image?
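Near the window borders it genuinely is different: convolving a cropped window pads with zeros where the full image had real pixels, so only the interior pixels agree. A small Python demonstration (a numpy stand-in for MATLAB's conv2 with the 'same' shape option; the image and kernel are arbitrary):

```python
import numpy as np

def conv_same(img, k):
    # 'same'-size correlation with zero padding, numpy only
    # (stands in for MATLAB's conv2(img, k, 'same')).
    ph, pw = k.shape[0] // 2, k.shape[1] // 2
    padded = np.pad(img, ((ph, ph), (pw, pw)))
    out = np.empty_like(img, dtype=float)
    for r in range(img.shape[0]):
        for c in range(img.shape[1]):
            out[r, c] = np.sum(padded[r:r + k.shape[0],
                                      c:c + k.shape[1]] * k)
    return out

rng = np.random.default_rng(0)
img = rng.random((12, 12))
k = np.ones((3, 3)) / 9.0        # averaging filter

win = img[4:8, 4:8]              # an interior 4x4 window
a = conv_same(win, k)            # convolve the small window alone
b = conv_same(img, k)[4:8, 4:8]  # convolve whole image, then crop

# Interior pixels agree; the window's border pixels differ, because
# the small-window convolution zero-pads where the image had data.
print(np.allclose(a[1:-1, 1:-1], b[1:-1, 1:-1]))  # True
print(np.allclose(a, b))                          # False
```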

Another problem is that the algorithm is very slow; it takes more than 3 minutes to compute.