
Thursday, December 25, 2014

SURF and KeyPoint matching

In my first experiment, I used the template matching functions of OpenCV. But they are not invariant to scale or orientation, so I needed an invariant method. I tried my chance with the SURF detector. SURF is actually very similar to the SIFT detector: both try to find keypoints that can be used for object recognition or matching problems. Like SIFT, SURF works in a few steps: first the integral image is computed, then the Hessian responses, then the orientation of each keypoint, and finally the descriptor is built. Details can be found here. In OpenCV there are ready-made functions for the SURF detector, the descriptor extractor and the matching.
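For example, the detector and the descriptor extractor can be used on their own roughly like this (a minimal sketch, not part of my experiment; the minHessian value of 400 is just a common starting point, and the marker path is reused purely for illustration):

#include <iostream>
#include <opencv2/opencv.hpp>
#include <opencv2/nonfree/features2d.hpp>

int main()
{
    // Load the marker in grayscale (illustrative path only)
    cv::Mat image = cv::imread("/home/maygun/Desktop/matrix.png", 0);

    // SURF keypoint detection; minHessian = 400 is just a typical value
    cv::SurfFeatureDetector detector(400);
    std::vector<cv::KeyPoint> keypoints;
    detector.detect(image, keypoints);

    // SURF descriptors: one 64-dimensional float row per keypoint
    cv::SurfDescriptorExtractor extractor;
    cv::Mat descriptors;
    extractor.compute(image, keypoints, descriptors);

    std::cout << keypoints.size() << " keypoints, "
              << descriptors.rows << " descriptors" << std::endl;
    return 0;
}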

My full code is below. To use the SURF detector, we must additionally add these headers:

#include <opencv2/nonfree/features2d.hpp>
#include <opencv2/legacy/legacy.hpp>




#include <opencv2/opencv.hpp>
#include <opencv2/nonfree/features2d.hpp>
#include <opencv2/legacy/legacy.hpp>

using namespace cv;
using namespace std;

int main( int argc, char** argv )
{
    Mat frame;
    Mat gray;
    Mat templ;          // marker
    Mat descriptors1;
    Mat descriptors2;
    Mat result;

    std::vector<DMatch> good_matches;

    VideoCapture cam1("/home/maygun/Desktop/source.mp4");    // our video
    templ = imread("/home/maygun/Desktop/matrix.png");       // the object we want to find in the frames

    cvtColor(templ, templ, cv::COLOR_BGR2GRAY);               // convert the marker to grayscale
    cv::GaussianBlur(templ, templ, Size(5,5), 0.5, 0.5, 0);   // light smoothing to reduce noise

    cv::FastFeatureDetector detector(21);                     // FAST corner detector, threshold = 21
    vector<KeyPoint> keypoints_frame;                         // keypoints of the current frame
    vector<KeyPoint> keypoints_matrix;                        // keypoints of the marker

    cv::SurfDescriptorExtractor extractor;                    // SURF descriptors for the detected keypoints

    std::vector<Point2f> obj;
    std::vector<Point2f> scene;

    BFMatcher matcher(NORM_L2);                               // brute-force matcher with L2 distance
    vector<DMatch> matches;

    detector.detect(templ, keypoints_matrix);                 // detect keypoints in the marker once
    extractor.compute(templ, keypoints_matrix, descriptors1); // and compute their descriptors

    namedWindow("result", WINDOW_NORMAL);
    while (true)
    {
        char b1 = (char)waitKey(33);
        if (b1 == 27)                                         // exit on ESC
            break;

        cam1 >> frame;
        if (frame.empty())                                    // stop when the video ends
            break;

        cvtColor(frame, gray, cv::COLOR_BGR2GRAY);            // work on a grayscale copy of the frame
        cv::GaussianBlur(gray, gray, Size(5,5), 0.5, 0.5, 0);

        detector.detect(gray, keypoints_frame);               // detect and describe keypoints in every frame
        extractor.compute(gray, keypoints_frame, descriptors2);

        matcher.match(descriptors1, descriptors2, matches);   // match marker descriptors against the frame

        cv::drawMatches(templ, keypoints_matrix, frame, keypoints_frame, matches, result,
                        Scalar::all(-1), Scalar::all(-1), vector<char>(),
                        DrawMatchesFlags::NOT_DRAW_SINGLE_POINTS);

        imshow("result", result);
    }
    waitKey(0);
    return 0;
}
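The vectors good_matches, obj and scene are declared above but not used yet. A natural next step, roughly following the OpenCV feature_homography tutorial, would be to keep only the strongest matches and estimate a homography so the marker can be outlined in the frame. A rough sketch (meant to go inside the while loop, after drawMatches and just before the imshow call; findHomography comes from the calib3d module, which opencv.hpp already pulls in) could look like this:

// Sketch only: keep the closest matches, then try to localize the marker.
double min_dist = 100;
for (size_t i = 0; i < matches.size(); i++)
    if (matches[i].distance < min_dist)
        min_dist = matches[i].distance;

good_matches.clear();
for (size_t i = 0; i < matches.size(); i++)
    if (matches[i].distance < 3 * min_dist)               // keep only reasonably close matches
        good_matches.push_back(matches[i]);

obj.clear();
scene.clear();
for (size_t i = 0; i < good_matches.size(); i++)
{
    obj.push_back(keypoints_matrix[good_matches[i].queryIdx].pt);  // marker points
    scene.push_back(keypoints_frame[good_matches[i].trainIdx].pt); // frame points
}

if (good_matches.size() >= 4)                              // findHomography needs at least 4 pairs
{
    Mat H = findHomography(obj, scene, CV_RANSAC);

    // Project the marker corners into the frame
    std::vector<Point2f> obj_corners(4), scene_corners(4);
    obj_corners[0] = Point2f(0, 0);
    obj_corners[1] = Point2f((float)templ.cols, 0);
    obj_corners[2] = Point2f((float)templ.cols, (float)templ.rows);
    obj_corners[3] = Point2f(0, (float)templ.rows);
    perspectiveTransform(obj_corners, scene_corners, H);

    // drawMatches puts the frame to the right of the marker, so shift by templ.cols
    Point2f offset((float)templ.cols, 0);
    for (int i = 0; i < 4; i++)
        line(result, scene_corners[i] + offset, scene_corners[(i + 1) % 4] + offset,
             Scalar(0, 255, 0), 4);
}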


Here is the result:




References: http://docs.opencv.org/doc/tutorials/features2d/feature_detection/feature_detection.html

Tuesday, December 2, 2014

Template Matching with OpenCV

If you want to find an object in an image, you can use template matching methods. I tried to detect a marker in a video with these methods. But unfortunately, the template matching methods in OpenCV were not sufficient, because they are not invariant to illumination conditions or to changes in scale.

You can see in my code that a matching method has to be chosen. There are several options (CV_TM_SQDIFF, CV_TM_CCORR, CV_TM_CCOEFF and their normalized variants); the basic usage pattern is sketched below.
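This is a minimal sketch of that pattern, not my actual code; the video and marker paths from my SURF experiment are reused purely for illustration:

#include <opencv2/opencv.hpp>
using namespace cv;

int main()
{
    // Illustrative paths only
    VideoCapture cam("/home/maygun/Desktop/source.mp4");
    Mat templ = imread("/home/maygun/Desktop/matrix.png", CV_LOAD_IMAGE_GRAYSCALE);

    Mat frame, gray, score;
    namedWindow("match", WINDOW_NORMAL);

    while (true)
    {
        cam >> frame;
        if (frame.empty())
            break;
        cvtColor(frame, gray, COLOR_BGR2GRAY);

        // Slide the template over the frame and score every location
        matchTemplate(gray, templ, score, CV_TM_CCOEFF_NORMED);

        // For the CCORR/CCOEFF methods the best match is the maximum of the
        // score map; for the SQDIFF methods it would be the minimum instead
        double minVal, maxVal;
        Point minLoc, maxLoc;
        minMaxLoc(score, &minVal, &maxVal, &minLoc, &maxLoc);

        rectangle(frame, maxLoc,
                  Point(maxLoc.x + templ.cols, maxLoc.y + templ.rows),
                  Scalar(0, 255, 0), 2);
        imshow("match", frame);

        if (waitKey(33) == 27)   // ESC to quit
            break;
    }
    return 0;
}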


My code:


Result:



As you can see in the result image, a rectangle appears near the bottom, but I am in the middle of the frame with the marker pasted on my chest. In the video sequence of nearly 30 seconds I move very slowly, yet this method cannot find the right place. This week I will try a more accurate method for object recognition and tracking. If I make any progress, I will share the results.




References: http://docs.opencv.org/doc/tutorials/imgproc/histograms/template_matching/template_matching.html