
OpenCV iOS - Displaying the image returned by drawMatches

Reposted. Author: 太空宇宙. Updated: 2023-11-03 21:15:50

I'm new to OpenCV. I'm trying to draw feature matches between images using FLANN/SURF in OpenCV on iOS. I'm following this example:

http://docs.opencv.org/doc/tutorials/features2d/feature_flann_matcher/feature_flann_matcher.html#feature-matching-with-flann

Here is my code, slightly modified (I wrapped the code from the example in a function that returns a UIImage as the result, and I read the starting images from the bundle):

UIImage* SURFRecognition::test()
{
    UIImage *img1 = [UIImage imageNamed:@"wallet"];
    UIImage *img2 = [UIImage imageNamed:@"wallet2"];

    Mat img_1;
    Mat img_2;

    UIImageToMat(img1, img_1);
    UIImageToMat(img2, img_2);

    if( !img_1.data || !img_2.data )
    {
        std::cout << " --(!) Error reading images " << std::endl;
        return nil;
    }

    //-- Step 1: Detect the keypoints using SURF Detector
    int minHessian = 400;

    SurfFeatureDetector detector( minHessian );

    std::vector<KeyPoint> keypoints_1, keypoints_2;

    detector.detect( img_1, keypoints_1 );
    detector.detect( img_2, keypoints_2 );

    //-- Step 2: Calculate descriptors (feature vectors)
    SurfDescriptorExtractor extractor;

    Mat descriptors_1, descriptors_2;

    extractor.compute( img_1, keypoints_1, descriptors_1 );
    extractor.compute( img_2, keypoints_2, descriptors_2 );

    //-- Step 3: Matching descriptor vectors using FLANN matcher
    FlannBasedMatcher matcher;
    std::vector< DMatch > matches;
    matcher.match( descriptors_1, descriptors_2, matches );

    double max_dist = 0; double min_dist = 100;

    //-- Quick calculation of max and min distances between keypoints
    for( int i = 0; i < descriptors_1.rows; i++ )
    {
        double dist = matches[i].distance;
        if( dist < min_dist ) min_dist = dist;
        if( dist > max_dist ) max_dist = dist;
    }

    printf("-- Max dist : %f \n", max_dist );
    printf("-- Min dist : %f \n", min_dist );

    //-- Draw only "good" matches (i.e. whose distance is less than 2*min_dist )
    //-- PS.- radiusMatch can also be used here.
    std::vector< DMatch > good_matches;

    for( int i = 0; i < descriptors_1.rows; i++ )
    {
        if( matches[i].distance <= 2*min_dist )
        {
            good_matches.push_back( matches[i] );
        }
    }

    //-- Draw only "good" matches
    Mat img_matches;
    drawMatches( img_1, keypoints_1, img_2, keypoints_2,
                 good_matches, img_matches, Scalar::all(-1), Scalar::all(-1),
                 vector<char>(), DrawMatchesFlags::NOT_DRAW_SINGLE_POINTS );

    //-- Show detected matches
    //imshow( "Good Matches", img_matches ); // no HighGUI window on iOS

    UIImage *imgTemp = MatToUIImage(img_matches);

    for( size_t i = 0; i < good_matches.size(); i++ )
    {
        printf( "-- Good Match [%d] Keypoint 1: %d -- Keypoint 2: %d \n",
                (int)i, good_matches[i].queryIdx, good_matches[i].trainIdx );
    }

    return imgTemp;
}

The result of the function above is:

[Screenshot omitted: the output image shows only the match lines]

Only the lines connecting the matches are shown, but not the original images. If I understand correctly, the drawMatches function returns a cv::Mat that contains both images with the connections between similar features drawn on top. Is that correct, or am I missing something? Can somebody help me?

Best answer

I found the solution myself. After a lot of searching, it seems drawMatches requires img1 and img2 to have between 1 and 3 channels. I was opening PNGs with an alpha channel, so these were 4-channel images. Here is the revised code:

Added:

UIImageToMat(img1, img_1);
UIImageToMat(img2, img_2);

// Drop the alpha channel so drawMatches receives 3-channel images
cvtColor(img_1, img_1, CV_BGRA2BGR);
cvtColor(img_2, img_2, CV_BGRA2BGR);

Regarding "OpenCV iOS - displaying the image returned by drawMatches", a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/18716718/
