
c++ - Matching images using openCV


First of all, I am quite new to matching techniques, so please bear with me:

I am developing an application that matches training images against collected images (single-cell samples).

I have used the SIFT and SURF detectors together with FLANN-based matching to match a set of training data against the collected images, but the results I get are really poor. The code I use is the same as the one in the openCV documentation:

    // OpenCV 2.4.x headers; SIFT/SURF live in the nonfree module
    #include <cstdio>
    #include <opencv2/core/core.hpp>
    #include <opencv2/imgproc/imgproc.hpp>
    #include <opencv2/highgui/highgui.hpp>
    #include <opencv2/features2d/features2d.hpp>
    #include <opencv2/nonfree/features2d.hpp>
    #include <opencv2/calib3d/calib3d.hpp>

    using namespace cv;

    void foramsMatching(Mat img_object, Mat img_scene)
    {
        //-- Step 1: Detect keypoints
        //   (note: for SIFT this constructor argument is the number of features to
        //    retain; "minHessian" is really a SURF parameter)
        int minHessian = 400;
        SiftFeatureDetector detector(minHessian);

        std::vector<KeyPoint> keypoints_object, keypoints_scene;
        detector.detect(img_object, keypoints_object);
        detector.detect(img_scene, keypoints_scene);

        //-- Step 2: Calculate descriptors (feature vectors)
        SurfDescriptorExtractor extractor;

        Mat descriptors_object, descriptors_scene;
        extractor.compute(img_object, keypoints_object, descriptors_object);
        extractor.compute(img_scene, keypoints_scene, descriptors_scene);

        //-- Step 3: Matching descriptor vectors using FLANN matcher
        FlannBasedMatcher matcher;
        //BFMatcher matcher;
        std::vector<DMatch> matches;
        matcher.match(descriptors_object, descriptors_scene, matches);

        //-- Quick calculation of max and min distances between keypoints
        double max_dist = 0; double min_dist = 100;
        for (int i = 0; i < descriptors_object.rows; i++)
        {
            double dist = matches[i].distance;
            if (dist < min_dist) min_dist = dist;
            if (dist > max_dist) max_dist = dist;
        }

        printf("-- Max dist : %f \n", max_dist);
        printf("-- Min dist : %f \n", min_dist);

        //-- Draw only "good" matches (i.e. whose distance is less than 3*min_dist)
        std::vector<DMatch> good_matches;
        for (int i = 0; i < descriptors_object.rows; i++)
        {
            if (matches[i].distance < 3 * min_dist)
            {
                good_matches.push_back(matches[i]);
            }
        }

        Mat img_matches;
        drawMatches(img_object, keypoints_object, img_scene, keypoints_scene,
                    good_matches, img_matches, Scalar::all(-1), Scalar::all(-1),
                    std::vector<char>(), DrawMatchesFlags::NOT_DRAW_SINGLE_POINTS);

        //-- Localize the object
        std::vector<Point2f> obj;
        std::vector<Point2f> scene;
        for (size_t i = 0; i < good_matches.size(); i++)
        {
            //-- Get the keypoints from the good matches
            obj.push_back(keypoints_object[good_matches[i].queryIdx].pt);
            scene.push_back(keypoints_scene[good_matches[i].trainIdx].pt);
        }

        Mat H = findHomography(obj, scene, CV_RANSAC);

        //-- Get the corners from image_1 (the object to be "detected")
        std::vector<Point2f> obj_corners(4);
        obj_corners[0] = Point2f(0, 0);
        obj_corners[1] = Point2f(img_object.cols, 0);
        obj_corners[2] = Point2f(img_object.cols, img_object.rows);
        obj_corners[3] = Point2f(0, img_object.rows);
        std::vector<Point2f> scene_corners(4);

        perspectiveTransform(obj_corners, scene_corners, H);

        //-- Draw lines between the corners (the mapped object in the scene - image_2)
        line(img_matches, scene_corners[0] + Point2f(img_object.cols, 0), scene_corners[1] + Point2f(img_object.cols, 0), Scalar(0, 255, 0), 4);
        line(img_matches, scene_corners[1] + Point2f(img_object.cols, 0), scene_corners[2] + Point2f(img_object.cols, 0), Scalar(0, 255, 0), 4);
        line(img_matches, scene_corners[2] + Point2f(img_object.cols, 0), scene_corners[3] + Point2f(img_object.cols, 0), Scalar(0, 255, 0), 4);
        line(img_matches, scene_corners[3] + Point2f(img_object.cols, 0), scene_corners[0] + Point2f(img_object.cols, 0), Scalar(0, 255, 0), 4);

        //-- Show detected matches
        namedWindow("Good Matches & Object detection", CV_WINDOW_NORMAL);
        imshow("Good Matches & Object detection", img_matches);
        //imwrite("../../Samples/Matching.jpg", img_matches);
    }
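
(For reference, a minimal driver that loads the two images and calls this function could look roughly like the sketch below; the file names are placeholders, not taken from the original post.)

    // Hypothetical driver for the function above (assumed to be in the same file).
    #include <opencv2/core/core.hpp>
    #include <opencv2/highgui/highgui.hpp>

    int main()
    {
        // load as grayscale; file names are placeholders
        cv::Mat img_object = cv::imread("training_cell.png", CV_LOAD_IMAGE_GRAYSCALE);
        cv::Mat img_scene  = cv::imread("collected_sample.png", CV_LOAD_IMAGE_GRAYSCALE);
        if (img_object.empty() || img_scene.empty())
            return -1;                 // bail out if either image failed to load

        foramsMatching(img_object, img_scene);
        cv::waitKey(0);                // keep the result window open
        return 0;
    }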

Here are the results: Matching Two Images

Compared to some other results I have seen with these methods, they are really poor. The two blobs (cells) at the bottom of the screen should give two matches.

Any ideas on what I am doing wrong or how to improve these results? I am thinking about writing my own matcher/descriptor extractor, since my training image is not an exact copy of the cells I am querying for. Is this a good idea? If so, which tutorials should I look at?
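
For example, I assume that replacing the single nearest-neighbour match plus the 3*min_dist filter with knnMatch and Lowe's ratio test could already help; here is a rough, untested sketch of what I have in mind (the 0.75 ratio threshold is just the commonly quoted default, not something verified on my data):

    // Untested sketch: knnMatch + Lowe's ratio test instead of the 3*min_dist filter.
    #include <vector>
    #include <opencv2/core/core.hpp>
    #include <opencv2/features2d/features2d.hpp>

    std::vector<cv::DMatch> ratioTestMatches(const cv::Mat& descriptors_object,
                                             const cv::Mat& descriptors_scene,
                                             float ratio_thresh = 0.75f)
    {
        cv::FlannBasedMatcher matcher;
        std::vector< std::vector<cv::DMatch> > knn_matches;
        matcher.knnMatch(descriptors_object, descriptors_scene, knn_matches, 2);

        std::vector<cv::DMatch> good_matches;
        for (size_t i = 0; i < knn_matches.size(); i++)
        {
            // keep a match only if it is clearly better than the second-best candidate
            if (knn_matches[i].size() == 2 &&
                knn_matches[i][0].distance < ratio_thresh * knn_matches[i][1].distance)
            {
                good_matches.push_back(knn_matches[i][0]);
            }
        }
        return good_matches;
    }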

Regards,

Best answer

Converting my comment into an answer:

Before running SIFT/SURF, you should do some kind of preprocessing, using the knowledge you have available, to find the regions of interest and eliminate noise. This is the general idea:

  1. Perform segmentation.
  2. Inspect the segments according to certain criteria (*) and select the candidates of interest.
  3. Perform the matching on the candidate segments.

(*) Things you can use in this step are, for example, region size, shape, color distribution and so on. From the example you provided it can be seen, for instance, that your objects are round and have a certain minimum size. Use whatever knowledge you have to eliminate further false positives. Of course you will need to do some tuning so that your rule set is not too strict, i.e. so that the true positives are kept. A rough sketch of such a filtering step follows below.
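
As an illustration only (not tested on your data, and with made-up thresholds that you would have to tune), steps 1 and 2 could be implemented with a simple threshold plus contour filtering along these lines:

    // Rough sketch of steps 1-2: segment by thresholding, then keep only
    // candidates that are large enough and roughly circular. All thresholds
    // are made-up values and need tuning; gray_scene is assumed to be an
    // 8-bit single-channel image.
    #include <vector>
    #include <opencv2/core/core.hpp>
    #include <opencv2/imgproc/imgproc.hpp>

    std::vector<cv::Rect> findCandidateRegions(const cv::Mat& gray_scene)
    {
        // Step 1: segmentation (here: a simple Otsu threshold; use
        // CV_THRESH_BINARY_INV instead if the cells are darker than the background)
        cv::Mat mask;
        cv::threshold(gray_scene, mask, 0, 255, CV_THRESH_BINARY | CV_THRESH_OTSU);

        std::vector< std::vector<cv::Point> > contours;
        cv::findContours(mask, contours, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_SIMPLE);

        // Step 2: keep only candidates that satisfy the criteria (*)
        std::vector<cv::Rect> candidates;
        for (size_t i = 0; i < contours.size(); i++)
        {
            double area      = cv::contourArea(contours[i]);
            double perimeter = cv::arcLength(contours[i], true);
            if (area < 500.0 || perimeter <= 0.0)       // minimum size (tune!)
                continue;

            // circularity is 1 for a perfect circle, smaller for elongated shapes
            double circularity = 4.0 * CV_PI * area / (perimeter * perimeter);
            if (circularity > 0.6)                      // "roundish" (tune!)
                candidates.push_back(cv::boundingRect(contours[i]));
        }
        return candidates;
    }

Step 3 would then run the matching from your function on each candidate ROI, e.g. on img_scene(candidates[i]), instead of on the whole scene image.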

Regarding c++ - Matching images using openCV, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/28345911/
