
c++ - Face/image matching done incorrectly with knnMatch of BruteForceMatcher or FlannBasedMatcher


I am trying to recognize a source image (c1.jpg, a single face) inside a larger destination image (allimg.jpg, which contains 3 faces) using the ORB detector/descriptor together with either the FLANN or the brute-force matcher. c1.jpg was created by cropping/copying it out of allimg.jpg. The ORB detector/descriptor works as expected and returns the keypoints/descriptors correctly, but both the FLANN and brute-force matchers give incorrect matching results against the destination. As a consequence, when I go on to use findHomography(), it produces an incorrect result, mapping the source somewhere else in the destination instead of onto the correct face in allimg.jpg. Although the code below does not show it, after knnMatch I draw a bounding rect on the matched c1.jpg and allimg.jpg and display the images. The bounding rect on the source is correct, but the bounding rect on allimg.jpg is far too large and even encloses the source face, whereas it should locate just the source face in the destination. I am using OpenCV 3.0. Has anyone come across a problem like this? Is there any other matcher that can accurately find the source image (a face or anything else) in the destination?

The code and the images (given via links) are below:

#include <iostream>

#include <opencv2/core/core.hpp>
#include <opencv2/opencv.hpp>
#include <opencv2/features2d/features2d.hpp>

using namespace std;
using namespace cv;

const double nn_match_ratio = 0.80f; // Nearest-neighbour matching ratio
const double ransac_thresh = 2.5f; // RANSAC inlier threshold
const int bb_min_inliers = 100; // Minimal number of inliers to draw BBox

Mat img1;
Mat img2;

bool refineMatchesWithHomography(const vector<cv::KeyPoint>& queryKeypoints,
                                 const vector<cv::KeyPoint>& trainKeypoints,
                                 float reprojectionThreshold,
                                 vector<cv::DMatch>& matches,
                                 Mat& homography)
{
    const int minNumberMatchesAllowed = 4;
    if (matches.size() < minNumberMatchesAllowed)
        return false;

    // Prepare data for cv::findHomography
    vector<cv::Point2f> queryPoints(matches.size());
    std::vector<cv::Point2f> trainPoints(matches.size());
    for (size_t i = 0; i < matches.size(); i++)
    {
        queryPoints[i] = queryKeypoints[matches[i].queryIdx].pt;
        trainPoints[i] = trainKeypoints[matches[i].trainIdx].pt;
    }

    // Find homography matrix and get inliers mask
    // (RANSAC is the flag for findHomography; CV_FM_RANSAC belongs to findFundamentalMat)
    std::vector<unsigned char> inliersMask(matches.size());
    homography = findHomography(queryPoints,
                                trainPoints,
                                RANSAC,
                                reprojectionThreshold,
                                inliersMask);

    // Keep only the inlier matches
    vector<cv::DMatch> inliers;
    for (size_t i = 0; i < inliersMask.size(); i++)
    {
        if (inliersMask[i])
            inliers.push_back(matches[i]);
    }
    matches.swap(inliers);

    // Visualize the refined matches
    Mat homoShow;
    drawMatches(img1, queryKeypoints, img2, trainKeypoints, matches, homoShow,
                Scalar::all(-1), CV_RGB(255, 255, 255), Mat(), 2);
    imshow("homoShow", homoShow);
    waitKey(100000);

    return matches.size() > minNumberMatchesAllowed;
}




int main()
{
    //Stats stats;
    vector<String> fileName;

    fileName.push_back("D:\\pmn\\c1.jpg");
    fileName.push_back("D:\\pmn\\allimg.jpg");

    img1 = imread(fileName[0], CV_LOAD_IMAGE_COLOR);
    img2 = imread(fileName[1], CV_LOAD_IMAGE_COLOR);

    if (img1.rows*img1.cols <= 0)
    {
        cout << "Image " << fileName[0] << " is empty or cannot be found\n";
        return(0);
    }
    if (img2.rows*img2.cols <= 0)
    {
        cout << "Image " << fileName[1] << " is empty or cannot be found\n";
        return(0);
    }

    // keypoint for img1 and img2
    vector<KeyPoint> keyImg1, keyImg2;
    // Descriptor for img1 and img2
    Mat descImg1, descImg2;

    Ptr<Feature2D> porb = ORB::create(500, 1.2f, 8, 0, 0, 2, 0, 14);

    porb->detect(img2, keyImg2, Mat());
    // and compute their descriptors with method compute
    porb->compute(img2, keyImg2, descImg2);

    // We can detect keypoint with detect method
    porb->detect(img1, keyImg1, Mat());
    // and compute their descriptors with method compute
    porb->compute(img1, keyImg1, descImg1);


    // FLANN parameters
    // Ptr<flann::IndexParams> indexParams = makePtr<flann::LshIndexParams>(6, 12, 1);
    // Ptr<flann::SearchParams> searchParams = makePtr<flann::SearchParams>(50);

    String itMatcher = "BruteForce-L1";

    Ptr<DescriptorMatcher> matdescriptorMatchercher(new cv::BFMatcher(cv::NORM_HAMMING, false));

    vector<vector<DMatch> > matches, bestMatches;
    vector<DMatch> m;

    matdescriptorMatchercher->knnMatch(descImg1, descImg2, matches, 2);

    // Ratio test: keep a knn match only when the best distance is clearly
    // smaller than the second-best distance
    const float minRatio = 0.95f; // 1.f / 1.5f;
    for (size_t i = 0; i < matches.size(); i++)
    {
        if (matches[i].size() > 1)
        {
            DMatch& bestMatch = matches[i][0];
            DMatch& betterMatch = matches[i][1];
            float distanceRatio = bestMatch.distance / betterMatch.distance;
            if (distanceRatio < minRatio)
            {
                bestMatches.push_back(matches[i]);
                m.push_back(bestMatch);
            }
        }
    }


    Mat homo;
    float homographyReprojectionThreshold = 1.0;
    bool homographyFound = refineMatchesWithHomography(
        keyImg1, keyImg2, homographyReprojectionThreshold, m, homo);

    return 0;
}
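
Although the code above stops at refineMatchesWithHomography(), the question mentions drawing a bounding rect on allimg.jpg after matching. Below is a minimal sketch of one common way to do that, projecting the corners of the source image through the estimated homography with perspectiveTransform; the helper name drawProjectedBox and its parameters are illustrative, not part of the original code:

#include <opencv2/opencv.hpp>

using namespace std;
using namespace cv;

// Draw, on the destination image, the quadrilateral that the homography
// maps the source image onto (e.g. drawProjectedBox(img1, img2, homo)).
void drawProjectedBox(const Mat& src, Mat& dst, const Mat& homography)
{
    if (homography.empty())
        return;

    // Corners of the source image, in source coordinates
    vector<Point2f> srcCorners;
    srcCorners.push_back(Point2f(0.f, 0.f));
    srcCorners.push_back(Point2f((float)src.cols, 0.f));
    srcCorners.push_back(Point2f((float)src.cols, (float)src.rows));
    srcCorners.push_back(Point2f(0.f, (float)src.rows));

    // Project them into destination coordinates with the homography
    vector<Point2f> dstCorners;
    perspectiveTransform(srcCorners, dstCorners, homography);

    // Draw the projected quadrilateral
    for (int i = 0; i < 4; i++)
        line(dst, dstCorners[i], dstCorners[(i + 1) % 4], Scalar(0, 255, 0), 2);
}

If the matches are correct, this quadrilateral should tightly enclose only the matched face in allimg.jpg; a box that spans most of the destination is usually a sign that the homography was estimated from bad matches.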

[c1.jpg][1]

[allimg.jpg][2]


[1]: http://i.stack.imgur.com/Uuy3o.jpg
[2]: http://i.stack.imgur.com/Kwne7.jpg

Best answer

Thanks to EdChum. I used the code given in the link (ratio test / symmetry test), and it produced some decent image matches, but only when the source image is a part of the destination, and it is still not accurate enough. Note that I did comment out the final ransacTest because it unnecessarily removed a lot of true positives. I have attached 2 images (source.jpg / destination.jpg) that show what I mean by highlighting the matched portion in the destination. Is there any algorithm that identifies the source in the destination more accurately/correctly (>90%)?

Also, I found that if the source is merely a similar image (rather than exactly the one taken from the destination), the matching in the destination image is way off and useless. Am I right? Please share your views. 1 = source, 2 = destination
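
For reference, here is a minimal sketch of the ratio-test/symmetry-test idea referred to above, applied to ORB (binary) descriptors with a Hamming-norm BFMatcher. It is not the code from the linked answer; the function names ratioTest, symmetryTest and matchSymmetric are illustrative:

#include <opencv2/opencv.hpp>
#include <vector>

using namespace std;
using namespace cv;

// Keep knn matches whose best distance is clearly smaller than the second best
static void ratioTest(const vector<vector<DMatch> >& knn, float maxRatio,
                      vector<DMatch>& out)
{
    for (size_t i = 0; i < knn.size(); i++)
        if (knn[i].size() >= 2 &&
            knn[i][0].distance < maxRatio * knn[i][1].distance)
            out.push_back(knn[i][0]);
}

// Keep only matches that appear in both the 1->2 and 2->1 directions
static void symmetryTest(const vector<DMatch>& m12, const vector<DMatch>& m21,
                         vector<DMatch>& out)
{
    for (size_t i = 0; i < m12.size(); i++)
        for (size_t j = 0; j < m21.size(); j++)
            if (m12[i].queryIdx == m21[j].trainIdx &&
                m12[i].trainIdx == m21[j].queryIdx)
            {
                out.push_back(m12[i]);
                break;
            }
}

// Match binary descriptors in both directions, then filter with the two tests
vector<DMatch> matchSymmetric(const Mat& desc1, const Mat& desc2)
{
    BFMatcher matcher(NORM_HAMMING, false);
    vector<vector<DMatch> > knn12, knn21;
    matcher.knnMatch(desc1, desc2, knn12, 2);
    matcher.knnMatch(desc2, desc1, knn21, 2);

    vector<DMatch> good12, good21, symmetric;
    ratioTest(knn12, 0.8f, good12);
    ratioTest(knn21, 0.8f, good21);
    symmetryTest(good12, good21, symmetric);
    return symmetric;
}

The symmetry (cross-check) step typically removes many of the one-way matches that drag the homography toward the wrong part of the destination, at the cost of leaving fewer surviving matches.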

For this question about c++ - Face/image matching done incorrectly with knnMatch of BruteForceMatcher or FlannBasedMatcher, a similar question was found on Stack Overflow: https://stackoverflow.com/questions/36427377/
