
OpenCV keypoint matching: the DMatch distance variable

Repost — Author: 太空宇宙 — Updated: 2023-11-03 22:57:26

I am studying the code in the OpenCV tutorial Features2D + Homography to find a known object.

I don't clearly understand what the distance variable in the matcher's DMatch objects is. Is it the distance in pixels between the matched keypoints in the two images?

An earlier Q&A says it is a similarity measure (Euclidean distance, or Hamming distance for binary descriptors), computed from the distance between the descriptor vectors.

Can someone share some information on how this distance is calculated, or on how to match keypoints without using OpenCV's existing matchers?

//-- Step 3: Matching descriptor vectors using FLANN matcher
FlannBasedMatcher matcher;
std::vector< DMatch > matches;
matcher.match( descriptors_object, descriptors_scene, matches );

double max_dist = 0; double min_dist = 100;

//-- Quick calculation of max and min distances between keypoints
for( int i = 0; i < descriptors_object.rows; i++ )
{
    double dist = matches[i].distance; // --> what does this distance indicate?
    if( dist < min_dist ) min_dist = dist;
    if( dist > max_dist ) max_dist = dist;
}


Thank you.

Best Answer

I ran into some problems doing real-time object matching with the SIFT feature detector. Here is my solution for video.

First I created a struct to store the matched keypoints. The struct holds the keypoint's location in the template image, its location in the input image, and a similarity measure. Here I use the cross-correlation of the descriptor vectors as the similarity measure.

struct MatchedPair
{
    Point locationinTemplate;
    Point matchedLocinImage;
    float correlation;
    MatchedPair(Point loc)
    {
        locationinTemplate = loc;
    }
};

I want to sort the matched keypoints by similarity, so I need a helper function that tells std::sort() how to compare my MatchedPair objects.

bool comparator(const MatchedPair& a, const MatchedPair& b)
{
    return a.correlation > b.correlation; // higher correlation sorts first
}

Now the main code. I detect and describe features in the input image and the template image using the standard methods. After computing the features, I implement my own matching function. This is the answer you are looking for:

int main()
{
    Mat templateImage = imread("template.png", IMREAD_GRAYSCALE); // read the template image
    VideoCapture cap("input.mpeg");
    Mat frame;

    vector<KeyPoint> InputKeypts, TemplateKeypts;
    SiftFeatureDetector detector;
    SiftDescriptorExtractor extractor;
    Mat InputDescriptor, templateDescriptor, result;
    vector<MatchedPair> mpts;
    Scalar s;

    cap >> frame; // grab one frame so outputImage can be sized
    cvtColor(frame, frame, CV_BGR2GRAY);
    Mat outputImage = Mat::zeros(templateImage.rows + frame.rows, templateImage.cols + frame.cols, CV_8UC1);
    detector.detect(templateImage, TemplateKeypts); // detect template interest points
    extractor.compute(templateImage, TemplateKeypts, templateDescriptor);

    while (true)
    {
        mpts.clear(); // clear for the new frame
        cap >> frame; // read the next video frame
        outputImage = Mat::zeros(templateImage.rows + frame.rows, templateImage.cols + frame.cols, CV_8UC1); // create output image
        cvtColor(frame, frame, CV_BGR2GRAY);
        detector.detect(frame, InputKeypts);
        extractor.compute(frame, InputKeypts, InputDescriptor); // detect and describe frame features

        /*
        So far we have computed descriptors for the template and the current frame
        using the traditional methods. From here on we implement our own match method.

        - Descriptor matrices have 128 columns by default to hold the features of a keypoint.
        - Each row in a descriptor matrix represents the 128 features of one keypoint.

        Match methods use these descriptor matrices to calculate similarity.

        My approach to calculating similarity is the cross-correlation of the
        keypoints' descriptor vectors. Check the code below to see how.
        */

        // Iterate over the rows of templateDescriptor (one row per keypoint
        // extracted from the template image): i indexes template keypoints,
        // j indexes input keypoints.
        for (int i = 0; i < templateDescriptor.rows; i++)
        {
            mpts.push_back(MatchedPair(TemplateKeypts[i].pt));
            mpts[i].correlation = 0;
            for (int j = 0; j < InputDescriptor.rows; j++)
            {
                // Use OpenCV's built-in function to calculate the correlation of
                // row(i) of templateDescriptor with row(j) of InputDescriptor.
                matchTemplate(templateDescriptor.row(i), InputDescriptor.row(j), result, CV_TM_CCORR_NORMED);
                s = sum(result); // sum is the correlation of the two rows
                // Look for the most similar row in the input image; store the
                // correlation of the best match and its location in the input image.
                if (s.val[0] > mpts[i].correlation)
                {
                    mpts[i].correlation = s.val[0];
                    mpts[i].matchedLocinImage = InputKeypts[j].pt;
                }
            }
        }

        // Show the template, the input and the matching lines in one output image.
        templateImage.copyTo(outputImage(Rect(0, 0, templateImage.cols, templateImage.rows)));
        frame.copyTo(outputImage(Rect(templateImage.cols, templateImage.rows, frame.cols, frame.rows)));

        // The matching part: select the 4 best matches and draw lines between
        // them. Check the correlation value again, because there can be
        // zero-correlation match pairs.
        std::sort(mpts.begin(), mpts.end(), comparator);
        for (int i = 0; i < 4; i++)
        {
            if (mpts[i].correlation > 0.90)
            {
                // When drawing the line, account for the location offset: the
                // template image sits at the upper left of the output image.
                cv::line(outputImage, mpts[i].locationinTemplate,
                         mpts[i].matchedLocinImage + Point(templateImage.cols, templateImage.rows),
                         Scalar::all(255));
            }
        }
        imshow("Output", outputImage);
        waitKey(33);
    }
}

Regarding the OpenCV keypoint matching DMatch distance variable, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/26548419/
