
c++ - OpenCV matching of images from a camera with the same image does not produce a 100% match


My goal is to match an image captured from a camera against some models and find the closest one, but I think I am missing something. This is what I am doing: first I grab a frame from the camera, select a portion of it, extract keypoints and compute descriptors using SURF, and store them in an xml file (I also store the model as model.png). This is my model. Then I grab another frame (a few seconds later), select the same portion, compute its descriptors, and match them against the previously stored ones.

The result is not close to 100% as I expected (I use the ratio between the number of good matches and the number of keypoints as the score). For comparison, if I load model.png, compute its descriptors, and match them against the stored descriptors, I get a 100% match (more or less), which is reasonable. Here is my code:

#include <iostream>
#include "opencv2/opencv.hpp"
#include "opencv2/nonfree/nonfree.hpp"

using namespace std;

std::vector<cv::KeyPoint> detectKeypoints(cv::Mat image, int hessianTh, int nOctaves, int nOctaveLayers, bool extended, bool upright) {
    std::vector<cv::KeyPoint> keypoints;
    cv::SurfFeatureDetector detector(hessianTh, nOctaves, nOctaveLayers, extended, upright);
    detector.detect(image, keypoints);
    return keypoints;
}

cv::Mat computeDescriptors(cv::Mat image, std::vector<cv::KeyPoint> keypoints, int hessianTh, int nOctaves, int nOctaveLayers, bool extended, bool upright) {
    cv::SurfDescriptorExtractor extractor(hessianTh, nOctaves, nOctaveLayers, extended, upright);
    cv::Mat imageDescriptors;
    extractor.compute(image, keypoints, imageDescriptors);
    return imageDescriptors;
}

int main(int argc, char *argv[]) {
    cv::VideoCapture cap(0);
    cap.set(CV_CAP_PROP_FRAME_WIDTH, 2304);
    cap.set(CV_CAP_PROP_FRAME_HEIGHT, 1536);
    cv::Mat frame;
    cap >> frame;
    cv::Rect selection(939, 482, 1063 - 939, 640 - 482);

    cv::Mat roi = frame(selection).clone();
    //cv::Mat roi = cv::imread("model.png");
    cv::cvtColor(roi, roi, CV_BGR2GRAY);
    cv::equalizeHist(roi, roi);

    if (std::stoi(argv[1]) == 1)
    {
        std::vector<cv::KeyPoint> keypoints = detectKeypoints(roi, 400, 4, 2, true, false);
        cv::FileStorage fs("model.xml", cv::FileStorage::WRITE);
        cv::write(fs, "keypoints", keypoints);
        cv::write(fs, "descriptors", computeDescriptors(roi, keypoints, 400, 4, 2, true, false));
        fs.release();
        cv::imwrite("model.png", roi);
    }
    else
    {
        cv::FileStorage fs("model.xml", cv::FileStorage::READ);
        std::vector<cv::KeyPoint> modelkeypoints;
        cv::Mat modeldescriptor;
        cv::FileNode filenode = fs["keypoints"];
        cv::read(filenode, modelkeypoints);
        filenode = fs["descriptors"];
        cv::read(filenode, modeldescriptor);
        fs.release();

        std::vector<cv::KeyPoint> roikeypoints = detectKeypoints(roi, 400, 4, 2, true, false);
        cv::Mat roidescriptor = computeDescriptors(roi, roikeypoints, 400, 4, 2, true, false);

        std::vector<std::vector<cv::DMatch> > matches;
        cv::BFMatcher matcher(cv::NORM_L2);
        if (roikeypoints.size() < modelkeypoints.size())
            matcher.knnMatch(roidescriptor, modeldescriptor, matches, 2); // find the two nearest matches
        else
            matcher.knnMatch(modeldescriptor, roidescriptor, matches, 2);

        vector<cv::DMatch> good_matches;
        for (size_t i = 0; i < matches.size(); ++i)
        {
            const float ratio = 0.7f;
            if (matches[i][0].distance < ratio * matches[i][1].distance)
            {
                good_matches.push_back(matches[i][0]);
            }
        }

        cv::Mat matching;

        cv::Mat model = cv::imread("model.png");
        if (roikeypoints.size() < modelkeypoints.size())
            cv::drawMatches(roi, roikeypoints, model, modelkeypoints, good_matches, matching);
        else
            cv::drawMatches(model, modelkeypoints, roi, roikeypoints, good_matches, matching);

        cv::imwrite("matches.png", matching);

        float result = static_cast<float>(good_matches.size()) / static_cast<float>(roikeypoints.size());
        std::cout << result << std::endl;
    }
    return 0;
}
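For reference, the ratio-test filter used in the listing can be isolated into a small standalone helper. This is a minimal sketch under the assumption that each candidate has already been reduced to its two nearest-neighbour distances; the function name `passesRatioTest` is my own, not from any OpenCV API:

    #include <cassert>
    #include <iostream>

    // Lowe's ratio test: keep a match only when the best distance is
    // clearly smaller than the second-best, i.e. the match is unambiguous.
    bool passesRatioTest(float best, float secondBest, float ratio = 0.7f) {
        return best < ratio * secondBest;
    }

    int main() {
        // A distinctive match: 0.2 is well below 0.7 * 0.8 = 0.56.
        std::cout << passesRatioTest(0.2f, 0.8f) << "\n";  // prints 1
        // An ambiguous match: 0.5 is not below 0.7 * 0.6 = 0.42.
        std::cout << passesRatioTest(0.5f, 0.6f) << "\n";  // prints 0
        return 0;
    }

Note that this test rejects matches whose best and second-best distances are similar, which is exactly why two frames of the same static scene can still fail to reach a 100% score.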

Any suggestion would be appreciated; this is driving me crazy.

Best Answer

This is to be expected: the small changes between the two frames are the reason you cannot get a 100% match. On the very same image, however, the SURF features will be located at exactly the same points and the computed descriptors will be identical. So fix the camera in place and plot the distances between features that should be identical. Then set a threshold on the distance such that most (perhaps 95%) of the matches are accepted. This way your false-match rate stays low while the true-match rate remains high.
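The thresholding step suggested above can be sketched as follows. This is a minimal illustration, assuming the descriptor distances have already been collected into a plain vector; the `distances` values and the helper name `percentileThreshold` are made up for the example:

    #include <algorithm>
    #include <cmath>
    #include <iostream>
    #include <vector>

    // Pick a distance threshold that accepts roughly the given fraction
    // of matches (e.g. p = 0.95 keeps about 95% of them).
    float percentileThreshold(std::vector<float> distances, double p) {
        std::sort(distances.begin(), distances.end());
        size_t idx = static_cast<size_t>(std::floor(p * (distances.size() - 1)));
        return distances[idx];
    }

    int main() {
        // Hypothetical descriptor distances measured on a static scene;
        // the 0.90 outlier stands in for a false match.
        std::vector<float> distances = {0.05f, 0.07f, 0.08f, 0.10f, 0.12f,
                                        0.13f, 0.15f, 0.18f, 0.22f, 0.90f};
        float th = percentileThreshold(distances, 0.95);
        size_t accepted = std::count_if(distances.begin(), distances.end(),
                                        [th](float d) { return d <= th; });
        std::cout << "threshold = " << th
                  << ", accepted " << accepted << "/" << distances.size() << "\n";
        return 0;
    }

With the sample data this prints `threshold = 0.22, accepted 9/10`: the outlier is rejected while the matches that should be identical are kept.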

For c++ - OpenCV matching of images from a camera with the same image does not produce a 100% match, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/49942561/
