
c++ - What is the proper way of normalizing corresponding points before estimation of the fundamental matrix in OpenCV C++?


I am trying to manually implement a fundamental matrix estimation function for corresponding points (based on similarities between two images). The corresponding points are obtained after ORB feature detection, extraction, matching, and a ratio test.
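For context, a rough sketch of the pipeline that produces these correspondences (ORB detection, brute-force Hamming matching, and Lowe's ratio test; the function name, variable names, and the 0.75 ratio threshold are illustrative, not fixed parts of my setup):

#include <opencv2/core.hpp>
#include <opencv2/features2d.hpp>
#include <vector>

void getCorrespondences(const cv::Mat& imgA, const cv::Mat& imgB,
                        std::vector<cv::Point2f>& ptsA,
                        std::vector<cv::Point2f>& ptsB) {
    // Detect ORB keypoints and compute binary descriptors in both images
    cv::Ptr<cv::ORB> orb = cv::ORB::create();
    std::vector<cv::KeyPoint> kpA, kpB;
    cv::Mat descA, descB;
    orb->detectAndCompute(imgA, cv::noArray(), kpA, descA);
    orb->detectAndCompute(imgB, cv::noArray(), kpB, descB);

    // Brute-force matching with Hamming distance, keeping the 2 best candidates
    cv::BFMatcher matcher(cv::NORM_HAMMING);
    std::vector<std::vector<cv::DMatch>> knnMatches;
    matcher.knnMatch(descA, descB, knnMatches, 2);

    // Lowe's ratio test: keep a match only if it is clearly better
    // than the second-best candidate
    for (const auto& m : knnMatches) {
        if (m.size() == 2 && m[0].distance < 0.75f * m[1].distance) {
            ptsA.push_back(kpA[m[0].queryIdx].pt);
            ptsB.push_back(kpB[m[0].trainIdx].pt);
        }
    }
}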

There is plenty of literature from good sources on this topic. However, none of them seem to give good pseudocode for the procedure. I went through various chapters of the Multiple View Geometry book, as well as many online resources.

This source seems to give the formula for performing the normalization, and I followed the formula mentioned on page 13 of that source.

Based on this formula, I created the following algorithm. I am not sure whether I am doing it right, though!

Normalization.hpp

#ifndef NORMALIZATION_HPP
#define NORMALIZATION_HPP

#include <opencv2/core.hpp>

#include <tuple>
#include <vector>

class Normalization {
    typedef std::vector<cv::Point2f> intercepts;
    typedef std::vector<cv::Mat> matVec;

public:
    Normalization() {}
    ~Normalization() {}

    void makeAverage(intercepts pointsVec);

    std::tuple<cv::Mat, cv::Mat> normalize(intercepts pointsVec);

    matVec getNormalizedPoints(intercepts pointsVec);

private:
    double xAvg = 0;
    double yAvg = 0;
    double count = 0;
    matVec normalizedPts;
    double distance = 0;
    matVec matVecData;
    cv::Mat forwardTransform;
    cv::Mat reverseTransform;
};

#endif // NORMALIZATION_HPP

Normalization.cpp

#include "Normalization.hpp"

typedef std::vector <cv::Point2f> intercepts;
typedef std::vector<cv::Mat> matVec;

/*******
 * @brief  : The makeAverage function receives the input 2D coordinates (x, y)
 *           and computes the average of x and y
 * @params : The input parameter is a set of all matches (x, y pairs) in image A
 ************/
void Normalization::makeAverage(intercepts pointsVec) {
    count = pointsVec.size();
    for (auto& member : pointsVec) {
        xAvg = xAvg + member.x;
        yAvg = yAvg + member.y;
    }
    xAvg = xAvg / count;
    yAvg = yAvg / count;
}

/*******
 * @brief  : The normalize function uses the averages computed in the previous
 *           step, accumulates the mean distance of the points from the
 *           centroid (scaled by 1/sqrt(2)), and builds the forward and inverse
 *           transformation matrices
 * @params : The input to this function is a vector of corresponding points in a given image
 * @return : The returned data is a tuple of forward and inverse transformation matrices
 *************/
std::tuple<cv::Mat, cv::Mat> Normalization::normalize(intercepts pointsVec) {
    for (auto& member : pointsVec) {
        // Accumulate the mean distance from the centroid, scaled by 1/sqrt(2)
        distance += (1 / (count * std::sqrt(2))) *
                    std::sqrt(std::pow(member.x - xAvg, 2) +
                              std::pow(member.y - yAvg, 2));
    }

    forwardTransform = (cv::Mat_<double>(3, 3) <<
        (1 / distance), 0, -(xAvg / distance),
        0, (1 / distance), -(yAvg / distance),
        0, 0, 1);

    reverseTransform = (cv::Mat_<double>(3, 3) <<
        distance, 0, xAvg,
        0, distance, yAvg,
        0, 0, 1);

    return std::make_tuple(forwardTransform, reverseTransform);
}

/*******
 * @brief  : The getNormalizedPoints function transforms the raw image coordinates into
 *           normalized coordinates using the forwardTransform matrix estimated in the previous step
 * @params : The input to this function is a vector of corresponding points in a given image
 * @return : The returned data is a vector of transformed homogeneous coordinates
 *************/
matVec Normalization::getNormalizedPoints(intercepts pointsVec) {
    cv::Mat triplet;
    for (auto& member : pointsVec) {
        triplet = (cv::Mat_<double>(3, 1) << member.x, member.y, 1);
        matVecData.emplace_back(forwardTransform * triplet);
    }
    return matVecData;
}
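For reference, a minimal sketch of how the class above is meant to be called (pointsVecA stands for the matched points from one of the two images and is not defined here):

// Minimal usage sketch; pointsVecA is assumed to hold the matched
// cv::Point2f coordinates from one image.
Normalization normA;
normA.makeAverage(pointsVecA);                         // centroid of the points
cv::Mat T_A, T_A_inv;
std::tie(T_A, T_A_inv) = normA.normalize(pointsVecA);  // forward/inverse transforms
std::vector<cv::Mat> normPtsA = normA.getNormalizedPoints(pointsVecA);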

Is this the right approach? Are there other normalization methods?

Best Answer

I think you can do it your way, but in "Multiple View Geometry in Computer Vision" Hartley and Zisserman recommend isotropic scaling (p. 107):

Isotropic scaling. As a first step of normalization, the coordinates in each image are translated (by a different translation for each image) so as to bring the centroid of the set of all points to the origin. The coordinates are also scaled so that on the average a point x is of the form x = (x, y, w)ᵀ, with each of x, y and w having the same average magnitude. Rather than choose different scale factors for each coordinate direction, an isotropic scaling factor is chosen so that the x and y-coordinates of a point are scaled equally. To this end, we choose to scale the coordinates so that the average distance of a point x from the origin is equal to √2. This means that the "average" point is equal to (1, 1, 1)ᵀ. In summary the transformation is as follows:
(i) The points are translated so that their centroid is at the origin.
(ii) The points are then scaled so that the average distance from the origin is equal to √2.
(iii) This transformation is applied to each of the two images independently.

They state that this is important for the direct linear transformation (DLT), but even more so for computing the fundamental matrix as you want to do. The algorithm you chose normalizes the point coordinates toward (1, 1, 1)ᵀ, but does not apply a scaling so that the average distance from the origin equals √2.
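Concretely, steps (i) and (ii) of the quoted procedure combine into a single 3×3 transform. With (x̄, ȳ) the centroid of the points and d̄ their mean distance from it, the normalization matrix is

T = [ √2/d̄    0        -√2·x̄/d̄
      0        √2/d̄    -√2·ȳ/d̄
      0        0         1       ]

and each homogeneous point x = (x, y, 1)ᵀ maps to x̂ = T·x, which is exactly what the code below constructs.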

Here is some code for this normalization. The averaging step stays the same:

#include <opencv2/core.hpp>

#include <cmath>
#include <numeric>
#include <vector>

std::vector<cv::Mat> normalize(std::vector<cv::Point2d> pointsVec) {
    // Averaging: compute the centroid of the points
    double count = (double) pointsVec.size();
    double xAvg = 0;
    double yAvg = 0;
    for (auto& member : pointsVec) {
        xAvg = xAvg + member.x;
        yAvg = yAvg + member.y;
    }
    xAvg = xAvg / count;
    yAvg = yAvg / count;

    // Normalization: collect each point's distance from the centroid
    std::vector<cv::Mat> points3d;
    std::vector<double> distances;
    for (auto& member : pointsVec) {
        double distance = std::sqrt(std::pow(member.x - xAvg, 2) +
                                    std::pow(member.y - yAvg, 2));
        distances.push_back(distance);
    }
    double xy_norm = std::accumulate(distances.begin(), distances.end(), 0.0) / distances.size();

    // Create a matrix transforming the points into having mean (0, 0)
    // and mean distance to the center equal to sqrt(2)
    cv::Mat_<double> Normalization_matrix(3, 3);
    double diagonal_element = std::sqrt(2) / xy_norm;
    double element_13 = -std::sqrt(2) * xAvg / xy_norm;
    double element_23 = -std::sqrt(2) * yAvg / xy_norm;

    Normalization_matrix << diagonal_element, 0, element_13,
                            0, diagonal_element, element_23,
                            0, 0, 1;

    // Multiply the original points with the normalization matrix
    for (auto& member : pointsVec) {
        cv::Mat triplet = (cv::Mat_<double>(3, 1) << member.x, member.y, 1);
        points3d.emplace_back(Normalization_matrix * triplet);
    }
    return points3d;
}
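Note that this function returns only the normalized points. To undo the normalization on the estimated matrix afterwards you also need the two transforms, so in practice you would return Normalization_matrix as well. A sketch of that final step, assuming T1 and T2 are the normalization matrices of image 1 and image 2, and F_hat is the fundamental matrix estimated from the normalized correspondences (the estimation itself is not shown here); this is the denormalization from Hartley & Zisserman's normalized eight-point algorithm:

// Sketch of the final denormalization step; T1 and T2 are assumed to be the
// 3x3 normalization matrices of the two images, F_hat the fundamental matrix
// estimated from the normalized points.
cv::Mat denormalizeF(const cv::Mat& F_hat, const cv::Mat& T1, const cv::Mat& T2) {
    // From x2^T * T2^T * F_hat * T1 * x1 = 0 it follows that F = T2^T * F_hat * T1
    return T2.t() * F_hat * T1;
}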

Regarding "c++ - What is the proper way of normalizing corresponding points before estimation of the fundamental matrix in OpenCV C++?", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/52940822/
