c++ - Cumulative homography scales incorrectly


I want to build a panoramic image of the floor as seen by a downward-facing camera (at a fixed height, about 1 m above the floor). This could run to thousands of frames, so the Stitcher class's built-in panorama method isn't suitable - it is too slow and memory-hungry.

Instead, I assume the floor and the motion are planar (not unreasonable here) and try to build up a cumulative homography as I see each frame. That is, for each frame, I calculate the homography from the previous frame to the new one. I then multiply it into the cumulative homography, which is the product of all previous homographies.

Say I get H01 between frames 0 and 1, then H12 between frames 1 and 2. To get the transformation that places frame 2 onto the mosaic, I need H01*H12. This continues as the number of frames grows, so that I end up with H01*H12*H23*H34*H45*...

In code, this looks something like:

cv::Mat previous, current;

// Init cumulative homography
cv::Mat cumulative_homography = cv::Mat::eye(3, 3, CV_64F);

video_stream >> previous;
for (;;) {
    video_stream >> current;
    // Here I do some checking of the frame, etc.

    // Get the homography using my DenseMosaic class (using Farneback to get OF)
    cv::Mat tmp_H = DenseMosaic::get_homography(previous, current);

    // Now normalise the homography by its bottom-right corner
    tmp_H /= tmp_H.at<double>(2, 2);

    cumulative_homography *= tmp_H;

    previous = current.clone();
}

It works pretty well, except that when the camera moves "up" in the viewpoint, the homography scale decreases. When it moves back down, the scale increases again. This gives my panoramas a perspective-like effect that I really don't want.

For example, this is video taken while moving forward and then backward over a few seconds. The first frame looks fine: (image: frame 2 implanted onto frame 1)

The problem appears when we move forward a few frames: (image: the homography scales down, causing the frame to become smaller on the panorama)

Then, when we come back again, you can see the frame gets larger again: (image)

I have no idea where this is coming from.

I am using Farneback dense optical flow to calculate pixel-pixel correspondences as below (sparse feature matching doesn't work well on this data), and I have checked my flow vectors - they are generally very good, so it's not a tracking problem. I also tried switching the order of the inputs when finding the homography (in case I had mixed up the frame numbers), but it was no better.

cv::calcOpticalFlowFarneback(grey_1, grey_2, flow_mat, 0.5, 6, 50, 5, 7, 1.5, flags);

// Using the flow_mat optical flow map, populate grid point correspondences between images
std::vector<cv::Point2f> points_1, points_2;
median_motion = DenseMosaic::dense_flow_to_corresp(flow_mat, points_1, points_2);
cv::Mat H = cv::findHomography(cv::Mat(points_2), cv::Mat(points_1), CV_RANSAC, 1);

Another thought I had was that it might be the translation I include in the transformation to ensure my panorama stays centred within the scene:

cv::warpPerspective(init.clone(), warped, translation*homography, init.size());

But I checked the values in the homography before the translation is applied, and the scaling issue I mention is still present.

Any hints gratefully received. There's a lot more code I could put in, but it doesn't seem relevant; if anything is missing, please let me know.

UPDATE: I have tried swapping the *= operator for a full multiplication, and tried reversing the order in which the homographies are multiplied, but no luck. Below is my code for calculating the homography:

/**
\brief Calculates the homography between the current and previous frames
*/
cv::Mat DenseMosaic::get_homography()
{
    cv::Mat grey_1, grey_2; // Grayscale versions of frames

    cv::cvtColor(prev, grey_1, CV_BGR2GRAY);
    cv::cvtColor(cur, grey_2, CV_BGR2GRAY);

    // Calculate the dense flow
    int flags = cv::OPTFLOW_FARNEBACK_GAUSSIAN;
    if (frame_number > 2) {
        flags = flags | cv::OPTFLOW_USE_INITIAL_FLOW;
    }
    cv::calcOpticalFlowFarneback(grey_1, grey_2, flow_mat, 0.5, 6, 50, 5, 7, 1.5, flags);

    // Convert the flow map to point correspondences
    std::vector<cv::Point2f> points_1, points_2;
    median_motion = DenseMosaic::dense_flow_to_corresp(flow_mat, points_1, points_2);

    // Use the correspondences to get the homography
    cv::Mat H = cv::findHomography(cv::Mat(points_2), cv::Mat(points_1), CV_RANSAC, 1);

    return H;
}

Here is the function I use to get the correspondences from the flow map:

/**
\brief Calculate pixel->pixel correspondences given a map of the optical flow across the image
\param[in] flow_mat Map of the optical flow across the image
\param[out] points_1 The set of points from #prev
\param[out] points_2 The set of points from #cur
\param[in] step_size The size of the spaces between the grid lines
\return The median motion as a point

Uses a dense flow map (such as that created by cv::calcOpticalFlowFarneback) to obtain a set of point correspondences across a grid.
*/
cv::Point2f DenseMosaic::dense_flow_to_corresp(const cv::Mat &flow_mat, std::vector<cv::Point2f> &points_1, std::vector<cv::Point2f> &points_2, int step_size)
{
    std::vector<double> tx, ty;
    for (int y = 0; y < flow_mat.rows; y += step_size) {
        for (int x = 0; x < flow_mat.cols; x += step_size) {
            /* Flow is basically the delta between left and right points */
            cv::Point2f flow = flow_mat.at<cv::Point2f>(y, x);
            tx.push_back(flow.x);
            ty.push_back(flow.y);

            /* There's no need to calculate for every single point;
               if there's not much change, just ignore it */
            if (fabs(flow.x) < 0.1 && fabs(flow.y) < 0.1)
                continue;

            points_1.push_back(cv::Point2f(x, y));
            points_2.push_back(cv::Point2f(x + flow.x, y + flow.y));
        }
    }

    // I know this should be median, not mean, but it's only used for plotting the
    // general motion direction, so it's unimportant.
    cv::Point2f t_median;
    cv::Scalar mtx = cv::mean(tx);
    t_median.x = mtx[0];
    cv::Scalar mty = cv::mean(ty);
    t_median.y = mty[0];

    return t_median;
}

Best Answer

It turns out this was because my viewpoint was close to the features, meaning that the non-planarity of the tracked features caused the homography to skew. I managed to prevent this (it's more of a hack than a method...) by using estimateRigidTransform instead of findHomography, as this does not estimate perspective variations.

In this particular case it makes sense to do so, because the view only undergoes rigid transformations.

Regarding c++ - cumulative homography scales incorrectly, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/24787777/
