
c++ - OpenCV >> Relative camera pose estimation


Setup

  1. A 3D scene generated with 3DS Max
  2. Camera FOV is 45 degrees (see the note after this list)
  3. Two images rendered with the same camera at 800x600 resolution
  4. Camera Z-axis rotation for image A == 0 degrees
  5. Camera Z-axis rotation for image B == 25 degrees
  6. 8 corresponding points (hand-picked), no outliers
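
For reference, plugging these numbers into the FOV-to-focal-length conversion used in the code below (which applies the same 45-degree FOV to each axis independently) gives the focal length in pixels:

f_x = 800 / (2 * tan(22.5°)) ≈ 965.7 px
f_y = 600 / (2 * tan(22.5°)) ≈ 724.3 px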

The task at hand

Solve for the relative camera pose between image A and image B (shown above), which is expected to yield the induced 25-degree rotation about the Z axis.


Implementation

Option A:

  1. Correspondences are generated manually and are guaranteed to be outlier-free (see 'rotZ0' and 'rotZ25' in the code snippet below)
  2. The camera focal length in pixels is derived (per this link) from the image resolution and FOV
  3. The camera intrinsic matrix is composed (per this link) from the image resolution and camera FOV
  4. The fundamental matrix is derived using 'cv::findFundamentalMat'
  5. The essential matrix is composed (per this link) as a function of the camera intrinsic matrix 'K' and the fundamental matrix 'F', in the following manner: 'K.t() * F * K', where 'K.t()' is the transposed intrinsic matrix
  6. SVD is performed on the essential matrix 'matE'
  7. The 4 possible solutions are formed: [U*W*Vt], [U*W.t()*Vt], [U*W.t()*Vt.t()] and [U*W*Vt.t()] (a condensed sketch of these steps appears after this list)
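
As referenced in step 7, here is a condensed sketch of steps 2-7 using the OpenCV 3.x API; the helper names IntrinsicsFromFov and CandidateRotations are purely illustrative. For brevity it forms only the two standard rotation candidates U*W*Vt and U*W.t()*Vt rather than all four products listed in step 7, it uses FM_8POINT (since there are exactly eight outlier-free correspondences) instead of the default RANSAC method used in the full code, and the det(R) = +1 checks are an addition for this sketch (a proper rotation must have determinant +1), not part of the question's code:

#include <opencv2/opencv.hpp>
#include <cmath>
#include <vector>

cv::Mat IntrinsicsFromFov(double fovRad, cv::Size res)
{
    // Focal length in pixels from the FOV; as in the question, the same
    // 45-degree FOV is applied to both axes independently.
    double fx = res.width  / (2.0 * std::tan(fovRad / 2.0));
    double fy = res.height / (2.0 * std::tan(fovRad / 2.0));
    cv::Mat K = (cv::Mat_<double>(3, 3) << fx, 0, res.width  / 2.0,
                                           0, fy, res.height / 2.0,
                                           0,  0, 1);
    return K;
}

void CandidateRotations(const std::vector<cv::Point2f>& pts1,
                        const std::vector<cv::Point2f>& pts2,
                        const cv::Mat& K,
                        cv::Mat& Ra, cv::Mat& Rb)
{
    // Steps 4-5: fundamental matrix, then E = K^T * F * K.
    cv::Mat F = cv::findFundamentalMat(pts1, pts2, cv::FM_8POINT);
    cv::Mat E = K.t() * F * K;

    // Steps 6-7: SVD of E and the two candidate rotations U*W*V^T and U*W^T*V^T.
    cv::Mat w, U, Vt;
    cv::SVD::compute(E, w, U, Vt);

    cv::Mat W = (cv::Mat_<double>(3, 3) << 0, -1, 0,
                                           1,  0, 0,
                                           0,  0, 1);
    Ra = U * W * Vt;
    Rb = U * W.t() * Vt;

    // A valid rotation must have det = +1; flip the sign if the SVD produced
    // an improper pair (added for this sketch).
    if (cv::determinant(Ra) < 0) Ra = -Ra;
    if (cv::determinant(Rb) < 0) Rb = -Rb;
}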

Option B:

  1. Correspondences are generated manually and are guaranteed to be outlier-free (see 'rotZ0' and 'rotZ25')
  2. The essential matrix is composed using 'cv::findEssentialMat'
  3. The camera pose is estimated using 'cv::recoverPose' (a minimal sketch follows this list)
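
A minimal sketch of Option B (as referenced in step 3), assuming the OpenCV 3.x five-point API; the helper name OptionBSketch is purely illustrative. Note that here both calls receive the focal length and principal point explicitly, whereas in the full code below the corresponding arguments to cv::recoverPose are commented out, so that call falls back to its defaults (focal = 1, pp = (0, 0)):

#include <opencv2/opencv.hpp>
#include <iostream>
#include <vector>

void OptionBSketch(const std::vector<cv::Point2f>& pts1,
                   const std::vector<cv::Point2f>& pts2,
                   double focal, cv::Point2d pp,
                   cv::Mat& R, cv::Mat& t)
{
    // Five-point essential matrix estimation on pixel coordinates,
    // with the intrinsics passed as focal length + principal point.
    cv::Mat E = cv::findEssentialMat(pts1, pts2, focal, pp, cv::RANSAC, 0.999, 1.0);

    // recoverPose performs the cheirality check internally and returns the
    // single physically valid (R, t) pair out of the four candidates.
    int inliers = cv::recoverPose(E, pts1, pts2, R, t, focal, pp);
    std::cout << "recoverPose inliers: " << inliers << std::endl;
}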

Results

Neither of the two options above correctly recovers the relative camera pose (a 25-degree rotation about the Z axis is expected).

What am I doing wrong?
How can the relative camera pose be resolved correctly?

Any help would be appreciated.


Full code

#define _USE_MATH_DEFINES // expose M_PI from <cmath> on MSVC
#include <cmath>
#include <iostream>
#include <vector>
#include <opencv2/opencv.hpp>
#include <tchar.h>  // _tmain / _TCHAR (MSVC)
#include <crtdbg.h> // _ASSERT (MSVC)

#ifndef IN          // empty parameter-direction annotations used below
#define IN
#endif
#ifndef OUT
#define OUT
#endif

#define RAD2DEG(rad) (((rad) * 180)/M_PI)
#define DEG2RAD(deg) (((deg) * M_PI)/180)
#define FOV2FOCAL(pixelssensorsize, fov) ((pixelssensorsize) / (2 * tan((fov) / 2)))// http://books.google.co.il/books?id=bXzAlkODwa8C&pg=PA48&lpg=PA48&dq=expressing+focal+length+in+pixels&source=bl&ots=gY4972kxAC&sig=U1BUeNHhOHmYIrDrO0YDb1DrNng&hl=en&sa=X&ei=45dLU9u9DIyv7QbN2oGIDA&ved=0CGsQ6AEwCA#v=onepage&q=expressing%20focal%20length%20in%20pixels&f=false

// http://nghiaho.com/?page_id=846
void DecomposeRotation(IN const cv::Mat& R, OUT float& fX, OUT float& fY, OUT float& fZ) {// Taken from MatLab
    fX = (float)atan2(R.at<double>(2, 1), R.at<double>(2, 2));
    fY = (float)atan2(-R.at<double>(2, 0), sqrt(R.at<double>(2, 1)*R.at<double>(2, 1) + R.at<double>(2, 2)*R.at<double>(2, 2)));
    fZ = (float)atan2(R.at<double>(1, 0), R.at<double>(0, 0));
}

int _tmain(int argc, _TCHAR* argv[])
{
    // 25 deg rotation in the Z axis (800x600)
    const cv::Point2f rotZ0[] = { { 109, 250 }, { 175, 266 }, { 204, 279 }, { 221, 253 }, { 324, 281 }, { 312, 319 }, { 328, 352 }, { 322, 365 } };
    const cv::Point2f rotZ25[] = { { 510, 234 }, { 569, 622 }, { 593, 278 }, { 616, 257 }, { 716, 303 }, { 698, 340 }, { 707, 377 }, { 697, 390 } };
    const cv::Point2f rotZminus15[] = { { 37, 260 }, { 106, 275 }, { 135, 286 }, { 152, 260 }, { 258, 284 }, { 248, 324 }, { 266, 356 }, { 260, 370 } };

    const double dFOV = DEG2RAD(45);
    const cv::Point2d res(800, 600);
    const cv::Point2d pntPriciplePoint(res.x / 2, res.y / 2);
    const cv::Point2d pntFocal(FOV2FOCAL(res.x, dFOV), FOV2FOCAL(res.y, dFOV));

    // transfer the vector of points to the appropriate opencv matrix structures
    const int numPoints = sizeof(rotZ0) / sizeof(rotZ0[0]);
    std::vector<cv::Point2f> vecPnt1(numPoints);
    std::vector<cv::Point2f> vecPnt2(numPoints);

    for (int i = 0; i < numPoints; i++) {
        vecPnt2[i] = rotZ0[i];
        //vecPnt2[i] = rotZminus15[i];
        vecPnt1[i] = rotZ25[i];
    }

    //// Normalize points
    //for (int i = 0; i < numPoints; i++) {
    //    vecPnt1[i].x = (vecPnt1[i].x - pntPriciplePoint.x) / pntFocal.x;
    //    vecPnt1[i].y = (vecPnt1[i].y - pntPriciplePoint.y) / pntFocal.y;

    //    vecPnt2[i].x = (vecPnt2[i].x - pntPriciplePoint.x) / pntFocal.x;
    //    vecPnt2[i].y = (vecPnt2[i].y - pntPriciplePoint.y) / pntFocal.y;
    //}

    try {
        // http://docs.opencv.org/modules/calib3d/doc/camera_calibration_and_3d_reconstruction.html
        cv::Mat matK = cv::Mat::zeros(3, 3, CV_64F);
        matK.at<double>(0, 0) = pntFocal.x;
        matK.at<double>(1, 1) = pntFocal.y;
        matK.at<double>(0, 2) = pntPriciplePoint.x;
        matK.at<double>(1, 2) = pntPriciplePoint.y;
        matK.at<double>(2, 2) = 1;

        float x, y, z;
        cv::Mat R1, R2, R3, R4;
        cv::Mat t;
        cv::Mat matE;

#if 1 // Option [A]
        cv::Mat matF = cv::findFundamentalMat(vecPnt1, vecPnt2);
        matE = matK.t() * matF * matK; // http://en.wikipedia.org/wiki/Essential_matrix

        cv::Mat _tmp;
        cv::Mat U;
        cv::Mat Vt;

        cv::SVD::compute(matE, _tmp, U, Vt);

        cv::Matx33d W(0, -1, 0,
                      1,  0, 0,
                      0,  0, 1);

        R1 = U*cv::Mat(W)*Vt; // See http://stackoverflow.com/questions/14150152/extract-translation-and-rotation-from-fundamental-matrix for details
        R2 = U*cv::Mat(W)*Vt.t();
        R3 = U*cv::Mat(W).t()*Vt;
        R4 = U*cv::Mat(W).t()*Vt.t();
#else // Option [B]
        matE = cv::findEssentialMat(vecPnt1, vecPnt2, pntFocal.x, pntPriciplePoint);// http://docs.opencv.org/trunk/modules/calib3d/doc/camera_calibration_and_3d_reconstruction.html
        cv::decomposeEssentialMat(matE, R1, R2, t);
        int iInliers = cv::recoverPose(matE, vecPnt1, vecPnt2, R4, t);// , pntFocal.x, pntPriciplePoint);
        R3 = cv::Mat::zeros(3, 3, CV_64F);
#endif

        DecomposeRotation(R1, x, y, z);
        std::cout << "Euler Angles R1 (X,Y,Z): " << RAD2DEG(x) << ", " << RAD2DEG(y) << ", " << RAD2DEG(z) << std::endl;
        DecomposeRotation(R2, x, y, z);
        std::cout << " R2 (X,Y,Z): " << RAD2DEG(x) << ", " << RAD2DEG(y) << ", " << RAD2DEG(z) << std::endl;
        DecomposeRotation(R3, x, y, z);
        std::cout << " R3 (X,Y,Z): " << RAD2DEG(x) << ", " << RAD2DEG(y) << ", " << RAD2DEG(z) << std::endl;
        DecomposeRotation(R4, x, y, z);
        std::cout << " R4 (X,Y,Z): " << RAD2DEG(x) << ", " << RAD2DEG(y) << ", " << RAD2DEG(z) << std::endl;

        //cv::Mat res = matFrom.t() * matF * matTo;// Results in a null vector ( as it should ) http://en.wikipedia.org/wiki/Fundamental_matrix_(computer_vision)
        //res = matFrom.t() * matE * matTo;// Results in a null vector ( as it should )
    }
    catch (cv::Exception e) {
        _ASSERT(FALSE);
    }
    return 0;
}

Execution results

Option A:

Euler Angles R1 (X,Y,Z): -26.2625, 8.70029, 163.643
R2 (X,Y,Z): 16.6929, -29.9901, -3.81642
R3 (X,Y,Z): 5.59033, -20.841, -19.9316
R4 (X,Y,Z): -5.76906, 7.25413, -179.086

Option B:

Euler Angles R1 (X,Y,Z): -13.8355, 3.0098, 171.451
R2 (X,Y,Z): 2.22802, -22.3479, -11.332
R3 (X,Y,Z): 0, -0, 0
R4 (X,Y,Z): 2.22802, -22.3479, -11.332

Best answer

First of all, calibrate your camera instead of using predefined values. It always makes a big difference. The relative pose computed by the 8-point or 5-point algorithm is subject to a lot of noise and is by no means the final result. That said, it is a good idea to reconstruct the points and then bundle-adjust the whole scene. Optimize your extrinsics and you should arrive at a better pose.
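
A minimal sketch of the "reconstruct the points, then refine the extrinsics" idea suggested above (not a full bundle adjustment). It assumes matK, the matched point vectors vecPnt1/vecPnt2 from the question's code, and an initial (R, t) estimate, e.g. from cv::recoverPose; the function name RefinePoseSketch is purely illustrative:

#include <opencv2/opencv.hpp>
#include <vector>

void RefinePoseSketch(const cv::Mat& matK,
                      const cv::Mat& R, const cv::Mat& t,
                      const std::vector<cv::Point2f>& vecPnt1,
                      const std::vector<cv::Point2f>& vecPnt2)
{
    // Projection matrices: first camera at the origin, second at [R|t].
    cv::Mat P1 = matK * cv::Mat::eye(3, 4, CV_64F);
    cv::Mat Rt;
    cv::hconcat(R, t, Rt);
    cv::Mat P2 = matK * Rt;

    // Triangulate the correspondences (homogeneous 4xN output).
    cv::Mat pts4D;
    cv::triangulatePoints(P1, P2, vecPnt1, vecPnt2, pts4D);

    // Convert the homogeneous points to Euclidean 3D points.
    pts4D.convertTo(pts4D, CV_64F); // make the depth explicit before reading
    std::vector<cv::Point3f> pts3D;
    for (int i = 0; i < pts4D.cols; ++i) {
        double w = pts4D.at<double>(3, i);
        pts3D.push_back(cv::Point3f((float)(pts4D.at<double>(0, i) / w),
                                    (float)(pts4D.at<double>(1, i) / w),
                                    (float)(pts4D.at<double>(2, i) / w)));
    }

    // Re-estimate the second camera's extrinsics against the reconstructed points.
    cv::Mat rvec, tvec;
    cv::solvePnP(pts3D, vecPnt2, matK, cv::Mat(), rvec, tvec);

    cv::Mat Rrefined;
    cv::Rodrigues(rvec, Rrefined);
    // Rrefined / tvec are the refined extrinsics of the second view.
}

A proper bundle adjustment would jointly refine the 3D points and both camera poses; the sketch above only re-estimates the second camera against the triangulated points.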

A related question on Stack Overflow: https://stackoverflow.com/questions/23114047/
