
c++ - OpenCV: problems with stereo camera tracking using cv::triangulatePoints()

Reposted. Author: 太空宇宙. Updated: 2023-11-03 23:10:54

I am trying to do stereo camera tracking of a checkerboard pattern with the cv::triangulatePoints() function, using two off-the-shelf webcams. I calibrated my setup with MATLAB's Stereo Camera Calibration Toolbox and then used the resulting parameters in my OpenCV code.

My problem is that when I take the coordinates from cv::triangulatePoints() (after converting them to Euclidean space) and plot them in 3D in MATLAB, they do not form a plane of points. I am wondering whether there is a mistake in my code that I have overlooked?

The code I am using is listed below. Any insight would be a great help!

#include <opencv2/opencv.hpp>
#include <vector>

using namespace cv;
using namespace std;

Mat cameraMat1 = (Mat_<double>(3,3) << 1411.3, 2.2527, 958.3516,
0, 1404.1, 566.6821,
0, 0, 1);

Mat distCoeff1 =(Mat_<double>(5,1) << 0.0522,
-0.1651,
0.0023,
0.0020,
0.0924);

Mat cameraMat2 = (Mat_<double>(3,3) << 1413.7, -1.2189, 968.5768,
0, 1408.1, 505.1645,
0, 0, 1);

Mat distCoeff2 =(Mat_<double>(5,1) << 0.0465,
-0.1948,
-0.0013,
-0.00016774,
0.1495);


Mat R = (Mat_<double>(3,3) << 0.9108, 0.0143, -0.4127, -0.0228, 0.9996, -0.0157, 0.4123, 0.0237, 0.9107);
Mat T = (Mat_<double>(3,1) << -209.4118, 0.2208, 49.1987);

Mat R1, R2, P1, P2, Q;

Size imSize = Size(1920,1080); //Pixel Resolution

Mat frame1, frame2;

vector<Point2f> foundCorners1;
vector<Point2f> foundCorners2;

Size chessSize(11,8);

//for undistort
vector<Point2f> ufoundCorners1;
vector<Point2f> ufoundCorners2;

Mat homopnts3D; // allocated by triangulatePoints as a 4xN CV_32F matrix
Mat pnts3D;

int main(int argc, char** argv){
//Read in checkerboard images (placeholder paths)
string file1 = "left.png";
string file2 = "right.png";
frame1 = imread(file1);
frame2 = imread(file2);

//get corners
bool found1 = findChessboardCorners(frame1, chessSize, foundCorners1);
bool found2 = findChessboardCorners(frame2, chessSize, foundCorners2);
if (!found1 || !found2) return -1;


stereoRectify(cameraMat1, distCoeff1, cameraMat2, distCoeff2, imSize, R, T, R1, R2, P1, P2, Q);

//Addition - Undistort those points
undistortPoints(foundCorners1, ufoundCorners1, cameraMat1, distCoeff1, R1, P1);
undistortPoints(foundCorners2, ufoundCorners2, cameraMat2, distCoeff2, R2, P2);

//StereoTriangulation
triangulatePoints(P1, P2, ufoundCorners1, ufoundCorners2, homopnts3D);

//convert to euclidean
convertPointsFromHomogeneous(homopnts3D.reshape(4,1), pnts3D);
}

Best Answer

The code looks correct.

You should check whether your stereo rectification is correct by remapping the full images and inspecting the result.

Mat rmap[2][2];
Mat rectifiedLeftImg;
Mat rectifiedRightImg;

initUndistortRectifyMap(cameraMat1, distCoeff1, R1, P1, imSize, CV_16SC2, rmap[0][0], rmap[0][1]);
initUndistortRectifyMap(cameraMat2, distCoeff2, R2, P2, imSize, CV_16SC2, rmap[1][0], rmap[1][1]);

cv::remap(frame1, rectifiedLeftImg, rmap[0][0], rmap[0][1], INTER_LINEAR, BORDER_CONSTANT, Scalar());
cv::remap(frame2, rectifiedRightImg, rmap[1][0], rmap[1][1], INTER_LINEAR, BORDER_CONSTANT, Scalar());

imshow("rectifiedLeft", rectifiedLeftImg);
imshow("rectifiedRight", rectifiedRightImg);
waitKey(0);

Regarding "c++ - OpenCV: problems with stereo camera tracking using cv::triangulatePoints()", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/50416049/
