
python - How to create a good depth map using a stereo camera?

Reposted. Author: 行者123. Updated: 2023-12-04 03:44:46

I have been working on a project for a long time. My goal is to obtain a depth map from the images of a stereo camera and filter out only the human bodies in order to count the people inside.

I have been trying to calibrate my cameras continuously for one to two months. However, when I draw epipolar lines on the rectified pair, the result is not good enough (I have attached my rectified-pair result). For now I am working with this mediocre calibration and trying to obtain a depth map from the disparity map. I recorded an image sequence as an .avi file, and when I try to compute depth maps from that video the results are unstable: a point that is white in one frame can be very dark in the next, so I cannot count people just by thresholding the disparity. I use SGBM to compute disparity from the rectified images. I still consider myself an amateur at this, and I am open to any suggestions (how can I get a better calibration, a better disparity map, a better depth map?).

Here are the depth map and the rectified pair: depth map

Rectified pair with epipolar lines: pair and lines

I calibrated the cameras with nearly 600 pairs and then refined the result; with 35 image pairs, my overall mean error is 0.13 pixels.
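For reference, the mean error quoted here is the RMS reprojection error that cv2.calibrateCamera / cv2.stereoCalibrate return. Below is a minimal numpy-only sketch of how that metric is computed; the intrinsics, board pose, and noise level are all hypothetical stand-ins for values a real pipeline would get from detected chessboard corners:

```python
import numpy as np

# Hypothetical intrinsics, for illustration only
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])

# 9x6 chessboard corners in board coordinates (z = 0), square size = 1
objp = np.zeros((54, 3))
objp[:, :2] = np.mgrid[0:9, 0:6].T.reshape(-1, 2)

# Simple synthetic pose: board facing the camera, 10 units away
t = np.array([0.0, 0.0, 10.0])

def project(points, K, t):
    """Pinhole projection (no distortion, identity rotation) of Nx3 points."""
    cam = points + t
    uv = (K @ cam.T).T
    return uv[:, :2] / uv[:, 2:3]

projected = project(objp, K, t)

# "Detected" corners: the exact projections plus ~0.1 px of noise
rng = np.random.default_rng(0)
detected = projected + rng.normal(0, 0.1, projected.shape)

# Per-point RMS reprojection error in pixels -- the same kind of number
# that cv2.calibrateCamera reports as its return value
err = np.sqrt(np.mean(np.sum((detected - projected) ** 2, axis=1)))
print(f"mean reprojection error: {err:.3f} px")
```

With 0.1 px corner noise the RMS error comes out around 0.14 px, which is roughly the territory of the 0.13 px figure above; a sub-0.5 px value is usually considered a decent calibration.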

stereo = cv2.StereoSGBM_create(
    minDisparity=-1,
    numDisparities=2 * 16,     # must be divisible by 16
    blockSize=window_size,     # typical: 3-7 for reduced-size images, up to 15 for full size (1300 px and above)
    P1=8 * 3 * window_size,
    P2=32 * 3 * window_size,
    disp12MaxDiff=12,
    uniquenessRatio=1,
    speckleWindowSize=50,
    speckleRange=32,
    preFilterCap=63,
    mode=cv2.STEREO_SGBM_MODE_SGBM_3WAY,
)

These are my block-matching parameters.
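One detail worth double-checking in the parameters above: the OpenCV documentation recommends computing the smoothness penalties from the square of the block size, i.e. P1 = 8 * channels * blockSize**2 and P2 = 32 * channels * blockSize**2, whereas the code multiplies by window_size only once, which makes both penalties much weaker than intended. A small sketch with a hypothetical block size of 5 and grayscale input:

```python
# Recommended SGBM smoothness penalties per the OpenCV docs,
# assuming grayscale input (channels = 1) and a hypothetical blockSize of 5
block_size = 5
channels = 1

P1 = 8 * channels * block_size ** 2    # penalty for disparity changes of +/-1
P2 = 32 * channels * block_size ** 2   # penalty for larger disparity jumps
print(P1, P2)  # 200 800
```

P2 must always be larger than P1; larger values of both give smoother, less noisy disparity maps at the cost of detail.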

Best answer

To improve the disparity map you can implement post-filtering; there is a tutorial here (https://docs.opencv.org/master/d3/d14/tutorial_ximgproc_disparity_filtering.html). I also use an additional speckle filter and an option to fill in missing disparities. A Python implementation follows:

stereoProcessor = cv2.StereoSGBM_create(
    minDisparity=0,
    numDisparities=max_disparity,   # must be divisible by 16
    blockSize=window_size,
    P1=p1,                          # 8 * number_of_image_channels * SADWindowSize**2
    P2=p2,                          # 32 * number_of_image_channels * SADWindowSize**2
    disp12MaxDiff=disp12Maxdiff,
    uniquenessRatio=uniquenessRatio,
    speckleWindowSize=speckle_window,
    speckleRange=speckle_range,
    preFilterCap=prefiltercap,
    # mode=cv2.STEREO_SGBM_MODE_HH
)

#stereoProcessor = cv2.StereoBM_create(numDisparities=16, blockSize=15)

# set up left->right and right->left matching plus
# weighted least squares filtering (not used by default)

left_matcher = stereoProcessor
right_matcher = cv2.ximgproc.createRightMatcher(left_matcher)

# the input frame I contains the left and right views side by side
height, width, channels = I.shape

frameL = I[:, 0:int(width / 2), :]
frameR = I[:, int(width / 2):width, :]

# remember to convert to grayscale (as the disparity matching works on grayscale)

grayL = cv2.cvtColor(frameL,cv2.COLOR_BGR2GRAY)
grayR = cv2.cvtColor(frameR,cv2.COLOR_BGR2GRAY)

# perform preprocessing - raise to the power, as this subjectively appears
# to improve subsequent disparity calculation

grayL = np.power(grayL, 0.75).astype('uint8')
grayR = np.power(grayR, 0.75).astype('uint8')

# compute disparity image from undistorted and rectified versions
# (which for reasons best known to the OpenCV developers is returned scaled by 16)

if wls_filter:
    wls_filter = cv2.ximgproc.createDisparityWLSFilter(matcher_left=left_matcher)
    wls_filter.setLambda(wls_lambda)
    wls_filter.setSigmaColor(wls_sigma)
    displ = left_matcher.compute(cv2.UMat(grayL), cv2.UMat(grayR))  # .astype(np.float32)/16
    dispr = right_matcher.compute(cv2.UMat(grayR), cv2.UMat(grayL))  # .astype(np.float32)/16
    displ = np.int16(cv2.UMat.get(displ))
    dispr = np.int16(cv2.UMat.get(dispr))
    disparity = wls_filter.filter(displ, grayL, None, dispr)
else:
    disparity_UMat = stereoProcessor.compute(cv2.UMat(grayL), cv2.UMat(grayR))
    disparity = cv2.UMat.get(disparity_UMat)

speckleSize = math.floor((width * height) * 0.0005)
maxSpeckleDiff = (8 * 16) # 128

cv2.filterSpeckles(disparity, 0, speckleSize, maxSpeckleDiff)

# scale the disparity to 8-bit for viewing:
# divide by 16 and convert to an 8-bit image; the range of values should
# then be 0 -> max_disparity, but is in fact -1 -> max_disparity - 1,
# so we also apply an initial threshold between 0 and max_disparity,
# as disparity = -1 means no disparity is available

_, disparity = cv2.threshold(disparity,0, max_disparity * 16, cv2.THRESH_TOZERO)
disparity_scaled = (disparity / 16.).astype(np.uint8)

# fill disparity if requested

if fill_missing_disparity:
    _, mask = cv2.threshold(disparity_scaled, 0, 1, cv2.THRESH_BINARY_INV)
    mask[:, 0:120] = 0
    disparity_scaled = cv2.inpaint(disparity_scaled, mask, 2, cv2.INPAINT_NS)

# for display purposes only, re-scale the disparity to 0 -> 255
disparity_to_display = (disparity_scaled * (256. / self.value_NumDisp)).astype(np.uint8)
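Neither snippet converts disparity to metric depth, which is what people-counting ultimately needs. A minimal numpy sketch of the standard pinhole relation Z = f * B / d, with a hypothetical focal length and baseline (in a real setup both come from your stereo calibration, or you can use cv2.reprojectImageTo3D with the Q matrix from cv2.stereoRectify):

```python
import numpy as np

# Hypothetical rig parameters, for illustration only: in practice the
# focal length (pixels) and baseline (metres) come from stereo calibration
focal_px = 800.0
baseline_m = 0.12

# disparity_scaled from the pipeline above is a uint8 disparity in pixels;
# here a tiny synthetic example with one invalid (zero-disparity) cell
disparity_scaled = np.array([[32, 16],
                             [8,  0]], dtype=np.uint8)

disp = disparity_scaled.astype(np.float32)
depth_m = np.full(disp.shape, np.inf, dtype=np.float32)
valid = disp > 0
depth_m[valid] = focal_px * baseline_m / disp[valid]
# e.g. a 32 px disparity maps to 800 * 0.12 / 32 = 3.0 m
```

Zero-disparity pixels are left at infinity rather than divided by zero; for counting people you would then threshold depth_m to the depth band where people are expected before segmenting.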

Regarding "python - How to create a good depth map using a stereo camera?", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/65357601/
