
python - OpenCV: finding the correct threshold to determine whether an image matches, using the matching score


I am currently building a recognition program using various feature extractors and various matchers. Using the matcher's score, I want to set a score threshold that can further determine whether a match is correct or a false match.

I am trying to understand what the DMatch distance means for the various matchers. Is a smaller distance value a better match? If so, I am confused, because the same image in different positions returns larger values than two completely different images.
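To see what the matcher actually returns, here is a minimal sketch I use to print individual DMatch.distance values (the file names are placeholders):

import cv2

# Placeholder file names, loaded as grayscale.
img1 = cv2.imread('query.jpg', 0)
img2 = cv2.imread('train.jpg', 0)

orb = cv2.ORB_create()
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(bf.match(des1, des2), key=lambda m: m.distance)

# DMatch.distance is the distance between the two matched descriptors.
for m in matches[:5]:
    print(m.distance)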

I ran two test cases:

  1. Comparing an image with the same image in different positions, etc.
  2. Comparing an image with a totally different image in several different positions, etc.

Here are my test results:

-----------------------------------------------

Positive image average distance
Total test number: 70
Comparing with SIFT
Use BF with Ratio Test: 874.071456255
Use FLANN : 516.737270464

Comparing with SURF
Use BF with Ratio Test: 2.92960552163
Use FLANN : 1.47225751158

Comparing with ORB
Use BF : 12222.1428571
Use BF with Ratio Test: 271.638643755

Comparing with BRISK
Use BF : 31928.4285714
Use BF with Ratio Test: 1537.63658578

Maximum positive image distance
Comparing with SIFT
Use BF with Ratio Test: 2717.88008881
Use FLANN : 1775.63563538

Comparing with SURF
Use BF with Ratio Test: 4.88817568123
Use FLANN : 2.81848525628

Comparing with ORB
Use BF : 14451.0
Use BF with Ratio Test: 1174.47851562

Comparing with BRISK
Use BF : 41839.0
Use BF with Ratio Test: 3846.39746094

-----------------------------------------

Negative image average distance
Total test number: 72
Comparing with SIFT
Use BF with Ratio Test: 750.028228866
Use FLANN : 394.982576052

Comparing with SURF
Use BF with Ratio Test: 2.89866939275
Use FLANN : 1.59815886725

Comparing with ORB
Use BF : 12098.9444444
Use BF with Ratio Test: 261.874231339

Comparing with BRISK
Use BF : 31165.8472222
Use BF with Ratio Test: 1140.46670034

Minimum negative image distance
Comparing with SIFT
Use BF with Ratio Test: 0
Use FLANN : 0

Comparing with SURF
Use BF with Ratio Test: 1.25826786458
Use FLANN : 0.316588282585

Comparing with ORB
Use BF : 10170.0
Use BF with Ratio Test: 0

Comparing with BRISK
Use BF : 24774.0
Use BF with Ratio Test: 0

Also, in some cases where two different images are tested against each other and there are no matches at all, the matcher returns a score of 0, which is exactly the same score as when two identical images are compared.

Looking further into it, there are four main cases:

  1. Two identical images: many matches, distance = 0
  2. Two images of the same scene (not identical): many matches, distance = large value
  3. Two totally different images: no matches, distance = 0
  4. Two different images: some matches, distance = small value

Finding the correct threshold based on these cases seems to be a problem, since some of them contradict each other. Usually, the more similar the images, the smaller the distance value.
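To illustrate the contradiction: the score is a sum over matches, so zero matches (case 3) and perfect matches (case 1) both produce 0. A hypothetical helper (not part of my code below) that keeps the match count and the mean distance separate would at least tell these two cases apart:

def match_stats(matches):
    # With a plain sum, case 1 (identical images, all distances 0) and
    # case 3 (no matches at all) both score 0 and are indistinguishable.
    if not matches:
        return 0, float('inf')  # no matches: no evidence of similarity
    mean = sum(m.distance for m in matches) / len(matches)
    return len(matches), mean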

matcher.py

import cv2
import numpy as np
from matplotlib import pyplot as plt


def useBruteForce(img1, img2, kp1, kp2, des1, des2, setDraw):
    # create BFMatcher object (NORM_HAMMING suits binary descriptors such as ORB/BRISK)
    bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

    # Match descriptors.
    matches = bf.match(des1, des2)

    # Sort them in the order of their distance.
    matches = sorted(matches, key=lambda x: x.distance)

    totalDistance = 0
    for g in matches:
        totalDistance += g.distance

    if setDraw == True:
        # Draw matches.
        img3 = cv2.drawMatches(img1, kp1, img2, kp2, matches, None, flags=2)
        plt.imshow(img3), plt.show()

    return totalDistance


def useBruteForceWithRatioTest(img1, img2, kp1, kp2, des1, des2, setDraw):
    # BFMatcher with default params
    bf = cv2.BFMatcher()
    matches = bf.knnMatch(des1, des2, k=2)

    # Apply ratio test
    good = []
    for m, n in matches:
        if m.distance < 0.75 * n.distance:
            good.append(m)

    totalDistance = 0
    for g in good:
        totalDistance += g.distance

    if setDraw == True:
        # cv2.drawMatchesKnn expects a list of lists as matches.
        img3 = cv2.drawMatchesKnn(img1, kp1, img2, kp2, [good], None, flags=2)
        plt.imshow(img3), plt.show()

    return totalDistance


def useFLANN(img1, img2, kp1, kp2, des1, des2, setDraw, type):
    # Fast Library for Approximate Nearest Neighbors
    MIN_MATCH_COUNT = 1
    FLANN_INDEX_KDTREE = 0
    FLANN_INDEX_LSH = 6

    if type == True:
        # Detect with ORB
        index_params = dict(algorithm = FLANN_INDEX_LSH,
                            table_number = 6,      # 12
                            key_size = 12,         # 20
                            multi_probe_level = 1) # 2
    else:
        # Detect with others such as SURF, SIFT
        index_params = dict(algorithm = FLANN_INDEX_KDTREE, trees = 5)

    # checks specifies the number of times the trees in the index should be
    # recursively traversed. Higher values give better precision but take more time.
    search_params = dict(checks = 90)

    flann = cv2.FlannBasedMatcher(index_params, search_params)
    matches = flann.knnMatch(des1, des2, k=2)

    # Store all the good matches as per Lowe's ratio test.
    # (With the LSH index, knnMatch can return fewer than 2 neighbours
    # per descriptor, so guard the unpacking.)
    good = []
    for pair in matches:
        if len(pair) < 2:
            continue
        m, n = pair
        if m.distance < 0.7 * n.distance:
            good.append(m)

    totalDistance = 0
    for g in good:
        totalDistance += g.distance

    if setDraw == True:
        if len(good) > MIN_MATCH_COUNT:
            src_pts = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
            dst_pts = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)

            M, mask = cv2.findHomography(src_pts, dst_pts, cv2.RANSAC, 5.0)
            matchesMask = mask.ravel().tolist()

            h, w = img1.shape
            pts = np.float32([[0, 0], [0, h - 1], [w - 1, h - 1], [w - 1, 0]]).reshape(-1, 1, 2)
            dst = cv2.perspectiveTransform(pts, M)

            img2 = cv2.polylines(img2, [np.int32(dst)], True, 255, 3, cv2.LINE_AA)
        else:
            print("Not enough matches are found - %d/%d" % (len(good), MIN_MATCH_COUNT))
            matchesMask = None

        draw_params = dict(matchColor = (0, 255, 0),   # draw matches in green color
                           singlePointColor = None,
                           matchesMask = matchesMask,  # draw only inliers
                           flags = 2)

        img3 = cv2.drawMatches(img1, kp1, img2, kp2, good, None, **draw_params)
        plt.imshow(img3, 'gray'), plt.show()

    return totalDistance

comparer.py

import cv2
import matcher


def check(img1, img2, kp1, kp2, des1, des2, matcherType, setDraw, ORB):
    if matcherType == 1:
        return matcher.useBruteForce(img1, img2, kp1, kp2, des1, des2, setDraw)
    elif matcherType == 2:
        return matcher.useBruteForceWithRatioTest(img1, img2, kp1, kp2, des1, des2, setDraw)
    elif matcherType == 3:
        return matcher.useFLANN(img1, img2, kp1, kp2, des1, des2, setDraw, ORB)
    else:
        print("Matcher not chosen correctly, use Brute Force matcher as default")
        return matcher.useBruteForce(img1, img2, kp1, kp2, des1, des2, setDraw)


def useORB(filename1, filename2, matcherType, setDraw):
    img1 = cv2.imread(filename1, 0)  # queryImage
    img2 = cv2.imread(filename2, 0)  # trainImage

    # Initiate ORB detector
    orb = cv2.ORB_create()

    # find the keypoints and descriptors with ORB
    kp1, des1 = orb.detectAndCompute(img1, None)
    kp2, des2 = orb.detectAndCompute(img2, None)
    ORB = True
    return check(img1, img2, kp1, kp2, des1, des2, matcherType, setDraw, ORB)


def useSIFT(filename1, filename2, matcherType, setDraw):
    img1 = cv2.imread(filename1, 0)  # queryImage
    img2 = cv2.imread(filename2, 0)  # trainImage

    # Initiate SIFT detector
    sift = cv2.xfeatures2d.SIFT_create()

    # find the keypoints and descriptors with SIFT
    kp1, des1 = sift.detectAndCompute(img1, None)
    kp2, des2 = sift.detectAndCompute(img2, None)
    ORB = False
    return check(img1, img2, kp1, kp2, des1, des2, matcherType, setDraw, ORB)


def useSURF(filename1, filename2, matcherType, setDraw):
    img1 = cv2.imread(filename1, 0)
    img2 = cv2.imread(filename2, 0)

    # Here I set Hessian Threshold to 400
    surf = cv2.xfeatures2d.SURF_create(400)

    # Find keypoints and descriptors directly
    kp1, des1 = surf.detectAndCompute(img1, None)
    kp2, des2 = surf.detectAndCompute(img2, None)
    ORB = False
    return check(img1, img2, kp1, kp2, des1, des2, matcherType, setDraw, ORB)


def useBRISK(filename1, filename2, matcherType, setDraw):
    img1 = cv2.imread(filename1, 0)  # queryImage
    img2 = cv2.imread(filename2, 0)  # trainImage

    # Initiate BRISK detector
    brisk = cv2.BRISK_create()

    # find the keypoints and descriptors with BRISK
    kp1, des1 = brisk.detectAndCompute(img1, None)
    kp2, des2 = brisk.detectAndCompute(img2, None)
    ORB = True  # BRISK is also a binary descriptor, so use the LSH index
    return check(img1, img2, kp1, kp2, des1, des2, matcherType, setDraw, ORB)
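For completeness, a minimal usage sketch of the functions above (the file names are placeholders; matcherType=2 selects Brute Force with the ratio test):

if __name__ == '__main__':
    # Placeholder file names; matcherType=2 -> Brute Force with ratio test.
    score = useSIFT('query.jpg', 'train.jpg', matcherType=2, setDraw=False)
    print('Total distance of good matches: %f' % score)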

Best Answer

The OpenCV tutorial says the following:

For BF matcher, first we have to create the BFMatcher object using cv.BFMatcher(). It takes two optional params. First one is normType. It specifies the distance measurement to be used. By default, it is cv.NORM_L2. It is good for SIFT, SURF etc (cv.NORM_L1 is also there). For binary string based descriptors like ORB, BRIEF, BRISK etc, cv.NORM_HAMMING should be used, which used Hamming distance as measurement. If ORB is using WTA_K == 3 or 4, cv.NORM_HAMMING2 should be used.

https://docs.opencv.org/3.4/dc/dc3/tutorial_py_matcher.html

So you should create different matcher objects for SIFT and for ORB (you get the idea). That is probably why the distances you computed differ so much.
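Concretely, a minimal sketch of this suggestion, selecting the norm per descriptor type instead of hard-coding NORM_HAMMING as useBruteForce above does (the makeBFMatcher helper is an illustration, not part of the original code):

import cv2

def makeBFMatcher(binaryDescriptor):
    # Binary descriptors (ORB, BRIEF, BRISK) -> Hamming distance;
    # float descriptors (SIFT, SURF) -> L2, as the tutorial recommends.
    if binaryDescriptor:
        return cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    return cv2.BFMatcher(cv2.NORM_L2, crossCheck=True)

Even with the right norm, scores from different descriptors live on different scales, so any threshold would still have to be chosen per descriptor.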

Regarding "python - OpenCV: finding the correct threshold to determine whether an image matches, using the matching score", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/43423411/
