
python - How to isolate everything inside a contour, scale the contour, and test for similarity to an image?


I'm working on a project just for fun, where the goal is to play online poker and have the program recognize the cards on the table. I'm using OpenCV with Python to isolate the area where the cards are. I've been able to grab an image of that area, grayscale and threshold it, and draw a contour around the edge of the card. Now I'm stuck on how to move forward.

Here is the code I have so far:

import cv2
from PIL import ImageGrab
import numpy as np

def processed(image):
    grayscaled = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    thresholded = cv2.Canny(grayscaled, threshold1 = 200, threshold2 = 200)

    return thresholded

def drawcard1():
    screen = ImageGrab.grab(bbox = (770,300,850,400))

    processed_img = processed(np.array(screen))

    outside_contour, dummy = cv2.findContours(processed_img.copy(), 0, 2)

    colored = cv2.cvtColor(processed_img, cv2.COLOR_GRAY2BGR)

    cv2.drawContours(colored, outside_contour, 0, (0,255,0), 2)

    cv2.imshow('resized_card', colored)

while True:
    drawcard1()

    if cv2.waitKey(25) & 0xFF == ord('w'):
        cv2.destroyAllWindows()
        break

Here is the result I have so far:

Contour of a single card

I need to be able to take the inside of the contour and remove everything outside it. The resulting image should then be just the card, which I need to scale to 49x68 pixels. Once I get that working, the plan is to take the contours of the rank and suit, fill them with white pixels, and compare them against a set of reference images to find the best match. A minimal sketch of that crop-and-scale step is shown below.
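A rough sketch of that step, assuming the largest outer contour is the card outline (the input file name is a placeholder for the thresholded capture):

import cv2

# Minimal sketch: isolate the card via the largest outer contour's bounding box,
# then scale it to 49x68. 'thresholded_card.png' is a placeholder for the Canny output.
edges = cv2.imread('thresholded_card.png', cv2.IMREAD_GRAYSCALE)
contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
card_contour = max(contours, key=cv2.contourArea)   # assume the biggest outline is the card
x, y, w, h = cv2.boundingRect(card_contour)

card = edges[y:y + h, x:x + w]                       # keep only the region inside the contour
card_resized = cv2.resize(card, (49, 68), interpolation=cv2.INTER_AREA)
cv2.imwrite('card_49x68.png', card_resized)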

I'm very new to OpenCV and image processing, but I find this stuff fascinating! I was able to Google my way to this point, but this time I couldn't find anything.

Here is the image I'm currently using as a stand-in for the game:

Image sample of the game

Here is one of the images I'll be using to compare against the table cards:

Picture of the known card

Best Answer

This scenario is well suited to template matching. The idea is to search a larger image for the location of a template image. To do this, the template is slid across the input image (similar to a 2D convolution), and a comparison metric is computed at each position to measure pixel similarity. That's the basic idea behind template matching. Unfortunately, this basic method is flawed: it only works when the template image is the same size as the desired item in the input image. So if your template image is smaller than the desired region in the input image, this approach won't work.
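For reference, a minimal sketch of that basic fixed-scale form, with placeholder file names (the scale-variant version below builds on this):

import cv2

# Basic fixed-scale template matching sketch (file names are placeholders).
# Only works when the template has the same scale as the object in the image.
image = cv2.imread('table.png', cv2.IMREAD_GRAYSCALE)
template = cv2.imread('template.png', cv2.IMREAD_GRAYSCALE)
tH, tW = template.shape[:2]

# Slide the template over the image and score the similarity at each position
result = cv2.matchTemplate(image, template, cv2.TM_CCOEFF_NORMED)
_, max_val, _, max_loc = cv2.minMaxLoc(result)

# max_loc is the top-left corner of the best match, max_val its score
bottom_right = (max_loc[0] + tW, max_loc[1] + tH)
print('best match score:', max_val, 'top-left:', max_loc, 'bottom-right:', bottom_right)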

To get around this limitation, we can implement scale-variant template matching by dynamically rescaling the image with np.linspace(). On each iteration we resize the input image and keep track of the ratio. We keep resizing until the template image becomes larger than the resized image, while tracking the highest correlation value: a higher correlation means a better match. Once we've iterated over the various scales, we take the ratio with the best match and compute the coordinates of the bounding box to determine the ROI.


Using your template image:

[Template image]

Here is the detected card, highlighted in green. To visualize the dynamic template matching process, uncomment that section in the code.

[Detected card highlighted in green]

Code

import cv2
import numpy as np

# Resizes an image and maintains aspect ratio
def maintain_aspect_ratio_resize(image, width=None, height=None, inter=cv2.INTER_AREA):
    # Grab the image size and initialize dimensions
    dim = None
    (h, w) = image.shape[:2]

    # Return original image if no need to resize
    if width is None and height is None:
        return image

    # We are resizing height if width is none
    if width is None:
        # Calculate the ratio of the height and construct the dimensions
        r = height / float(h)
        dim = (int(w * r), height)
    # We are resizing width if height is none
    else:
        # Calculate the ratio of the width and construct the dimensions
        r = width / float(w)
        dim = (width, int(h * r))

    # Return the resized image
    return cv2.resize(image, dim, interpolation=inter)

# Load template and convert to grayscale
template = cv2.imread('template.png')
template = cv2.cvtColor(template, cv2.COLOR_BGR2GRAY)
(tH, tW) = template.shape[:2]
cv2.imshow("template", template)

# Load original image, convert to grayscale
original_image = cv2.imread('1.jpg')
gray = cv2.cvtColor(original_image, cv2.COLOR_BGR2GRAY)
found = None

# Dynamically rescale image for better template matching
for scale in np.linspace(0.1, 3.0, 20)[::-1]:

    # Resize image to scale and keep track of ratio
    resized = maintain_aspect_ratio_resize(gray, width=int(gray.shape[1] * scale))
    r = gray.shape[1] / float(resized.shape[1])

    # Stop if template image size is larger than resized image
    if resized.shape[0] < tH or resized.shape[1] < tW:
        break

    # Threshold resized image and apply template matching
    thresh = cv2.threshold(resized, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)[1]
    detected = cv2.matchTemplate(thresh, template, cv2.TM_CCOEFF)
    (_, max_val, _, max_loc) = cv2.minMaxLoc(detected)

    # Uncomment this section for visualization
    '''
    clone = np.dstack([thresh, thresh, thresh])
    cv2.rectangle(clone, (max_loc[0], max_loc[1]), (max_loc[0] + tW, max_loc[1] + tH), (0,255,0), 2)
    cv2.imshow('visualize', clone)
    cv2.waitKey(50)
    '''

    # Keep track of correlation value
    # Higher correlation means better match
    if found is None or max_val > found[0]:
        found = (max_val, max_loc, r)

# Compute coordinates of bounding box
(_, max_loc, r) = found
(start_x, start_y) = (int(max_loc[0] * r), int(max_loc[1] * r))
(end_x, end_y) = (int((max_loc[0] + tW) * r), int((max_loc[1] + tH) * r))

# Draw bounding box on ROI
cv2.rectangle(original_image, (start_x, start_y), (end_x, end_y), (0,255,0), 5)
cv2.imshow('detected', original_image)
cv2.imwrite('detected.png', original_image)
cv2.waitKey(0)

Regarding python - How to isolate everything inside a contour, scale the contour, and test for similarity to an image?, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/59401389/
