objective-c - OpenCV detect blobs on an image


I need to find the blobs in an image (and draw rectangles around them) and get the maximum and minimum blob radius. (Example below.)

The problem is finding the right filtering for the image so that a Canny/Threshold transformation highlights the blobs. After that I plan to use findContours to find the rectangles.
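
For context, a bare-bones sketch of that intended Canny → findContours → boundingRect chain, with fixed, guessed thresholds (the file name and thresholds are placeholders; the whole difficulty described below is choosing the preprocessing that makes these edges actually follow the blobs):

import cv2

img = cv2.imread("blobs.jpg", cv2.IMREAD_GRAYSCALE)   # hypothetical input path
edges = cv2.Canny(img, 50, 150)                       # placeholder thresholds

# OpenCV 3.x: findContours returns (image, contours, hierarchy)
_, cnts, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

vis = cv2.cvtColor(img, cv2.COLOR_GRAY2BGR)
for c in cnts:
    x, y, w, h = cv2.boundingRect(c)
    cv2.rectangle(vis, (x, y), (x + w, y + h), (0, 255, 0), 1)

cv2.imshow("rectangles", vis)
cv2.waitKey(0)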

I have tried:

  • Threshold at various levels

  • blur->erode->erode->grayscale->canny

  • Changing the image tone with various "lines"

and so on. The best result was detecting a piece (20–30%) of a blob, which is not enough information to draw a rectangle around it. Also, thanks to the shadows, points unrelated to the blobs get detected, which also prevents the blob area from being found.

As far as I understand, I need to find contours with strong contrast (not smooth like the shadows). Is there any way to do this with OpenCV?

UPDATE

Bins: image 1 , image 2 , image 3 , image 4 , image 5 , image 6 , image 7 , image 8 , image 9 , image 10 , image 11 , image 12

ONE MORE UPDATE

I believe the edges of the blobs have areas of contrast. So I tried to make the edges stronger: I created two grayscale Mats, A and B, and applied a Gaussian blur to the second one, B (to reduce the noise a bit). Then I did some computation: iterating over every pixel and finding the maximum difference between A at (Xi, Yi) and the points in the neighbourhood of (Xi, Yi) in B:

D(Xi, Yi) = max over (x, y) near (Xi, Yi) of | A(Xi, Yi) - B(x, y) |

and assigned that maximum difference to (Xi, Yi). The result looks something like this:

(result image)

Am I on the right track? And by the way, can I achieve this kind of effect with OpenCV methods alone?
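
As a rough illustration, the per-pixel "maximum difference to the neighbourhood of B" described above can be approximated with vectorized OpenCV/NumPy calls instead of an explicit loop, using morphological dilation/erosion of B to get the neighbourhood maximum and minimum (kernel size and file name are guesses):

import cv2
import numpy as np

A = cv2.imread("blobs.jpg", cv2.IMREAD_GRAYSCALE)     # hypothetical input path
B = cv2.GaussianBlur(A, (5, 5), 0)                    # slightly denoised copy

# the largest |A - B(neighbour)| is reached either at the neighbourhood
# maximum or at the neighbourhood minimum of B
kernel = np.ones((3, 3), np.uint8)
b_max = cv2.dilate(B, kernel)                         # neighbourhood maximum of B
b_min = cv2.erode(B, kernel)                          # neighbourhood minimum of B

a = A.astype(np.int16)
diff = np.maximum(np.abs(a - b_max.astype(np.int16)),
                  np.abs(a - b_min.astype(np.int16))).astype(np.uint8)

cv2.imshow("max difference", diff)
cv2.waitKey(0)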

UPDATE: Image denoising helps reduce the noise, Sobel highlights the contours, then threshold + findContours and a custom convexHull gets something close to what I am looking for, but it still fails on some blobs.
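
For reference, a rough sketch of that denoise → Sobel → threshold → findContours → convexHull pipeline; the denoising strength, threshold value and minimum contour area below are placeholder guesses rather than values from the question:

import cv2
import numpy as np

img = cv2.imread("blobs.jpg", cv2.IMREAD_GRAYSCALE)        # hypothetical input path
den = cv2.fastNlMeansDenoising(img, None, h=10)            # non-local means denoising

# Sobel magnitude to emphasize contours
gx = cv2.Sobel(den, cv2.CV_64F, 1, 0, ksize=3)
gy = cv2.Sobel(den, cv2.CV_64F, 0, 1, ksize=3)
mag = np.uint8(np.clip(cv2.magnitude(gx, gy), 0, 255))

_, th = cv2.threshold(mag, 40, 255, cv2.THRESH_BINARY)     # guessed threshold

# OpenCV 3.x: findContours returns (image, contours, hierarchy)
_, cnts, _ = cv2.findContours(th, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

vis = cv2.cvtColor(img, cv2.COLOR_GRAY2BGR)
for c in cnts:
    if cv2.contourArea(c) < 100:                           # drop small, noisy contours
        continue
    hull = cv2.convexHull(c)
    x, y, w, h = cv2.boundingRect(hull)
    cv2.rectangle(vis, (x, y), (x + w, y + h), (0, 255, 0), 1)

cv2.imshow("hulls", vis)
cv2.waitKey(0)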

Best answer

Since the input images vary quite a lot, the algorithm should be able to adapt to the situation. Because Canny is based on detecting high frequencies, my algorithm treats the sharpness of the image as the parameter used to adapt the preprocessing. I didn't want to spend a week working out a function for all of the data, so I applied a simple linear function based on 2 images and then tested it against a third one. Here are my results:

first result

second result

third result

Keep in mind that this is a very basic approach and is only meant to prove a point. It will need experimenting, testing and refining. The idea is to use Sobel and sum over all the pixels acquired. That, divided by the size of the image, should give you a basic estimation of the high-frequency response of the image. Now, experimentally, I found clipLimit values for the CLAHE filter that work in 2 test cases and found a linear function connecting the high-frequency response of the input with a clipLimit value that produces good results.

sobel = get_sobel(img)
clip_limit = (-2.556) * np.sum(sobel)/(img.shape[0] * img.shape[1]) + 26.557
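
To illustrate where such coefficients could come from: assuming a sharpness measurement and a hand-picked clipLimit for two calibration images, the slope and intercept of the linear function can be obtained with a degree-1 np.polyfit (the numbers below are made up for the example, not the author's measurements):

import numpy as np

# hypothetical calibration data for two images:
# mean Sobel response (np.sum(sobel) / (h * w)) and the clipLimit that worked well
sharpness = np.array([7.0, 9.5])
good_clip = np.array([8.6, 2.3])

slope, intercept = np.polyfit(sharpness, good_clip, 1)   # linear fit through the two points
print(slope, intercept)                                  # coefficients analogous to -2.556 and 26.557

def adaptive_clip_limit(mean_sobel):
    # same form as the line above: clip_limit = slope * sharpness + intercept
    return slope * mean_sobel + intercept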

That's the adaptive part. Now for the contours. It took me a while to figure out a correct way of filtering out the noise. I settled on a simple trick: using contour finding twice. First I use it to filter out unnecessary, noisy contours. Then I continue with some morphological magic to end up with correct blobs for the detected objects (more details in the code). The final step is filtering the bounding rectangles based on the calculated mean, since across all of the samples the blobs are of relatively similar size.

import cv2
import numpy as np


def unsharp_mask(img, blur_size = (5,5), imgWeight = 1.5, gaussianWeight = -0.5):
    gaussian = cv2.GaussianBlur(img, blur_size, 0)
    return cv2.addWeighted(img, imgWeight, gaussian, gaussianWeight, 0)


def smoother_edges(img, first_blur_size, second_blur_size = (5,5), imgWeight = 1.5, gaussianWeight = -0.5):
    img = cv2.GaussianBlur(img, first_blur_size, 0)
    return unsharp_mask(img, second_blur_size, imgWeight, gaussianWeight)


def close_image(img, size = (5,5)):
    kernel = np.ones(size, np.uint8)
    return cv2.morphologyEx(img, cv2.MORPH_CLOSE, kernel)


def open_image(img, size = (5,5)):
    kernel = np.ones(size, np.uint8)
    return cv2.morphologyEx(img, cv2.MORPH_OPEN, kernel)


def shrink_rect(rect, scale = 0.8):
    center, (width, height), angle = rect
    width = width * scale
    height = height * scale
    rect = center, (width, height), angle
    return rect


def clahe(img, clip_limit = 2.0):
    clahe = cv2.createCLAHE(clipLimit=clip_limit, tileGridSize=(5,5))
    return clahe.apply(img)


def get_sobel(img, size = -1):
    sobelx64f = cv2.Sobel(img, cv2.CV_64F, 2, 0, size)
    abs_sobel64f = np.absolute(sobelx64f)
    return np.uint8(abs_sobel64f)


img = cv2.imread("blobs4.jpg")
# save color copy for visualizing
imgc = img.copy()
# resize image to make the analytics easier (a form of filtering)
resize_times = 5
img = cv2.resize(img, None, img, fx = 1 / resize_times, fy = 1 / resize_times)
img = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# use sobel operator to evaluate high frequencies
sobel = get_sobel(img)
# experimentally calculated function - needs refining
clip_limit = (-2.556) * np.sum(sobel)/(img.shape[0] * img.shape[1]) + 26.557

# don't apply clahe if there is enough high freq to find blobs
if(clip_limit < 1.0):
    clip_limit = 0.1
# limit clahe if there's not enough details - needs more tests
if(clip_limit > 8.0):
    clip_limit = 8

# apply clahe and unsharp mask to improve high frequencies as much as possible
img = clahe(img, clip_limit)
img = unsharp_mask(img)

# filter the image to ensure edge continuity and perform Canny
# (values selected experimentally, using trackbars)
img_blurred = (cv2.GaussianBlur(img.copy(), (2*2+1,2*2+1), 0))
canny = cv2.Canny(img_blurred, 35, 95)

# find first contours
_, cnts, _ = cv2.findContours(canny.copy(), cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)

# prepare black image to draw contours
canvas = np.ones(img.shape, np.uint8)
for c in cnts:
    l = cv2.arcLength(c, False)
    x,y,w,h = cv2.boundingRect(c)
    aspect_ratio = float(w)/h

    # filter "bad" contours (values selected experimentally)
    if l > 500:
        continue
    if l < 20:
        continue
    if aspect_ratio < 0.2:
        continue
    if aspect_ratio > 5:
        continue
    if l > 150 and (aspect_ratio > 10 or aspect_ratio < 0.1):
        continue
    # draw all the other contours
    cv2.drawContours(canvas, [c], -1, (255, 255, 255), 2)

# perform closing and blurring, to close the gaps
canvas = close_image(canvas, (7,7))
img_blurred = cv2.GaussianBlur(canvas, (8*2+1,8*2+1), 0)
# smooth the edges a bit to make sure canny will find continuous edges
img_blurred = smoother_edges(img_blurred, (9,9))
kernel = np.ones((3,3), np.uint8)
# erode to make sure separate blobs are not touching each other
eroded = cv2.erode(img_blurred, kernel)
# perform necessary thresholding before Canny
_, im_th = cv2.threshold(eroded, 50, 255, cv2.THRESH_BINARY)
canny = cv2.Canny(im_th, 11, 33)

# find contours again. this time mostly the right ones
_, cnts, _ = cv2.findContours(canny.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

# calculate the mean area of the contours' bounding rectangles
sum_area = 0
rect_list = []
for i,c in enumerate(cnts):
    rect = cv2.minAreaRect(c)
    _, (width, height), _ = rect
    area = width*height
    sum_area += area
    rect_list.append(rect)
mean_area = sum_area / len(cnts)

# choose only rectangles that fulfill requirement:
# area > mean_area*0.6
for rect in rect_list:
    _, (width, height), _ = rect
    box = cv2.boxPoints(rect)
    box = np.int0(box * 5)
    area = width * height

    if(area > mean_area*0.6):
        # shrink the rectangles, since the shadows and reflections
        # make the resulting rectangle a bit bigger
        # the value was guessed - might need refining
        rect = shrink_rect(rect, 0.8)
        box = cv2.boxPoints(rect)
        box = np.int0(box * resize_times)
        cv2.drawContours(imgc, [box], 0, (0,255,0),1)

# resize for visualizing purposes
imgc = cv2.resize(imgc, None, imgc, fx = 0.5, fy = 0.5)
cv2.imshow("imgc", imgc)
cv2.imwrite("result3.png", imgc)
cv2.waitKey(0)

Overall, I think this is a very interesting problem, a little too big to be answered fully here. The approach I presented should be treated as a signpost rather than a complete solution. The basic idea is:

  1. Adaptive preprocessing.

  2. Finding contours twice: once for filtering and then for the actual classification.

  3. Filtering the blobs based on their mean size.

Thanks for the fun and good luck!

Regarding objective-c - OpenCV detect blobs on an image, a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/42519707/
