
opencv - Color detection on live video from a Pi


I plan to create an Ambilight-style effect behind my TV. I want to do this with a camera pointed at my TV; I think the easiest way is to use a simple network camera. I need color detection to detect the colors on the screen and convert them into RGB values for an LED strip.

I have a Raspberry Pi in the middle of my house that acts as a hub. I was thinking of using it like this:

An IP camera points at my screen. The video is processed on the Pi, converted into RGB values, and sent to an MQTT server. The colors are then received on my NodeMCU behind the TV.
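A minimal sketch of the Pi-to-MQTT part of that pipeline, assuming the paho-mqtt Python client (1.x constructor shown; 2.x additionally wants an explicit callback API version). The broker address and topic name are placeholders, and the color values would come from the detection step discussed in the answer below:

import json
import paho.mqtt.client as mqtt

# Hypothetical broker running on the home hub; adjust host/port/topic to your setup.
client = mqtt.Client()
client.connect("192.168.1.10", 1883)
client.loop_start()

def publish_segment_color(segment, rgb):
    # rgb is an (r, g, b) tuple, e.g. the median color of one screen patch
    payload = json.dumps({"segment": segment, "r": rgb[0], "g": rgb[1], "b": rgb[2]})
    client.publish("ambilight/segments", payload)

publish_segment_color(0, (255, 64, 0))

On the NodeMCU side you would subscribe to the same topic and map each segment's value onto the LED strip.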

How can I detect the colors at multiple points of a live stream on my Pi?

Best answer

If you can create any background color, the best approach is probably to compute the k-means or the median to get the "most popular" color. If the ambient light may differ from place to place, use ROIs at the edges of the image and check which color dominates in each region (by comparing the number of samples of the different colors).
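A minimal sketch of the k-means variant of that idea on a single edge ROI, using OpenCV's cv2.kmeans (the file name and ROI coordinates are just examples):

import cv2
import numpy as np

img = cv2.imread('frame.png', cv2.IMREAD_COLOR)   # placeholder file name
roi = img[:, :60]                                 # e.g. a 60 px strip on the left edge

samples = roi.reshape(-1, 3).astype(np.float32)   # one row per pixel, BGR
criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 10, 1.0)
_, labels, centers = cv2.kmeans(samples, 3, None, criteria, 5, cv2.KMEANS_RANDOM_CENTERS)

# The "most popular" color is the cluster with the most assigned pixels.
counts = np.bincount(labels.flatten())
dominant_bgr = centers[np.argmax(counts)].astype(np.uint8)
print(dominant_bgr)   # note: OpenCV stores channels as BGR, not RGB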

If you only have a limited set of colors (for example only R, G and B), you can simply check which channel has the highest intensity in the desired region.
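For that limited-color case, a small sketch (again assuming a BGR image loaded with OpenCV; file name and ROI are placeholders):

import cv2
import numpy as np

img = cv2.imread('frame.png', cv2.IMREAD_COLOR)   # placeholder file name
roi = img[:, :60]                                 # e.g. the left edge strip

channel_means = roi.mean(axis=(0, 1))             # mean intensity per channel (B, G, R)
dominant = ('blue', 'green', 'red')[int(np.argmax(channel_means))]
print(dominant)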

I wrote the code assuming you can create any RGB ambient color.

As a test image I used this one: [Input image]

The code is:

import cv2
import numpy as np

# Read an input image (in your case this will be an image from the camera)
img = cv2.imread('saul2.png', cv2.IMREAD_COLOR)

# The block_size defines how big the patches around an image are;
# the more LEDs you have and the more segments you want, the lower block_size can be
block_size = 60

# Get dimensions of an image
height, width, chan = img.shape

# Calculate number of patches along height and width (integer division)
h_steps = height // block_size
w_steps = width // block_size

# In one loop I calculate both: left and right ambient, or top and bottom
ambient_patch1 = np.zeros((60, 60, 3))
ambient_patch2 = np.zeros((60, 60, 3))

# Create output image (just for visualization:
# there will be an input image in the middle, 10px black border and ambient color)
output = cv2.copyMakeBorder(img, 70, 70, 70, 70, cv2.BORDER_CONSTANT, value=0)

for i in range(h_steps):
    # Get left and right region of an image
    left_roi = img[i * 60 : (i + 1) * 60, 0 : 60]
    right_roi = img[i * 60 : (i + 1) * 60, -61 : -1]

    left_med = np.median(left_roi, (0, 1))    # This is the actual BGR color for a given block (on the left)
    right_med = np.median(right_roi, (0, 1))  # and on the right

    # Create a patch having the ambient color - this is just for visualization
    ambient_patch1[:, :] = left_med
    ambient_patch2[:, :] = right_med

    # Put it in the output image (the additional 70 is because the input image is in the middle, shifted by 70px)
    output[70 + i * 60 : 70 + (i + 1) * 60, 0 : 60] = ambient_patch1
    output[70 + i * 60 : 70 + (i + 1) * 60, -61 : -1] = ambient_patch2


for i in range(w_steps):
    # Get top and bottom region of an image
    top_roi = img[0 : 60, i * 60 : (i + 1) * 60]
    bottom_roi = img[-61 : -1, i * 60 : (i + 1) * 60]

    top_med = np.median(top_roi, (0, 1))        # This is the actual BGR color for a given block (on top)
    bottom_med = np.median(bottom_roi, (0, 1))  # and on the bottom

    # Create a patch having the ambient color - this is just for visualization
    ambient_patch1[:, :] = top_med
    ambient_patch2[:, :] = bottom_med

    # Put it in the output image (the additional 70 is because the input image is in the middle, shifted by 70px)
    output[0 : 60, 70 + i * 60 : 70 + (i + 1) * 60] = ambient_patch1
    output[-61 : -1, 70 + i * 60 : 70 + (i + 1) * 60] = ambient_patch2

# Save output image
cv2.imwrite('saul_output.png', output)

And here is the result: [Output image]

Hope this helps!

Edit: two more examples: [Example 1] [Example 2]
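Since the question asks about a live stream rather than a single image file, the same per-patch median logic can be run per frame; a minimal sketch, assuming cv2.VideoCapture can open your IP camera's stream (the stream URL is a placeholder, and sending the values out, e.g. via MQTT, is left out):

import cv2
import numpy as np

block_size = 60
cap = cv2.VideoCapture('rtsp://192.168.1.20/stream')   # or 0 for a local webcam

while True:
    ok, frame = cap.read()
    if not ok:
        break
    height, width, _ = frame.shape
    left_colors = []
    for i in range(height // block_size):
        roi = frame[i * block_size : (i + 1) * block_size, :block_size]
        left_colors.append(np.median(roi, (0, 1)))   # BGR median of one left-edge patch
    # ...repeat for the right/top/bottom edges, then publish the values to your LEDs

cap.release()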

Regarding "opencv - Color detection on live video from a Pi", a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/42004052/
