A while back, after trying it on Android, I started using OpenCV again. Right now I'm working with OpenCV 2 on Python 2. So far I've been able to use it to get a live camera feed, and in a separate project I've implemented template matching: I supply a parent image and a small image that exists inside the parent, match the sub-image within the parent, and output another image with a red rectangle drawn over the match.
Here is the template matching code. Nothing special; it's the same as the one on the OpenCV website:
import cv2
import numpy as np
from matplotlib import pyplot as plt
img_rgb = cv2.imread('mario.jpg')
img_gray = cv2.cvtColor(img_rgb, cv2.COLOR_BGR2GRAY)
template = cv2.imread('mario_coin.png',0)
w, h = template.shape[::-1]
res = cv2.matchTemplate(img_gray,template,cv2.TM_CCOEFF_NORMED)
threshold = 0.8
loc = np.where(res >= threshold)
for pt in zip(*loc[::-1]):
    cv2.rectangle(img_rgb, pt, (pt[0] + w, pt[1] + h), (0, 0, 255), 2)
cv2.imwrite('res.png',img_rgb)
Then for my live camera feed, I have this:
# import the necessary packages
from picamera.array import PiRGBArray
from picamera import PiCamera
import time
import cv2
# initialize the camera and grab a reference to the raw camera capture
camera = PiCamera()
camera.resolution = (640, 480)
camera.framerate = 32
rawCapture = PiRGBArray(camera, size=(640, 480))
# allow the camera to warmup
time.sleep(0.1)
# capture frames from the camera
for frame in camera.capture_continuous(rawCapture, format="bgr", use_video_port=True):
    # grab the raw NumPy array representing the image, then initialize the timestamp
    # and occupied/unoccupied text
    image = frame.array

    # show the frame
    cv2.imshow("Frame", image)
    key = cv2.waitKey(1) & 0xFF

    # clear the stream in preparation for the next frame
    rawCapture.truncate(0)

    # if the `q` key was pressed, break from the loop
    if key == ord("q"):
        break
So far, both pieces of code work fine independently of each other. What I tried was to insert the template matching code into the camera stream code, in the section before anything is displayed.
This is what I came up with:
from picamera.array import PiRGBArray
from picamera import PiCamera
from matplotlib import pyplot as plt
import time
import cv2
import numpy as np
# initialize the camera and grab a reference to the raw camera capture
camera = PiCamera()
camera.resolution = (640, 480)
camera.framerate = 32
rawCapture = PiRGBArray(camera, size=(640, 480))
template = cv2.imread('mario_coin.png', 0)
# allow the camera to warmup
time.sleep(0.1)
# capture frames from the camera
for frame in camera.capture_continuous(rawCapture, format="bgr",
                                       use_video_port=True):
    # grab the raw NumPy array representing the image,
    # then initialize the timestamp
    # and occupied/unoccupied text
    image = frame.array

    # we do something here
    # we get the image or something then run some matching
    # if we get a match, we draw a square on it or something
    ## img_rbg = cv2.imread('mario.jpg')
    img_rbg = image

    ## img_gray = cv2.cvtColor(img_rgb, cv2.COLOR_RGB2GRAY)
    img_gray = cv2.cvtColor(image, cv2.COLOR_RGB2GRAY)

    w, h = template.shape[::-1]
    res = cv2.matchTemplate(img_gray, template, cv2.TM_CCOEFF_NORMED)
    threshold = 0.8
    loc = np.where(res >= threshold)
    for pt in zip(*loc[::-1]):
        ## cv2.rectangle(img_rbg, pt, (pt[0] + w, pt[1] + h),
        ##               (0,0,255), 2)
        cv2.rectangle(image, pt, (pt[0] + w, pt[1] + h),
                      (0, 0, 255), 2)
    ## image = img_rgb

    # show the frame
    cv2.imshow("Frame", image)
    key = cv2.waitKey(1) & 0xFF

    # clear the stream in preparation for the next frame
    rawCapture.truncate(0)

    # if the `q` key was pressed, break from the loop
    if key == ord("q"):
        break
What I was trying to do is use the image input from the camera instead of cv2.imread('sample.png'), and run my earlier template matching algorithm on it.
But what actually happens is that the camera turns on for a second (as indicated by its LED), then turns off and the program stops.
I honestly don't know what's going on. Does anyone know how to use the live camera feed as input for template matching?
I'm using a Raspberry Pi 2 with a v1.3 camera.
Best answer
I actually managed to solve it. I forgot I had posted a question here.
from picamera.array import PiRGBArray
from picamera import PiCamera
from matplotlib import pyplot as plt
import time
import cv2
import numpy as np
# initialize the camera and grab a reference to the raw camera capture
camera = PiCamera()
camera.resolution = (640, 480)
camera.framerate = 32
rawCapture = PiRGBArray(camera, size=(640, 480))
template = cv2.imread('mario_coin.png', 0)
# allow the camera to warmup
time.sleep(0.1)
# capture frames from the camera
for frame in camera.capture_continuous(rawCapture, format="bgr",
                                       use_video_port=True):
    # grab the raw NumPy array representing the image,
    # then initialize the timestamp
    # and occupied/unoccupied text
    image = frame.array

    # run the template matching against the current frame
    # and draw a rectangle on every match
    img_rgb = image
    img_gray = cv2.cvtColor(img_rgb, cv2.COLOR_BGR2GRAY)
    w, h = template.shape[::-1]

    res = cv2.matchTemplate(img_gray, template, cv2.TM_CCOEFF_NORMED)
    threshold = 0.8
    loc = np.where(res >= threshold)
    for pt in zip(*loc[::-1]):  # loc is (rows, cols); pt is (x, y)
        cv2.rectangle(image, pt, (pt[0] + w, pt[1] + h),
                      (0, 0, 255), 2)

    # show the frame
    cv2.imshow("Frame", image)
    key = cv2.waitKey(1) & 0xFF

    # clear the stream in preparation for the next frame
    rawCapture.truncate(0)

    # if the `q` key was pressed, break from the loop
    if key == ord("q"):
        break
Regarding Python OpenCV template matching using live camera feed frames as input, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/42559985/