python - Displaying a stream using FFmpeg, python and opencv


Situation:
I have a Basler camera connected to a Raspberry Pi, and I am trying to livestream its feed with FFmpeg to a TCP port on my Windows PC in order to monitor what is happening in front of the camera.
What works:
I managed to set up a Python script on the Raspberry Pi which records the frames, feeds them into a pipe and streams them to a TCP port. From that port, I can display the stream using FFplay.
My problem:
FFplay is great for testing quickly and easily whether you are heading in the right direction, but I want to "read" every frame from the stream, do some processing, and then display the stream with OpenCV. That, I cannot do yet.
As a minimal reproducible example, this is the code I use on the Raspberry Pi side:

command = ['ffmpeg',
           '-y',
           '-i', '-',
           '-an',
           '-c:v', 'mpeg4',
           '-r', '50',
           '-f', 'rtsp',
           '-rtsp_transport', 'tcp',
           'rtsp://192.168.1.xxxx:5555/live.sdp']

p = subprocess.Popen(command, stdin=subprocess.PIPE)

while camera.IsGrabbing():  # send images as stream until Ctrl-C
    grabResult = camera.RetrieveResult(100, pylon.TimeoutHandling_ThrowException)

    if grabResult.GrabSucceeded():
        image = grabResult.Array
        image = resize_compress(image)
        p.stdin.write(image)
    grabResult.Release()

On my PC, if I use the following FFplay command in a terminal, it works and displays the stream in real time:

ffplay -rtsp_flags listen rtsp://192.168.1.xxxx:5555/live.sdp?tcp

On my PC, if I use the following Python script, the stream starts, but it fails in the cv2.imshow function because I do not know how to decode it:
import subprocess
import cv2

command = ['C:/ffmpeg/bin/ffmpeg.exe',
           '-rtsp_flags', 'listen',
           '-i', 'rtsp://192.168.1.xxxx:5555/live.sdp?tcp?',
           '-']

p1 = subprocess.Popen(command, stdin=subprocess.PIPE, stdout=subprocess.PIPE)

while True:
    frame = p1.stdout.read()
    cv2.imshow('image', frame)
    cv2.waitKey(1)
Does anyone know what I need to change in either of these scripts to get this to work?
Thanks in advance for any tips.

Best answer

You may read the decoded frames from p1.stdout, convert them to NumPy arrays, and reshape them.

  • Change command to get the decoded frames in rawvideo format and BGR pixel format:

    command = ['C:/ffmpeg/bin/ffmpeg.exe',
               '-rtsp_flags', 'listen',
               '-i', 'rtsp://192.168.1.xxxx:5555/live.sdp?tcp?',
               '-f', 'image2pipe',     # Use image2pipe demuxer
               '-pix_fmt', 'bgr24',    # Set BGR pixel format
               '-vcodec', 'rawvideo',  # Get rawvideo output format.
               '-']
  • Read a raw video frame from p1.stdout:

    raw_frame = p1.stdout.read(width*height*3)
  • Convert the bytes read into a NumPy array, and reshape it to the video frame dimensions (np.frombuffer is used, since np.fromstring is deprecated for binary data):

    frame = np.frombuffer(raw_frame, np.uint8)
    frame = frame.reshape((height, width, 3))

  • Now you can show the frame by calling cv2.imshow('image', frame).
    The solution assumes you know the video frame size (width and height) in advance.
    The code sample below includes a part that reads width and height using cv2.VideoCapture, but I am not sure if it works in your case (due to '-rtsp_flags', 'listen'). (If it does work, you can try capturing with OpenCV instead of FFmpeg.)

The following code is a complete "working sample" that was tested using a public RTSP stream:
    import cv2
    import numpy as np
    import subprocess

    # Use public RTSP Stream for testing
    in_stream = 'rtsp://wowzaec2demo.streamlock.net/vod/mp4:BigBuckBunny_115k.mov'

    if False:
        # Read video width, height and framerate using OpenCV (use it if you don't know the size of the video frames).

        # Use public RTSP Streaming for testing:
        cap = cv2.VideoCapture(in_stream)

        framerate = cap.get(cv2.CAP_PROP_FPS)  # frame rate

        # Get resolution of input video
        width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
        height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))

        # Release VideoCapture - it was used just for getting video resolution
        cap.release()
    else:
        # Set the size here, if video frame size is known
        width = 240
        height = 160


    command = ['C:/ffmpeg/bin/ffmpeg.exe',
               #'-rtsp_flags', 'listen',  # The "listening" feature is not working (probably because the stream is from the web)
               '-rtsp_transport', 'tcp',  # Force TCP (for testing)
               '-max_delay', '30000000',  # 30 seconds (sometimes needed because the stream is from the web).
               '-i', in_stream,
               '-f', 'image2pipe',
               '-pix_fmt', 'bgr24',
               '-vcodec', 'rawvideo', '-an', '-']

    # Open sub-process that gets in_stream as input and uses stdout as an output PIPE.
    p1 = subprocess.Popen(command, stdout=subprocess.PIPE)

    while True:
        # read width*height*3 bytes from stdout (1 frame)
        raw_frame = p1.stdout.read(width*height*3)

        if len(raw_frame) != (width*height*3):
            print('Error reading frame!!!')  # Break the loop in case of an error (too few bytes were read).
            break

        # Convert the bytes read into a NumPy array, and reshape it to video frame dimensions
        frame = np.frombuffer(raw_frame, np.uint8)
        frame = frame.reshape((height, width, 3))

        # Show video frame
        cv2.imshow('image', frame)
        cv2.waitKey(1)

    # Wait one more second and terminate the sub-process
    try:
        p1.wait(1)
    except subprocess.TimeoutExpired:
        p1.terminate()

    cv2.destroyAllWindows()
Sample frame (just for fun): [image]
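The byte-to-frame conversion used in the read loop above can be exercised in isolation with synthetic data; a minimal sketch (the tiny 4x3 resolution and the dummy pixel bytes are made up for the demonstration):

```python
import numpy as np

# Pretend we read one raw BGR frame of a (hypothetical) 4x3 video from the pipe
width, height = 4, 3
raw_frame = bytes(range(width * height * 3))  # 36 bytes of dummy pixel data

# Same conversion as in the loop above: bytes -> flat uint8 array -> H x W x 3 image
frame = np.frombuffer(raw_frame, np.uint8)
frame = frame.reshape((height, width, 3))

print(frame.shape)  # (3, 4, 3)
print(frame[0, 0])  # first pixel: [B G R] = [0 1 2]
```

This also shows why the read size must be exactly width*height*3: any other byte count would make the reshape fail.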

Update:
Reading width and height using FFprobe:
When you don't know the video resolution in advance, you may use FFprobe to get the information.
Here is a code sample for reading width and height using FFprobe:
    import subprocess
    import json

    # Use public RTSP Stream for testing
    in_stream = 'rtsp://wowzaec2demo.streamlock.net/vod/mp4:BigBuckBunny_115k.mov'

    probe_command = ['C:/ffmpeg/bin/ffprobe.exe',
                     '-loglevel', 'error',
                     '-rtsp_transport', 'tcp',  # Force TCP (for testing)
                     '-select_streams', 'v:0',  # Select only video stream 0.
                     '-show_entries', 'stream=width,height',  # Select only width and height entries
                     '-of', 'json',  # Get output in JSON format
                     in_stream]

    # Read video width, height using FFprobe:
    p0 = subprocess.Popen(probe_command, stdout=subprocess.PIPE)
    probe_str = p0.communicate()[0]  # Read the content of p0.stdout (output of FFprobe) as a string
    p0.wait()
    probe_dct = json.loads(probe_str)  # Convert the string from JSON format to a dictionary.

    # Get width and height from the dictionary
    width = probe_dct['streams'][0]['width']
    height = probe_dct['streams'][0]['height']
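The JSON-parsing step can be checked without a live stream by feeding it a canned payload shaped like FFprobe's output; a small sketch (the helper name parse_probe_json is made up for the illustration):

```python
import json

def parse_probe_json(probe_str):
    """Extract (width, height) from an FFprobe JSON payload (hypothetical helper)."""
    probe_dct = json.loads(probe_str)
    stream = probe_dct['streams'][0]  # first (and only selected) video stream
    return stream['width'], stream['height']

# Canned payload shaped like the output of
# 'ffprobe -select_streams v:0 -show_entries stream=width,height -of json'
sample = '{"programs": [], "streams": [{"width": 240, "height": 160}]}'
print(parse_probe_json(sample))  # (240, 160)
```

Keeping the parsing in a function like this makes it easy to handle the error case where 'streams' comes back empty (e.g. when the probe fails).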

Regarding python - displaying a stream using FFmpeg, python and opencv, a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/66332694/
