
c++ - How do I render an image to /dev/video0 with v4l2loopback?


I have been trying to render images to /dev/video0. I can get something to display, but the output is garbled.

I started by trying to render a plain RGB24 image (based on this example: https://stackoverflow.com/a/44648382/3818491), but the result (shown below) is a scrambled image.

#include <stdio.h>
#include <fcntl.h>
#include <errno.h>
#include <unistd.h>
#include <iostream>
#include <sys/ioctl.h>
#include <linux/videodev2.h>

#include <CImg.h>

#define VIDEO_OUT "/dev/video0" // V4L2 loopback device

#define WIDTH 1280
#define HEIGHT 720

int main() {
    using namespace cimg_library;

    CImg<uint8_t> canvas(WIDTH, HEIGHT, 1, 3);
    const uint8_t red[] = {255, 0, 0};
    const uint8_t purple[] = {255, 0, 255};

    int fd;
    if ((fd = open(VIDEO_OUT, O_RDWR)) == -1) {
        std::cerr << "Unable to open video output!\n";
        return 1;
    }

    // Query the current format, then override the fields we care about.
    struct v4l2_format vid_format;
    vid_format.type = V4L2_BUF_TYPE_VIDEO_OUTPUT;

    if (ioctl(fd, VIDIOC_G_FMT, &vid_format) == -1) {
        std::cerr << "Unable to get video format data. Errno: " << errno << '\n';
        return 1;
    }

    size_t framesize = canvas.size();
    int width = canvas.width(), height = canvas.height();

    vid_format.fmt.pix.width = width;
    vid_format.fmt.pix.height = height;
    vid_format.fmt.pix.pixelformat = V4L2_PIX_FMT_RGB24;
    vid_format.fmt.pix.sizeimage = framesize;
    vid_format.fmt.pix.field = V4L2_FIELD_NONE;

    if (ioctl(fd, VIDIOC_S_FMT, &vid_format) == -1) {
        std::cerr << "Unable to set video format! Errno: " << errno << '\n';
        return 1;
    }

    std::cout << "Stream running!\n";
    while (true) {
        canvas.draw_plasma();
        canvas.draw_rectangle(100, 100, 100 + 100, 100 + 100, red, 1);
        canvas.draw_text(5, 5, "Hello World!", purple);
        canvas.draw_text(5, 20, "Image freshly rendered with the CImg Library!", red);

        write(fd, canvas.data(), framesize);
    }
}

[screenshot: garbled RGB24 output]

So I checked what (I think) /dev/video0 expects, and it appears to be YUV420P:

v4l2-ctl --list-formats-ext

ioctl: VIDIOC_ENUM_FMT
    Type: Video Capture

    [0]: 'YU12' (Planar YUV 4:2:0)
        Size: Discrete 1280x720
            Interval: Discrete 0.033s (30.000 fps)
So I tried converting the frame to that format (using this code for a quick test).

Adjusting the format to:

vid_format.fmt.pix.width       = width;
vid_format.fmt.pix.height      = height;
vid_format.fmt.pix.pixelformat = V4L2_PIX_FMT_YUV420;
vid_format.fmt.pix.sizeimage   = width * height * 3 / 2; // size of the YUV 4:2:0 buffer
vid_format.fmt.pix.field       = V4L2_FIELD_NONE;
The result is shown below (this seems to be down to how I assembled the YUV420 image, but it still renders incorrectly).

[screenshot: garbled YUV420P output]
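For reference, this is roughly how I understand the planar YU12/I420 layout that the driver advertises: a full-resolution Y plane followed by 2x2-subsampled U and V planes. The sketch below is illustrative only, not the exact test code linked above; the rgb_to_i420 helper is a made-up name, and it assumes interleaved RGB24 input rather than CImg's planar storage.

// Rough sketch of RGB24 -> planar YUV 4:2:0 (YU12/I420) packing, assuming an
// output buffer of width*height*3/2 bytes: full-resolution Y plane, then
// 2x2-subsampled U and V planes. Illustrative only.
#include <algorithm>
#include <cstdint>

static void rgb_to_i420(const uint8_t* rgb, uint8_t* out, int width, int height) {
    uint8_t* y_plane = out;
    uint8_t* u_plane = y_plane + width * height;
    uint8_t* v_plane = u_plane + (width / 2) * (height / 2);

    for (int j = 0; j < height; ++j) {
        for (int i = 0; i < width; ++i) {
            const uint8_t* p = rgb + (j * width + i) * 3; // interleaved RGB
            double r = p[0], g = p[1], b = p[2];
            y_plane[j * width + i] =
                std::clamp<int>(r * .299 + g * .587 + b * .114, 0, 255);
            if (j % 2 == 0 && i % 2 == 0) { // one U/V sample per 2x2 block
                int c = (j / 2) * (width / 2) + (i / 2);
                u_plane[c] = std::clamp<int>(r * -.169 + g * -.331 + b * .500 + 128, 0, 255);
                v_plane[c] = std::clamp<int>(r * .500 + g * -.419 + b * -.081 + 128, 0, 255);
            }
        }
    }
}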

What does /dev/video0 actually expect?

Best Answer

After a lot of hacking, I managed to produce a valid YUYV image/video to send to /dev/video0.

First, I create a buffer to hold the frame:

// Allocate a buffer for the YUYV frame
std::vector<uint8_t> buffer;
buffer.resize(vid_format.fmt.pix.sizeimage);

Then I write the current canvas to the buffer in YUYV format:

bool skip = true;
cimg_forXY(canvas, cx, cy) {
    size_t row = cy * width * 2; // 2 bytes per pixel in YUYV
    uint8_t r, g, b, y;
    r = canvas(cx, cy, 0);
    g = canvas(cx, cy, 1);
    b = canvas(cx, cy, 2);

    y = std::clamp<uint8_t>(r * .299000 + g * .587000 + b * .114000, 0, 255);
    buffer[row + cx * 2] = y;
    if (!skip) {
        // U and V are shared by each pair of pixels: Y0 U Y1 V
        uint8_t u, v;
        u = std::clamp<uint8_t>(r * -.168736 + g * -.331264 + b * .500000 + 128, 0, 255);
        v = std::clamp<uint8_t>(r * .500000 + g * -.418688 + b * -.081312 + 128, 0, 255);
        buffer[row + (cx - 1) * 2 + 1] = u;
        buffer[row + (cx - 1) * 2 + 3] = v;
    }
    skip = !skip;
}
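For completeness, the device format this conversion assumes (not shown in the snippet above) is a sketch along the lines of the question's setup, but using packed YUYV at 2 bytes per pixel:

// Sketch: switch the loopback device to packed YUYV (2 bytes per pixel).
// Assumes the same fd and vid_format setup as in the question.
vid_format.fmt.pix.width       = width;
vid_format.fmt.pix.height      = height;
vid_format.fmt.pix.pixelformat = V4L2_PIX_FMT_YUYV;
vid_format.fmt.pix.sizeimage   = width * height * 2;
vid_format.fmt.pix.field       = V4L2_FIELD_NONE;

if (ioctl(fd, VIDIOC_S_FMT, &vid_format) == -1) {
    std::cerr << "Unable to set video format! Errno: " << errno << '\n';
    return 1;
}

// Each frame is then sent with:
// write(fd, buffer.data(), buffer.size());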

Notes:

CImg's RGBtoYUV does an in-place RGB-to-YUV conversion, but for some reason calling it on a uint8_t canvas just zeroes it out.

It also has get_RGBtoYUV, which allocates and returns a CImg<float> canvas. I thought multiplying each value by 255 would scale it to a byte, but whatever I tried gave incorrect colours. Edit: I probably forgot the +128 bias (though I would still prefer not to reallocate for every frame).

My full code is here, if anyone wants to do something similar: https://gist.github.com/MacDue/36199c3f3ca04bd9fd40a1bc2067ef72

Regarding "c++ - How do I render an image to /dev/video0 with v4l2loopback?", the original question can be found on Stack Overflow: https://stackoverflow.com/questions/61582767/
