
ffmpeg - Converting from NV12 to RGB/YUV420P with libswscale


I am working on an application that needs to convert NV12 frames from the h264_cuvid decoder to RGB in order to modify those frames. I looked at this question, but I do not have a "stride" value.

My code is the following:

uint8_t *inData[2] = { videoFrame->data[0], videoFrame->data[0] + videoFrame->width * videoFrame->height };
int inLinesize[2] = { videoFrame->width, videoFrame->width };

sws_scale(convert_yuv_to_rgb, inData, inLinesize, 0, videoFrame->height, aux_frame->data, aux_frame->linesize);

But it does not work. The problem seems to be in the chroma, because I can see the luminance plane correctly.
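Note that the decoded AVFrame already carries the per-plane strides in linesize, and for NV12 the decoder fills both data[0] and data[1], so nothing has to be computed by hand. A minimal sketch of that sws_scale path, assuming videoFrame is the NV12 frame from the decoder and rgbFrame is a pre-allocated AV_PIX_FMT_RGB32 frame of the same size (both names are placeholders, not from the original post):

#include <libswscale/swscale.h>

/* Sketch only: convert one NV12 frame to RGB32 using the strides the decoder
 * already provides in videoFrame->linesize. */
struct SwsContext *convert_yuv_to_rgb = sws_getContext(
        videoFrame->width, videoFrame->height, AV_PIX_FMT_NV12,
        videoFrame->width, videoFrame->height, AV_PIX_FMT_RGB32,
        SWS_BILINEAR, NULL, NULL, NULL);

sws_scale(convert_yuv_to_rgb,
          (const uint8_t * const *) videoFrame->data, videoFrame->linesize,
          0, videoFrame->height,
          rgbFrame->data, rgbFrame->linesize);

sws_freeContext(convert_yuv_to_rgb);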

Best answer

I ended up using a video filter based on this example.

char args[512];
int ret;
AVFilter *buffersrc = avfilter_get_by_name("buffer");
AVFilter *buffersink = avfilter_get_by_name("buffersink");
AVFilterInOut *outputs = avfilter_inout_alloc();
AVFilterInOut *inputs = avfilter_inout_alloc();
AVFilterGraph *filter_graph = avfilter_graph_alloc();
AVBufferSinkParams *buffersink_params;
enum AVPixelFormat pix_fmts[] = { AV_PIX_FMT_RGB32, AV_PIX_FMT_NONE };

/* buffer video source: the decoded frames from the decoder will be inserted here. */
snprintf(args, sizeof(args),
"video_size=%dx%d:pix_fmt=%d:time_base=%d/%d:pixel_aspect=%d/%d",
inStream.width, inStream.height, inStream.pix_fmt,
inStream.time_base.num, inStream.time_base.den,
inStream.sample_aspect_ratio.num, inStream.sample_aspect_ratio.den);
ret = avfilter_graph_create_filter(&buffersrc_ctx_to_rgb_, buffersrc, "in", args, NULL, filter_graph);

if (ret < 0) {
throw SVSException(QString("Could not create filter graph, error: %1").arg(svsAvErrorToFormattedString(ret)));
}

/* buffer video sink: to terminate the filter chain. */
buffersink_params = av_buffersink_params_alloc();
buffersink_params->pixel_fmts = pix_fmts;
ret = avfilter_graph_create_filter(&buffersink_ctx_to_rgb_, buffersink, "out", NULL, buffersink_params, filter_graph);

if (ret < 0) {
throw SVSException(QString("Cannot create buffer sink, error: %1").arg(svsAvErrorToFormattedString(ret)));
}

/* Endpoints for the filter graph. */
outputs->name = av_strdup("in");
outputs->filter_ctx = buffersrc_ctx_to_rgb_;
outputs->pad_idx = 0;
outputs->next = NULL;

inputs->name = av_strdup("out");
inputs->filter_ctx = buffersink_ctx_to_rgb_;
inputs->pad_idx = 0;
inputs->next = NULL;

QString filter_description = "format=pix_fmts=rgb32";
if ((ret = avfilter_graph_parse_ptr(filter_graph, filter_description.toStdString().c_str(), &inputs, &outputs, NULL)) < 0) {
svsCritical("", QString("Could not add the filter to graph, error: %1").arg(svsAvErrorToFormattedString(ret)))
}

if ((ret = avfilter_graph_config(filter_graph, NULL)) < 0) {
svsCritical("", QString("Could not configure the graph, error: %1").arg(svsAvErrorToFormattedString(ret)))
}

return;
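The answer stops at building the graph. A minimal sketch (not part of the original answer) of how a decoded frame could then be pushed through it, assuming decoded_frame holds the NV12 picture from the decoder (the name is a placeholder):

ret = av_buffersrc_add_frame_flags(buffersrc_ctx_to_rgb_, decoded_frame, AV_BUFFERSRC_FLAG_KEEP_REF);
if (ret < 0) {
    throw SVSException(QString("Error feeding the filter graph: %1").arg(svsAvErrorToFormattedString(ret)));
}

AVFrame *rgb_frame = av_frame_alloc();
ret = av_buffersink_get_frame(buffersink_ctx_to_rgb_, rgb_frame);
if (ret >= 0) {
    /* rgb_frame now contains the RGB32 picture and can be modified here */
}
av_frame_free(&rgb_frame);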

Before encoding, I created another filter graph in a similar way to convert from RGB back to YUV420P.
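Presumably only the pixel formats change in that second graph; a hedged sketch of the parts that differ from the code above:

/* Sketch: the buffer source is now described as RGB32 and the sink/filter ask for yuv420p. */
enum AVPixelFormat pix_fmts_yuv[] = { AV_PIX_FMT_YUV420P, AV_PIX_FMT_NONE };

snprintf(args, sizeof(args),
         "video_size=%dx%d:pix_fmt=%d:time_base=%d/%d:pixel_aspect=%d/%d",
         inStream.width, inStream.height, AV_PIX_FMT_RGB32,
         inStream.time_base.num, inStream.time_base.den,
         inStream.sample_aspect_ratio.num, inStream.sample_aspect_ratio.den);

QString filter_description = "format=pix_fmts=yuv420p";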

Regarding ffmpeg - converting from NV12 to RGB/YUV420P with libswscale, a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/45934019/
