
c++ - Decoding H264 frames from an RTP stream

Reposted · Author: 太空狗 · Updated: 2023-10-29 19:58:42

I am using the live555 and ffmpeg libraries to receive and decode an RTP H264 stream from a server. The video stream was encoded by ffmpeg, using the Baseline profile and

x264_param_default_preset(m_params, "veryfast", "zerolatency")

I read this topic and now add the SPS and PPS data to every frame I receive from the network:

void ClientSink::NewFrameHandler(unsigned frameSize, unsigned numTruncatedBytes,
                                 timeval presentationTime, unsigned durationInMicroseconds)
{
    ...
    EncodedFrame tmp;
    tmp.m_frame = std::vector<unsigned char>(m_tempBuffer.data(), m_tempBuffer.data() + frameSize);
    tmp.m_duration = durationInMicroseconds;
    tmp.m_pts = presentationTime;

    // Add SPS and PPS data to the frame; TODO: some devices may already
    // send SPS and PPS data inside the frame.
    tmp.m_frame.insert(tmp.m_frame.begin(), m_spsPpsData.cbegin(), m_spsPpsData.cend());

    emit newEncodedFrame( SharedEncodedFrame(tmp) );
    m_frameCounter++;

    this->continuePlaying();
}
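For the decoder to find the parameter sets, m_spsPpsData must be in Annex-B form: each NAL unit (SPS, then PPS, then the frame) is prefixed with a 00 00 00 01 start code. A minimal self-contained sketch of assembling such a buffer (the helper names and the byte values in the test are illustrative, not taken from the original code):

```cpp
#include <cstdint>
#include <vector>

// Prepend a 00 00 00 01 Annex-B start code to a raw NAL unit.
static std::vector<uint8_t> annexb(const std::vector<uint8_t>& nal)
{
    std::vector<uint8_t> out = {0x00, 0x00, 0x00, 0x01};
    out.insert(out.end(), nal.begin(), nal.end());
    return out;
}

// Build the buffer handed to the decoder:
// SPS + PPS + frame data, each with its own start code.
static std::vector<uint8_t> buildAccessUnit(const std::vector<uint8_t>& sps,
                                            const std::vector<uint8_t>& pps,
                                            const std::vector<uint8_t>& frame)
{
    std::vector<uint8_t> out = annexb(sps);
    const std::vector<uint8_t> p = annexb(pps);
    const std::vector<uint8_t> f = annexb(frame);
    out.insert(out.end(), p.begin(), p.end());
    out.insert(out.end(), f.begin(), f.end());
    return out;
}
```

With live555, the raw SPS/PPS bytes typically come from the SDP's sprop-parameter-sets attribute rather than from the RTP payload itself.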

I receive these frames in the decoder:

bool H264Decoder::decodeFrame(SharedEncodedFrame orig_frame)
{
    ...
    while (m_packet.size > 0)
    {
        int got_picture;
        int len = avcodec_decode_video2(m_decoderContext, m_picture, &got_picture, &m_packet);
        if (len < 0)
        {
            emit criticalError(QString("Decoding error"));
            return false;
        }
        if (got_picture)
        {
            std::vector<unsigned char> result;
            this->storePicture(result);

            if (m_picture->format == AVPixelFormat::AV_PIX_FMT_YUV420P)
            {
                Frame_t result_rgb;
                if (!convert_yuv420p_to_rgb32(result, m_picture->width, m_picture->height, result_rgb))
                {
                    emit criticalError(QString("Failed to convert YUV420p image into rgb32; can't create QImage!"));
                    return false;
                }
                unsigned char* copy_img = new unsigned char[result_rgb.size()];
                // A copy is needed because QImage shares the caller's buffer;
                // using the QImage after result_rgb is destroyed would crash.
                std::copy(result_rgb.cbegin(), result_rgb.cend(), copy_img);
                QImage img = QImage(copy_img, m_picture->width, m_picture->height, QImage::Format_RGB32,
                                    [](void* array)
                                    {
                                        delete[] static_cast<unsigned char*>(array);
                                    }, copy_img);
                img.save(QString("123.bmp"));
                emit newDecodedFrame(img);
            }
        }
        ...
    }
    ...
}

avcodec_decode_video2 decodes the frames without any error message, but the decoded frames are invalid after the conversion (from yuv420p to rgb32). Sample images are available at this link.

Do you know what I am doing wrong?

Best Answer

I suspect the error is in your convert_yuv420p_to_rgb32() code. Try this:

static SwsContext *m_swsCtx = NULL;
QImage frame = QImage ( m_picture->width, m_picture->height,
                        QImage::Format_RGB32 );
m_swsCtx = sws_getCachedContext ( m_swsCtx, m_picture->width,
                                  m_picture->height, AV_PIX_FMT_YUV420P,
                                  m_picture->width, m_picture->height,
                                  AV_PIX_FMT_RGB32, SWS_BICUBIC,
                                  NULL, NULL, NULL );
uint8_t *dstSlice[] = { frame.bits() };
int dstStride = frame.width() * 4;
sws_scale ( m_swsCtx, m_picture->data, m_picture->linesize,
            0, m_picture->height, dstSlice, &dstStride );

If you haven't already included and linked libswscale, you will need to do so.
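For intuition about what sws_scale computes (and where hand-rolled converters typically go wrong, e.g. swapped U/V planes or missing clamping), the full-range BT.601 conversion for a single pixel looks roughly like this. This is an illustrative sketch, not libswscale's implementation: libswscale uses fixed-point arithmetic and, for limited-range video, different offsets and scale factors.

```cpp
#include <algorithm>
#include <cstdint>

// Clamp a value to the representable [0, 255] byte range.
static uint8_t clamp8(double v)
{
    return static_cast<uint8_t>(std::max(0.0, std::min(255.0, v)));
}

// Full-range BT.601 YUV -> RGB for one pixel: the chroma samples
// are centered on 128, and each output channel must be clamped.
static void yuv_to_rgb(uint8_t y, uint8_t u, uint8_t v,
                       uint8_t& r, uint8_t& g, uint8_t& b)
{
    const double Y = y, U = u - 128.0, V = v - 128.0;
    r = clamp8(Y + 1.402 * V);
    g = clamp8(Y - 0.344136 * U - 0.714136 * V);
    b = clamp8(Y + 1.772 * U);
}
```

With neutral chroma (U = V = 128) every pixel maps to a gray of the same value as Y, which is a quick sanity check for any converter.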

Note: you don't need to send SPS/PPS with every frame (sending them on keyframes is enough), but it doesn't hurt either.

Regarding c++ - decoding H264 frames from an RTP stream, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/18857737/
