objective-c - How to correctly close FFmpeg streams and an AVFormatContext without leaking memory?

Reposted, author: 行者123, updated: 2023-12-04 22:46:57

I have built an application that uses FFmpeg to connect to remote IP cameras in order to receive video and audio frames via RTSP 2.0.

The application is built with Xcode 10-11 and Objective-C, with a custom FFmpeg build configuration.

The architecture is the following:

MyApp


Document_0

RTSPContainerObject_0
RTSPObject_0

RTSPContainerObject_1
RTSPObject_1

...
Document_1
...

Goal:
  • After closing Document_0, no FFmpeg objects should be leaked.
  • The closing process should stop frame reading and destroy all objects which use FFmpeg.

  • Problem:

    (screenshot of Xcode's memory graph debugger)
  • Somehow Xcode's memory debugger shows two instances of MyApp.

  • Facts:
  • macOS's Activity Monitor doesn't show two instances of MyApp.
  • macOS's Activity Monitor doesn't show any instances of FFmpeg or other child processes.
  • The issue is not related to some leftover memory due to a late memory snapshot, since it can be reproduced easily.
  • Xcode's memory debugger shows that the second instance only has the RTSPObject's AVFormatContext and no other objects.
  • The second instance has an AVFormatContext, and the RTSPObject still has a pointer to the AVFormatContext.

  • Facts:
  • Opening and closing the second document, Document_1, leads to the same problem and leaves two objects leaked. This means there is a bug that creates a scalability problem: more and more memory becomes used and unavailable.

  • Here is my termination code:
    - (void)terminate
    {
        // * Video and audio frame provisioning termination *
        [self stopVideoStream];
        [self stopAudioStream];
        // *

        // * Video codec termination *
        avcodec_free_context(&_videoCodecContext); // NULL pointer safe.
        self.videoCodecContext = NULL;
        // *

        // * Audio codec termination *
        avcodec_free_context(&_audioCodecContext); // NULL pointer safe.
        self.audioCodecContext = NULL;
        // *

        if (self.packet)
        {
            // Free the packet that was allocated by av_read_frame.
            av_packet_unref(&packet); // The documentation doesn't mention NULL safety.
            self.packet = NULL;
        }

        if (self.currentAudioPacket)
        {
            av_packet_unref(_currentAudioPacket);
            self.currentAudioPacket = NULL;
        }

        // Free raw frame data.
        av_freep(&_rawFrameData); // NULL pointer safe.

        // Free the swscaler context swsContext.
        self.isFrameConversionContextAllocated = NO;
        sws_freeContext(scallingContext); // NULL pointer safe.

        [self.audioPacketQueue removeAllObjects];
        self.audioPacketQueue = nil;

        self.audioPacketQueueLock = nil;
        self.packetQueueLock = nil;
        self.audioStream = nil;
        BXLogInDomain(kLogDomainSources, kLogLevelVerbose, @"%s:%d: All streams have been terminated!", __FUNCTION__, __LINE__);

        // * Session context termination *
        AVFormatContext *pFormatCtx = self.sessionContext;
        BOOL shouldProceedWithInputSessionTermination = self.isInputStreamOpen && self.shouldTerminateStreams && pFormatCtx;
        NSLog(@"\nTerminating session context...");
        if (shouldProceedWithInputSessionTermination)
        {
            NSLog(@"\nTerminating...");
            //av_write_trailer(pFormatCtx);
            // Discard all internally buffered data.
            avformat_flush(pFormatCtx); // The documentation doesn't mention NULL safety.
            // Close an opened input AVFormatContext and free it and all its contents.
            // WARNING: Closing a non-opened stream will cause avformat_close_input to crash.
            avformat_close_input(&pFormatCtx); // The documentation doesn't mention NULL safety.
            NSLog(@"Logging leftovers - %p, %p %p", self.sessionContext, _sessionContext, pFormatCtx);
            avformat_free_context(pFormatCtx);

            NSLog(@"Logging content = %c", *self.sessionContext);
            //avformat_free_context(pFormatCtx); - Not needed because avformat_close_input is closing it.
            self.sessionContext = NULL;
        }
        // *
    }

    Important: The termination sequence is:
        New frame will be read.
    -[(RTSPObject)StreamInput currentVideoFrameDurationSec]
    -[(RTSPObject)StreamInput frameDuration:]
    -[(RTSPObject)StreamInput currentCGImageRef]
    -[(RTSPObject)StreamInput convertRawFrameToRGB]
    -[(RTSPObject)StreamInput pixelBufferFromImage:]
    -[(RTSPObject)StreamInput cleanup]
    -[(RTSPObject)StreamInput dealloc]
    -[(RTSPObject)StreamInput stopVideoStream]
    -[(RTSPObject)StreamInput stopAudioStream]

    Terminating session context...
    Terminating...
    Logging leftovers - 0x109ec6400, 0x109ec6400 0x109ec6400
    Logging content = \330
    -[Document dealloc]

    Solutions that did not work:
  • Changing the order of object releases (AVFormatContext was freed first, but it didn't lead to any change).
  • Calling RTSPObject's cleanup method much earlier to give FFmpeg more time to handle object releases.
  • Reading lots of SO answers and FFmpeg documentation to find a clean cleanup process, or newer code which might highlight why object release doesn't happen properly.

  • I am currently reading the documentation on AVFormatContext, since I believe that I am forgetting to release something. This belief is based on the memory debugger's output showing that the AVFormatContext is still around.

    Here is my creation code:
    #pragma mark # Helpers - Start

    - (NSError *)openInputStreamWithVideoStreamId:(int)videoStreamId
                                    audioStreamId:(int)audioStreamId
                                         useFirst:(BOOL)useFirstStreamAvailable
                                           inInit:(BOOL)isInitProcess
    {
        // NSLog(@"%s", __PRETTY_FUNCTION__); // RTSP
        self.status = StreamProvisioningStatusStarting;
        AVCodec *decoderCodec;
        NSString *rtspURL = self.streamURL;
        NSString *errorMessage = nil;
        NSError *error = nil;

        self.sessionContext = NULL;
        self.sessionContext = avformat_alloc_context();

        AVFormatContext *pFormatCtx = self.sessionContext;
        if (!pFormatCtx)
        {
            // Create approp error.
            return error;
        }

        // MUST be called before avformat_open_input().
        av_dict_free(&_sessionOptions);

        self.sessionOptions = 0;
        if (self.usesTcp)
        {
            // "rtsp_transport" - Set RTSP transport protocols.
            // Allowed are: udp_multicast, tcp, udp, http.
            av_dict_set(&_sessionOptions, "rtsp_transport", "tcp", 0);
        }
        av_dict_set(&_sessionOptions, "rtsp_transport", "tcp", 0);

        // Open an input stream and read the header with the demuxer options.
        // WARNING: The stream must be closed with avformat_close_input().
        if (avformat_open_input(&pFormatCtx, rtspURL.UTF8String, NULL, &_sessionOptions) != 0)
        {
            // WARNING: Note that a user-supplied AVFormatContext (pFormatCtx) will be freed on failure.
            self.isInputStreamOpen = NO;
            // Create approp error.
            return error;
        }

        self.isInputStreamOpen = YES;

        // The user-supplied AVFormatContext pFormatCtx might have been modified.
        self.sessionContext = pFormatCtx;

        // Retrieve stream information.
        if (avformat_find_stream_info(pFormatCtx, NULL) < 0)
        {
            // Create approp error.
            return error;
        }

        // Find the first video stream.
        int streamCount = pFormatCtx->nb_streams;

        if (streamCount == 0)
        {
            // Create approp error.
            return error;
        }

        int noStreamsAvailable = pFormatCtx->streams == NULL;

        if (noStreamsAvailable)
        {
            // Create approp error.
            return error;
        }

        // Result. An index can change, an identifier shouldn't.
        self.selectedVideoStreamId = STREAM_NOT_FOUND;
        self.selectedAudioStreamId = STREAM_NOT_FOUND;

        // Fallback.
        int firstVideoStreamIndex = STREAM_NOT_FOUND;
        int firstAudioStreamIndex = STREAM_NOT_FOUND;

        self.selectedVideoStreamIndex = STREAM_NOT_FOUND;
        self.selectedAudioStreamIndex = STREAM_NOT_FOUND;

        for (int i = 0; i < streamCount; i++)
        {
            // Looking for video streams.
            AVStream *stream = pFormatCtx->streams[i];
            if (!stream) { continue; }
            AVCodecParameters *codecPar = stream->codecpar;
            if (!codecPar) { continue; }

            if (codecPar->codec_type == AVMEDIA_TYPE_VIDEO)
            {
                if (stream->id == videoStreamId)
                {
                    self.selectedVideoStreamId = videoStreamId;
                    self.selectedVideoStreamIndex = i;
                }

                if (firstVideoStreamIndex == STREAM_NOT_FOUND)
                {
                    firstVideoStreamIndex = i;
                }
            }
            // Looking for audio streams.
            if (codecPar->codec_type == AVMEDIA_TYPE_AUDIO)
            {
                if (stream->id == audioStreamId)
                {
                    self.selectedAudioStreamId = audioStreamId;
                    self.selectedAudioStreamIndex = i;
                }

                if (firstAudioStreamIndex == STREAM_NOT_FOUND)
                {
                    firstAudioStreamIndex = i;
                }
            }
        }

        // Use the first video and audio stream available (if possible).

        if (self.selectedVideoStreamIndex == STREAM_NOT_FOUND && useFirstStreamAvailable && firstVideoStreamIndex != STREAM_NOT_FOUND)
        {
            self.selectedVideoStreamIndex = firstVideoStreamIndex;
            self.selectedVideoStreamId = pFormatCtx->streams[firstVideoStreamIndex]->id;
        }

        if (self.selectedAudioStreamIndex == STREAM_NOT_FOUND && useFirstStreamAvailable && firstAudioStreamIndex != STREAM_NOT_FOUND)
        {
            self.selectedAudioStreamIndex = firstAudioStreamIndex;
            self.selectedAudioStreamId = pFormatCtx->streams[firstAudioStreamIndex]->id;
        }

        if (self.selectedVideoStreamIndex == STREAM_NOT_FOUND)
        {
            // Create approp error.
            return error;
        }

        // See AVCodecID for codec listing.

        // * Video codec setup:
        // 1. Find the decoder for the video stream with the given codec id.
        AVStream *stream = pFormatCtx->streams[self.selectedVideoStreamIndex];
        if (!stream)
        {
            // Create approp error.
            return error;
        }
        AVCodecParameters *codecPar = stream->codecpar;
        if (!codecPar)
        {
            // Create approp error.
            return error;
        }

        decoderCodec = avcodec_find_decoder(codecPar->codec_id);
        if (decoderCodec == NULL)
        {
            // Create approp error.
            return error;
        }

        // Get a pointer to the codec context for the video stream.
        // WARNING: The resulting AVCodecContext should be freed with avcodec_free_context().
        // Replaced:
        // self.videoCodecContext = pFormatCtx->streams[self.selectedVideoStreamIndex]->codec;
        // With:
        self.videoCodecContext = avcodec_alloc_context3(decoderCodec);
        avcodec_parameters_to_context(self.videoCodecContext, codecPar);

        self.videoCodecContext->thread_count = 4;
        NSString *description = [NSString stringWithUTF8String:decoderCodec->long_name];

        // 2. Open codec.
        if (avcodec_open2(self.videoCodecContext, decoderCodec, NULL) < 0)
        {
            // Create approp error.
            return error;
        }

        // * Audio codec setup:
        if (self.selectedAudioStreamIndex > -1)
        {
            [self setupAudioDecoder];
        }

        // Allocate a raw video frame data structure. Contains audio and video data.
        self.rawFrameData = av_frame_alloc();

        self.outputWidth = self.videoCodecContext->width;
        self.outputHeight = self.videoCodecContext->height;

        if (!isInitProcess)
        {
            // Triggering notifications in the init process won't change the UI since the object is created locally.
            // All objects which need data access to this object will not be able to get it. That's why we don't
            // notify anyone about the changes.
            [NSNotificationCenter.defaultCenter postNotificationName:NSNotification.rtspVideoStreamSelectionChanged
                                                              object:nil userInfo:self.selectedVideoStream];

            [NSNotificationCenter.defaultCenter postNotificationName:NSNotification.rtspAudioStreamSelectionChanged
                                                              object:nil userInfo:self.selectedAudioStream];
        }

        return nil;
    }

    Update 1

    The initial architecture allowed the use of any given thread. Most of the code below would mostly run on the main thread. This solution was not suitable because opening the stream input can take several seconds, during which the main thread is blocked while waiting for a network response inside FFmpeg. To solve this issue I have implemented the following solution:
  • Creation and initial setup are only allowed on a background_thread (see code snippet "1" below).
  • Changes are allowed on the current_thread(Any).
  • Termination is allowed on the current_thread(Any).

  • After removing the main thread checks and the dispatch_asyncs to background threads, the leaks have stopped and I can no longer reproduce the issue:
    // Code that produces the issue.
    dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
        // 1 - Create and do initial setup.
        // This block creates the issue.
        self.rtspObject = [[RTSPObject alloc] initWithURL: ... ];
        [self.rtspObject openInputStreamWithVideoStreamId: ...
                                            audioStreamId: ...
                                                 useFirst: ...
                                                   inInit: ...];
    });

    I still don't understand why Xcode's memory debugger says that this block is retained.

    Any advice or idea is welcome.

    Best Answer

    If you use avformat_open_input to open a file, you must use avformat_close_input to free it. Using avformat_free_context alone will leak all the IO-related allocations.

    Regarding objective-c - How to correctly close FFmpeg streams and an AVFormatContext without leaking memory?, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/58450311/
