
ios - How to get a CMSampleBufferRef from an AudioQueueBufferRef

Reposted · Author: 塔克拉玛干 · Updated: 2023-11-02 20:09:19

I'm using a private library built for live streaming from the iPhone. Every time a frame of audio is recorded, it calls a delegate function:

void MyAQInputCallback(void *inUserData,
                       AudioQueueRef inQueue,
                       AudioQueueBufferRef inBuffer,
                       const AudioTimeStamp *inStartTime,
                       UInt32 inNumPackets,
                       const AudioStreamPacketDescription *inPacketDesc);

Now, how can I append this inBuffer to my AVAssetWriterInput in the usual way:

[self.audioWriterInput appendSampleBuffer:sampleBuffer];

I'm thinking there may be some way to convert the AudioQueueBufferRef into a CMSampleBufferRef.

Thanks.

Best answer

I doubt you're still looking for a solution two years later, but in case someone in a similar situation comes across this question (as I did), here is my solution.

My audio queue callback function calls the appendAudioBuffer function below, passing it the AudioQueueBufferRef's data pointer and its length (mAudioDataByteSize).

void appendAudioBuffer(void* pBuffer, long pLength)
{
    // CMSampleBuffers require a CMBlockBuffer to hold the media data; we
    // create a blockBuffer here from the AudioQueueBuffer's data.
    CMBlockBufferRef blockBuffer;
    OSStatus status = CMBlockBufferCreateWithMemoryBlock(kCFAllocatorDefault,
                                                         pBuffer,
                                                         pLength,
                                                         kCFAllocatorNull, // the block buffer does not own pBuffer
                                                         NULL,
                                                         0,
                                                         pLength,
                                                         kCMBlockBufferAssureMemoryNowFlag,
                                                         &blockBuffer);
    if (status != kCMBlockBufferNoErr) {
        NSLog(@"CMBlockBufferCreateWithMemoryBlock failed: %d", (int)status);
        return;
    }

    // Timestamp of the current sample
    CFAbsoluteTime currentTime = CFAbsoluteTimeGetCurrent();
    CFTimeInterval elapsedTime = currentTime - mStartTime;
    CMTime timeStamp = CMTimeMake(elapsedTime * mTimeScale, mTimeScale);

    // Number of samples in the buffer
    long nSamples = pLength / mWaveRecorder->audioFormat()->mBytesPerFrame;

    CMSampleBufferRef sampleBuffer;
    status = CMAudioSampleBufferCreateWithPacketDescriptions(kCFAllocatorDefault,
                                                             blockBuffer,
                                                             true,
                                                             NULL,
                                                             NULL,
                                                             mAudioFormatDescription,
                                                             nSamples,
                                                             timeStamp,
                                                             NULL, // no packet descriptions for LPCM
                                                             &sampleBuffer);
    if (status != noErr) {
        NSLog(@"CMAudioSampleBufferCreateWithPacketDescriptions failed: %d", (int)status);
        CFRelease(blockBuffer);
        return;
    }

    // Add the audio sample to the asset writer input
    if ([mAudioWriterInput isReadyForMoreMediaData]) {
        if (![mAudioWriterInput appendSampleBuffer:sampleBuffer]) {
            NSLog(@"appendSampleBuffer failed");
        }
    } else {
        // Either just log an error, or queue the CMSampleBuffer somewhere
        // and append it later, when the AVAssetWriterInput is ready.
        NSLog(@"AVAssetWriterInput not ready for more media data");
    }

    CFRelease(sampleBuffer);
    CFRelease(blockBuffer);
}

Note that the audio is not compressed when appendAudioBuffer is called; the format is specified as LPCM (which is why I pass no packet descriptions: LPCM has none). The AVAssetWriterInput handles the compression. I originally tried passing AAC data to the AVAssetWriter, but that caused too many complications and I couldn't get it to work.

Regarding "ios - How to get a CMSampleBufferRef from an AudioQueueBufferRef", a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/20212320/
