
objective-c - AVCaptureOutput callback into an Audio Unit via TPCircularBuffer

Reposted. Author: 行者123. Updated: 2023-11-30 17:32:52

I am building an AUGraph and trying to get audio from the input device via the AVCaptureAudioDataOutput delegate method.

Using AVCaptureSession is the outcome of the problem explained here. I did get audio playback working with this approach via CARingBuffer, as described in the book Learning Core Audio. However, fetching data from a CARingBuffer requires supplying a valid sample time, and once I stop the AVCaptureSession, the sample times coming from the AVCaptureOutput and from the unit's input callback are no longer in sync. So I am now trying Michael Tyson's TPCircularBuffer, which by everything I have read seems excellent. But even though I found some examples, I cannot get any audio out of it (or only crackling).

My graph looks like this:

AVCaptureSession -> callback -> AUConverter -> ... -> HALOutput
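The ring buffer sitting between the two callbacks can be pictured as a plain byte FIFO: the capture delegate produces bytes, the render callback consumes them, and no sample-time bookkeeping is needed (unlike CARingBuffer). Below is a simplified, single-threaded sketch of the idea behind TPCircularBuffer; the real library is lock-free and uses a virtual-memory mirroring trick, and all names here are illustrative, not the library's API:

```c
#include <stdint.h>
#include <string.h>

// Minimal byte ring buffer, illustrative only (not thread-safe like TPCircularBuffer).
typedef struct {
    uint8_t  data[4096];
    uint32_t head;   // write position
    uint32_t tail;   // read position
    uint32_t fill;   // bytes currently buffered
} MiniRing;

// Producer side: analogous to TPCircularBufferProduceBytes.
// Returns 0 and drops the data if there is not enough free space.
static int MiniRingProduce(MiniRing *rb, const void *src, uint32_t len) {
    if (len > sizeof(rb->data) - rb->fill) return 0;
    for (uint32_t i = 0; i < len; i++) {
        rb->data[rb->head] = ((const uint8_t *)src)[i];
        rb->head = (rb->head + 1) % sizeof(rb->data);
    }
    rb->fill += len;
    return 1;
}

// Consumer side: analogous to TPCircularBufferTail + TPCircularBufferConsume.
// Copies up to `len` bytes into dst and returns how many were actually read.
static uint32_t MiniRingConsume(MiniRing *rb, void *dst, uint32_t len) {
    uint32_t n = len < rb->fill ? len : rb->fill;
    for (uint32_t i = 0; i < n; i++) {
        ((uint8_t *)dst)[i] = rb->data[rb->tail];
        rb->tail = (rb->tail + 1) % sizeof(rb->data);
    }
    rb->fill -= n;
    return n;
}
```

The point of this model: the consumer simply takes whatever bytes are available, so producer and consumer never have to agree on sample times.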

Here is the code of my AVCaptureOutput delegate method:

- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
{
    CMFormatDescriptionRef formatDescription = CMSampleBufferGetFormatDescription(sampleBuffer);
    const AudioStreamBasicDescription *sampleBufferASBD = CMAudioFormatDescriptionGetStreamBasicDescription(formatDescription);

    if (kAudioFormatLinearPCM != sampleBufferASBD->mFormatID) {
        NSLog(@"Bad format or bogus ASBD!");
        return;
    }

    if ((sampleBufferASBD->mChannelsPerFrame != _audioStreamDescription.mChannelsPerFrame) || (sampleBufferASBD->mSampleRate != _audioStreamDescription.mSampleRate)) {
        _audioStreamDescription = *sampleBufferASBD;
        NSLog(@"sample input format changed");
    }

    // Get the sample buffer's audio into _currentInputAudioBufferList
    CMSampleBufferGetAudioBufferListWithRetainedBlockBuffer(sampleBuffer,
                                                            NULL,
                                                            _currentInputAudioBufferList,
                                                            CAAudioBufferList::CalculateByteSize(_audioStreamDescription.mChannelsPerFrame),
                                                            kCFAllocatorSystemDefault,
                                                            kCFAllocatorSystemDefault,
                                                            kCMSampleBufferFlag_AudioBufferList_Assure16ByteAlignment,
                                                            &_blockBufferOut);

    // Push the captured bytes into the ring buffer for the render callback to consume
    TPCircularBufferProduceBytes(&_circularBuffer, _currentInputAudioBufferList->mBuffers[0].mData, _currentInputAudioBufferList->mBuffers[0].mDataByteSize);
}

And the render callback:

OSStatus PushCurrentInputBufferIntoAudioUnit(void *inRefCon,
                                             AudioUnitRenderActionFlags *ioActionFlags,
                                             const AudioTimeStamp *inTimeStamp,
                                             UInt32 inBusNumber,
                                             UInt32 inNumberFrames,
                                             AudioBufferList *ioData)
{
    ozAVHardwareInput *hardWareInput = (ozAVHardwareInput *)inRefCon;
    TPCircularBuffer circularBuffer = [hardWareInput circularBuffer];

    Float32 *targetBuffer = (Float32 *)ioData->mBuffers[0].mData;

    int32_t availableBytes;
    TPCircularBufferTail(&circularBuffer, &availableBytes);
    UInt32 dataSize = ioData->mBuffers[0].mDataByteSize;

    if (availableBytes > ozAudioDataSizeForSeconds(3.)) {

        // There is too much audio data to play -> clear buffer & mute output
        TPCircularBufferClear(&circularBuffer);

        for (UInt32 i = 0; i < ioData->mNumberBuffers; i++)
            memset(ioData->mBuffers[i].mData, 0, ioData->mBuffers[i].mDataByteSize);

    } else if (availableBytes > ozAudioDataSizeForSeconds(0.5)) {

        // Enough data buffered -> play it
        Float32 *cbuffer = (Float32 *)TPCircularBufferTail(&circularBuffer, &availableBytes);
        int32_t min = MIN(dataSize, availableBytes);

        memcpy(targetBuffer, cbuffer, min);
        TPCircularBufferConsume(&circularBuffer, min);
        ioData->mBuffers[0].mDataByteSize = min;

    } else {

        // Not enough data to play yet -> mute output
        for (UInt32 i = 0; i < ioData->mNumberBuffers; i++)
            memset(ioData->mBuffers[i].mData, 0, ioData->mBuffers[i].mDataByteSize);
    }

    return noErr;
}
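The helper ozAudioDataSizeForSeconds used above is the poster's own; it presumably converts a duration into a byte count from the stream format. A minimal sketch of that conversion for interleaved linear PCM (the function name and its derivation from the ASBD fields are assumptions, not the poster's code):

```c
#include <stdint.h>

// Bytes needed to hold `seconds` of interleaved linear PCM audio.
// sampleRate, channelsPerFrame, and bitsPerChannel mirror the ASBD fields
// mSampleRate, mChannelsPerFrame, and mBitsPerChannel.
static uint32_t AudioDataSizeForSeconds(double seconds,
                                        double sampleRate,
                                        uint32_t channelsPerFrame,
                                        uint32_t bitsPerChannel) {
    uint32_t bytesPerFrame = channelsPerFrame * (bitsPerChannel / 8);
    return (uint32_t)(seconds * sampleRate * bytesPerFrame);
}
```

For example, at 44.1 kHz stereo Float32 (4 bytes per sample), one second is 44100 * 2 * 4 = 352,800 bytes, so the 0.5-second "start playing" threshold above corresponds to 176,400 buffered bytes.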

The TPCircularBuffer is being fed from the AudioBufferList, but nothing comes out, or sometimes only crackling.

What am I doing wrong?

Best answer

An audio unit render callback should always return inNumberFrames worth of samples. Check how much data your callback is actually returning.
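In other words, rather than shrinking mDataByteSize when the ring buffer runs short, copy the bytes that are available and zero-fill the remainder so the callback always hands back a full inNumberFrames of audio. A plain-C sketch of that fill step (illustrative names, not the poster's code):

```c
#include <stdint.h>
#include <string.h>

// Fill `dst` (dstBytes long) from `src` (srcBytes available),
// zero-padding the tail so the caller always gets a full buffer.
// Returns the number of bytes actually consumed from `src`;
// the render callback would pass that count to TPCircularBufferConsume.
static uint32_t FillOrPad(uint8_t *dst, uint32_t dstBytes,
                          const uint8_t *src, uint32_t srcBytes) {
    uint32_t n = srcBytes < dstBytes ? srcBytes : dstBytes;
    memcpy(dst, src, n);
    memset(dst + n, 0, dstBytes - n);  // silence for the missing frames
    return n;
}
```

With this, the output buffer is always inNumberFrames * bytes-per-frame long; a short read just produces a partially silent buffer instead of a truncated one, which is what causes the crackling.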

Regarding objective-c - AVCaptureOutput callback into an Audio Unit via TPCircularBuffer, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/23978741/
