
ios - Passing CMSampleBufferRef audio data through to the audio output jack

Reposted · Author: 技术小花猫 · Updated: 2023-10-29 10:28:21

I am developing an application in which I need to pass captured audio through to the audio output jack while simultaneously recording and saving video.

I studied Apple's aurioTouch sample code and implemented audio passthrough.

I also implemented video recording via AVCaptureSession. Each of these two features works perfectly on its own.

But when I merge the two features, the audio passthrough stops working, because AVCaptureSession takes over the app's audio session.
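(One common cause of this symptom, offered here as an assumption rather than something confirmed in the question, is that AVCaptureSession reconfigures the shared audio session for record-only use. A sketch of forcing a category that permits simultaneous input and output, run before starting the capture session; `captureSession` is a hypothetical name for your AVCaptureSession instance, and `automaticallyConfiguresApplicationAudioSession` requires iOS 7+:)

```objectivec
// Sketch: keep the shared audio session in play-and-record mode and stop
// AVCaptureSession from reconfiguring it. Run before [captureSession startRunning].
NSError *error = nil;
AVAudioSession *session = [AVAudioSession sharedInstance];
[session setCategory:AVAudioSessionCategoryPlayAndRecord
         withOptions:AVAudioSessionCategoryOptionDefaultToSpeaker
               error:&error];
[session setActive:YES error:&error];

captureSession.automaticallyConfiguresApplicationAudioSession = NO; // iOS 7+
```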

I also tried passing through the audio data obtained from the AVCaptureSession delegate method. Here is my code:

OSStatus err = noErr;


AudioBufferList audioBufferList;
CMBlockBufferRef blockBuffer;
CMSampleBufferGetAudioBufferListWithRetainedBlockBuffer(sampleBuffer, NULL, &audioBufferList, sizeof(audioBufferList), NULL, NULL, 0, &blockBuffer);
CMItemCount numberOfFrames = CMSampleBufferGetNumSamples(sampleBuffer); // corresponds to the number of CoreAudio audio frames

currentSampleTime += (double)numberOfFrames;

AudioTimeStamp timeStamp;
memset(&timeStamp, 0, sizeof(AudioTimeStamp));
timeStamp.mSampleTime = currentSampleTime;
timeStamp.mFlags |= kAudioTimeStampSampleTimeValid;

AudioUnitRenderActionFlags flags = 0;
aurioTouchAppDelegate *THIS = (aurioTouchAppDelegate *)[[UIApplication sharedApplication]delegate];
err = AudioUnitRender(self.rioUnit, &flags, &timeStamp, 1, numberOfFrames, &audioBufferList);

if (err) { printf("PerformThru: error %d\n", (int)err); }

But it returns an error. Please advise what else I can try. I have gone through a lot of documentation and code but could not find any solution. Please help.

Best Answer

Here is some better error-handling code. What error does it return? You can look up the error's description by searching for the code in the documentation.

static void CheckError(OSStatus error, const char *operation) {
    if (error == noErr) return;

    char str[20] = {};
    // see if it appears to be a 4-char code
    *(UInt32 *)(str + 1) = CFSwapInt32HostToBig(error);
    if (isprint(str[1]) && isprint(str[2]) && isprint(str[3]) && isprint(str[4])) {
        str[0] = str[5] = '\'';
        str[6] = '\0';
    } else {
        sprintf(str, "%d", (int)error);
    }

    fprintf(stderr, "Error: %s (%s)\n", operation, str);
    exit(1);
}

- (void)yourFunction:(CMSampleBufferRef)sampleBuffer
{
    AudioBufferList audioBufferList;
    CMBlockBufferRef blockBuffer;
    CMSampleBufferGetAudioBufferListWithRetainedBlockBuffer(sampleBuffer, NULL, &audioBufferList, sizeof(audioBufferList), NULL, NULL, 0, &blockBuffer);
    CMItemCount numberOfFrames = CMSampleBufferGetNumSamples(sampleBuffer); // number of Core Audio frames in the buffer

    currentSampleTime += (double)numberOfFrames;

    AudioTimeStamp timeStamp;
    memset(&timeStamp, 0, sizeof(AudioTimeStamp));
    timeStamp.mSampleTime = currentSampleTime;
    timeStamp.mFlags |= kAudioTimeStampSampleTimeValid;

    AudioUnitRenderActionFlags flags = 0;
    CheckError(AudioUnitRender(self.rioUnit, &flags, &timeStamp, 1, numberOfFrames, &audioBufferList),
               "Error with AudioUnitRender");

    CFRelease(blockBuffer); // the retained block buffer must be released to avoid a leak
}

Regarding ios - passing CMSampleBufferRef audio data through to the audio output jack, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/15507479/
