
ios - Error converting an AudioBufferList to a CMBlockBufferRef


I'm trying to use AVAssetReader to read a video file and pass the audio to Core Audio for processing (adding effects and such) before saving it back to disk with AVAssetWriter. I'd like to point out that if I set the componentSubType on the AudioComponentDescription of my output node to RemoteIO, things play correctly through the speakers. This makes me confident that my AUGraph is set up properly, because I can hear everything working. Instead, though, I set the subtype to GenericOutput so I can do the rendering myself and get back the adjusted audio.
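For reference, the only difference between the two setups is the componentSubType on the output node's description; a minimal sketch (field values assumed, not copied from the original project):

AudioComponentDescription outputDesc = {0};
outputDesc.componentType         = kAudioUnitType_Output;
// Swap in kAudioUnitSubType_RemoteIO here to hear the graph through the speakers.
outputDesc.componentSubType      = kAudioUnitSubType_GenericOutput;
outputDesc.componentManufacturer = kAudioUnitManufacturer_Apple;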

I read the audio in and pass each CMSampleBufferRef to copyBuffer. This puts the audio into a circular buffer that will be read from later.

- (void)copyBuffer:(CMSampleBufferRef)buf {
    if (_readyForMoreBytes == NO) {
        return;
    }

    AudioBufferList abl;
    CMBlockBufferRef blockBuffer;
    CMSampleBufferGetAudioBufferListWithRetainedBlockBuffer(buf, NULL, &abl, sizeof(abl), NULL, NULL, kCMSampleBufferFlag_AudioBufferList_Assure16ByteAlignment, &blockBuffer);

    UInt32 size = (unsigned int)CMSampleBufferGetTotalSampleSize(buf);
    BOOL bytesCopied = TPCircularBufferProduceBytes(&circularBuffer, abl.mBuffers[0].mData, size);

    if (!bytesCopied) {
        // The circular buffer is full: stop accepting input and stash this frame in a rescue buffer.
        _readyForMoreBytes = NO;

        if (size > kRescueBufferSize) {
            NSLog(@"Unable to allocate enough space for rescue buffer, dropping audio frame");
        } else {
            if (rescueBuffer == nil) {
                rescueBuffer = malloc(kRescueBufferSize);
            }

            rescueBufferSize = size;
            memcpy(rescueBuffer, abl.mBuffers[0].mData, size);
        }
    }

    CFRelease(blockBuffer);
    if (!self.hasBuffer && bytesCopied > 0) {
        self.hasBuffer = YES;
    }
}
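For completeness, the circular buffer has to be initialized once before copyBuffer and the render callback touch it; a minimal sketch, assuming Michael Tyson's TPCircularBuffer library and a made-up kBufferLength capacity:

// Hypothetical one-time setup, e.g. in the player's initializer (not shown in the question).
static const int32_t kBufferLength = 1024 * 256; // assumed capacity in bytes

TPCircularBufferInit(&circularBuffer, kBufferLength);
_readyForMoreBytes = YES;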

Next I call processOutput. This does a manual render on the outputUnit. When AudioUnitRender is called, it invokes the playbackCallback below, which is what is hooked up as the input callback on my first node. playbackCallback pulls the data off the circular buffer and feeds it into the audioBufferList that was passed in. Like I said before, if the output is set to RemoteIO this results in the audio playing correctly through the speakers. When AudioUnitRender finishes, it returns noErr and the bufferList object contains valid data. When I call CMSampleBufferSetDataBufferFromAudioBufferList, though, I get kCMSampleBufferError_RequiredParameterMissing (-12731).

-(CMSampleBufferRef)processOutput
{
    if (self.offline == NO) {
        return NULL;
    }

    AudioUnitRenderActionFlags flags = 0;
    AudioTimeStamp inTimeStamp;
    memset(&inTimeStamp, 0, sizeof(AudioTimeStamp));
    inTimeStamp.mFlags = kAudioTimeStampSampleTimeValid;
    UInt32 busNumber = 0;

    UInt32 numberFrames = 512;
    inTimeStamp.mSampleTime = 0;
    UInt32 channelCount = 2;

    // One AudioBuffer per channel, each holding numberFrames 32-bit samples.
    AudioBufferList *bufferList = (AudioBufferList *)malloc(sizeof(AudioBufferList) + sizeof(AudioBuffer) * (channelCount - 1));
    bufferList->mNumberBuffers = channelCount;
    for (int j = 0; j < channelCount; j++) {
        AudioBuffer buffer = {0};
        buffer.mNumberChannels = 1;
        buffer.mDataByteSize = numberFrames * sizeof(SInt32);
        buffer.mData = calloc(numberFrames, sizeof(SInt32));

        bufferList->mBuffers[j] = buffer;
    }

    // Pull-render through the graph; this drives playbackCallback below.
    CheckError(AudioUnitRender(outputUnit, &flags, &inTimeStamp, busNumber, numberFrames, bufferList), @"AudioUnitRender outputUnit");

    CMSampleBufferRef sampleBufferRef = NULL;
    CMFormatDescriptionRef format = NULL;
    CMSampleTimingInfo timing = { CMTimeMake(1, 44100), kCMTimeZero, kCMTimeInvalid };
    AudioStreamBasicDescription audioFormat = self.audioFormat;
    CheckError(CMAudioFormatDescriptionCreate(kCFAllocatorDefault, &audioFormat, 0, NULL, 0, NULL, NULL, &format), @"CMAudioFormatDescriptionCreate");
    CheckError(CMSampleBufferCreate(kCFAllocatorDefault, NULL, false, NULL, NULL, format, numberFrames, 1, &timing, 0, NULL, &sampleBufferRef), @"CMSampleBufferCreate");
    // This is the call that fails with kCMSampleBufferError_RequiredParameterMissing (-12731).
    CheckError(CMSampleBufferSetDataBufferFromAudioBufferList(sampleBufferRef, kCFAllocatorDefault, kCFAllocatorDefault, 0, bufferList), @"CMSampleBufferSetDataBufferFromAudioBufferList");

    return sampleBufferRef;
}
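For context, a sketch of how a caller might drain this into the writer; audioWriterInput is a hypothetical AVAssetWriterInput, not something from the code above:

CMSampleBufferRef sample = [self processOutput];
if (sample != NULL) {
    if (audioWriterInput.isReadyForMoreMediaData) {
        [audioWriterInput appendSampleBuffer:sample];
    }
    CFRelease(sample);
}

Note that, as written, processOutput never frees the malloc'd bufferList (or its per-channel calloc'd mData blocks) and never releases format, so each call leaks them.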


static OSStatus playbackCallback(void *inRefCon,
                                 AudioUnitRenderActionFlags *ioActionFlags,
                                 const AudioTimeStamp *inTimeStamp,
                                 UInt32 inBusNumber,
                                 UInt32 inNumberFrames,
                                 AudioBufferList *ioData)
{
    int numberOfChannels = ioData->mBuffers[0].mNumberChannels;
    SInt16 *outSample = (SInt16 *)ioData->mBuffers[0].mData;

    // Zero the output first so we emit silence if the circular buffer runs dry.
    memset(outSample, 0, ioData->mBuffers[0].mDataByteSize);

    MyAudioPlayer *p = (__bridge MyAudioPlayer *)inRefCon;

    if (p.hasBuffer) {
        int32_t availableBytes;
        SInt16 *bufferTail = TPCircularBufferTail([p getBuffer], &availableBytes);

        // kUnitSize is presumably sizeof(SInt16); its definition is not shown here.
        int32_t requestedBytesSize = inNumberFrames * kUnitSize * numberOfChannels;

        int bytesToRead = MIN(availableBytes, requestedBytesSize);
        memcpy(outSample, bufferTail, bytesToRead);
        TPCircularBufferConsume([p getBuffer], bytesToRead);

        if (availableBytes <= requestedBytesSize * 2) {
            [p setReadyForMoreBytes];
        }

        if (availableBytes <= requestedBytesSize) {
            p.hasBuffer = NO;
        }
    }
    return noErr;
}

The CMSampleBufferRef I'm passing in looks valid (below is a dump of the object from the debugger):

CMSampleBuffer 0x7f87d2a03120 retainCount: 1 allocator: 0x103333180
    invalid = NO
    dataReady = NO
    makeDataReadyCallback = 0x0
    makeDataReadyRefcon = 0x0
    formatDescription = <CMAudioFormatDescription 0x7f87d2a02b20 [0x103333180]> {
        mediaType:'soun'
        mediaSubType:'lpcm'
        mediaSpecific: {
            ASBD: {
                mSampleRate: 44100.000000
                mFormatID: 'lpcm'
                mFormatFlags: 0xc2c
                mBytesPerPacket: 2
                mFramesPerPacket: 1
                mBytesPerFrame: 2
                mChannelsPerFrame: 1
                mBitsPerChannel: 16 }
            cookie: {(null)}
            ACL: {(null)}
        }
        extensions: {(null)}
    }
    sbufToTrackReadiness = 0x0
    numSamples = 512
    sampleTimingArray[1] = {
        {PTS = {0/1 = 0.000}, DTS = {INVALID}, duration = {1/44100 = 0.000}},
    }
    dataBuffer = 0x0

The buffer list looks like this:

Printing description of bufferList:  
(AudioBufferList *) bufferList = 0x00007f87d280b0a0
Printing description of bufferList->mNumberBuffers:
(UInt32) mNumberBuffers = 2
Printing description of bufferList->mBuffers:
(AudioBuffer [1]) mBuffers = {
[0] = (mNumberChannels = 1, mDataByteSize = 2048, mData = 0x00007f87d3008c00)
}

I'm really at a loss here and hoping someone can help. Thanks.

In case it matters, I'm debugging this in the iOS 8.3 simulator, and the audio comes from an mp4 that I shot on my iPhone 6 and then saved to my laptop.

I have read the following questions, but still to no avail; nothing works.

How to convert AudioBufferList to CMSampleBuffer?

Converting an AudioBufferList to a CMSampleBuffer Produces Unexpected Results

CMSampleBufferSetDataBufferFromAudioBufferList returning error 12731

core audio offline rendering GenericOutput

Update

I looked into this more closely and noticed that right before AudioUnitRender runs, my AudioBufferList looks like this:

bufferList->mNumberBuffers = 2,
bufferList->mBuffers[0].mNumberChannels = 1,
bufferList->mBuffers[0].mDataByteSize = 2048

mDataByteSize is numberFrames*sizeof(SInt32), i.e. 512 * 4. When I look at the AudioBufferList passed into playbackCallback, the list looks like this:

bufferList->mNumberBuffers = 1,
bufferList->mBuffers[0].mNumberChannels = 1,
bufferList->mBuffers[0].mDataByteSize = 1024

I'm not really sure where the other buffer went, or where the other 1024 bytes of size came from...

If, when the call to Render finishes, I do something like this:

AudioBufferList newbuff;
newbuff.mNumberBuffers = 1;
newbuff.mBuffers[0] = bufferList->mBuffers[0];
newbuff.mBuffers[0].mDataByteSize = 1024;

and pass newbuff to CMSampleBufferSetDataBufferFromAudioBufferList, the error goes away.

If I instead try to size the BufferList so it has 1 for mNumberBuffers, or so its mDataByteSize is numberFrames*sizeof(SInt16), I get -50 when calling AudioUnitRender.

Update 2

I hooked up a render callback so I could inspect the output while playing the sound through the speakers. I noticed that the output going to the speakers also has an AudioBufferList with 2 buffers, and that mDataByteSize during the input callback is 1024 while in the render callback it is 2048, which is the same thing I see when calling AudioUnitRender manually. When I inspect the data in the rendered AudioBufferList, I notice that the bytes in the two buffers are identical, which means I can simply ignore the second buffer. But I'm not sure how to handle the fact that the data is 2048 bytes after being rendered instead of the 1024 it was coming in as. Any ideas on why this could be happening? Is it in more of a raw form after going through the audio graph, and is that why the size is doubling?

Best Answer

It sounds like the problem you're running into is a channel-count mismatch. The reason you're seeing data in 2048-byte chunks instead of 1024-byte chunks is that the graph is feeding you back two channels (stereo). Check that all of your audio units are correctly configured to use mono throughout the entire audio graph, including the Pitch Unit and any audio format descriptions.
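For illustration, a minimal sketch of pinning a unit to mono; the field values are assumed to match the 16-bit, 44.1 kHz lpcm format in the dump above, and someUnit stands in for each unit in the graph:

AudioStreamBasicDescription monoFormat = {0};
monoFormat.mSampleRate       = 44100.0;
monoFormat.mFormatID         = kAudioFormatLinearPCM;
monoFormat.mFormatFlags      = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked;
monoFormat.mChannelsPerFrame = 1; // mono everywhere in the graph
monoFormat.mBitsPerChannel   = 16;
monoFormat.mBytesPerFrame    = 2; // 16 bits * 1 channel
monoFormat.mFramesPerPacket  = 1;
monoFormat.mBytesPerPacket   = 2;

CheckError(AudioUnitSetProperty(someUnit, kAudioUnitProperty_StreamFormat,
                                kAudioUnitScope_Input, 0,
                                &monoFormat, sizeof(monoFormat)),
           @"set input stream format");
CheckError(AudioUnitSetProperty(someUnit, kAudioUnitProperty_StreamFormat,
                                kAudioUnitScope_Output, 0,
                                &monoFormat, sizeof(monoFormat)),
           @"set output stream format");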

One thing to pay special attention to is that calls to AudioUnitSetProperty can fail, so be sure to wrap those in CheckError() as well.
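The CheckError helper's body isn't shown in the question; a minimal sketch consistent with how it is called above:

static void CheckError(OSStatus error, NSString *operation)
{
    if (error == noErr) return;
    // Log and keep going; a stricter version might abort here.
    NSLog(@"Error: %@ failed (OSStatus %d)", operation, (int)error);
}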

This question, ios - Error converting an AudioBufferList to a CMBlockBufferRef, was originally asked on Stack Overflow: https://stackoverflow.com/questions/31505111/
