
ios - AVAssetWriterInput appendSampleBuffer succeeds, but error kCMSampleBufferError_BufferHasNoSampleSizes is logged from CMSampleBufferGetSampleSize


Starting with the iOS 12.4 beta releases, calling appendSampleBuffer on an AVAssetWriterInput logs the following error:

CMSampleBufferGetSampleSize signalled err=-12735 (kCMSampleBufferError_BufferHasNoSampleSizes) (sbuf->numSampleSizeEntries == 0) at /BuildRoot/Library/Caches/com.apple.xbs/Sources/EmbeddedCoreMediaFramework/EmbeddedCoreMedia-2290.12/Sources/Core/FigSampleBuffer/FigSampleBuffer.c:4153

We did not see this error on earlier releases, nor on the iOS 13 betas. Has anyone else run into this issue, and can you share any information that might help us resolve it?

More details

Our application records video and audio using two AVAssetWriterInput objects: one for the video input (appending pixel buffers) and one for the audio input, which appends audio buffers created with CMSampleBufferCreate. (See the code below.)

Since our audio data is non-interleaved, after creating the buffer we convert the data to an interleaved format and then pass the buffer to appendSampleBuffer.

Relevant code

// Creating the audio buffer:
CMSampleBufferRef buff = NULL;
CMSampleTimingInfo timing = {
    CMTimeMake(1, _asbdFormat.mSampleRate), // duration of one sample (frame)
    currentAudioTime,                       // presentation time stamp
    kCMTimeInvalid                          // decode time stamp (not needed)
};

OSStatus status = CMSampleBufferCreate(kCFAllocatorDefault,
                                       NULL,                           // dataBuffer (attached later)
                                       false,                          // dataReady
                                       NULL,                           // makeDataReadyCallback
                                       NULL,                           // makeDataReadyRefcon
                                       _cmFormat,                      // formatDescription
                                       (CMItemCount)(*inNumberFrames), // numSamples
                                       1,                              // numSampleTimingEntries
                                       &timing,                        // sampleTimingArray
                                       0,                              // numSampleSizeEntries
                                       NULL,                           // sampleSizeArray
                                       &buff);

// checking for error... (none returned)

// Converting from non-interleaved to interleaved.
float zero = 0.f;
vDSP_vclr(_interleavedABL.mBuffers[0].mData, 1, numFrames * 2);
// Channel L: copy into the even-indexed slots (stride 2) of the interleaved buffer.
vDSP_vsadd(ioData->mBuffers[0].mData, 1, &zero, _interleavedABL.mBuffers[0].mData, 2, numFrames);
// Channel R: copy into the odd-indexed slots (stride 2) of the interleaved buffer.
vDSP_vsadd(ioData->mBuffers[1].mData, 1, &zero, (float*)(_interleavedABL.mBuffers[0].mData) + 1, 2, numFrames);

_interleavedABL.mBuffers[0].mDataByteSize = _interleavedASBD.mBytesPerFrame * numFrames;
status = CMSampleBufferSetDataBufferFromAudioBufferList(buff,
                                                        kCFAllocatorDefault, // blockBufferStructureAllocator
                                                        kCFAllocatorDefault, // blockBufferBlockAllocator
                                                        0,                   // flags
                                                        &_interleavedABL);

// checking for error... (none returned)

if (_assetWriterAudioInput.readyForMoreMediaData) {

    BOOL success = [_assetWriterAudioInput appendSampleBuffer:buff]; // THIS PRODUCES THE ERROR.

    // success comes back YES, but the error above is logged - on the iOS 12.4 betas (not on 12.3 or earlier).
}
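
For reference, the logged condition can also be observed directly on the buffer itself. Below is a minimal diagnostic sketch (an assumption on our part, using the buff variable created above, not code from the original post); a sample buffer created with numSampleSizeEntries of 0 reports zero sizes here, which matches the sbuf->numSampleSizeEntries == 0 condition in the error:

// Diagnostic only: ask Core Media what sample sizes the buffer carries.
// With numSampleSizeEntries == 0 at creation time, both values come back as 0.
size_t firstSampleSize = CMSampleBufferGetSampleSize(buff, 0);   // size of the sample at index 0
size_t totalSampleSize = CMSampleBufferGetTotalSampleSize(buff); // sum over all samples
NSLog(@"sample 0 size = %zu, total sample size = %zu", firstSampleSize, totalSampleSize);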

Before that happens, here is how _assetWriterAudioInput is created:

-(BOOL) initializeAudioWriting
{
    BOOL success = YES;

    NSDictionary *audioCompressionSettings = // settings dictionary, see below.

    if ([_assetWriter canApplyOutputSettings:audioCompressionSettings forMediaType:AVMediaTypeAudio]) {
        _assetWriterAudioInput = [[AVAssetWriterInput alloc] initWithMediaType:AVMediaTypeAudio outputSettings:audioCompressionSettings];
        _assetWriterAudioInput.expectsMediaDataInRealTime = YES;

        if ([_assetWriter canAddInput:_assetWriterAudioInput]) {
            [_assetWriter addInput:_assetWriterAudioInput];
        }
        else {
            // return error
        }
    }
    else {
        // return error
    }

    return success;
}
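
For completeness, here is a minimal sketch of the writer-side calls assumed to happen elsewhere (they are not shown in the post) before any sample buffers are appended; firstSampleTime is a hypothetical name for the timestamp of the first buffer:

// Assumed to run once, before the first appendSampleBuffer: call.
if ([_assetWriter startWriting]) {
    [_assetWriter startSessionAtSourceTime:firstSampleTime];
}
else {
    NSLog(@"startWriting failed: %@", _assetWriter.error);
}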

audioCompressionSettings is defined as:

+ (NSDictionary*)audioSettingsForRecording
{
    AVAudioSession *sharedAudioSession = [AVAudioSession sharedInstance];
    double preferredHardwareSampleRate;

    if ([sharedAudioSession respondsToSelector:@selector(sampleRate)])
    {
        preferredHardwareSampleRate = [sharedAudioSession sampleRate];
    }
    else
    {
        preferredHardwareSampleRate = [[AVAudioSession sharedInstance] currentHardwareSampleRate];
    }

    AudioChannelLayout acl;
    bzero(&acl, sizeof(acl));
    acl.mChannelLayoutTag = kAudioChannelLayoutTag_Stereo;

    return @{
        AVFormatIDKey: @(kAudioFormatMPEG4AAC),
        AVNumberOfChannelsKey: @2,
        AVSampleRateKey: @(preferredHardwareSampleRate),
        AVChannelLayoutKey: [NSData dataWithBytes:&acl length:sizeof(acl)],
        AVEncoderBitRateKey: @160000
    };
}

appendSampleBuffer logs the following error and call stack (relevant portion):

CMSampleBufferGetSampleSize signalled err=-12735 (kCMSampleBufferError_BufferHasNoSampleSizes) (sbuf->numSampleSizeEntries == 0) at /BuildRoot/Library/Caches/com.apple.xbs/Sources/EmbeddedCoreMediaFramework/EmbeddedCoreMedia-2290.6/Sources/Core/FigSampleBuffer/FigSampleBuffer.c:4153

0 CoreMedia 0x00000001aff75194 CMSampleBufferGetSampleSize + 268 [0x1aff34000 + 266644]

1 My App 0x0000000103212dfc -[MyClassName writeAudioFrames:audioBuffers:] + 1788 [0x102aec000 + 7499260] ...

Any help would be greatly appreciated.

Edit: adding the following information: we pass 0 and NULL for the numSampleSizeEntries and sampleSizeArray parameters of CMSampleBufferCreate. According to the documentation, that is what must be passed when creating the buffer with non-interleaved data (although that documentation is a bit confusing to me).

We tried passing 1 and a pointer to a size_t value, for example:

size_t sampleSize = 4;

but it did not help; it logged the following error:

figSampleBufferCheckDataSize signalled err=-12731 (kFigSampleBufferError_RequiredParameterMissing) (bbuf vs. sbuf data size mismatch)

And it is not clear to us what that value should be (how to know the sample size of each sample), or whether this is the correct solution at all.

Best answer

I think we have the answer:

Passing the numSampleSizeEntries and sampleSizeArray parameters of CMSampleBufferCreate as shown below appears to fix it (still needs full verification).

As far as I understand, the reason is that we end up appending an interleaved buffer, and that buffer needs to have sample sizes (at least as of the 12.4 release).

// _asbdFormat is the AudioStreamBasicDescription.
size_t sampleSize = _asbdFormat.mBytesPerFrame;
OSStatus status = CMSampleBufferCreate(kCFAllocatorDefault,
                                       NULL,                           // dataBuffer (attached later)
                                       false,                          // dataReady
                                       NULL,                           // makeDataReadyCallback
                                       NULL,                           // makeDataReadyRefcon
                                       _cmFormat,                      // formatDescription
                                       (CMItemCount)(*inNumberFrames), // numSamples
                                       1,                              // numSampleTimingEntries
                                       &timing,                        // sampleTimingArray
                                       1,                              // numSampleSizeEntries
                                       &sampleSize,                    // sampleSizeArray
                                       &buff);
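
For context, here is a minimal sketch of what an interleaved stereo 32-bit-float stream description could look like (an assumption; the post does not show how _interleavedASBD is set up), which illustrates why bytes-per-frame is the natural per-sample size for interleaved LPCM:

// Hypothetical interleaved stereo float LPCM description. For interleaved
// LPCM, one CMSampleBuffer "sample" is one frame covering all channels,
// so its size in bytes is mBytesPerFrame.
AudioStreamBasicDescription interleavedASBD = {0};
interleavedASBD.mSampleRate       = 44100.0; // assumed; in the post this comes from the audio session
interleavedASBD.mFormatID         = kAudioFormatLinearPCM;
interleavedASBD.mFormatFlags      = kAudioFormatFlagIsFloat | kAudioFormatFlagIsPacked;
interleavedASBD.mChannelsPerFrame = 2;
interleavedASBD.mBitsPerChannel   = 32;
interleavedASBD.mBytesPerFrame    = interleavedASBD.mChannelsPerFrame * sizeof(float); // 8 bytes per frame
interleavedASBD.mFramesPerPacket  = 1;
interleavedASBD.mBytesPerPacket   = interleavedASBD.mBytesPerFrame;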

Regarding ios - AVAssetWriterInput appendSampleBuffer succeeds, but error kCMSampleBufferError_BufferHasNoSampleSizes is logged from CMSampleBufferGetSampleSize, a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/56825026/
