
avfoundation - How do I convert an AudioBufferList to a CMSampleBuffer?

Reposted · Author: 行者123 · Updated: 2023-12-01 03:43:10

I have an AudioTapProcessor attached to an AVPlayerItem.
It calls static void tap_ProcessCallback(MTAudioProcessingTapRef tap, CMItemCount numberFrames, MTAudioProcessingTapFlags flags, AudioBufferList *bufferListInOut, CMItemCount *numberFramesOut, MTAudioProcessingTapFlags *flagsOut) during processing.

I need to convert the AudioBufferList to a CMSampleBuffer so that I can write it to a movie file with AVAssetWriterAudioInput.appendSampleBuffer.

So how do I convert an AudioBufferList to a CMSampleBuffer? I tried, but got a -12731 error: Error CMSampleBufferSetDataBufferFromAudioBufferList: Optional("-12731")

func processAudioData(audioData: UnsafeMutablePointer<AudioBufferList>, framesNumber: UInt32) {
    var sbuf: Unmanaged<CMSampleBuffer>?
    var status: OSStatus?
    var format: Unmanaged<CMFormatDescription>?

    // NOTE: this hand-built ASBD is what triggers the -12731 error --
    // its field values do not describe a consistent PCM layout (see the answer below).
    var formatId = UInt32(kAudioFormatLinearPCM)
    var formatFlags = UInt32(kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked)
    var audioFormat = AudioStreamBasicDescription(mSampleRate: 44100.00, mFormatID: formatId, mFormatFlags: formatFlags, mBytesPerPacket: 1, mFramesPerPacket: 1, mBytesPerFrame: 16, mChannelsPerFrame: 2, mBitsPerChannel: 2, mReserved: 0)

    status = CMAudioFormatDescriptionCreate(kCFAllocatorDefault, &audioFormat, 0, nil, 0, nil, nil, &format)
    if status != noErr {
        println("Error CMAudioFormatDescriptionCreate: \(status?.description)")
        return
    }

    var timing = CMSampleTimingInfo(duration: CMTimeMake(1, 44100), presentationTimeStamp: kCMTimeZero, decodeTimeStamp: kCMTimeInvalid)

    status = CMSampleBufferCreate(kCFAllocatorDefault, nil, Boolean(0), nil, nil, format?.takeRetainedValue(), CMItemCount(framesNumber), 1, &timing, 0, nil, &sbuf)
    if status != noErr {
        println("Error CMSampleBufferCreate: \(status?.description)")
        return
    }

    status = CMSampleBufferSetDataBufferFromAudioBufferList(sbuf?.takeRetainedValue(), kCFAllocatorDefault, kCFAllocatorDefault, 0, audioData)
    if status != noErr {
        println("Error CMSampleBufferSetDataBufferFromAudioBufferList: \(status?.description)")
        return
    }

    var currentSampleTime = CMSampleBufferGetOutputPresentationTimeStamp(sbuf?.takeRetainedValue())
    println("audio buffer at time: \(CMTimeCopyDescription(kCFAllocatorDefault, currentSampleTime))")

    if !assetWriterAudioInput!.readyForMoreMediaData {
        return
    } else if assetWriter.status == .Writing {
        if !assetWriterAudioInput!.appendSampleBuffer(sbuf?.takeRetainedValue()) {
            println("Problem appending audio buffer at time: \(CMTimeCopyDescription(kCFAllocatorDefault, currentSampleTime))")
        }
    } else {
        println("assetWriterStatus: \(assetWriter.status.rawValue), Error: \(assetWriter.error.localizedDescription)")
        println("Could not write a frame")
    }
}

Best answer

OK, I managed to solve this.

The problem was that I should not construct the AudioStreamBasicDescription myself. Instead, I should use the one supplied by the AudioTapProcessor's prepare callback:

static void tap_PrepareCallback(MTAudioProcessingTapRef tap, CMItemCount maxFrames, const AudioStreamBasicDescription *processingFormat)
// retain this one
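
To illustrate the fix, here is a minimal sketch in the same Swift 1.x style as the question. The storage name cachedASBD and the glue between the callbacks are assumptions for illustration, not part of the original answer; the point is only that the format description is built from the format Core Media hands the tap, not from a hand-made one:

```swift
// Sketch (assumed names): cache the processing format the prepare callback
// receives, then build the CMFormatDescription from it.
var cachedASBD = AudioStreamBasicDescription()  // hypothetical storage

// In tap_PrepareCallback, copy the format Core Media provides:
//   static void tap_PrepareCallback(MTAudioProcessingTapRef tap,
//                                   CMItemCount maxFrames,
//                                   const AudioStreamBasicDescription *processingFormat)
//   { cachedASBD = *processingFormat; }  // "retain this one"

// Later, in processAudioData, use the cached format instead of a guess:
var format: Unmanaged<CMFormatDescription>?
let status = CMAudioFormatDescriptionCreate(kCFAllocatorDefault,
                                            &cachedASBD,  // the tap's real format
                                            0, nil, 0, nil, nil,
                                            &format)
```

With a format description that matches the buffers the tap actually delivers, CMSampleBufferSetDataBufferFromAudioBufferList no longer fails with -12731 (kCMSampleBufferError_RequiredParameterMissing-class format mismatch).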

Regarding "avfoundation - How do I convert an AudioBufferList to a CMSampleBuffer?", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/29514701/
