swift - Effect AudioUnit only calls the render callback once


What I'm trying to accomplish is to process a set of audio data through a Core Audio effect unit and get the processed data back (without playing it, i.e. offline). I've hit a wall, probably over something very basic that I'm not understanding.

Ideally, what I want is a single audio unit (such as a delay effect) that pulls the raw data in through a render callback while I call AudioUnitRender() on that unit over and over, saving the resulting buffers for later use. So: {RENDER CALLBACK}->[EFFECT UNIT]->{Render Loop}->{Data}. But when I do this, the render callback is only invoked the first time, no matter how many times I call AudioUnitRender() on the AudioUnit in my loop.

Things I've tried:

  1. WORKS: Setting the render callback on kAudioUnitSubType_DefaultOutput and calling AudioOutputUnitStart(). This works fine and plays my audio data from the speakers.

  2. WORKS: Setting the render callback on kAudioUnitSubType_GenericOutput and calling AudioUnitRender() in a loop. This seems to work and passes an unmodified copy of the raw data through nicely.

  3. WORKS: Setting the render callback on a kAudioUnitSubType_Delay unit and connecting its output to kAudioUnitSubType_DefaultOutput. Calling AudioOutputUnitStart() plays my audio data from the speakers, delayed as expected.

  4. FAILS: Finally, setting the render callback on the kAudioUnitSubType_Delay unit and connecting its output to kAudioUnitSubType_GenericOutput. Calling AudioUnitRender() in a loop only invokes the render callback on the first call to AudioUnitRender(), just as when I tried rendering the effect directly.

I don't get any OSStatus errors from any of the function calls that might point to the problem. Can someone help me understand why the effect won't call the render callback more than once unless it's connected to the default output?

Thanks!

Below is the relevant sample code for the tests described above. I can provide more detail if necessary, but the setup code that connects the units is all there.

import AudioToolbox
import Foundation

// Unit types handled by CreateUnit() below.
// (This enum isn't shown in the question; its definition is inferred from the switch cases.)
enum UnitType {
    case DefaultOutput
    case GenericOutput
    case TestEffect
}

// Test Functions

// [EFFECT ONLY] - FAILS! - ONLY CALLS RENDER CALLBACK ON FIRST CALL TO RENDER
func TestRenderingEffectOnly() {
    var testUnit = CreateUnit(type: .TestEffect)
    AddRenderCallbackToUnit(&testUnit, callback: RenderCallback)
    RenderUnit(testUnit)
}


// [DEFAULT OUTPUT ONLY] - WORKS!
func TestDefaultOutputPassthrough() {
    var testUnit = CreateUnit(type: .DefaultOutput)
    AddRenderCallbackToUnit(&testUnit, callback: RenderCallback)
    AudioOutputUnitStart(testUnit)
}


// [GENERIC OUTPUT ONLY] - SEEMS TO WORK!
func TestRenderingToGenericOutputOnly() {
    var testUnit = CreateUnit(type: .GenericOutput)
    AddRenderCallbackToUnit(&testUnit, callback: RenderCallback)
    RenderUnit(testUnit)
}


// [EFFECT]->[DEFAULT OUTPUT] - WORKS!
func TestEffectToDefaultOutput() {

    var effectUnit = CreateUnit(type: .TestEffect)
    var outputUnit = CreateUnit(type: .DefaultOutput)

    AddRenderCallbackToUnit(&effectUnit, callback: RenderCallback)

    var connection = AudioUnitConnection()
    connection.sourceAudioUnit = effectUnit
    connection.sourceOutputNumber = 0
    connection.destInputNumber = 0

    let result = AudioUnitSetProperty(outputUnit, kAudioUnitProperty_MakeConnection, kAudioUnitScope_Input, 0, &connection, UInt32(MemoryLayout<AudioUnitConnection>.stride))
    NSLog("connection result = \(result)")

    AudioOutputUnitStart(outputUnit)
}


// [EFFECT]->[GENERIC OUTPUT] - FAILS! - ONLY CALLS RENDER CALLBACK ON FIRST CALL TO RENDER
func TestRenderingEffectToGenericOutput() {

    var effectUnit = CreateUnit(type: .TestEffect)
    var outputUnit = CreateUnit(type: .GenericOutput)

    AddRenderCallbackToUnit(&effectUnit, callback: RenderCallback)

    var connection = AudioUnitConnection()
    connection.sourceAudioUnit = effectUnit
    connection.sourceOutputNumber = 0
    connection.destInputNumber = 0

    let result = AudioUnitSetProperty(outputUnit, kAudioUnitProperty_MakeConnection, kAudioUnitScope_Input, 0, &connection, UInt32(MemoryLayout<AudioUnitConnection>.stride))
    NSLog("connection result = \(result)")

    // Manually render audio
    RenderUnit(outputUnit)
}



// SETUP FUNCTIONS


// AudioUnitRender callback. Read in float data from left and right channel into buffer as necessary
let RenderCallback: AURenderCallback = { (inRefCon, ioActionFlags, inTimeStamp, inBusNumber, inNumberFrames, ioData) -> OSStatus in
    NSLog("render \(inNumberFrames) frames")
    // Load audio data into ioData here… my data is floating point and plays back ok
    return noErr
}


// Configure new audio unit
func CreateUnit(type: UnitType) -> AudioUnit {

    var unit: AudioUnit? = nil
    var outputcd = AudioComponentDescription()

    switch type {

    case .DefaultOutput:
        outputcd.componentType = kAudioUnitType_Output
        outputcd.componentSubType = kAudioUnitSubType_DefaultOutput

    case .GenericOutput:
        outputcd.componentType = kAudioUnitType_Output
        outputcd.componentSubType = kAudioUnitSubType_GenericOutput

    case .TestEffect:
        outputcd.componentType = kAudioUnitType_Effect
        outputcd.componentSubType = kAudioUnitSubType_Delay
    }

    outputcd.componentManufacturer = kAudioUnitManufacturer_Apple
    outputcd.componentFlags = 0
    outputcd.componentFlagsMask = 0

    let comp = AudioComponentFindNext(nil, &outputcd)

    if comp == nil {
        print("can't get output unit")
        exit(-1)
    }

    let status = AudioComponentInstanceNew(comp!, &unit)
    NSLog("new unit status = \(status)")

    // Initialize the unit -- not actually sure *when* is best to do this
    AudioUnitInitialize(unit!)

    return unit!
}


// Attach a callback to an audio unit
func AddRenderCallbackToUnit(_ unit: inout AudioUnit, callback: @escaping AURenderCallback) {
    // The callback above never reads inRefCon, so the (non-stable) pointer to the
    // inout parameter passed here goes unused in these tests.
    var input = AURenderCallbackStruct(inputProc: callback, inputProcRefCon: &unit)
    AudioUnitSetProperty(unit, kAudioUnitProperty_SetRenderCallback, kAudioUnitScope_Input, 0, &input, UInt32(MemoryLayout<AURenderCallbackStruct>.size))
}
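
A side note on inputProcRefCon: the callback here ignores inRefCon, so passing &unit is harmless, but a pointer to an inout parameter is not stable storage. If a callback ever needs external state, one common pattern (a sketch with a hypothetical RenderContext class, not something the question requires) is to pass an object kept alive elsewhere through Unmanaged:

// Sketch: pass stable state to a render callback via inputProcRefCon.
// RenderContext is hypothetical; the caller must keep it alive while rendering.
final class RenderContext {
    var readPosition = 0
}

func AddRenderCallbackWithContext(_ unit: AudioUnit, context: RenderContext, callback: @escaping AURenderCallback) {
    var input = AURenderCallbackStruct(inputProc: callback,
                                       inputProcRefCon: Unmanaged.passUnretained(context).toOpaque())
    AudioUnitSetProperty(unit, kAudioUnitProperty_SetRenderCallback, kAudioUnitScope_Input, 0, &input,
                         UInt32(MemoryLayout<AURenderCallbackStruct>.size))
}

// Inside the callback, the object can be recovered with:
// let context = Unmanaged<RenderContext>.fromOpaque(inRefCon).takeUnretainedValue()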


// Render up to 'numberOfFramesToRender' frames for testing
func RenderUnit(_ unitToRender: AudioUnit) {

    let numberOfFramesToRender = UInt32(20_000) // Incoming data length: 14,463,360

    let inUnit = unitToRender
    var ioActionFlags = AudioUnitRenderActionFlags()
    var inTimeStamp = AudioTimeStamp() // zeroed; mSampleTime is never advanced between calls
    let inOutputBusNumber: UInt32 = 0
    let inNumberFrames: UInt32 = 512
    let ioData = AudioBufferList.allocate(maximumBuffers: 2) // mData pointers stay nil, so the unit can supply its own buffers

    var currentFrame: UInt32 = 0

    while currentFrame < numberOfFramesToRender {

        currentFrame += inNumberFrames

        NSLog("call render…")
        let status = AudioUnitRender(inUnit, &ioActionFlags, &inTimeStamp, inOutputBusNumber, inNumberFrames, ioData.unsafeMutablePointer)
        if status != noErr {
            NSLog("render status = \(status)")
            break
        }

        // Read new buffer data here and save it for later…
    }
}

Best Answer

You probably need to let your code return to the run loop between render calls. This allows the OS to schedule time on the audio thread for the OS audio units to run between each successive render call.
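
The answer doesn't prescribe a specific mechanism. As a rough sketch of one way to do it (assuming the same unit and buffer setup as the question's RenderUnit(); the 1 ms timer interval is an arbitrary choice), the loop could be driven by a run-loop timer so that control returns to the run loop between renders:

// Sketch: perform each AudioUnitRender() on a run-loop pass instead of in a tight
// loop, so the OS can schedule the audio units between successive calls.
// Setup mirrors RenderUnit() above; the timer interval is an assumption.
func RenderUnitViaRunLoop(_ unitToRender: AudioUnit) {

    let numberOfFramesToRender: UInt32 = 20_000
    var inTimeStamp = AudioTimeStamp()
    let inNumberFrames: UInt32 = 512
    let ioData = AudioBufferList.allocate(maximumBuffers: 2)

    var currentFrame: UInt32 = 0
    var finished = false

    let timer = Timer(timeInterval: 0.001, repeats: true) { timer in
        var ioActionFlags = AudioUnitRenderActionFlags()
        let status = AudioUnitRender(unitToRender, &ioActionFlags, &inTimeStamp, 0, inNumberFrames, ioData.unsafeMutablePointer)
        if status != noErr {
            NSLog("render status = \(status)")
            timer.invalidate()
            finished = true
            return
        }

        // Save the rendered buffer for later use here…

        currentFrame += inNumberFrames
        if currentFrame >= numberOfFramesToRender {
            timer.invalidate()
            finished = true
        }
    }
    RunLoop.current.add(timer, forMode: .default)

    // Spinning the run loop (rather than blocking in a while loop) is what gives
    // the OS a chance to run the audio units between render calls.
    while !finished {
        _ = RunLoop.current.run(mode: .default, before: Date(timeIntervalSinceNow: 0.01))
    }
}

Whether a timer, a dispatch queue, or some other scheduling mechanism is used matters less than actually yielding control between calls instead of blocking the thread.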

Regarding "swift - Effect AudioUnit only calls the render callback once", a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/44274662/
