swift - AudioUnit - Controlling left and right channel output in Swift

I am trying to record and play back simultaneously in Swift, and I need to play to the left and right channels separately. Using an AudioUnit I can record and play successfully on a single channel, but as soon as I try to use two buffers to drive the two channels, both of them go silent. Here is how I set up the format:

    var audioFormat = AudioStreamBasicDescription()
    audioFormat.mSampleRate = Double(sampleRate)
    audioFormat.mFormatID = kAudioFormatLinearPCM
    audioFormat.mFormatFlags = kLinearPCMFormatFlagIsSignedInteger | kLinearPCMFormatFlagIsPacked
    audioFormat.mChannelsPerFrame = 2
    audioFormat.mFramesPerPacket = 1
    audioFormat.mBitsPerChannel = 32
    audioFormat.mBytesPerPacket = 8
    audioFormat.mReserved = 0
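
For comparison, a non-interleaved Float32 stereo format, which is the layout an output callback that fills two separate per-channel Float buffers would normally expect, can be described roughly as follows. This is only a sketch for reference (it reuses the sampleRate variable from above), not the format actually used in the question:

    var floatFormat = AudioStreamBasicDescription()
    floatFormat.mSampleRate = Double(sampleRate)
    floatFormat.mFormatID = kAudioFormatLinearPCM
    floatFormat.mFormatFlags = kAudioFormatFlagIsFloat | kAudioFormatFlagIsPacked | kAudioFormatFlagIsNonInterleaved
    floatFormat.mChannelsPerFrame = 2
    floatFormat.mFramesPerPacket = 1
    floatFormat.mBitsPerChannel = 32
    floatFormat.mBytesPerFrame = 4      // bytes per frame of one channel, because the format is non-interleaved
    floatFormat.mBytesPerPacket = 4
    floatFormat.mReserved = 0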

Here is my input callback:

    private let inputCallback: AURenderCallback = { (inRefCon,
                                                     ioActionFlags,
                                                     inTimeStamp,
                                                     inBusNumber,
                                                     inNumberFrames,
                                                     ioData) -> OSStatus in
        let audioRAP: AudioUnitSample = Unmanaged<AudioUnitSample>.fromOpaque(inRefCon).takeUnretainedValue()
        var status = noErr
        var buf = UnsafeMutableRawPointer.allocate(bytes: Int(inNumberFrames * 4),
                                                   alignedTo: MemoryLayout<Int8>.alignment)
        let bindptr = buf.bindMemory(to: Float.self,
                                     capacity: Int(inNumberFrames * 4))
        bindptr.initialize(to: 0)
        var buffer: AudioBuffer = AudioBuffer(mNumberChannels: 2,
                                              mDataByteSize: inNumberFrames * 4,
                                              mData: buf)

        memset(buffer.mData, 0, Int(buffer.mDataByteSize))
        var bufferList: AudioBufferList = AudioBufferList(mNumberBuffers: 1,
                                                          mBuffers: buffer)
        print("\(Int(bufferList.mBuffers.mDataByteSize))")

        status = AudioUnitRender(audioRAP.newAudioUnit!,
                                 ioActionFlags,
                                 inTimeStamp,
                                 inBusNumber,
                                 inNumberFrames,
                                 &bufferList)
        audioRAP.audioBuffers.append((bufferList.mBuffers, Int(inNumberFrames * 4)))

        return status
    }

Here is my output callback:

    private let outputCallback: AURenderCallback = { (inRefCon,
                                                      ioActionFlags,
                                                      inTimeStamp,
                                                      inBusNumber,
                                                      inNumberFrames,
                                                      ioData) -> OSStatus in
        let audioRAP: AudioUnitSample = Unmanaged<AudioUnitSample>.fromOpaque(inRefCon).takeUnretainedValue()
        if ioData == nil {
            return noErr
        }
        ioData!.pointee.mNumberBuffers = 2
        var bufferCount = ioData!.pointee.mNumberBuffers

        var tempBuffer = audioRAP.audioBuffers[0]

        var monoSamples = [Float]()
        let ptr1 = tempBuffer.0.mData?.assumingMemoryBound(to: Float.self)
        monoSamples.removeAll()
        monoSamples.append(contentsOf: UnsafeBufferPointer(start: ptr1, count: Int(inNumberFrames)))

        let abl = UnsafeMutableAudioBufferListPointer(ioData)
        let bufferLeft = abl![0]
        let bufferRight = abl![1]
        let pointerLeft: UnsafeMutableBufferPointer<Float32> = UnsafeMutableBufferPointer(bufferLeft)
        let pointerRight: UnsafeMutableBufferPointer<Float32> = UnsafeMutableBufferPointer(bufferRight)

        for frame in 0..<inNumberFrames {
            let pointerIndex = pointerLeft.startIndex.advanced(by: Int(frame))
            pointerLeft[pointerIndex] = monoSamples[Int(frame)]
        }
        for frame in 0..<inNumberFrames {
            let pointerIndex = pointerRight.startIndex.advanced(by: Int(frame))
            pointerRight[pointerIndex] = monoSamples[Int(frame)]
        }

        tempBuffer.0.mData?.deallocate(bytes: tempBuffer.1, alignedTo: MemoryLayout<Int8>.alignment)
        audioRAP.audioBuffers.removeFirst()
        return noErr
    }

And here is the declaration of the audio buffers:

    private var audioBuffers = [(AudioBuffer, Int)]()

Am I missing something in the output or the input part? Any help would be greatly appreciated!

Best answer

The first big problem is that your code is doing memory allocation inside the audio callbacks. Apple's documentation explicitly says that memory management, synchronization, and even object messaging should not be done inside the audio context. Inside an audio callback you probably want to stick to just copying sample data to or from pre-allocated buffers. Everything else, and especially buffer creation and deallocation, should be done outside the audio callbacks.
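
As a rough sketch of what that can look like, assuming a simple pre-allocated FIFO (the SampleRingBuffer type below is made up for illustration, not an existing API), the output render callback then does nothing but copy:

    import AudioToolbox
    import Foundation

    // Fixed-size FIFO allocated once, before the audio unit is started.
    // A production version would typically use a truly lock-free ring buffer
    // (for example TPCircularBuffer) instead of this simplified sketch.
    final class SampleRingBuffer {
        private var storage: [Float]
        private var readIndex = 0
        private var writeIndex = 0

        init(capacity: Int) {
            storage = [Float](repeating: 0, count: capacity)
        }

        // Input side: copy freshly recorded samples in. No allocation happens here.
        func write(_ source: UnsafePointer<Float>, count: Int) {
            for i in 0..<count {
                storage[(writeIndex + i) % storage.count] = source[i]
            }
            writeIndex = (writeIndex + count) % storage.count
        }

        // Output side: copy samples out. No allocation or deallocation here either.
        func read(into destination: UnsafeMutablePointer<Float>, count: Int) {
            for i in 0..<count {
                destination[i] = storage[(readIndex + i) % storage.count]
            }
            readIndex = (readIndex + count) % storage.count
        }
    }

    // Output callback that only copies: one block of mono samples is pulled from
    // the pre-allocated FIFO into the left channel, then duplicated into the right.
    let copyOnlyOutputCallback: AURenderCallback = { (inRefCon, _, _, _, inNumberFrames, ioData) -> OSStatus in
        guard let ioData = ioData else { return noErr }
        let ring = Unmanaged<SampleRingBuffer>.fromOpaque(inRefCon).takeUnretainedValue()
        let abl = UnsafeMutableAudioBufferListPointer(ioData)
        guard abl.count >= 2,
              let left = abl[0].mData?.assumingMemoryBound(to: Float.self),
              let right = abl[1].mData?.assumingMemoryBound(to: Float.self) else { return noErr }
        ring.read(into: left, count: Int(inNumberFrames))
        memcpy(right, left, Int(inNumberFrames) * MemoryLayout<Float>.size)
        return noErr
    }

With a structure like this, the audioBuffers tuple array, the allocate call in the input callback, and the deallocate call in the output callback all go away: the FIFO is created once while setting up the audio unit and handed to both callbacks through inRefCon.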

Regarding swift - AudioUnit - Controlling left and right channel output in Swift, a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/47468717/
