
ios - Recording callback function in Swift with the Superpowered or AudioKit audio libraries


My app (written in Swift) does real-time processing based on the audio signal.

I need a callback where the left and right buffers come from the input (2 channels from a USB microphone), and a callback with buffers for the output (also 2 channels).

I used to use EZAudio, but I ran into memory problems with the 2-channel 96 kHz format. Since EZAudio has been discontinued, I want to switch to Superpowered or AudioKit.

My problem is: I can't get a callback with the buffers from either of these libraries.

Superpowered: I added #import "SuperpoweredIOSAudioIO.h" to the bridging header.

I added SuperpoweredIOSAudioIODelegate to my ViewController. This automatically added the interruption, permission, and mapChannels functions, but not audioProcessingCallback.

I tried the following:

audio = SuperpoweredIOSAudioIO(delegate: self, preferredBufferSize: 12, preferredMinimumSamplerate: 96000, audioSessionCategory: AVAudioSessionCategoryPlayAndRecord, channels: 2, audioProcessingCallback: audioProcessingCallback, clientdata: UnsafeMutablePointer)
audio.start()

func audioProcessingCallback(buffers: UnsafeMutablePointer<UnsafeMutablePointer<Float>>, inputChannels: UInt32, outputChannels: UInt32, numberOfSamples: UInt32, sampleRate: UInt32, hostTime: UInt64) -> Bool {
return true
}

But I get this error:

Cannot convert value of type '(UnsafeMutablePointer<UnsafeMutablePointer<Float>>, UInt32, UInt32, UInt32, UInt32, UInt64) -> Bool' to expected argument type 'audioProcessingCallback!' (aka 'ImplicitlyUnwrappedOptional<@convention(c) (Optional<UnsafeMutableRawPointer>, Optional<UnsafeMutablePointer<Optional<UnsafeMutablePointer<Float>>>>, UInt32, UInt32, UInt32, UInt32, UInt64) -> Bool>')

I can't find any Swift examples for this library...
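For context on the error above: the expected type is a @convention(c) function pointer, and a Swift instance method (or any closure that captures self) cannot be converted to one. The usual workaround is a capture-free closure or global function, with self passed through the clientdata pointer via Unmanaged. A minimal, framework-free sketch of that pattern (the callback shape here is a simplified stand-in, not Superpowered's real signature):

```swift
import Foundation

// Hypothetical, simplified mirror of the library's C callback shape:
// clientdata pointer first, then (here) just a sample count.
typealias CProcessingCallback = @convention(c) (
    UnsafeMutableRawPointer?,   // clientdata
    UInt32                      // numberOfSamples (other params omitted in this sketch)
) -> Bool

final class Engine {
    var processedSamples: UInt32 = 0
    func process(numberOfSamples: UInt32) -> Bool {
        processedSamples += numberOfSamples
        return true
    }
}

// A capture-free closure (or a global func) converts to @convention(c) automatically.
let callback: CProcessingCallback = { clientdata, numberOfSamples in
    guard let clientdata = clientdata else { return false }
    // Recover the instance that was smuggled through the raw pointer.
    let engine = Unmanaged<Engine>.fromOpaque(clientdata).takeUnretainedValue()
    return engine.process(numberOfSamples: numberOfSamples)
}

let engine = Engine()
let clientdata = Unmanaged.passUnretained(engine).toOpaque()

// Simulate the library invoking the callback twice.
_ = callback(clientdata, 512)
_ = callback(clientdata, 512)
print(engine.processedSamples)   // 1024
```

With Superpowered you would pass such a capture-free callback and `Unmanaged.passUnretained(self).toOpaque()` as the clientdata argument; note that `passUnretained` does not keep the object alive, so the owner must outlive the audio I/O.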

For AudioKit, here is what I did:

let mic = AKMicrophone()
installTap(mic)
AudioKit.output = mic
AudioKit.start()

func installTap(_ input: AKNode) {
    input.avAudioNode.installTap(onBus: 0, bufferSize: 1024, format: AudioKit.format) { [weak self] (buffer, time) -> Void in
        self?.signalTracker(didReceivedBuffer: buffer, atTime: time)
    }
}

func signalTracker(didReceivedBuffer buffer: AVAudioPCMBuffer, atTime time: AVAudioTime) {
    let samples = UnsafeBufferPointer(start: buffer.floatChannelData?[0], count: 1024)
    audioProcess.ProcessDataCaptureWithBuffer(samples, numberOfSamples: UInt32(1024))
}

It does get the incoming buffers into my algorithm, but it doesn't seem to be "real-time"; I mean, it's very slow... (Sorry, it's hard to explain.)

Thanks!

Best answer

If you need to do real-time processing, you shouldn't be using Swift (or ObjC). The current way to do that in AudioKit is to create an AUAudioUnit subclass and do your processing inside it. However, if you just need a faster audio tap, AKLazyTap is a great solution. It differs from a normal tap in that you have to poll it for data, but this approach allows buffer reuse, so you can call it as fast as you like.

Here's an example of using AKLazyTap to get peaks:

import UIKit
import AudioKit

class ViewController: UIViewController {

    let microphone = AKMicrophone()
    var tap: AKLazyTap?

    override func viewDidLoad() {
        super.viewDidLoad()

        AudioKit.output = microphone
        AKSettings.ioBufferDuration = 0.002 // This is to decrease latency for faster callbacks.

        tap = AKLazyTap(node: microphone.avAudioNode)

        guard tap != nil,
            let buffer = AVAudioPCMBuffer(pcmFormat: microphone.avAudioNode.outputFormat(forBus: 0), frameCapacity: 44100) else {
                fatalError()
        }

        // Your timer should fire equal to or faster than your buffer duration
        Timer.scheduledTimer(withTimeInterval: AKSettings.ioBufferDuration / 2, repeats: true) { _ in

            var timeStamp = AudioTimeStamp()
            self.tap?.fillNextBuffer(buffer, timeStamp: &timeStamp)

            // This is important: since we're polling for samples, sometimes the buffer
            // is empty, and sometimes it holds double what it did on the last call.
            if buffer.frameLength == 0 { return }

            let leftMono = UnsafeBufferPointer(start: buffer.floatChannelData?[0], count: Int(buffer.frameLength))
            var peak = Float(0)
            for sample in leftMono {
                peak = max(peak, fabsf(sample))
            }
            print("number of samples \(buffer.frameLength) peak \(peak)")
        }

        AudioKit.start()
    }
}
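The polling model above can be illustrated without any audio framework. In this sketch (all names are illustrative, not AudioKit API), a producer appends samples at its own pace and the consumer polls with one reusable buffer, sometimes getting zero frames and sometimes more than one callback's worth:

```swift
import Foundation

// Single-threaded sketch of a poll-based tap: no allocation per poll,
// variable (possibly zero) frame counts per call.
final class LazyTapSketch {
    private var queue: [Float] = []          // samples waiting to be drained

    func produce(_ samples: [Float]) {       // stands in for the audio-thread callback
        queue.append(contentsOf: samples)
    }

    /// Drains everything currently queued into `buffer` and returns the frame count.
    /// The caller reuses the same buffer on every poll.
    func fillNextBuffer(_ buffer: inout [Float]) -> Int {
        let n = min(queue.count, buffer.count)
        for i in 0..<n { buffer[i] = queue[i] }
        queue.removeFirst(n)
        return n
    }
}

let tap = LazyTapSketch()
var reusable = [Float](repeating: 0, count: 4096)

// Polling before any audio has arrived: 0 frames, like `buffer.frameLength == 0` above.
print(tap.fillNextBuffer(&reusable))         // 0

tap.produce([0.1, -0.5, 0.25])
tap.produce([0.9])                           // two producer callbacks before one poll
let frames = tap.fillNextBuffer(&reusable)
print(frames)                                // 4: both callbacks drained at once

// Same peak scan as the AKLazyTap example, bounded by the returned frame count.
var peak: Float = 0
for i in 0..<frames { peak = max(peak, abs(reusable[i])) }
print(peak)                                  // 0.9
```

This is why the timer in the answer can fire faster than the buffer duration: an early poll just returns zero frames, and a late one returns the accumulated samples in a single reused buffer.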

Regarding ios - recording callback function in Swift with the Superpowered or AudioKit audio libraries, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/47551246/
