
ios - AVAudioEngine real-time frequency modulation

Reposted · Author: 搜寻专家 · Updated: 2023-10-31 22:55:25

I want to modify an incoming signal in real time and send it to the iOS device's speaker. I've read that AVAudioEngine can be used for this kind of task, but I can't find any documentation or examples for what I'm trying to achieve.

For testing purposes I did:

audioEngine = AVAudioEngine()

let unitEffect = AVAudioUnitReverb()
unitEffect.wetDryMix = 50

audioEngine.attach(unitEffect)

audioEngine.connect(audioEngine.inputNode, to: unitEffect, format: nil)
audioEngine.connect(unitEffect, to: audioEngine.outputNode, format: nil)

audioEngine.prepare()

When a button is pressed, I do:

do {
    try audioEngine.start()
} catch {
    print(error)
}

audioEngine.stop()

The reverb effect is applied to the signal and I can hear that it works. So now I'd like to get rid of the reverb and:

  1. Modulate the incoming signal, e.g. invert it, modulate its frequency, and so on. Is there a collection of effects I can use, or some way to modulate the frequency mathematically?
  2. When I run this on an iOS device I do get the reverb, but the output only comes out of the top (earpiece) speaker, not the loud bottom speaker. How can I change that?

Best Answer

This GitHub repository does exactly that: https://github.com/dave234/AppleAudioUnit.

Just add BufferedAudioUnit from there to your project, then subclass it with your implementation, like this:

AudioProcessingUnit.h:

#import "BufferedAudioUnit.h"

@interface AudioProcessingUnit : BufferedAudioUnit

@end

AudioProcessingUnit.m:

#import "AudioProcessingUnit.h"

@implementation AudioProcessingUnit

-(ProcessEventsBlock)processEventsBlock:(AVAudioFormat *)format {

    return ^(AudioBufferList *inBuffer,
             AudioBufferList *outBuffer,
             const AudioTimeStamp *timestamp,
             AVAudioFrameCount frameCount,
             const AURenderEvent *realtimeEventListHead) {

        for (UInt32 i = 0; i < inBuffer->mNumberBuffers; i++) {

            float *input = (float *)inBuffer->mBuffers[i].mData;
            float *output = (float *)outBuffer->mBuffers[i].mData;

            // mDataByteSize is a byte count, so divide by sizeof(float)
            // to iterate over samples rather than bytes.
            UInt32 sampleCount = inBuffer->mBuffers[i].mDataByteSize / sizeof(float);

            for (UInt32 j = 0; j < sampleCount; j++) {
                output[j] = input[j]; /* process it here */
            }
        }
    };
}

@end

And, in your AVAudioEngine setup:

let audioComponentDescription = AudioComponentDescription(
    componentType: kAudioUnitType_Effect,
    componentSubType: kAudioUnitSubType_VoiceProcessingIO,
    componentManufacturer: 0x0,
    componentFlags: 0,
    componentFlagsMask: 0
)

AUAudioUnit.registerSubclass(
    AudioProcessingUnit.self,
    as: audioComponentDescription,
    name: "AudioProcessingUnit",
    version: 1
)

AVAudioUnit.instantiate(
    with: audioComponentDescription,
    options: .init(rawValue: 0)
) { (audioUnit, error) in
    guard let audioUnit = audioUnit else {
        NSLog("Audio unit is NULL")
        return
    }

    let formatHardwareInput = self.engine.inputNode.inputFormat(forBus: 0)

    self.engine.attach(audioUnit)
    self.engine.connect(
        self.engine.inputNode,
        to: audioUnit,
        format: formatHardwareInput
    )
    self.engine.connect(
        audioUnit,
        to: self.engine.outputNode,
        format: formatHardwareInput
    )
}

Regarding ios - AVAudioEngine real-time frequency modulation, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/48911800/

Copyright 2021 - 2024 cfsdn All Rights Reserved 蜀ICP备2022000587号