
avfoundation - Audio crosstalk in a VOIP app using CallKit

Reposted · Author: 行者123 · Updated: 2023-12-01 13:41:00

I am using the SpeakerBox app as the basis for my VOIP app. I have managed to get everything working, but I can't seem to get rid of the audio "crosstalk" from the microphone to the device speaker.

In other words, when I make a call I can hear my own voice through the speaker as well as the other person's. How can I change this?

AVAudioSession setup:

AVAudioSession *sessionInstance = [AVAudioSession sharedInstance];

NSError *error = nil;
[sessionInstance setCategory:AVAudioSessionCategoryPlayAndRecord error:&error];
XThrowIfError((OSStatus)error.code, "couldn't set session's audio category");

[sessionInstance setMode:AVAudioSessionModeVoiceChat error:&error];
XThrowIfError((OSStatus)error.code, "couldn't set session's audio mode");

NSTimeInterval bufferDuration = .005;
[sessionInstance setPreferredIOBufferDuration:bufferDuration error:&error];
XThrowIfError((OSStatus)error.code, "couldn't set session's I/O buffer duration");

[sessionInstance setPreferredSampleRate:44100 error:&error];
XThrowIfError((OSStatus)error.code, "couldn't set session's preferred sample rate");

Setting up the IO unit:

- (void)setupIOUnit
{
    try {
        // Create a new instance of Apple Voice Processing IO

        AudioComponentDescription desc;
        desc.componentType = kAudioUnitType_Output;
        desc.componentSubType = kAudioUnitSubType_VoiceProcessingIO;
        desc.componentManufacturer = kAudioUnitManufacturer_Apple;
        desc.componentFlags = 0;
        desc.componentFlagsMask = 0;

        AudioComponent comp = AudioComponentFindNext(NULL, &desc);
        XThrowIfError(AudioComponentInstanceNew(comp, &_rioUnit), "couldn't create a new instance of Apple Voice Processing IO");

        // Enable input and output on Apple Voice Processing IO
        // Input is enabled on the input scope of the input element
        // Output is enabled on the output scope of the output element

        UInt32 one = 1;
        XThrowIfError(AudioUnitSetProperty(_rioUnit, kAudioOutputUnitProperty_EnableIO, kAudioUnitScope_Input, 1, &one, sizeof(one)), "could not enable input on Apple Voice Processing IO");
        XThrowIfError(AudioUnitSetProperty(_rioUnit, kAudioOutputUnitProperty_EnableIO, kAudioUnitScope_Output, 0, &one, sizeof(one)), "could not enable output on Apple Voice Processing IO");

        // Explicitly set the input and output client formats
        // sample rate = 44100, num channels = 1, format = 32-bit floating point

        CAStreamBasicDescription ioFormat = CAStreamBasicDescription(44100, 1, CAStreamBasicDescription::kPCMFormatFloat32, false);
        XThrowIfError(AudioUnitSetProperty(_rioUnit, kAudioUnitProperty_StreamFormat, kAudioUnitScope_Output, 1, &ioFormat, sizeof(ioFormat)), "couldn't set the input client format on Apple Voice Processing IO");
        XThrowIfError(AudioUnitSetProperty(_rioUnit, kAudioUnitProperty_StreamFormat, kAudioUnitScope_Input, 0, &ioFormat, sizeof(ioFormat)), "couldn't set the output client format on Apple Voice Processing IO");

        // Set the MaximumFramesPerSlice property. This property is used to describe to an audio unit the maximum number
        // of samples it will be asked to produce on any single given call to AudioUnitRender
        UInt32 maxFramesPerSlice = 4096;
        XThrowIfError(AudioUnitSetProperty(_rioUnit, kAudioUnitProperty_MaximumFramesPerSlice, kAudioUnitScope_Global, 0, &maxFramesPerSlice, sizeof(UInt32)), "couldn't set max frames per slice on Apple Voice Processing IO");

        // Get the property value back from Apple Voice Processing IO. We are going to use this value to allocate buffers accordingly
        UInt32 propSize = sizeof(UInt32);
        XThrowIfError(AudioUnitGetProperty(_rioUnit, kAudioUnitProperty_MaximumFramesPerSlice, kAudioUnitScope_Global, 0, &maxFramesPerSlice, &propSize), "couldn't get max frames per slice on Apple Voice Processing IO");

        // We need references to certain data in the render callback
        // This simple struct is used to hold that information

        cd.rioUnit = _rioUnit;
        cd.muteAudio = &_muteAudio;
        cd.audioChainIsBeingReconstructed = &_audioChainIsBeingReconstructed;

        // Set the render callback on Apple Voice Processing IO
        AURenderCallbackStruct renderCallback;
        renderCallback.inputProc = performRender;
        renderCallback.inputProcRefCon = NULL;
        XThrowIfError(AudioUnitSetProperty(_rioUnit, kAudioUnitProperty_SetRenderCallback, kAudioUnitScope_Input, 0, &renderCallback, sizeof(renderCallback)), "couldn't set render callback on Apple Voice Processing IO");

        // Initialize the Apple Voice Processing IO instance
        XThrowIfError(AudioUnitInitialize(_rioUnit), "couldn't initialize Apple Voice Processing IO instance");
    }

    catch (CAXException &e) {
        NSLog(@"Error returned from setupIOUnit: %d: %s", (int)e.mError, e.mOperation);
    }
    catch (...) {
        NSLog(@"Unknown error returned from setupIOUnit");
    }

    return;
}

Starting the IO unit:

NSError *error = nil;
[[AVAudioSession sharedInstance] setActive:YES error:&error];
if (nil != error) NSLog(@"AVAudioSession set active (TRUE) failed with error: %@", error);

OSStatus err = AudioOutputUnitStart(_rioUnit);
if (err) NSLog(@"couldn't start Apple Voice Processing IO: %d", (int)err);
return err;

Stopping the IO unit:

NSError *error = nil;
[[AVAudioSession sharedInstance] setActive:NO withOptions:AVAudioSessionSetActiveOptionNotifyOthersOnDeactivation error:&error];
if (nil != error) NSLog(@"AVAudioSession set active (FALSE) failed with error: %@", error);

OSStatus err = AudioOutputUnitStop(_rioUnit);
if (err) NSLog(@"couldn't stop Apple Voice Processing IO: %d", (int)err);
return err;

I am using PJSIP as my SIP stack against an Asterisk server. The problem has to be on the client side, because we also have an Android-based PJSIP implementation that does not have this issue.

Best answer

I ran into the same problem while using WebRTC. The conclusion I eventually came to is that you should not set up the IOUnit in AudioController.mm at all; leave that to PJSIP (WebRTC in my case).

A quick fix: comment out [self setupIOUnit]; in setupAudioChain of AudioController.mm, and comment out startAudio() in didActivate audioSession of ProviderDelegate.swift. A sketch of the first change follows.
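For reference, here is a minimal sketch of what the AudioController.mm side of that change could look like, assuming the method names from Apple's SpeakerBox sample (setupAudioChain, setupAudioSession, setupIOUnit); the SIP/WebRTC engine then creates and owns its own voice-processing I/O unit:

// AudioController.mm -- sketch only, not the author's exact code.
// The VoiceProcessingIO unit is no longer created here; the SIP/WebRTC
// audio engine configures its own I/O unit when call audio starts.
- (void)setupAudioChain
{
    [self setupAudioSession];
    // [self setupIOUnit];   // commented out so only one I/O unit is active
}

On the CallKit side, the call to startAudio() in ProviderDelegate.swift's didActivate audioSession is commented out in the same way, so that only the SIP stack drives audio I/O once the session becomes active.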

Regarding avfoundation - audio crosstalk in a VOIP app using CallKit, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/40285141/
