
ios - Clipping with a kAudioUnitType_Output audio unit using kAudioUnitSubType_VoiceProcessingIO

Reposted. Author: 行者123. Updated: 2023-12-01 16:34:14

I am developing a record-and-playback application. I am using a kAudioUnitType_Output audio unit with the kAudioUnitSubType_VoiceProcessingIO subtype. Sometimes it works fine, but sometimes there is a lot of clipping. I suspect ambient noise is involved, but I don't know whether the clipping is a side effect of the AEC (acoustic echo cancellation) or whether my audio unit is set up incorrectly:

Here is my setup code:

struct CallbackData {
    AudioUnit rioUnit;
    BOOL *audioChainIsBeingReconstructed;

    CallbackData(): rioUnit(NULL), audioChainIsBeingReconstructed(NULL) {}
} cd;

static OSStatus performRender(void                       *inRefCon,
                              AudioUnitRenderActionFlags *ioActionFlags,
                              const AudioTimeStamp       *inTimeStamp,
                              UInt32                      inBusNumber,
                              UInt32                      inNumberFrames,
                              AudioBufferList            *ioData)
{
    OSStatus err = noErr;
    if (*cd.audioChainIsBeingReconstructed == NO)
    {
        // Pull the microphone input from bus 1 of the voice-processing unit
        err = AudioUnitRender(cd.rioUnit, ioActionFlags, inTimeStamp, 1, inNumberFrames, ioData);

        float *inputFrames = (float *)(ioData->mBuffers[0].mData);
        //engine_process_iOS(inputFrames, ioData->mBuffers[0].mNumberChannels * inNumberFrames);
    }

    return err;
}


- (void)setupAudioSession
{
    try {
        // Configure the audio session
        AVAudioSession *sessionInstance = [AVAudioSession sharedInstance];

        NSError *error = nil;
        [sessionInstance setCategory:AVAudioSessionCategoryPlayAndRecord
                         withOptions:AVAudioSessionCategoryOptionAllowBluetooth
                               error:&error];

        NSTimeInterval bufferDuration = .005;
        [sessionInstance setPreferredIOBufferDuration:bufferDuration error:&error];

        [sessionInstance setPreferredSampleRate:44100 error:&error];

        [[NSNotificationCenter defaultCenter] addObserver:self
                                                 selector:@selector(handleInterruption:)
                                                     name:AVAudioSessionInterruptionNotification
                                                   object:sessionInstance];

        [[NSNotificationCenter defaultCenter] addObserver:self
                                                 selector:@selector(handleRouteChange:)
                                                     name:AVAudioSessionRouteChangeNotification
                                                   object:sessionInstance];

        [[NSNotificationCenter defaultCenter] addObserver:self
                                                 selector:@selector(handleMediaServerReset:)
                                                     name:AVAudioSessionMediaServicesWereResetNotification
                                                   object:sessionInstance];

        [[AVAudioSession sharedInstance] setActive:YES error:&error];
    }
    catch (NSException *e) {
        NSLog(@"Error returned from setupAudioSession");
    }
    catch (...) {
        NSLog(@"Unknown error returned from setupAudioSession");
    }
}

- (void)setupIOUnit
{
    try {
        // Create a new instance of AURemoteIO with voice processing (AEC)
        AudioComponentDescription desc;
        desc.componentType         = kAudioUnitType_Output;
        desc.componentSubType      = kAudioUnitSubType_VoiceProcessingIO;
        desc.componentManufacturer = kAudioUnitManufacturer_Apple;
        desc.componentFlags        = 0;
        desc.componentFlagsMask    = 0;

        AudioComponent comp = AudioComponentFindNext(NULL, &desc);
        AudioComponentInstanceNew(comp, &_rioUnit);

        // Enable input (bus 1) and output (bus 0)
        UInt32 one = 1;
        AudioUnitSetProperty(_rioUnit, kAudioOutputUnitProperty_EnableIO, kAudioUnitScope_Input, 1, &one, sizeof(one));
        AudioUnitSetProperty(_rioUnit, kAudioOutputUnitProperty_EnableIO, kAudioUnitScope_Output, 0, &one, sizeof(one));

        // Set the client-side stream format on both buses
        CAStreamBasicDescription ioFormat = CAStreamBasicDescription(44100, 2, CAStreamBasicDescription::kPCMFormatFloat32, true);
        AudioUnitSetProperty(_rioUnit, kAudioUnitProperty_StreamFormat, kAudioUnitScope_Output, 1, &ioFormat, sizeof(ioFormat));
        AudioUnitSetProperty(_rioUnit, kAudioUnitProperty_StreamFormat, kAudioUnitScope_Input, 0, &ioFormat, sizeof(ioFormat));

        UInt32 maxFramesPerSlice = 4096;
        AudioUnitSetProperty(_rioUnit, kAudioUnitProperty_MaximumFramesPerSlice, kAudioUnitScope_Global, 0, &maxFramesPerSlice, sizeof(UInt32));

        UInt32 propSize = sizeof(UInt32);
        AudioUnitGetProperty(_rioUnit, kAudioUnitProperty_MaximumFramesPerSlice, kAudioUnitScope_Global, 0, &maxFramesPerSlice, &propSize);

        cd.rioUnit = _rioUnit;
        cd.audioChainIsBeingReconstructed = &_audioChainIsBeingReconstructed;

        // Set the render callback on AURemoteIO
        AURenderCallbackStruct renderCallback;
        renderCallback.inputProc       = performRender;
        renderCallback.inputProcRefCon = NULL;
        AudioUnitSetProperty(_rioUnit, kAudioUnitProperty_SetRenderCallback, kAudioUnitScope_Input, 0, &renderCallback, sizeof(renderCallback));

        // Initialize the AURemoteIO instance
        AudioUnitInitialize(_rioUnit);
        //if (err) NSLog(@"couldn't start AURemoteIO: %d", (int)err);
    }
    catch (NSException *e) {
        NSLog(@"Error returned from setupIOUnit");
    }
    catch (...) {
        NSLog(@"Unknown error returned from setupIOUnit");
    }
}

What could be causing this clipping?

Best answer

The samples in the callback's audio buffer should be SInt16, not float. Try casting accordingly (note that mBuffers is an array of AudioBuffer structs, so member access uses `.`, not `->`):

SInt16 *inputFrames = (SInt16 *)(ioData->mBuffers[0].mData);

Regarding "ios - Clipping with a kAudioUnitType_Output audio unit using kAudioUnitSubType_VoiceProcessingIO", a similar question was found on Stack Overflow: https://stackoverflow.com/questions/29702459/
