
ios - AudioUnit input missing periodic samples


I have implemented an AUGraph containing a single AudioUnit that handles IO from the microphone and to the headphones. The problem I am running into is that blocks of audio input are missing.

I believe the samples are being lost during the swap between the hardware and software buffers. I tried slowing the iPhone's sample rate down from 44.1 kHz to 20 kHz to see whether that would give me the missing data, but it did not produce the output I expected.
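Lowering the hardware rate is typically requested through the audio session rather than through the ASBD alone. A minimal sketch of that kind of configuration (my own assumption of how it might be done, with illustrative values rather than code from the question) might look like this:

// Sketch only: asking the audio session for a rate and IO buffer size before
// starting the AUGraph. The values here are illustrative, not the question's.
#import <AVFoundation/AVFoundation.h>

NSError *sessionError = nil;
AVAudioSession *session = [AVAudioSession sharedInstance];
[session setCategory:AVAudioSessionCategoryPlayAndRecord error:&sessionError];
[session setPreferredSampleRate:44100.0 error:&sessionError];     // e.g. 22050.0 to test a lower rate
[session setPreferredIOBufferDuration:0.005 error:&sessionError]; // roughly 5 ms of frames per callback
[session setActive:YES error:&sessionError];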

The AUGraph is set up as follows:

// Audio component description
AudioComponentDescription desc;
bzero(&desc, sizeof(AudioComponentDescription));
desc.componentType = kAudioUnitType_Output;
desc.componentSubType = kAudioUnitSubType_RemoteIO;
desc.componentManufacturer = kAudioUnitManufacturer_Apple;
desc.componentFlags = 0;
desc.componentFlagsMask = 0;

// Stereo ASBD
AudioStreamBasicDescription stereoStreamFormat;
bzero(&stereoStreamFormat, sizeof(AudioStreamBasicDescription));
stereoStreamFormat.mSampleRate = kSampleRate;
stereoStreamFormat.mFormatID = kAudioFormatLinearPCM;
stereoStreamFormat.mFormatFlags = kAudioFormatFlagsCanonical;
stereoStreamFormat.mBytesPerPacket = 4;
stereoStreamFormat.mBytesPerFrame = 4;
stereoStreamFormat.mFramesPerPacket = 1;
stereoStreamFormat.mChannelsPerFrame = 2;
stereoStreamFormat.mBitsPerChannel = 16;

OSErr err = noErr;
@try {
    // Create new AUGraph
    err = NewAUGraph(&auGraph);
    NSAssert1(err == noErr, @"Error creating AUGraph: %hd", err);

    // Add node to AUGraph
    err = AUGraphAddNode(auGraph,
                         &desc,
                         &ioNode);
    NSAssert1(err == noErr, @"Error adding AUNode: %hd", err);

    // Open AUGraph
    err = AUGraphOpen(auGraph);
    NSAssert1(err == noErr, @"Error opening AUGraph: %hd", err);

    // Get the IO unit from the AUGraph node
    err = AUGraphNodeInfo(auGraph,
                          ioNode,
                          &desc,
                          &_ioUnit);
    NSAssert1(err == noErr, @"Error getting node info from AUGraph: %hd", err);

    // Enable input, which is disabled by default.
    UInt32 enabled = 1;
    err = AudioUnitSetProperty(_ioUnit,
                               kAudioOutputUnitProperty_EnableIO,
                               kAudioUnitScope_Input,
                               kInputBus,
                               &enabled,
                               sizeof(enabled));
    NSAssert1(err == noErr, @"Error enabling input: %hd", err);

    // Apply format to input of ioUnit
    err = AudioUnitSetProperty(_ioUnit,
                               kAudioUnitProperty_StreamFormat,
                               kAudioUnitScope_Input,
                               kOutputBus,
                               &stereoStreamFormat,
                               sizeof(stereoStreamFormat));
    NSAssert1(err == noErr, @"Error setting input ASBD: %hd", err);

    // Apply format to output of ioUnit
    err = AudioUnitSetProperty(_ioUnit,
                               kAudioUnitProperty_StreamFormat,
                               kAudioUnitScope_Output,
                               kInputBus,
                               &stereoStreamFormat,
                               sizeof(stereoStreamFormat));
    NSAssert1(err == noErr, @"Error setting output ASBD: %hd", err);

    // Set hardware IO callback
    AURenderCallbackStruct callbackStruct;
    callbackStruct.inputProc = hardwareIOCallback;
    callbackStruct.inputProcRefCon = (__bridge void *)(self);
    err = AUGraphSetNodeInputCallback(auGraph,
                                      ioNode,
                                      kOutputBus,
                                      &callbackStruct);
    NSAssert1(err == noErr, @"Error setting IO callback: %hd", err);

    // Initialize AUGraph
    err = AUGraphInitialize(auGraph);
    NSAssert1(err == noErr, @"Error initializing AUGraph: %hd", err);

    // Start the AUGraph
    err = AUGraphStart(auGraph);
    NSAssert1(err == noErr, @"Error starting AUGraph: %hd", err);

}
@catch (NSException *exception) {
    NSLog(@"Failed with exception: %@", exception);
}

Here kOutputBus is defined as 0, kInputBus as 1, and kSampleRate as 44100; a minimal sketch of those definitions is shown next, followed by the IO callback.
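For reference, the constants would be declared along these lines (the names and values come from the sentence above; the #define form itself is my assumption):

#define kOutputBus  0        // RemoteIO output element (speaker / headphones)
#define kInputBus   1        // RemoteIO input element (microphone)
#define kSampleRate 44100    // hardware sample rate in Hz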

IO Callback

static OSStatus hardwareIOCallback(void                       *inRefCon,
                                   AudioUnitRenderActionFlags *ioActionFlags,
                                   const AudioTimeStamp       *inTimeStamp,
                                   UInt32                      inBusNumber,
                                   UInt32                      inNumberFrames,
                                   AudioBufferList            *ioData) {
    // Scope reference to GSFSensorIOController class
    GSFSensorIOController *sensorIO = (__bridge GSFSensorIOController *) inRefCon;

    // Grab the samples and place them in the buffer list
    AudioUnit ioUnit = sensorIO.ioUnit;

    OSStatus result = AudioUnitRender(ioUnit,
                                      ioActionFlags,
                                      inTimeStamp,
                                      kInputBus,
                                      inNumberFrames,
                                      ioData);

    if (result != noErr) NSLog(@"Blowing it in interrupt");

    // Process input data
    [sensorIO processIO:ioData];

    // Set up power tone attributes
    float freq = 20000.00f;
    float sampleRate = kSampleRate;
    float phase = sensorIO.sinPhase;
    float sinSignal;

    double phaseInc = 2 * M_PI * freq / sampleRate;

    // Write to output buffers
    for (size_t i = 0; i < ioData->mNumberBuffers; ++i) {
        AudioBuffer buffer = ioData->mBuffers[i];
        for (size_t sampleIdx = 0; sampleIdx < inNumberFrames; ++sampleIdx) {
            // Grab sample buffer
            SInt16 *sampleBuffer = buffer.mData;

            // Generate power tone on left channel
            sinSignal = sin(phase);
            sampleBuffer[2 * sampleIdx] = (SInt16)((sinSignal * 32767.0f) / 2);

            // Write commands to the micro on the right channel as necessary
            if (sensorIO.newDataOut)
                sampleBuffer[2 * sampleIdx + 1] = (SInt16)((sinSignal * 32767.0f) / 2);
            else
                sampleBuffer[2 * sampleIdx + 1] = 0;

            phase += phaseInc;
            if (phase >= 2 * M_PI * freq) {
                phase -= (2 * M_PI * freq);
            }
        }
    }

    // Store sine wave phase for next callback
    sensorIO.sinPhase = phase;

    return result;
}

The processIO function called from hardwareIOCallback is meant to process the input and build a response for the output. For debugging purposes I currently just have it push every sample of the input buffer into an NSMutableArray.

Process IO

- (void) processIO: (AudioBufferList*) bufferList {
    for (int j = 0; j < bufferList->mNumberBuffers; j++) {
        AudioBuffer sourceBuffer = bufferList->mBuffers[j];
        SInt16 *buffer = (SInt16 *) bufferList->mBuffers[j].mData;

        for (int i = 0; i < (sourceBuffer.mDataByteSize / sizeof(sourceBuffer)); i++) {
            // DEBUG: Array of raw data points for printing to a file
            [self.rawInputData addObject:[NSNumber numberWithInt:buffer[i]]];
        }
    }
}

Then, after stopping the AUGraph and collecting all of the samples in the rawInputData array, I write its contents to a file (a rough sketch of that dump follows), open the file in MATLAB, and plot it. There I can see that the audio input is missing data (circled in red in the image below).
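The dump itself is not shown in the question; a minimal sketch of one way to do it (the output path and the one-value-per-line format are placeholders of mine) would be:

// Sketch: writing the collected samples out as one value per line for MATLAB.
// self.rawInputData is the NSMutableArray filled in processIO; the path is a placeholder.
NSString *dump = [self.rawInputData componentsJoinedByString:@"\n"];
NSString *path = [NSTemporaryDirectory() stringByAppendingPathComponent:@"raw_input.txt"];
NSError *writeError = nil;
[dump writeToFile:path atomically:YES encoding:NSUTF8StringEncoding error:&writeError];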

[Image: Missing Data]

I am not sure how to fix this, and I could really use some help understanding and resolving the problem.

Best Answer

Your callback is probably too slow. It is generally not recommended to use any Objective-C methods inside an Audio Unit callback (for example, adding to a mutable array, or anything else that can allocate memory).
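One way to act on that advice (a sketch of mine, not part of the answer; the buffer name, its capacity, and the idea of copying into the NSMutableArray only after AUGraphStop are all assumptions) is to memcpy the rendered samples into a preallocated plain-C buffer inside the callback, and touch Objective-C objects only off the audio thread:

#include <string.h>   // memcpy

// Sketch only: a preallocated capture buffer so the render callback performs
// no Objective-C calls and no allocation. Capacity and names are illustrative.
#define kCaptureCapacity (44100 * 30)        // room for roughly 30 s of SInt16 samples

static SInt16  gCaptureBuffer[kCaptureCapacity];
static int32_t gCaptureCount = 0;            // samples written so far

// Call this from hardwareIOCallback in place of [sensorIO processIO:ioData]
static void CaptureSamples(AudioBufferList *ioData) {
    for (UInt32 b = 0; b < ioData->mNumberBuffers; ++b) {
        const SInt16 *src = (const SInt16 *)ioData->mBuffers[b].mData;
        int32_t n = (int32_t)(ioData->mBuffers[b].mDataByteSize / sizeof(SInt16));
        int32_t space = kCaptureCapacity - gCaptureCount;
        if (n > space) n = space;            // drop whatever no longer fits
        memcpy(gCaptureBuffer + gCaptureCount, src, (size_t)n * sizeof(SInt16));
        gCaptureCount += n;
    }
}

After AUGraphStop, the contents of gCaptureBuffer can be copied into rawInputData (or written straight to the file) on the main thread, where allocation is harmless.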

Regarding "ios - AudioUnit input missing periodic samples", the original question can be found on Stack Overflow: https://stackoverflow.com/questions/23377624/
