iphone - aurioTouch2 recording question: I need to append the data from one AudioBufferList to another


I am studying the aurioTouch2 sample code, but I want to record everything to a file, and aurioTouch does not provide that capability. I tried to record the data inside void FFTBufferManager::GrabAudioData(AudioBufferList *inBL) in FFTBufferManager.cpp with this code:

ExtAudioFileRef cafFile;
AudioStreamBasicDescription cafDesc;

cafDesc.mBitsPerChannel = 16;
cafDesc.mBytesPerFrame = 4;
cafDesc.mBytesPerPacket = 4;
cafDesc.mChannelsPerFrame = 2;
cafDesc.mFormatFlags = 0;
cafDesc.mFormatID = 'ima4';
cafDesc.mFramesPerPacket = 1;
cafDesc.mReserved = 0;
cafDesc.mSampleRate = 44100;


CFStringRef refH;
refH = CFStringCreateWithCString(kCFAllocatorDefault, "/var/mobile/Applications/BD596ECF-A6F2-41EB-B4CE-3A9644B1C26A/Documents/voice2.caff", kCFStringEncodingUTF8);
CFURLRef destinationURL = CFURLCreateWithFileSystemPath(kCFAllocatorDefault,
                                                        refH,
                                                        kCFURLPOSIXPathStyle,
                                                        false);
OSStatus status = ExtAudioFileCreateWithURL(
    destinationURL,            // inURL
    'caff',                    // inFileType
    &cafDesc,                  // inStreamDesc
    NULL,                      // inChannelLayout
    kAudioFileFlags_EraseFile, // inFlags
    &cafFile                   // outExtAudioFile
);                             // returns 0xFFFFFFCE (-50, kAudio_ParamError)
ExtAudioFileWrite(cafFile, mNumberFrames, inBL);

This works, but I only have the AudioBufferList *inBL, which holds just a small slice of the audio (roughly one second). The function is called about once per second to analyze the new audio data coming from the microphone. So it would be great if I could append the data from one AudioBufferList to another AudioBufferList.

Or maybe someone knows another way to do this.
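
Conceptually, what I'm after is something like the following sketch (a hypothetical helper that would live in FFTBufferManager.cpp; it assumes 16-bit interleaved PCM in the first buffer and a pre-allocated destination large enough to hold everything):

static void AppendAudioBufferList(const AudioBufferList *src,
                                  SInt16 *accumBuffer,     // pre-allocated destination
                                  size_t *accumSamples,    // samples already stored
                                  size_t  accumCapacity)   // destination capacity in samples
{
    const AudioBuffer *buf = &src->mBuffers[0];
    size_t samplesInSrc = buf->mDataByteSize / sizeof(SInt16);
    if (*accumSamples + samplesInSrc > accumCapacity)
        samplesInSrc = accumCapacity - *accumSamples;      // clamp rather than overflow
    memcpy(accumBuffer + *accumSamples, buf->mData, samplesInSrc * sizeof(SInt16));
    *accumSamples += samplesInSrc;
}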

Best Answer

You should set up a separate AudioUnit to record the audio (with its own callback function). In the snippet below, kInputBus and kOutputBus are the RemoteIO element numbers (1 = microphone input, 0 = speaker output), and kSampleRate, mAudioUnit and mAudioFileRef are assumed to be declared elsewhere in the class.

    OSStatus status;

// Describe audio component
AudioComponentDescription desc;
desc.componentType = kAudioUnitType_Output;
desc.componentSubType = kAudioUnitSubType_RemoteIO;
desc.componentFlags = 0;
desc.componentFlagsMask = 0;
desc.componentManufacturer = kAudioUnitManufacturer_Apple;

// Get component
AudioComponent inputComponent = AudioComponentFindNext(NULL, &desc);

// Get audio units
status = AudioComponentInstanceNew(inputComponent, &mAudioUnit);

// Enable IO for recording
UInt32 flag = 1;
status = AudioUnitSetProperty(mAudioUnit,
                              kAudioOutputUnitProperty_EnableIO,
                              kAudioUnitScope_Input,
                              kInputBus,
                              &flag,
                              sizeof(flag));

// Enable IO for playback
status = AudioUnitSetProperty(mAudioUnit,
                              kAudioOutputUnitProperty_EnableIO,
                              kAudioUnitScope_Output,
                              kOutputBus,
                              &flag,
                              sizeof(flag));

// Describe format
AudioStreamBasicDescription audioFormat={0};
audioFormat.mSampleRate = kSampleRate;
audioFormat.mFormatID = kAudioFormatLinearPCM;
audioFormat.mFormatFlags = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked;
audioFormat.mFramesPerPacket = 1;
audioFormat.mChannelsPerFrame = 1;
audioFormat.mBitsPerChannel = 16;
audioFormat.mBytesPerPacket = 2;
audioFormat.mBytesPerFrame = 2;

// Apply format
// (output scope of the input element = the format the unit delivers from the microphone)
status = AudioUnitSetProperty(mAudioUnit,
                              kAudioUnitProperty_StreamFormat,
                              kAudioUnitScope_Output,
                              kInputBus,
                              &audioFormat,
                              sizeof(audioFormat));
// (input scope of the output element = the format we supply for playback)
status = AudioUnitSetProperty(mAudioUnit,
                              kAudioUnitProperty_StreamFormat,
                              kAudioUnitScope_Input,
                              kOutputBus,
                              &audioFormat,
                              sizeof(audioFormat));


// Set input callback
AURenderCallbackStruct callbackStruct;
callbackStruct.inputProc = recordingCallback;
callbackStruct.inputProcRefCon = (__bridge void *)self;

status = AudioUnitSetProperty(mAudioUnit,
                              kAudioOutputUnitProperty_SetInputCallback,
                              kAudioUnitScope_Global,
                              kInputBus,
                              &callbackStruct,
                              sizeof(callbackStruct));

// Disable buffer allocation for the recorder (optional - do this if we want to pass in our own)
flag = 0;
status = AudioUnitSetProperty(mAudioUnit,
                              kAudioUnitProperty_ShouldAllocateBuffer,
                              kAudioUnitScope_Output,
                              kInputBus,
                              &flag,
                              sizeof(flag));

// Initialize the audio file
NSArray *paths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
NSString *documentsDirectory = [paths objectAtIndex:0];
NSString *destinationFilePath = [[NSString alloc] initWithFormat: @"%@/output.caf", documentsDirectory];
NSLog(@">>> %@\n", destinationFilePath);

CFURLRef destinationURL = CFURLCreateWithFileSystemPath(kCFAllocatorDefault, (__bridge CFStringRef)destinationFilePath, kCFURLPOSIXPathStyle, false);

OSStatus setupErr = ExtAudioFileCreateWithURL(destinationURL, kAudioFileCAFType, &audioFormat, NULL, kAudioFileFlags_EraseFile, &mAudioFileRef);
CFRelease(destinationURL);
NSAssert(setupErr == noErr, @"Couldn't create file for writing");

setupErr = ExtAudioFileSetProperty(mAudioFileRef, kExtAudioFileProperty_ClientDataFormat, sizeof(AudioStreamBasicDescription), &audioFormat);
NSAssert(setupErr == noErr, @"Couldn't create file for format");

setupErr = ExtAudioFileWriteAsync(mAudioFileRef, 0, NULL); // prime the async writer before the first real write from the callback
NSAssert(setupErr == noErr, @"Couldn't initialize write buffers for audio file");

CheckError(AudioUnitInitialize(mAudioUnit), "AudioUnitInitialize");
CheckError(AudioOutputUnitStart(mAudioUnit), "AudioOutputUnitStart");
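
The recordingCallback installed above is not shown in the snippet; a minimal sketch of what it could look like is below (MyRecorder, audioUnit and audioFileRef are hypothetical names for the object passed as inputProcRefCon and its accessors):

static OSStatus recordingCallback(void                       *inRefCon,
                                  AudioUnitRenderActionFlags *ioActionFlags,
                                  const AudioTimeStamp       *inTimeStamp,
                                  UInt32                      inBusNumber,
                                  UInt32                      inNumberFrames,
                                  AudioBufferList            *ioData)
{
    // inRefCon is the object passed as inputProcRefCon above.
    MyRecorder *recorder = (__bridge MyRecorder *)inRefCon;

    // Supply our own buffer for the rendered samples (mono, 16-bit, as configured above).
    SInt16 samples[inNumberFrames];
    AudioBufferList bufferList;
    bufferList.mNumberBuffers = 1;
    bufferList.mBuffers[0].mNumberChannels = 1;
    bufferList.mBuffers[0].mDataByteSize   = inNumberFrames * sizeof(SInt16);
    bufferList.mBuffers[0].mData           = samples;

    // Pull the microphone data out of the RemoteIO unit's input element.
    OSStatus status = AudioUnitRender(recorder.audioUnit,
                                      ioActionFlags,
                                      inTimeStamp,
                                      inBusNumber,
                                      inNumberFrames,
                                      &bufferList);
    if (status == noErr) {
        // Append the freshly captured frames to the CAF file opened earlier.
        ExtAudioFileWriteAsync(recorder.audioFileRef, inNumberFrames, &bufferList);
    }
    return status;
}

The callback pulls the microphone samples with AudioUnitRender and appends them to the CAF file with ExtAudioFileWriteAsync, which can be called from the render thread once it has been primed as shown above.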

A similar question about this iphone - aurioTouch2 recording problem (appending the data from one AudioBufferList to another) can be found on Stack Overflow: https://stackoverflow.com/questions/11471094/
