ios - Why does AudioOutputUnitStart() fail with return code -500?

Reposted · Author: 行者123 · Updated: 2023-11-29 12:30:16

I am trying to develop an app that analyses the audio stream from the microphone, using Core Audio and AVAudioSession on SDK 8.1. Using bits and pieces from Apple's documentation and helpful bloggers, I have managed to put together a program that:

  • Sets the properties of the AVAudioSession.
  • Activates the AVAudioSession.
  • Instantiates an audio component instance.
  • Sets properties on the audio unit, e.g. a callback function and an AudioStreamBasicDescription.
  • Initialises the audio unit.

But the program fails to start recording: the AudioOutputUnitStart function fails with return code -500. Why is that? [The following code runs as-is if you save it as ViewController.m in a single-view Xcode template. It is not minimal, but I have tried to keep it as small as possible.]

#import "ViewController.h"
@import AVFoundation;
@import AudioUnit;

#define kInputBus 1

AudioComponentInstance *audioUnit = NULL;
float *convertedSampleBuffer = NULL;
int status = 0;

static OSStatus recordingCallback(void *inRefCon,
                                  AudioUnitRenderActionFlags *ioActionFlags,
                                  const AudioTimeStamp *inTimeStamp,
                                  UInt32 inBusNumber,
                                  UInt32 inNumberFrames,
                                  AudioBufferList *ioData) {
    AudioBufferList *bufferList;
    OSStatus status;
    status = AudioUnitRender(*audioUnit,
                             ioActionFlags,
                             inTimeStamp,
                             inBusNumber,
                             inNumberFrames,
                             bufferList);
    printf("%d is the return code of AudioUnitRender from the recordingCallback.\n", status);
    // DoStuffWithTheRecordedAudio(bufferList);
    return noErr;
}

int myAudio() {
    AVAudioSession *mySession = [AVAudioSession sharedInstance];
    [mySession setCategory:AVAudioSessionCategoryRecord error:nil];
    [mySession setMode:AVAudioSessionModeMeasurement error:nil];
    [mySession setPreferredSampleRate:44100 error:nil];
    [mySession setPreferredIOBufferDuration:0.02 error:nil];
    [mySession setActive:YES error:nil];

    audioUnit = (AudioUnit *)malloc(sizeof(AudioUnit));

    AudioComponentDescription desc;
    desc.componentType = kAudioUnitType_Output;
    desc.componentSubType = kAudioUnitSubType_RemoteIO;
    desc.componentManufacturer = kAudioUnitManufacturer_Apple;
    desc.componentFlags = 0;
    desc.componentFlagsMask = 0;

    AudioComponent comp = AudioComponentFindNext(NULL, &desc);
    status = AudioComponentInstanceNew(comp, audioUnit);
    printf("%d is the return code of instantiating a new audio component instance.\n", status);

    UInt32 enable = 1;
    status = AudioUnitSetProperty(*audioUnit, kAudioOutputUnitProperty_EnableIO, kAudioUnitScope_Input, kInputBus, &enable, sizeof(enable));
    printf("%d is the return code of enabling IO on the audio unit.\n", status);

    AudioStreamBasicDescription streamDescription = {0};
    streamDescription.mSampleRate = 44100;
    streamDescription.mFormatID = kAudioFormatLinearPCM;
    streamDescription.mFormatFlags = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked;
    streamDescription.mFramesPerPacket = 1;
    streamDescription.mChannelsPerFrame = 1;
    streamDescription.mBitsPerChannel = 16;
    streamDescription.mBytesPerPacket = 2;
    streamDescription.mBytesPerFrame = 2;
    status = AudioUnitSetProperty(*audioUnit, kAudioUnitProperty_StreamFormat, kAudioUnitScope_Input, 0, &streamDescription, sizeof(streamDescription));
    printf("%d is the return code of setting the AudioStreamBasicDescription.\n", status);

    AURenderCallbackStruct callbackStruct;
    callbackStruct.inputProc = recordingCallback;
    callbackStruct.inputProcRefCon = NULL;
    status = AudioUnitSetProperty(*audioUnit,
                                  kAudioOutputUnitProperty_SetInputCallback,
                                  kAudioUnitScope_Global,
                                  kInputBus,
                                  &callbackStruct,
                                  sizeof(callbackStruct));
    printf("%d is the return code of setting the recording callback on the audio unit.\n", status);

    status = AudioUnitInitialize(*audioUnit);
    printf("%d is the return code of initializing the audio unit.\n", status);

    status = AudioOutputUnitStart(*audioUnit);
    printf("%d is the return code of starting the audio unit.\n", status);
    return noErr;
}

@interface ViewController ()
@end

@implementation ViewController

- (void)viewDidLoad {
    [super viewDidLoad];
    // Do any additional setup after loading the view, typically from a nib.
    myAudio();
    [NSThread sleepForTimeInterval:1];
    exit(0);
}

- (void)didReceiveMemoryWarning {
    [super didReceiveMemoryWarning];
    // Dispose of any resources that can be recreated.
}

@end

All the printf statements print 0, except the last one, which prints -500.
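Many Core Audio status codes are four printable characters packed into a signed 32-bit integer, while others (like the -500 seen here) are plain numbers. A small helper makes logs easier to read; this is a sketch in plain C with a stand-in typedef for OSStatus so it compiles off-device:

```c
#include <stdio.h>
#include <ctype.h>
#include <stdint.h>

/* Stand-in for OSStatus (an SInt32) so this sketch compiles off-device. */
typedef int32_t OSStatus;

/* Print a status either as a four-character code (e.g.
   kAudioFormatUnsupportedDataFormatError is 'fmt?') or, when the four
   bytes are not all printable (as with -500), as a decimal number. */
static void describe_status(OSStatus status, char *out, size_t outlen) {
    uint32_t u = (uint32_t)status;
    unsigned char c[4] = {
        (unsigned char)(u >> 24), (unsigned char)(u >> 16),
        (unsigned char)(u >> 8),  (unsigned char)u
    };
    if (isprint(c[0]) && isprint(c[1]) && isprint(c[2]) && isprint(c[3]))
        snprintf(out, outlen, "'%c%c%c%c'", c[0], c[1], c[2], c[3]);
    else
        snprintf(out, outlen, "%d", (int)status);
}
```

Calling this from the printf lines above would show immediately whether a failure is one of the named four-character errors or a bare numeric code like -500.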

Best Answer

The code provided in the question has two errors:

  1. The audio session should be created after the audio unit has been started.
  2. As @jn_pdx pointed out, bufferList must be allocated.
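The second fix, sizing and allocating the AudioBufferList before handing it to AudioUnitRender, can be sketched in plain C. The struct definitions below are minimal stand-ins for the real CoreAudio types (declared in CoreAudio/CoreAudioTypes.h), redefined here only so the sketch compiles off-device; the mono 16-bit layout matches the format used in this answer:

```c
#include <stdlib.h>
#include <stdint.h>

/* Minimal stand-ins for the CoreAudio types. For a single buffer the
   base struct size is enough, since mBuffers already holds one entry. */
typedef struct {
    uint32_t mNumberChannels;
    uint32_t mDataByteSize;
    void    *mData;
} AudioBuffer;

typedef struct {
    uint32_t    mNumberBuffers;
    AudioBuffer mBuffers[1];
} AudioBufferList;

/* Allocate a one-buffer list for `frames` mono 16-bit samples. Handing
   AudioUnitRender an unallocated list -- as the question's callback
   does -- is undefined behaviour. */
static AudioBufferList *alloc_buffer_list(uint32_t frames) {
    AudioBufferList *list = malloc(sizeof(AudioBufferList));
    if (list == NULL) return NULL;
    list->mNumberBuffers = 1;
    list->mBuffers[0].mNumberChannels = 1;
    list->mBuffers[0].mDataByteSize = frames * (uint32_t)sizeof(int16_t);
    list->mBuffers[0].mData = calloc(frames, sizeof(int16_t));
    return list;
}

static void free_buffer_list(AudioBufferList *list) {
    if (list == NULL) return;
    free(list->mBuffers[0].mData);
    free(list);
}
```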

Here is a complete ViewController.m that records, and can also play back if the commented-out blocks are activated (if playback is your goal, also activate the audio session block and change the category from Record to PlayAndRecord). Thanks to Michael Tyson for posting the foundation I built on.

#import "ViewController.h"

@import AVFoundation;
@import AudioUnit;

#define kOutputBus 0
#define kInputBus 1

@interface ViewController ()
@end

@implementation ViewController

static OSStatus recordingCallback(void *inRefCon,
                                  AudioUnitRenderActionFlags *ioActionFlags,
                                  const AudioTimeStamp *inTimeStamp,
                                  UInt32 inBusNumber,
                                  UInt32 inNumberFrames,
                                  AudioBufferList *ioData) {

    // This works, but can be optimised by moving the allocation out of the
    // callback function and reusing a single buffer.
    AudioBufferList *bufferList = (AudioBufferList *)malloc(sizeof(AudioBufferList) + sizeof(AudioBuffer));
    bufferList->mNumberBuffers = 1;
    bufferList->mBuffers[0].mNumberChannels = 1;
    bufferList->mBuffers[0].mDataByteSize = 1024 * 2;
    bufferList->mBuffers[0].mData = calloc(1024, 2);

    OSStatus status = AudioUnitRender((AudioUnit)inRefCon,
                                      ioActionFlags,
                                      inTimeStamp,
                                      inBusNumber,
                                      inNumberFrames,
                                      bufferList);
    if (status != noErr) {
        NSLog(@"Error %d", (int)status);
    } else {
        NSLog(@"No Errors!");
        printf("%d, ", (int)*((SInt16 *)bufferList->mBuffers[0].mData));
    }

    // Now the samples we just read are sitting in the buffers in bufferList.
    // DoStuffWithTheRecordedAudio(bufferList);

    // Free the per-callback allocation so the callback does not leak.
    free(bufferList->mBuffers[0].mData);
    free(bufferList);
    return noErr;
}

//static OSStatus playbackCallback(void *inRefCon,
//                                 AudioUnitRenderActionFlags *ioActionFlags,
//                                 const AudioTimeStamp *inTimeStamp,
//                                 UInt32 inBusNumber,
//                                 UInt32 inNumberFrames,
//                                 AudioBufferList *ioData) {
//    // Note: ioData contains buffers (there may be more than one!).
//    // Fill them up as much as you can. Remember to set the size value in
//    // each buffer to match how much data is in the buffer.
//    return noErr;
//}

- (void)myAudio {
    OSStatus status;
    AudioComponentInstance audioUnit;

    // Describe the audio component
    AudioComponentDescription desc;
    desc.componentType = kAudioUnitType_Output;
    desc.componentSubType = kAudioUnitSubType_RemoteIO;
    desc.componentFlags = 0;
    desc.componentFlagsMask = 0;
    desc.componentManufacturer = kAudioUnitManufacturer_Apple;

    // Get the component
    AudioComponent inputComponent = AudioComponentFindNext(NULL, &desc);

    // Get the audio unit
    status = AudioComponentInstanceNew(inputComponent, &audioUnit);

    // Enable IO for recording
    UInt32 flag = 1;
    status = AudioUnitSetProperty(audioUnit,
                                  kAudioOutputUnitProperty_EnableIO,
                                  kAudioUnitScope_Input,
                                  kInputBus,
                                  &flag,
                                  sizeof(flag));

    //// Enable IO for playback
    //status = AudioUnitSetProperty(audioUnit,
    //                              kAudioOutputUnitProperty_EnableIO,
    //                              kAudioUnitScope_Output,
    //                              kOutputBus,
    //                              &flag,
    //                              sizeof(flag));

    // Describe the format
    AudioStreamBasicDescription audioFormat;
    audioFormat.mSampleRate = 44100.00;
    audioFormat.mFormatID = kAudioFormatLinearPCM;
    audioFormat.mFormatFlags = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked;
    audioFormat.mFramesPerPacket = 1;
    audioFormat.mChannelsPerFrame = 1;
    audioFormat.mBitsPerChannel = 16;
    audioFormat.mBytesPerPacket = 2;
    audioFormat.mBytesPerFrame = 2;

    // Apply the format
    status = AudioUnitSetProperty(audioUnit,
                                  kAudioUnitProperty_StreamFormat,
                                  kAudioUnitScope_Output,
                                  kInputBus,
                                  &audioFormat,
                                  sizeof(audioFormat));
    //status = AudioUnitSetProperty(audioUnit,
    //                              kAudioUnitProperty_StreamFormat,
    //                              kAudioUnitScope_Input,
    //                              kOutputBus,
    //                              &audioFormat,
    //                              sizeof(audioFormat));

    // Set the input callback
    AURenderCallbackStruct callbackStruct;
    callbackStruct.inputProc = recordingCallback;
    callbackStruct.inputProcRefCon = audioUnit;
    status = AudioUnitSetProperty(audioUnit,
                                  kAudioOutputUnitProperty_SetInputCallback,
                                  kAudioUnitScope_Global,
                                  kInputBus,
                                  &callbackStruct,
                                  sizeof(callbackStruct));

    //// Set the output callback
    //callbackStruct.inputProc = playbackCallback;
    //callbackStruct.inputProcRefCon = audioUnit;
    //status = AudioUnitSetProperty(audioUnit,
    //                              kAudioUnitProperty_SetRenderCallback,
    //                              kAudioUnitScope_Global,
    //                              kOutputBus,
    //                              &callbackStruct,
    //                              sizeof(callbackStruct));

    // Initialise
    status = AudioUnitInitialize(audioUnit);

    // Start
    status = AudioOutputUnitStart(audioUnit);
    NSLog(@"Starting returned code %d", (int)status);

    // // It is not necessary to have a session, but if you have one, it must
    // // come after the setup of the audio unit.
    // NSError *error = nil;
    // // Configure & activate the audio session
    // AVAudioSession *session = [AVAudioSession sharedInstance];
    // if (![session setCategory:AVAudioSessionCategoryRecord error:&error]) NSLog(@"Error configuring session category: %@", error);
    // if (![session setMode:AVAudioSessionModeMeasurement error:&error]) NSLog(@"Error configuring session mode: %@", error);
    // if (![session setActive:YES error:&error]) NSLog(@"Error activating audio session: %@", error);
    //
    // NSLog(@"Session activated. Sample rate %f", session.sampleRate);
    // NSLog(@"Number of channels %ld", (long)session.inputNumberOfChannels);
}

- (void)viewDidLoad {
    [super viewDidLoad];
    // Do any additional setup after loading the view, typically from a nib.
    [self myAudio];
    [NSThread sleepForTimeInterval:1];
    exit(0);
}

- (void)didReceiveMemoryWarning {
    [super didReceiveMemoryWarning];
    // Dispose of any resources that can be recreated.
}

@end
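A frequent source of Core Audio setup errors is an internally inconsistent AudioStreamBasicDescription. For packed, interleaved linear PCM the byte-count fields are derived, not chosen freely; this plain-C sketch (using a stand-in struct for the relevant ASBD fields) derives them the way the 44.1 kHz mono 16-bit format above is filled in:

```c
#include <stdint.h>

/* Stand-in holding the relevant AudioStreamBasicDescription fields. */
typedef struct {
    double   mSampleRate;
    uint32_t mFramesPerPacket;
    uint32_t mChannelsPerFrame;
    uint32_t mBitsPerChannel;
    uint32_t mBytesPerFrame;
    uint32_t mBytesPerPacket;
} PcmFormat;

/* For packed, interleaved linear PCM: bytes per frame =
   channels * (bits / 8), and with one frame per packet, bytes per
   packet equals bytes per frame. */
static PcmFormat make_pcm(double sampleRate, uint32_t channels, uint32_t bits) {
    PcmFormat f = {0};
    f.mSampleRate = sampleRate;
    f.mFramesPerPacket = 1;   /* always 1 for linear PCM */
    f.mChannelsPerFrame = channels;
    f.mBitsPerChannel = bits;
    f.mBytesPerFrame = channels * (bits / 8);
    f.mBytesPerPacket = f.mBytesPerFrame * f.mFramesPerPacket;
    return f;
}
```

Deriving the fields this way, rather than hard-coding each one, avoids the mismatches that make AudioUnitSetProperty or AudioUnitInitialize reject the format.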

Regarding "ios - Why does AudioOutputUnitStart() fail with return code -500?", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/27892712/
