
ios - Error performing SFSpeechAudioBufferRecognitionRequest Domain=kAFAssistantErrorDomain Code=216


I am getting an error while implementing SFSpeechAudioBufferRecognitionRequest in Objective-C. Here is my code. It was working until a day ago. The error is Domain=kAFAssistantErrorDomain Code=216 "(null)".

- (void)startListening {

    // Initialize the AVAudioEngine
    audioEngine = [[AVAudioEngine alloc] init];

    // Make sure there's not a recognition task already running
    if (recognitionTask) {
        [recognitionTask cancel];
        recognitionTask = nil;
    }

    // Starts an AVAudio Session
    NSError *error;
    AVAudioSession *audioSession = [AVAudioSession sharedInstance];
    [audioSession setCategory:AVAudioSessionCategoryPlayAndRecord error:&error];
    [audioSession setActive:YES withOptions:AVAudioSessionSetActiveOptionNotifyOthersOnDeactivation error:&error];

    // Starts a recognition process, in the block it logs the input or stops the audio
    // process if there's an error.
    recognitionRequest = [[SFSpeechAudioBufferRecognitionRequest alloc] init];
    inputNode = audioEngine.inputNode;
    recognitionRequest.shouldReportPartialResults = NO;
    recognitionRequest.taskHint = SFSpeechRecognitionTaskHintDictation;
    [self startWaveAudio];

    // Sets the recording format
    AVAudioFormat *recordingFormat = [inputNode outputFormatForBus:0];
    [inputNode installTapOnBus:0 bufferSize:4096 format:recordingFormat block:^(AVAudioPCMBuffer * _Nonnull buffer, AVAudioTime * _Nonnull when) {
        [recognitionRequest appendAudioPCMBuffer:buffer];
    }];

    // Starts the audio engine, i.e. it starts listening.
    [audioEngine prepare];
    [audioEngine startAndReturnError:&error];

    __block BOOL isFinal = NO;

    recognitionTask = [speechRecognizer recognitionTaskWithRequest:recognitionRequest resultHandler:^(SFSpeechRecognitionResult * _Nullable result, NSError * _Nullable error) {

        [self stopWaveAudio];

        if (result) {
            // Whatever you say in the microphone after pressing the button should be being logged
            // in the console.
            NSLog(@"RESULT:%@", result.bestTranscription.formattedString);

            for (SFTranscription *tra in result.transcriptions) {
                NSLog(@"Multiple Results : %@", tra.formattedString);
            }

            if (isFinal == NO) {
                [self calculateResultOfSpeechWithResultString:result.bestTranscription.formattedString];
            }
            isFinal = !result.isFinal;
        }
        if (error || isFinal) {
            NSLog(@"Error Description : %@", error);
            [self stopRecording];
        }
    }];
}

- (IBAction)tap2TlkBtnPrsd:(UIButton *)sender {
    if (audioEngine.isRunning) {
        [self stopRecording];
    } else {
        [self startListening];
    }

    isMicOn = !isMicOn;
    micPrompt = NO;
}

- (void)stopRecording {

    // dispatch_async(dispatch_get_main_queue(), ^{

    if (audioEngine.isRunning) {
        [inputNode removeTapOnBus:0];
        [inputNode reset];
        [audioEngine stop];
        [recognitionRequest endAudio];
        [recognitionTask cancel];
        recognitionTask = nil;
        recognitionRequest = nil;
    }
    // });
}

I have also tried different approaches, such as appending the audio buffers only after starting the recognition task, as sketched below.
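For reference, a minimal sketch of that ordering, assuming the same instance variables (audioEngine, inputNode, speechRecognizer, recognitionRequest, recognitionTask) and the same audio session setup as in startListening above; the recognition task is created first, and the tap that appends buffers is installed afterwards:

// Sketch only: assumes the ivars and session setup from startListening above.
// The recognition task is started BEFORE the tap begins appending buffers.
NSError *error = nil;
recognitionRequest = [[SFSpeechAudioBufferRecognitionRequest alloc] init];
recognitionRequest.shouldReportPartialResults = NO;

recognitionTask = [speechRecognizer recognitionTaskWithRequest:recognitionRequest resultHandler:^(SFSpeechRecognitionResult * _Nullable result, NSError * _Nullable error) {
    if (result) {
        NSLog(@"RESULT:%@", result.bestTranscription.formattedString);
    }
    if (error || result.isFinal) {
        [self stopRecording];
    }
}];

// The tap is installed after the task exists, so every buffer it delivers
// is appended to an already-running request.
AVAudioFormat *recordingFormat = [inputNode outputFormatForBus:0];
[inputNode installTapOnBus:0 bufferSize:4096 format:recordingFormat block:^(AVAudioPCMBuffer * _Nonnull buffer, AVAudioTime * _Nonnull when) {
    [recognitionRequest appendAudioPCMBuffer:buffer];
}];

[audioEngine prepare];
[audioEngine startAndReturnError:&error];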

If possible, can anyone tell me how I can achieve a scenario where the user spells out a word and the result is only that word?

Best Answer

I was getting Error=216 as well when cancelling the recognition task. The isFinal property of SFSpeechRecognitionResult only becomes true when the recognizer decides the speaker has finished. So the first time you execute isFinal = !result.isFinal;, result.isFinal is NO, your isFinal flag becomes YES, the block then calls stopRecording(), and stopRecording cancels the task with [recognitionTask cancel]; — which is what raises the error.
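To illustrate the inverted flag described above (a minimal sketch, not the accepted fix shown below): if you wanted to keep the isFinal pattern, the flag would have to mirror result.isFinal rather than invert it, so stopRecording (and the cancel inside it) only runs once the recognizer has actually finished:

// Illustration only, inside the result handler from the question:
// isFinal = !result.isFinal;   // wrong: becomes YES after the first non-final result
isFinal = result.isFinal;       // mirrors the recognizer's own state
if (error || isFinal) {
    // This branch now runs only on a real error or a genuinely final result,
    // so the task is not cancelled mid-recognition (the cause of Code=216 above).
    [self stopRecording];
}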

So, if you only want the first transcription (the single word), you can read the substring property of the first segment of bestTranscription and then call [recognitionTask finish];.

...
if (result) {
    // First transcription
    NSLog(@"RESULT:%@", [result.bestTranscription.segments.firstObject substring]);
    [recognitionTask finish];
    [self stopRecording];
}
if (error) {
    NSLog(@"Error Description : %@", error);
    [recognitionTask cancel];
    [self stopRecording];
}
...
- (void)stopRecording {

    if (audioEngine.isRunning) {
        [inputNode removeTapOnBus:0];
        [inputNode reset];
        [audioEngine stop];
        [recognitionRequest endAudio];
        recognitionTask = nil;
        recognitionRequest = nil;
    }
}

Regarding "ios - Error performing SFSpeechAudioBufferRecognitionRequest Domain=kAFAssistantErrorDomain Code=216", a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/52422192/
