
Android continuous speech recognition returns ERROR_NO_MATCH too quickly


I am trying to implement a continuous SpeechRecognition mechanism. When I start speech recognition, I get the following messages in logcat:

06-05 12:22:32.892 11753-11753/com.aaa.bbb D/SpeechManager: startSpeechRecognition: 
06-05 12:22:33.022 11753-11753/com.aaa.bbb D/SpeechManager: onError: Error 7
06-05 12:22:33.352 11753-11753/com.aaa.bbb D/SpeechManager: onReadyForSpeech:
06-05 12:22:33.792 11753-11753/com.aaa.bbb D/SpeechManager: onBeginningOfSpeech: Beginning
06-05 12:22:34.492 11753-11753/com.aaa.bbb D/SpeechManager: onEndOfSpeech: Ending
06-05 12:22:34.612 11753-11753/com.aaa.bbb D/SpeechManager: onError: Error 7

Error 7 is ERROR_NO_MATCH. As you can see, it is reported almost immediately. Isn't that incorrect behaviour?

Here is the full log between startSpeechRecognition and the first error 7:

06-05 12:22:32.892 11753-11753/com.aaa.bbb D/SpeechManager: startSpeechRecognition: 
06-05 12:22:32.932 4600-4600/? I/GRecognitionServiceImpl: #startListening [en-US]

--------- beginning of system
06-05 12:22:32.932 3510-7335/? V/AlarmManager: remove PendingIntent] PendingIntent{6307291: PendingIntentRecord{2af25f6 com.google.android.googlequicksearchbox startService}}
06-05 12:22:32.932 4600-4600/? W/LocationOracle: Best location was null
06-05 12:22:32.932 3510-4511/? D/AudioService: getStreamVolume 3 index 90
06-05 12:22:32.942 3510-7335/? D/SensorService: SensorEventConnection::SocketBufferSize, SystemSocketBufferSize - 102400, 2097152
06-05 12:22:32.942 3510-7360/? D/Sensors: requested delay = 66667000, modified delay = 0
06-05 12:22:32.942 3510-7360/? I/Sensors: Proximity old sensor_state 16384, new sensor_state : 16512 en : 1
06-05 12:22:32.952 4600-4600/? D/SensorManager: registerListener :: 5, TMD4903 Proximity Sensor, 66667, 0,
06-05 12:22:32.952 4600-11932/? D/SensorManager: Proximity, val = 8.0 [far]
06-05 12:22:32.952 3510-5478/? I/Sensors: Acc old sensor_state 16512, new sensor_state : 16513 en : 1
06-05 12:22:32.952 3510-4705/? I/Sensors: Mag old sensor_state 16513, new sensor_state : 16529 en : 1
06-05 12:22:32.952 3510-4037/? I/AppOps: sendInfoToFLP, code=41 , uid=10068 , packageName=com.google.android.googlequicksearchbox , type=startOp
06-05 12:22:32.962 3510-4511/? D/SensorService: GravitySensor2 setDelay ns = 66667000 mindelay = 66667000
06-05 12:22:32.962 3510-4511/? I/Sensors: RotationVectorSensor old sensor_state 16529, new sensor_state : 147601 en : 1
06-05 12:22:32.972 3510-3617/? V/BroadcastQueue: [background] Process cur broadcast BroadcastRecord{f9fab82 u0 com.google.android.apps.gsa.search.core.location.GMS_CORE_LOCATION qIdx=4}, state= (APP_RECEIVE) DELIVERED for app ProcessRecord{cb66323 4600:com.google.android.googlequicksearchbox:search/u0a68}
06-05 12:22:32.972 3510-4040/? D/NetworkPolicy: isUidForegroundLocked: 10068, mScreenOn: true, uidstate: 2, mProxSensorScreenOff: false
06-05 12:22:32.982 3510-7360/? D/AudioService: getStreamVolume 3 index 90
06-05 12:22:32.982 3510-3971/? I/Sensors: ProximitySensor - 8(cm)
06-05 12:22:32.992 4600-11315/? I/MicrophoneInputStream: mic_starting com.google.android.apps.gsa.speech.audio.ah@ef02224
06-05 12:22:32.992 3140-3989/? I/APM::AudioPolicyManager: getInputForAttr() source 6, samplingRate 16000, format 1, channelMask 10,session 84, flags 0
06-05 12:22:32.992 3140-3989/? V/audio_hw_primary: adev_open_input_stream: request sample_rate:16000
06-05 12:22:32.992 3140-3989/? V/audio_hw_primary: in->requested_rate:16000, pcm_config_in.rate:48000 in->config.channels=2
06-05 12:22:32.992 3140-3989/? D/audio_hw_primary: adev_open_input_stream: call echoReference_init(12)
06-05 12:22:32.992 3140-3989/? V/echo_reference_processing: echoReference_init +
06-05 12:22:32.992 3140-3989/? I/audio_hw_primary: adev_open_input_stream: input is null, set new input stream
06-05 12:22:32.992 4600-11932/? D/SensorManager: Proximity, val = 8.0 [far]
06-05 12:22:32.992 3510-3555/? I/MediaFocusControl: AudioFocus requestAudioFocus() from android.media.AudioManager$8c7dfbdcom.google.android.apps.gsa.speech.audio.c.a$1$c7409b2 req=4flags=0x0
06-05 12:22:32.992 3140-11937/? I/AudioFlinger: AudioFlinger's thread 0xecac0000 ready to run
06-05 12:22:33.012 4600-11317/? W/CronetAsyncHttpEngine: Upload request without a content type.
06-05 12:22:33.012 4600-12335/? I/FavoriteContactNamesSup: get()
06-05 12:22:33.012 4600-12335/? I/FavoriteContactNamesSup: get() : Execute directly (BG thread)
06-05 12:22:33.012 4600-12335/? I/FavoriteContactNamesSup: get()
06-05 12:22:33.012 4600-12335/? I/FavoriteContactNamesSup: get() : Execute directly (BG thread)
06-05 12:22:33.012 4600-12335/? I/FavoriteContactNamesSup: get()
06-05 12:22:33.012 3510-4533/? D/BatteryService: !@BatteryListener : batteryPropertiesChanged!
06-05 12:22:33.012 4600-12335/? I/FavoriteContactNamesSup: get() : Execute directly (BG thread)
06-05 12:22:33.012 3510-4533/? D/BatteryService: level:80, scale:100, status:2, health:2, present:true, voltage: 4093, temperature: 337, technology: Li-ion, AC powered:false, USB powered:true, POGO powered:false, Wireless powered:false, icon:17303446, invalid charger:0, maxChargingCurrent:0
06-05 12:22:33.012 3510-4533/? D/BatteryService: online:4, current avg:48, charge type:1, power sharing:false, high voltage charger:false, capacity:280000, batterySWSelfDischarging:false, current_now:240
06-05 12:22:33.012 3510-3510/? D/BatteryService: Sending ACTION_BATTERY_CHANGED.
06-05 12:22:33.022 11753-11753/com.aaa.bbb D/SpeechManager: onError: Error 7

Here is my code:

import android.content.Intent;
import android.os.Bundle;
import android.os.Handler;
import android.os.Looper;
import android.speech.RecognitionListener;
import android.speech.RecognizerIntent;
import android.speech.SpeechRecognizer;
import android.support.annotation.NonNull;
import android.util.Log;

import java.util.ArrayList;

public class SpeechManager {

    private static final String TAG = "SpeechManager";
    private final MainActivity mActivity;
    private final SpeechRecognizer mSpeechRecognizer;
    private boolean mTurnedOn = false;
    private final Intent mRecognitionIntent;
    private final Handler mHandler;

    public SpeechManager(@NonNull MainActivity activity) {
        mActivity = activity;
        mSpeechRecognizer = SpeechRecognizer.createSpeechRecognizer(mActivity.getApplicationContext());
        mSpeechRecognizer.setRecognitionListener(new MySpeechRecognizer());

        mHandler = new Handler(Looper.getMainLooper());

        mRecognitionIntent = new Intent(RecognizerIntent.ACTION_RECOGNIZE_SPEECH);
        // mRecognitionIntent.putExtra(RecognizerIntent.EXTRA_LANGUAGE_MODEL, RecognizerIntent.LANGUAGE_MODEL_FREE_FORM);
        mRecognitionIntent.putExtra(RecognizerIntent.EXTRA_PARTIAL_RESULTS, false);
        mRecognitionIntent.putExtra(RecognizerIntent.EXTRA_LANGUAGE, "en-US");
    }

    public void startSpeechRecognition() {
        Log.d(TAG, "startSpeechRecognition: ");
        mTurnedOn = true;
        mSpeechRecognizer.startListening(mRecognitionIntent);
    }

    public void stopSpeechRecognition() {
        Log.d(TAG, "stopSpeechRecognition: ");
        if (mTurnedOn) {
            mTurnedOn = false;
            mSpeechRecognizer.stopListening();
        }
    }

    public void destroy() {
        Log.d(TAG, "destroy: ");
        mSpeechRecognizer.destroy();
    }

    private class MySpeechRecognizer implements RecognitionListener {
        @Override
        public void onReadyForSpeech(Bundle params) {
            Log.d(TAG, "onReadyForSpeech: ");
        }

        @Override
        public void onBeginningOfSpeech() {
            Log.d(TAG, "onBeginningOfSpeech: Beginning");
        }

        @Override
        public void onRmsChanged(float rmsdB) {
        }

        @Override
        public void onBufferReceived(byte[] buffer) {
            Log.d(TAG, "onBufferReceived: ");
        }

        @Override
        public void onEndOfSpeech() {
            Log.d(TAG, "onEndOfSpeech: Ending");
        }

        @Override
        public void onError(int error) {
            Log.d(TAG, "onError: Error " + error);
            if (error == SpeechRecognizer.ERROR_NETWORK || error == SpeechRecognizer.ERROR_CLIENT) {
                mTurnedOn = false;
                return;
            }

            // For any other error, restart listening after a short delay while still turned on.
            if (mTurnedOn)
                mHandler.postDelayed(new Runnable() {
                    @Override
                    public void run() {
                        // mSpeechRecognizer.cancel();
                        startSpeechRecognition();
                    }
                }, 100);
        }

        @Override
        public void onResults(Bundle results) {
            Log.d(TAG, "onResults: ");
            ArrayList<String> partialResults = results.getStringArrayList(SpeechRecognizer.RESULTS_RECOGNITION);
            if (partialResults != null && partialResults.size() > 0) {
                for (String str : partialResults) {
                    Log.d(TAG, "onResults: " + str);
                    if (str.equalsIgnoreCase(mActivity.getString(R.string.turn_off_recognition))) {
                        FlashManager.getInstance().turnOff();
                        mTurnedOn = false;
                        return;
                    }
                }
            }
            // No stop command was recognized, so restart listening after a short delay.
            mHandler.postDelayed(new Runnable() {
                @Override
                public void run() {
                    startSpeechRecognition();
                }
            }, 100);
        }

        @Override
        public void onPartialResults(Bundle partialResults) {
            Log.d(TAG, "onPartialResults: ");
        }

        @Override
        public void onEvent(int eventType, Bundle params) {
            Log.d(TAG, "onEvent: " + eventType);
        }
    }
}

My device is a Samsung Note 5. How can I fix this?

Best Answer

This is a known bug, and I have filed a report about it. You can reproduce the problem using this simple gist.

The only way to work around it is to recreate the SpeechRecognizer object every time - see the edits below. As described in the gist, this causes other issues, but they should not affect your app.
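
As a rough illustration of that workaround (a sketch only, not the gist's exact code), the restart path in the SpeechManager above could destroy and recreate the recognizer before every startListening() call. This assumes mSpeechRecognizer is changed from final to a plain mutable field:

    // Hypothetical restart method for the SpeechManager class shown above.
    public void restartSpeechRecognition() {
        Log.d(TAG, "restartSpeechRecognition: ");
        if (mSpeechRecognizer != null) {
            mSpeechRecognizer.destroy();   // release the old instance entirely
        }
        // Build a fresh recognizer with the same listener and intent as before.
        mSpeechRecognizer = SpeechRecognizer.createSpeechRecognizer(mActivity.getApplicationContext());
        mSpeechRecognizer.setRecognitionListener(new MySpeechRecognizer());
        mTurnedOn = true;
        mSpeechRecognizer.startListening(mRecognitionIntent);
    }

Calling this from onError and onResults instead of startSpeechRecognition() keeps the rest of the class unchanged.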

Google will eventually find a way to prevent continuous listening, because that is not what the API was designed for. You would be better off looking at PocketSphinx as a long-term option.

EDIT 22.06.16 - the behaviour has become worse in the latest Google release. A new solution is linked from the gist, which subclasses the RecognitionListener so that it reacts only to "genuine" callbacks.
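
One way to picture that filtering idea (an illustrative sketch, not the linked gist's code) is to ignore error callbacks that arrive before onReadyForSpeech, since the spurious ERROR_NO_MATCH in the log above fires before the recognizer is actually ready:

    // Hypothetical wrapper around the questioner's listener from the code above.
    private class FilteringRecognitionListener extends MySpeechRecognizer {
        private boolean mReady = false;

        @Override
        public void onReadyForSpeech(Bundle params) {
            mReady = true;
            super.onReadyForSpeech(params);
        }

        @Override
        public void onError(int error) {
            // Drop the premature ERROR_NO_MATCH that arrives before onReadyForSpeech.
            if (!mReady && error == SpeechRecognizer.ERROR_NO_MATCH) {
                Log.d(TAG, "onError: ignoring spurious error " + error);
                return;
            }
            mReady = false;
            super.onError(error);
        }
    }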

EDIT 01.07.16 - see this question for another new bug.

Regarding Android continuous speech recognition returning ERROR_NO_MATCH too quickly, a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/37640926/
