
android - How do I navigate a Google Glass GDK Immersion application using only voice commands?


How would I go about coding a voice trigger to navigate Google Glass Cards?

This is how I see it happening:

1) "Ok Glass, Start My Program"

2) Application begins and shows the first card

3) User can say "Next Card" to move to the next card
(somewhat the equivalent of swiping forward when in the timeline)

4) User can say "Previous Card" to go back
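For step 1, the GDK starts an Immersion from the "ok glass" menu via a voice trigger declared in the manifest. A minimal sketch, assuming a hypothetical MainActivity and a @string/glass_voice_trigger string resource holding "Start My Program" (an unlisted phrase like this also requires the com.google.android.glass.permission.DEVELOPMENT permission):

<!-- AndroidManifest.xml -->
<activity android:name=".MainActivity">
    <intent-filter>
        <action android:name="com.google.android.glass.action.VOICE_TRIGGER" />
    </intent-filter>
    <meta-data
        android:name="com.google.android.glass.VoiceTrigger"
        android:resource="@xml/voice_trigger" />
</activity>

<!-- res/xml/voice_trigger.xml -->
<trigger keyword="@string/glass_voice_trigger" />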

The cards I need to display are simple text and images. I'm wondering if I can set up some kind of listener that listens for voice commands while the cards are being displayed.


I've looked into Glass voice command nearest match from given list but couldn't get the code to run, even though I have all the libraries.

Side note: it's important that the user can still see the card while using voice commands. Also, his hands are busy, so tapping/swiping is not an option.

Any ideas on how to control the timeline within my Immersion application using voice control only? Any suggestions would be greatly appreciated!

I'm also tracking https://code.google.com/p/google-glass-api/issues/detail?id=273 .


My ongoing research keeps pointing me back to the Google Glass Developer docs for Google's suggested way of listening for gestures: https://developers.google.com/glass/develop/gdk/input/touch#detecting_gestures_with_a_gesture_detector
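For reference, the gesture-detector pattern from that page looks roughly like this (a sketch, assuming the Activity keeps the detector in a hypothetical mGestureDetector field); the open question is how to fire the same navigation from voice instead of touch:

import android.content.Context;
import android.view.MotionEvent;
import com.google.android.glass.touchpad.Gesture;
import com.google.android.glass.touchpad.GestureDetector;

// Inside the Activity: map touchpad swipes to card navigation.
private GestureDetector createGestureDetector(Context context) {
    GestureDetector gestureDetector = new GestureDetector(context);
    gestureDetector.setBaseListener(new GestureDetector.BaseListener() {
        @Override
        public boolean onGesture(Gesture gesture) {
            if (gesture == Gesture.SWIPE_RIGHT) {
                // next card
                return true;
            } else if (gesture == Gesture.SWIPE_LEFT) {
                // previous card
                return true;
            }
            return false;
        }
    });
    return gestureDetector;
}

// Forward touchpad motion events to the detector.
@Override
public boolean onGenericMotionEvent(MotionEvent event) {
    return mGestureDetector != null && mGestureDetector.onMotionEvent(event);
}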

How can we activate these gestures using voice commands?


Android has just released a beta update for wearables, http://developer.android.com/wear/notifications/remote-input.html . Is there a way to use it to answer my question? It feels like we're still one step away, because we can invoke the service, but we can't have it "sleep" and "wake" as a background service while we talk.

Best Answer

Define these in your onCreate method:

// Requires: android.content.Context, android.content.Intent, android.media.AudioManager,
// android.speech.RecognizerIntent, android.speech.SpeechRecognizer, android.util.Log
// Fields assumed on the Activity: AudioManager mAudioManager; SpeechRecognizer sr; Intent intent;

mAudioManager = (AudioManager) context.getSystemService(Context.AUDIO_SERVICE);
// Optional: silence other audio on the voice-call stream while recognizing.
// mAudioManager.setStreamSolo(AudioManager.STREAM_VOICE_CALL, true);

sr = SpeechRecognizer.createSpeechRecognizer(context);
sr.setRecognitionListener(new listener(context));

intent = new Intent(RecognizerIntent.ACTION_RECOGNIZE_SPEECH);
intent.putExtra(RecognizerIntent.EXTRA_LANGUAGE_MODEL, RecognizerIntent.LANGUAGE_MODEL_FREE_FORM);
intent.putExtra(RecognizerIntent.EXTRA_CALLING_PACKAGE, context.getPackageName());
// intent.putExtra(RecognizerIntent.EXTRA_LANGUAGE, "en-US"); // optionally pin the recognition language
sr.startListening(intent);
Log.i("SpeechSetup", "startListening called");

Then add this listener class inside your Activity:

// Requires: android.content.Context, android.os.Bundle, android.speech.RecognitionListener,
// android.speech.SpeechRecognizer, java.util.ArrayList
class listener implements RecognitionListener
{
    Context context1;

    public listener(Context context)
    {
        context1 = context;
    }

    public void onReadyForSpeech(Bundle params)
    {
        //Log.d(TAG, "onReadyForSpeech");
    }

    public void onBeginningOfSpeech()
    {
        //Log.d(TAG, "onBeginningOfSpeech");
    }

    public void onRmsChanged(float rmsdB)
    {
        //Log.d(TAG, "onRmsChanged");
    }

    public void onBufferReceived(byte[] buffer)
    {
        //Log.d(TAG, "onBufferReceived");
    }

    public void onEndOfSpeech()
    {
        //Log.d(TAG, "onEndOfSpeech");
        // Restart recognition immediately so the app listens continuously.
        sr.startListening(intent);
    }

    public void onError(int error)
    {
        // SpeechRecognizer error codes:
        // 1 - Network timeout            2 - Network error
        // 3 - Audio recording error      4 - Server error
        // 5 - Other client-side errors   6 - No speech input
        // 7 - No recognition result matched
        // 8 - RecognitionService busy    9 - Insufficient permissions

        // Restart listening on every error so recognition never stops.
        if (error == 1 || error == 2 || error == 3 || error == 4 || error == 5
                || error == 6 || error == 7 || error == 8 || error == 9)
        {
            sr.startListening(intent);
            //Log.i("onError startListening", "onError startListening" + error);
        }
    }

    public void onResults(Bundle results)
    {
        //Log.v(TAG, "onResults" + results);
        ArrayList<String> data = results.getStringArrayList(SpeechRecognizer.RESULTS_RECOGNITION);
        for (int i = 0; i < data.size(); i++)
        {
            //Log.d(TAG, "result " + data.get(i));
            //Toast.makeText(context1, "results: " + data.get(0), Toast.LENGTH_LONG).show();
        }
    }

    public void onPartialResults(Bundle partialResults)
    {
        //Log.d(TAG, "onPartialResults");
    }

    public void onEvent(int eventType, Bundle params)
    {
        //Log.d(TAG, "onEvent " + eventType);
    }
}
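The onResults callback above is effectively empty. To cover steps 3 and 4 of the question, you can match the top recognition result against the commands and move a card scroller; a hedged sketch, assuming the Activity holds a hypothetical mCardScroller field (a GDK CardScrollView) alongside the sr and intent fields from above:

// Requires: java.util.ArrayList, java.util.Locale
public void onResults(Bundle results)
{
    ArrayList<String> data = results.getStringArrayList(SpeechRecognizer.RESULTS_RECOGNITION);
    if (data == null || data.isEmpty())
    {
        sr.startListening(intent);
        return;
    }

    String heard = data.get(0).toLowerCase(Locale.US);
    int position = mCardScroller.getSelectedItemPosition();

    if (heard.contains("next"))
    {
        // voice equivalent of swiping forward in the timeline
        mCardScroller.setSelection(position + 1);
    }
    else if (heard.contains("previous"))
    {
        // voice equivalent of swiping back
        mCardScroller.setSelection(Math.max(0, position - 1));
    }

    // Re-arm the recognizer for the next command.
    sr.startListening(intent);
}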

Regarding android - How do I navigate a Google Glass GDK Immersion application using only voice commands?, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/21652321/
