Using the SpeechRecognizer API directly - onResults() returns null

I am trying to run the example from this post.

Since I am not trying to implement this in a service, but rather in a standard activity, I did not encounter the problems described in the above post.

However, I keep getting “No Voice Results” (the message from the code in this article), because getStringArrayList(RecognizerIntent.EXTRA_RESULTS) returns null.

It’s clear that I don’t have enough information about what needs to be done in addition to

recognizer.setRecognitionListener(listener);
recognizer.startListening(intent);    
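
For context, the part of my listener where the null shows up looks roughly like this (a sketch, not my exact code; the names are mine):

@Override
public void onResults(Bundle results) {
    // This is the call that keeps coming back null for me:
    ArrayList<String> matches =
            results.getStringArrayList(RecognizerIntent.EXTRA_RESULTS);
}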

What am I missing?

Is it possible that besides startListening() I also need to call startActivityForResult()? If so, I already tried that, but it launched the whole Google Voice Search activity (which I am trying to avoid, just as @vladimir.vivien wrote here). That creates even more problems, because two recognizers end up running at the same time...
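
The activity-based flow I tried (and want to avoid) was roughly the following, with a request code of my own choosing:

private static final int VOICE_REQUEST = 1234;  // arbitrary request code of mine

private void startVoiceSearch() {
    Intent intent = new Intent(RecognizerIntent.ACTION_RECOGNIZE_SPEECH);
    intent.putExtra(RecognizerIntent.EXTRA_LANGUAGE_MODEL,
            RecognizerIntent.LANGUAGE_MODEL_FREE_FORM);
    // This brings up the full Google Voice Search dialog:
    startActivityForResult(intent, VOICE_REQUEST);
}

@Override
protected void onActivityResult(int requestCode, int resultCode, Intent data) {
    super.onActivityResult(requestCode, resultCode, data);
    if (requestCode == VOICE_REQUEST && resultCode == RESULT_OK) {
        // The standard way to read results from this flow:
        ArrayList<String> matches =
                data.getStringArrayListExtra(RecognizerIntent.EXTRA_RESULTS);
    }
}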

At first I thought that what was missing was the actual send to the Google servers, but when I look at the LogCat output from the beginning of the speech recognition session to the end (see below), I can see that it does create a TCP session with http://www.google.com/m/voice-search.

So the obvious question is what am I missing?

04-18 07:02:17.770: INFO/RecognitionController(623): startRecognition(#Intent;action=android.speech.action.RECOGNIZE_SPEECH;S.android.speech.extra.LANGUAGE_MODEL=free_form;S.android.speech.extra.PROMPT=LEARNSR;S.calling_package=com.example.learnsr.SrActivity;end)
04-18 07:02:17.770: INFO/RecognitionController(623): State change: STARTING -> STARTING
04-18 07:02:17.780: INFO/AudioHardwareQSD(121): Routing audio to Speakerphone
04-18 07:02:17.780: DEBUG/AudioHardwareQSD(121): Switching audio device to 
04-18 07:02:17.780: DEBUG/AudioHardwareQSD(121): Speakerphone
04-18 07:02:17.780: INFO/AudioHardwareQSD(121): AudioHardware PCM record is going to standby.
04-18 07:02:17.780: INFO/AudioHardwareQSD(121): Routing audio to Speakerphone
04-18 07:02:17.780: DEBUG/AudioHardwareQSD(121): Switching audio device to 
04-18 07:02:17.780: DEBUG/AudioHardwareQSD(121): Speakerphone
04-18 07:02:17.780: INFO/AudioHardwareQSD(121): AudioHardware PCM record is going to standby.
04-18 07:02:17.780: INFO/AudioService(164):  AudioFocus  requestAudioFocus() from android.media.AudioManager@46036948
04-18 07:02:17.780: DEBUG/AudioFlinger(121): setParameters(): io 3, keyvalue routing=262144;vr_mode=1, tid 155, calling tid 121
04-18 07:02:17.790: INFO/AudioHardwareQSD(121): Routing audio to Speakerphone
04-18 07:02:17.790: INFO/AudioHardwareQSD(121): do input routing device 40000
04-18 07:02:17.790: INFO/AudioHardwareQSD(121): Routing audio to Speakerphone
04-18 07:02:17.790: INFO/RecognitionController(623): State change: STARTING -> RECOGNIZING
04-18 07:02:17.790: INFO/ServerConnectorImpl(623): Starting TCP session, url=http://www.google.com/m/voice-search
04-18 07:02:17.930: DEBUG/ServerConnectorImpl(623): Created session a7918495c042db1746d3e09514baf621
04-18 07:02:17.930: INFO/ServerConnectorImpl(623): Creating TCP connection to 74.125.115.126:19294
04-18 07:02:17.980: DEBUG/AudioHardwareQSD(121): Switching audio device to 
04-18 07:02:17.980: DEBUG/AudioHardwareQSD(121): Speakerphone
04-18 07:02:18.070: INFO/ServerConnectorImpl(623): startRecognize RecognitionParameters{session=a7918495c042db1746d3e09514baf621,request=1}
04-18 07:02:18.390: INFO/RecognitionController(623): onReadyForSpeech, noise level:10.29969, snr:-0.42756215
04-18 07:02:19.760: DEBUG/dalvikvm(659): GC_EXPLICIT freed 5907 objects / 353648 bytes in 67ms
04-18 07:02:21.030: INFO/AudioHardwareQSD(121): AudioHardware pcm playback is going to standby.
04-18 07:02:24.090: INFO/RecognitionController(623): onBeginningOfSpeech
04-18 07:02:24.760: DEBUG/dalvikvm(669): GC_EXPLICIT freed 1141 objects / 74296 bytes in 48ms
04-18 07:02:25.080: INFO/RecognitionController(623): onEndOfSpeech
04-18 07:02:25.080: INFO/AudioService(164):  AudioFocus  abandonAudioFocus() from android.media.AudioManager@46036948
04-18 07:02:25.140: INFO/AudioHardwareQSD(121): Routing audio to Speakerphone
04-18 07:02:25.200: INFO/RecognitionController(623): State change: RECOGNIZING -> RECOGNIZED
04-18 07:02:25.200: INFO/RecognitionController(623): Final state: RECOGNIZED
04-18 07:02:25.260: INFO/ServerConnectorImpl(623): ClientReport{session_id=a7918495c042db1746d3e09514baf621,request_id=1,application_id=intent-speech-api,client_perceived_request_status=0,request_ack_latency_ms=118,total_latency_ms=7122,user_perceived_latency_ms=116,network_type=1,endpoint_trigger_type=3,}
04-18 07:02:25.260: INFO/AudioService(164):  AudioFocus  abandonAudioFocus() from android.media.AudioManager@46036948
04-18 07:02:25.270: DEBUG/AudioHardwareQSD(121): Switching audio device to 
04-18 07:02:25.270: DEBUG/AudioHardwareQSD(121): Speakerphone
04-18 07:02:25.270: INFO/RecognitionController(623): State change: RECOGNIZED -> PAUSED
04-18 07:02:25.270: INFO/AudioService(164):  AudioFocus  abandonAudioFocus() from android.media.AudioManager@46036948
04-18 07:02:25.270: INFO/ClientReportSender(623): Sending 1 client reports over HTTP
04-18 07:02:25.280: INFO/AudioHardwareQSD(121): AudioHardware PCM record is going to standby.
04-18 07:02:25.280: DEBUG/AudioFlinger(121): setParameters(): io 3, keyvalue routing=0, tid 155, calling tid 121
04-18 07:02:25.280: INFO/AudioHardwareQSD(121): Routing audio to Speakerphone
04-18 07:02:25.280: INFO/AudioHardwareQSD(121): Routing audio to Speakerphone
04-18 07:02:25.280: DEBUG/AudioHardwareQSD(121): Switching audio device to 
04-18 07:02:25.280: DEBUG/AudioHardwareQSD(121): Speakerphone
04-18 07:02:25.280: INFO/AudioHardwareQSD(121): AudioHardware PCM record is going to standby.

Have you tried reading the results with SpeechRecognizer.RESULTS_RECOGNITION in onResults()?

RecognizerIntent.EXTRA_RESULTS is the extra returned by activities started with the RECOGNIZE_SPEECH intent, not the key used in the RecognitionListener callbacks.
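
In other words, inside the listener callback it would look something like this (just a sketch of the relevant method):

@Override
public void onResults(Bundle results) {
    // Use the SpeechRecognizer key here, not RecognizerIntent.EXTRA_RESULTS:
    ArrayList<String> matches =
            results.getStringArrayList(SpeechRecognizer.RESULTS_RECOGNITION);
}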


This code works fine:

package com.example.android.voicerecognitionservice;

import java.util.ArrayList;

import android.app.Activity;
import android.content.Context;
import android.content.Intent;
import android.media.AudioManager;
import android.os.Bundle;
import android.speech.RecognitionListener;
import android.speech.RecognizerIntent;
import android.speech.SpeechRecognizer;
import android.widget.TextView;
import android.widget.Toast;

public class VoiceRecognitionSettings extends Activity implements RecognitionListener {
  /** Text display */
  private TextView blurb;

  /** Parameters for recognition */
  private Intent recognizerIntent;

  /** The ear */
  private SpeechRecognizer recognizer;

  @Override
  public void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    setContentView(R.layout.speech);

    blurb = (TextView) findViewById(R.id.text1);

  //  muteSystemAudio();

    recognizer = SpeechRecognizer.createSpeechRecognizer(this);
    recognizer.setRecognitionListener(this);

    recognizerIntent = new Intent(RecognizerIntent.ACTION_RECOGNIZE_SPEECH);
    recognizerIntent.putExtra(RecognizerIntent.EXTRA_LANGUAGE_MODEL, RecognizerIntent.LANGUAGE_MODEL_FREE_FORM);
    recognizerIntent.putExtra(RecognizerIntent.EXTRA_CALLING_PACKAGE, "com.example.android.voicerecognitionservice");
    recognizerIntent.putExtra(RecognizerIntent.EXTRA_PARTIAL_RESULTS, true);

    recognizer.startListening(recognizerIntent);
  }

  @Override
  public void onBeginningOfSpeech() {
    blurb.append("[");
  }

  @Override
  public void onBufferReceived(byte[] arg0) {
  }

  @Override
  public void onEndOfSpeech() {
    blurb.append("] ");
  }

  @Override
  public void onError(int arg0) {
  }

  @Override
  public void onEvent(int arg0,
                      Bundle arg1) {
  }

  @Override
  public void onPartialResults(Bundle arg0) {
  }

  @Override
  public void onReadyForSpeech(Bundle arg0) {
    blurb.append("> ");
  }

  @Override
  public void onResults(Bundle bundle) {
    // The SpeechRecognizer delivers its matches under RESULTS_RECOGNITION,
    // not under RecognizerIntent.EXTRA_RESULTS.
    ArrayList<String> results = bundle.getStringArrayList(SpeechRecognizer.RESULTS_RECOGNITION);
    blurb.append(results.toString() + "\n");

//    if (results != null) {
//        Toast.makeText(VoiceRecognitionSettings.this, results.toString() + "55", Toast.LENGTH_LONG).show();
//    } else {
//        Toast.makeText(VoiceRecognitionSettings.this, "vide", Toast.LENGTH_LONG).show();
//    }

    // Restart listening so recognition keeps running continuously.
    recognizer.startListening(recognizerIntent);
  }

  @Override
  public void onRmsChanged(float arg0) {
  }

  /** Mutes the system audio stream; currently not called (see onCreate). */
  public void muteSystemAudio() {
    AudioManager amanager = (AudioManager) getSystemService(Context.AUDIO_SERVICE);
    amanager.setStreamMute(AudioManager.STREAM_SYSTEM, true);
  }
}

Just try it out.
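
Two small things worth double-checking as well: the app needs the android.permission.RECORD_AUDIO permission in its manifest, and it is a good idea to release the recognizer when the activity goes away. A minimal sketch of that cleanup (this override is not in the code above, so treat it as an addition):

@Override
protected void onDestroy() {
    // Release the connection to the recognition service.
    if (recognizer != null) {
        recognizer.destroy();
    }
    super.onDestroy();
}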
