web-speech-cognitive-services

Polyfill Web Speech API with Cognitive Services Bing Speech for both speech-to-text and text-to-speech service.

This scaffold is provided by react-component-template.

Demo

Try out our demo at https://compulim.github.io/web-speech-cognitive-services?s=your-subscription-key.

We use react-dictate-button and react-say to quickly set up the playground.

Background

Web Speech API is not widely adopted across popular browsers and platforms. Polyfilling the API with a cloud service is a great way to enable wider adoption. Notably, the Web Speech API in Google Chrome is also backed by a cloud service.

The Microsoft Azure Cognitive Services Bing Speech service provides speech recognition with great accuracy. Unfortunately, its APIs are not based on the Web Speech API.

This package polyfills the Web Speech API by exposing the Cognitive Services Bing Speech API through the Web Speech API surface. We test this package against popular combinations of platforms and browsers.

How to use

First, run npm install web-speech-cognitive-services for the latest production build, or npm install web-speech-cognitive-services@master for the latest development build.

Then, install the peer dependency by running npm install microsoft-speech-browser-sdk.
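For example, both packages can be installed in one step:

npm install web-speech-cognitive-services microsoft-speech-browser-sdk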

Speech recognition (speech-to-text)

import { createFetchTokenUsingSubscriptionKey, SpeechRecognition } from 'web-speech-cognitive-services';

const recognition = new SpeechRecognition();

recognition.lang = 'en-US';
recognition.fetchToken = createFetchTokenUsingSubscriptionKey('your subscription key');

recognition.onresult = ({ results }) => {
  console.log(results);
};

recognition.start();

Note: most browsers require HTTPS or localhost for WebRTC.

Integrating with React

You can use react-dictate-button to integrate speech recognition functionality into your React app.

import { createFetchTokenUsingSubscriptionKey, SpeechGrammarList, SpeechRecognition } from 'web-speech-cognitive-services';
import DictateButton from 'react-dictate-button';

const extra = { fetchToken: createFetchTokenUsingSubscriptionKey('your subscription key') };

export default props =>
  <DictateButton
    extra={ extra }
    onDictate={ ({ result }) => alert(result.transcript) }
    speechGrammarList={ SpeechGrammarList }
    speechRecognition={ SpeechRecognition }
  >
    Start dictation
  </DictateButton>

You can also look at our playground page to see how it works.

Speech priming (a.k.a. grammars)

You can prime the speech recognition by giving a list of words.

Since Cognitive Services does not work with weighted grammars, we built our own SpeechGrammarList to better fit the scenario.

import { createFetchTokenUsingSubscriptionKey, SpeechGrammarList, SpeechRecognition } from 'web-speech-cognitive-services';

const recognition = new SpeechRecognition();

recognition.grammars = new SpeechGrammarList();
recognition.grammars.words = ['Tuen Mun', 'Yuen Long'];
recognition.fetchToken = createFetchTokenUsingSubscriptionKey('your subscription key');

recognition.onresult = ({ results }) => {
  console.log(results);
};

recognition.start();

Note: you can also pass grammars to react-dictate-button via the extra prop, as sketched below.
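A minimal sketch of what that might look like, assuming properties on extra are assigned onto the SpeechRecognition instance as in the earlier example:

import { createFetchTokenUsingSubscriptionKey, SpeechGrammarList, SpeechRecognition } from 'web-speech-cognitive-services';
import DictateButton from 'react-dictate-button';

// Build a grammar list the same way as in the SpeechRecognition example above
const grammars = new SpeechGrammarList();

grammars.words = ['Tuen Mun', 'Yuen Long'];

// Assumption: each property on "extra" is set on the recognition instance,
// so "grammars" here ends up as recognition.grammars
const extra = {
  fetchToken: createFetchTokenUsingSubscriptionKey('your subscription key'),
  grammars
};

export default props =>
  <DictateButton
    extra={ extra }
    onDictate={ ({ result }) => alert(result.transcript) }
    speechGrammarList={ SpeechGrammarList }
    speechRecognition={ SpeechRecognition }
  >
    Start dictation
  </DictateButton>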

Speech synthesis (text-to-speech)

import { createFetchTokenUsingSubscriptionKey, speechSynthesis, SpeechSynthesisUtterance } from 'web-speech-cognitive-services';

const fetchToken = createFetchTokenUsingSubscriptionKey('your subscription key');
const utterance = new SpeechSynthesisUtterance('Hello, World!');

speechSynthesis.fetchToken = fetchToken;

// Need to wait until the token exchange is complete before speaking
await fetchToken();
await speechSynthesis.speak(utterance);

Note: speechSynthesis is camel-cased because it is an instance, not a class.

The pitch, rate, voice, and volume properties are supported. Only the onstart, onerror, and onend events are supported.
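For example, a minimal sketch of setting those properties and handling the supported events (the values are illustrative):

const utterance = new SpeechSynthesisUtterance('Hello, World!');

// Standard Web Speech API utterance properties honored by the polyfill
utterance.pitch = 1;
utterance.rate = 1.2;
utterance.volume = 0.8;

// Only onstart, onerror, and onend are fired
utterance.onstart = () => console.log('Started speaking');
utterance.onend = () => console.log('Finished speaking');
utterance.onerror = event => console.error(event);

await speechSynthesis.speak(utterance);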

Integrating with React

You can use react-say to integrate speech synthesis functionality into your React app.

import { createFetchTokenUsingSubscriptionKey, speechSynthesis, SpeechSynthesisUtterance } from 'web-speech-cognitive-services';
import React from 'react';
import Say from 'react-say';

export default class extends React.Component {
  constructor(props) {
    super(props);

    speechSynthesis.fetchToken = createFetchTokenUsingSubscriptionKey('your subscription key');

    // Call it here to preload the token; the token is cached
    speechSynthesis.fetchToken();

    this.state = { ready: false };
  }

  async componentDidMount() {
    await speechSynthesis.fetchToken();

    this.setState(() => ({ ready: true }));
  }

  render() {
    return (
      this.state.ready &&
        <Say
          speechSynthesis={ speechSynthesis }
          speechSynthesisUtterance={ SpeechSynthesisUtterance }
          text="Hello, World!"
        />
    );
  }
}

Test matrix

For the detailed test matrix, please refer to SPEC-RECOGNITION.md or SPEC-SYNTHESIS.md.

Known issues

  • Speech recognition
    • Interim results do not return confidence; final results do
      • We always return a confidence of 0.5 for interim results
    • Cognitive Services supports grammar lists, but not in JSGF format; more work is needed in this area
      • Although Google Chrome supports grammar lists, it seems the grammar list is not used at all
    • Continuous mode does not work
  • Speech synthesis
    • onboundary, onmark, onpause, and onresume are not supported/fired

Roadmap

To-do

  • Add babel-runtime, microsoft-speech-browser-sdk, and simple-update-in

Plan

  • General
  • Speech recognition
    • Add grammar list
    • Add tests for lifecycle events
    • Support stop() function
      • Currently, only abort() is supported
    • Investigate continuous mode
    • Enable Opus (OGG) encoding
      • Currently, there is a problem with microsoft-speech-browser-sdk@0.0.12, tracked in this issue
    • Support custom speech
    • Support new Speech-to-Text service
  • Speech synthesis
    • Event: add pause/resume support
    • Properties: add paused/pending/speaking support
    • Support new Text-to-Speech service
      • Custom voice fonts

Contributions

Like us? Star us.

Want to make it better? File us an issue.

Don't like something you see? Submit a pull request.
