Speech Recognition

Speech Recognition (2.9+)

Enable Speech Recognition in Dashboard

Add CAF to ChatMapper

Speech Matching

Example

Version 2.9 adds speech recognition support using Google Chrome’s built-in feature. The HTML5 Speech Input API in Chrome uses Google’s private endpoint, so recognition is performed on Google’s servers. This means speech data is sent to Google and is handled under their data privacy policy.

Where native HTML5 speech recognition is not supported in the browser, a fallback using Google Cloud Speech-to-Text is used.

We have not found an official list of all supported languages, but it is reasonable to assume it matches the list at https://cloud.google.com/speech-to-text/docs/languages

Browsers that support HTML5 speech recognition include:

  • Chrome Desktop
  • Chrome Android
  • Samsung Internet
  • Oculus Browser on GearVR

Oculus Go and Oculus Quest do not support native HTML5 speech recognition (as of June 10, 2019), so these devices use the fallback recognition service.
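Detecting whether a browser offers the native API can be sketched as follows (a minimal illustration, not LearnBrite’s actual code; the `win` parameter stands in for the browser’s `window` object so the check is testable):

```javascript
// Minimal sketch of HTML5 speech recognition feature detection.
// `win` stands in for the browser's `window` object.
function supportsNativeSpeechRecognition(win) {
  // Chrome and Samsung Internet expose the API with a webkit prefix.
  return Boolean(win.SpeechRecognition || win.webkitSpeechRecognition);
}

// A platform would use the native API when this returns true, and otherwise
// fall back to a server-side service such as Google Cloud Speech-to-Text.
```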

Enable Speech Recognition in Dashboard

In the dashboard at https://app.learnbrite.com/dashboard, enable the option in the Experimental section.

Add CAF to ChatMapper

Then add Custom Asset Fields (CAFs) to the Conversations tab in ChatMapper:

Project Menu > Project Settings > Custom Asset Fields tab > Conversations tab

Add:

  • speechRecognition_enabled (bool) and set it to true
  • speechRecognition_language (text). en-US is the default value; in that case, adding this field is not required.

To use another language, use a language code from https://cloud.google.com/speech-to-text/docs/languages. For example, “it-IT” for Italian, or “he-IL” (or “he”) for Hebrew.
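As an illustration of how the language setting maps onto the underlying browser API (a sketch under assumptions: `createRecognizer` is a hypothetical helper, and `SpeechRecognitionCtor` would be `window.SpeechRecognition` or `window.webkitSpeechRecognition` in a real page):

```javascript
// Hypothetical helper showing how a configured language code is applied.
// SpeechRecognitionCtor is the browser's constructor, e.g.
// window.SpeechRecognition || window.webkitSpeechRecognition.
function createRecognizer(SpeechRecognitionCtor, lang) {
  const recognition = new SpeechRecognitionCtor();
  // en-US is the default, matching the speechRecognition_language default.
  recognition.lang = lang || 'en-US';
  return recognition;
}
```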

Speech Matching

This will enable speech recognition on every choice. The recognized text will be matched against the node’s Dialogue Text, or against the Menu Text if the Dialogue Text field is empty.

Sometimes the text you want to display is different from the text you want to recognize. In that case, use an alternative text for recognition: add the Custom Asset Field (CAF) speechRecognition_command to the dialogue nodes.

The text in the speechRecognition_command for that node will now be used to match against rather than the Dialogue Text or Menu Text.
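The precedence described above can be sketched as follows (an illustrative assumption of how matching might work, not LearnBrite’s published matching algorithm; the field names mirror the CAFs and node fields described in this article, and the normalization is deliberately simple and ASCII-only):

```javascript
// Illustrative sketch only: LearnBrite's real matching logic is not published.
// Picks the text a spoken transcript is compared against, in the order
// described above: speechRecognition_command, then Dialogue Text, then Menu Text.
function recognitionTarget(node) {
  return node.speechRecognition_command || node.dialogueText || node.menuText || '';
}

// Normalize text before comparing: strip ChatMapper menu shortcuts like "[a]",
// lower-case, drop punctuation (ASCII-only for simplicity), collapse spaces.
function normalize(text) {
  return text
    .replace(/\[[^\]]*\]/g, '')
    .toLowerCase()
    .replace(/[^a-z0-9\s]/g, ' ')
    .replace(/\s+/g, ' ')
    .trim();
}

function matchesChoice(transcript, node) {
  return normalize(transcript) === normalize(recognitionTarget(node));
}
```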

Example

The question “What do you want for dinner?” has multiple answers; one of them is pizza, but the node has “[a]Pizza” as Menu Text and “I want pizza, please” as Dialogue Text. Neither works well as a spoken command, so you would add the custom asset field speechRecognition_command to the node and set it to “pizza”.

© 2019 LearnBrite – Commercial In Confidence

Trademarks & Copyrights are property of their respective owners. Pictures are indicative only & may not reflect final production.
