We are excited about our new speak command, which allows Alexa developers to interact directly with Alexa from the command line as if they were talking to it with their voice. It uses the real Alexa Voice Service (AVS) and behaves like a real device.

It is a great complement to our utter and intend commands.

Quick background on our command-line testing tools:

  • Speak – uses the real Alexa Voice Service (via Text-To-Speech) to talk to your skill
  • Utter and intend – generate JSON payloads that mimic what comes from Alexa


Our intend and utter commands use our emulator, while speak uses our new AVS integration. Both are powerful tools for testing Alexa skills. We recommend:

  • Using speak to verify skill behavior under “real” conditions
  • Using utter and intend for deeper testing of skill logic (they provide access to the full skill payload) – see the examples below
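
As a quick illustration of the difference, here is how the same interaction might be exercised with each command (the utterance and intent name are placeholders for your own skill, and the speak setup is covered below):

  • bst speak hello – goes through the real Alexa Voice Service
  • bst utter hello – generates an emulated request from the utterance text
  • bst intend HelloIntent – generates an emulated request for the specified intent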


Look for our upcoming guide to skill-testing best practices, with more in-depth guidance on these topics.

To get set up with the new speak command, first create an account on the Bespoken Dashboard:

Bespoken Dashboard

Create a new source and navigate to our new “Validation” page:

Bespoken Validation

Once there, click on the Virtual Device Manager and create a Virtual Device. This device behaves like a real, live Alexa device linked to your Alexa account, with the difference that you interact with it programmatically instead of by talking to it.

Creating the Virtual Device generates a token (shown on the Validation page). With this token, you can now use the speak command from the command line!

If you have not already installed the bst command-line tool, do it now – just enter this in a terminal:
npm install bespoken-tools -g

The first time you run speak, enter:
bst speak --token TOKEN_FROM_VALIDATION_PAGE hi

After you enter the token the first time, it will be saved to your `.bst/config` file, so you will not need to enter it again.
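
For example, subsequent calls can omit the --token flag entirely:
bst speak hi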

Now try testing your skill – I am going to run it against one of our own, We Study Billionaires:
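
For instance, an opening request might look like this (the invocation phrase here is just an example – use whatever invocation name your skill responds to):
bst speak open we study billionaires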

It is possible to do sequences of interactions, though keep in mind that the skill session behaves like it would on a real device – there is a limited time to reply before the session ends. So it is best to have your next command ready when performing a multi-step interaction (or even to put it into a script).
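
A multi-step run could be a few back-to-back commands, or the same lines saved as a shell script (the utterances below are placeholders – substitute your own skill’s dialog):
bst speak open we study billionaires
bst speak play the latest episode
bst speak stop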

Besides returning a transcript of what Alexa says, speak also displays card information when it is present:

Very cool, right?

That’s all for now – look for more voice testing and validation tools from us, as well as more information on best practices for using them! And feel free to reach me on Slack or Twitter – jpkbst!