Monitoring Skill Performance

So you’ve got your skill done and you’re ready to test, submit for certification, set your skill loose in the wild, and hope for the best. Not so fast!

Wouldn’t it be nice to see which intents are being used, monitor the performance of each intent, and see which intents could be improved upon?

In this post, I’ll show you how I did it with five lines of code per intent. The code within this post is based on Big Nerd Ranch’s series of posts about developing Alexa skills locally with Node.js.

Intro

I created a skill called Date Ninja, which does date calculations using the Moment.js library. With the iOS apps I’ve shipped, I’ve always used Crashlytics to monitor the performance of releases, see how my app is being used, and identify issues affecting users. I looked around for a similar analytics tool for Alexa, but I found nothing on the shelf, or so I thought.

Implementing Google Analytics

I initially found implementing Google Analytics to be a little tricky, but I stumbled onto an open-source library called universal-analytics, which made the process a snap. react-ga is another option, but unless you’re already using it in your project, I’d skip it, since you’d need to add a bunch of dependencies just to get it working. A third option, at Mark Carpenter’s suggestion, is this code Google has lying around.

universal-analytics

Implementing universal-analytics was very easy.

From terminal in your project’s root directory:

# --save puts it in your package.json file.
npm install universal-analytics --save

In your index.js file

Add the following declaration towards the top:

var ua = require('universal-analytics');

Over to http://analytics.google.com

  1. Create a new account for your skill.

  2. Create properties under that account. I would suggest one property per intent.

  3. Save each property’s tracking ID (ex: UA-********-*) in a text file for reference later.

Back to your index.js file

You’ve already got the ua var declared, so now you’ll just be initializing it within each of your intents.
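
Since one property per intent means juggling several tracking IDs, one optional way to keep them straight is a small lookup object near the top of index.js. This is just a sketch; the intent names and IDs below are hypothetical:

// Hypothetical map of intent names to their Google Analytics tracking IDs.
var TRACKING_IDS = {
  myIntent: 'UA-********-1',
  anotherIntent: 'UA-********-2'
};

// Then, inside an intent handler:
// var intentTrackingID = ua(TRACKING_IDS.myIntent);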

universal-analytics has a lot of different signatures for reporting events. I used this one:

intentTrackingID.event("Event Category", "Event Action").send()
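
A few of the other forms, as I recall them from the library’s README (double-check against the docs before leaning on them):

// Category, action, label, and a numeric value:
intentTrackingID.event("success", "date calculated", "INPUTDATE", 1).send();

// Passing a callback sends the event immediately and surfaces any transmission error:
intentTrackingID.event("error", "parse failure", function (err) {
  if (err) console.log("event failed to post", err);
});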

For my project, I planned for three possible outcomes for each intent.

  1. blank value
  2. it works
  3. it fails

Here’s what my sample intent looks like:

app.intent('myIntent', {
  'slots': {
    'INPUTDATE': 'AMAZON.DATE'
  },
  'utterances': ['my utterance']
},
  function(req, res) {
    var inputDate = req.slot('INPUTDATE');
    var intentTrackingID = ua('UA-********-*'); // my Google property tracking ID for this intent
    var reprompt = 'my reprompt phrase.';

    if (_.isEmpty(inputDate)) {
      var prompt = 'another prompt';
      res.say(prompt).reprompt(reprompt).shouldEndSession(false).send();
      // blank value outcome reported to Google Analytics
      intentTrackingID.event("invalid request","blank value").send();
      return true;
    } else {
      var myHelper = new MyHelper();

      try {
        var myVar = myHelper.myIntent(inputDate);
        var formattedOutput = myHelper.formatMyIntentResponse(inputDate,myVar).toString();
        res.say(formattedOutput).send();
        // it works outcome reported to Google Analytics
        var requestedData = ("inputDate: " + inputDate + " myVar: " + myVar).toString();
        intentTrackingID.event("success", requestedData).send();

      } catch (error) {
        console.log("error", error);
        var prompt = 'Something is mucked up';
        res.say(prompt).reprompt(reprompt).shouldEndSession(false).send();
        // it fails outcome
        intentTrackingID.event("error", error.toString()).send();
      }
      return false;
    }
  }
);

I tried to keep the code somewhat generic for each of my intents so I can cut and paste between them. Essentially, it’s five lines of code per intent to wire them to Google Analytics.

Here they are:

// Initialize intentTrackingID with this intent's Google tracking ID.
// Make sure this is a locally-scoped var within each intent function.
var intentTrackingID = ua('UA-********-*');

// report a blank value
intentTrackingID.event("invalid request","blank value").send();

// report a success
var requestedData = ("inputDate: " + inputDate + " myVar: " + myVar).toString();
intentTrackingID.event("success", requestedData).send();

// report a failure
intentTrackingID.event("error", error.toString()).send();

A side note on reporting a success: I wanted to see what data was being plugged into my skill, so I assembled it into a string inside the parentheses. The .toString() call ensures var requestedData holds something the .event() function can understand; without it, the event won’t get passed along to Google Analytics.

Commit as you go along.

For each intent you successfully wire up, save and test to confirm the event gets posted to Google Analytics before starting work on the next one. Once your intent is wired up, go to Google Analytics, click your property, then click “Real Time”, then click “Events”. I’ve observed a lag of 1-5 seconds before reported events show up. Whatever you put in your “Event Category” and “Event Action” will populate on the page once you’ve got it configured. It’ll look something like this:

[Screenshot: Google Analytics Real-Time Events view]

User Testing

Don’t submit for certification without testing all of your intents on Amazon’s test server. As much as you think you’ve dotted all the i’s and crossed the t’s, you’ll discover minor issues when you test on Amazon’s servers that don’t crop up when testing on a local node server.

Make a punch list of issues

Make a list in a text editor of issues you bump into as you test all of your functions. I sorted mine into:

  • Outstanding issues
  • Resolved issues

Run through your outstanding issues, fix them, and move them to the resolved list. When you find a problem, make sure to save the error code returned from Amazon’s server along with the phrase that caused it. Once you’ve knocked out everything on the list, re-upload, then retest each previously problematic invocation phrase.

Several issues I encountered on the first upload:

1. Error: Unable to parse the provided SSML. The provided text is not valid SSML.

Resolution: I got this when testing two functions. Chances are you have an issue with your SSML tag, like a missing comma or a missing /. Try comparing the difference between a function where the SSML output works and the problematic function; a website like DiffNow will diff the text of the two for you.

Adding SSML formatting to your returns is an invitation for a typo, so I refactored my code and created a method in my helperClass.js that does it for me. Here’s the snippet I used to prevent typos. To use it, just pass in the string that needs formatting:

function addSayAsDateTag(stringDate) {
	// You can add on whatever interpret-as you want to this
	return ('<say-as interpret-as="date">'+stringDate+'</say-as>').toString();
}
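
A quick usage sketch (the date string is just an example):

// Hypothetical call inside an intent handler:
res.say('Your date is ' + addSayAsDateTag('2016-07-04')).send();
// Sends to Alexa: Your date is <say-as interpret-as="date">2016-07-04</say-as>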

2. Alexa returned the wrong date (which was ~4 hours off)

Resolution: In Amazon’s ASK developer forum, I discovered that Amazon’s testing servers go off UTC time, while the physical devices go off the user’s local time. Nothing to do here, thankfully.
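
If you want to see the gap yourself, here’s a quick Moment.js sketch:

var moment = require('moment');

// Test servers run on UTC, so a late-evening local time can land on a
// different calendar date than the one you expect.
console.log(moment().format());     // local time on whatever box is running
console.log(moment.utc().format()); // UTC, which Amazon's test servers use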

3. Missing variable names in speech phrases

I used lodash templates to return phrases; Lodash made it very simple to create a template to feed speech output. I found a couple of instances where I had a slot configured for a day variable but didn’t have the word “day” after the variable in the template. Make sure you fill in the unit words after your variables when outputting speech text.
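
Here’s a minimal sketch of what I mean; the template and variable names are hypothetical:

var _ = require('lodash');

// Missing the unit word: Alexa would say "There are 42 until Christmas."
var badPhrase = _.template('There are <%= dayCount %> until <%= holiday %>.');

// With "days" filled in after the variable:
var goodPhrase = _.template('There are <%= dayCount %> days until <%= holiday %>.');

console.log(goodPhrase({ dayCount: 42, holiday: 'Christmas' }));
// => There are 42 days until Christmas.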

4. Unable to generate request for your skill

If you encounter this, save whatever invocation phrase failed and revamp your utterance to capture that intent. Alexa-utterances does a great job automating the process of utterance creation.
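
For reference, here’s the flavor of template it expands; the phrases below are hypothetical, and each (a|b) alternation and {-|SLOT} placeholder multiplies out into full sample utterances:

'utterances': [
  '(how many|what number of) days (until|till) {-|INPUTDATE}',
  '(what|which) day of the week (is|was) {-|INPUTDATE}'
]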

Beta Feedback

I used Moment.js extensively in my skill to get information about calendar dates that’s normally a PITA to track down. One of my testers said, “I wish it could calculate years, months, and days between dates.” A little digging and I discovered someone had written a Moment.js plugin that does just that.
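
The plugin handles the bookkeeping, but the underlying calculation looks roughly like this plain Moment.js sketch (the dates are hypothetical):

var moment = require('moment');

var start = moment('2015-01-15');
var end = moment('2016-03-20');

// Peel off whole years, then whole months, then the leftover days.
var years = end.diff(start, 'years');
start.add(years, 'years');
var months = end.diff(start, 'months');
start.add(months, 'months');
var days = end.diff(start, 'days');

console.log(years + ' years, ' + months + ' months, ' + days + ' days');
// => 1 years, 2 months, 5 days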

One of the challenges I encountered incorporating the new feature was keeping intents from “colliding”. Basically, if you have two functions doing different flavors of the same thing, then for each new function you add, you’ll need to test utterances for both the new function and the original “flavor” to ensure Alexa doesn’t route requests to the wrong intent. It’s not rocket science, but it is a T you’ve got to cross.

Thoughts on Bloc’s Alexa Project

Over the past year, I’ve learned an enormous amount about Objective-C and Swift through Bloc. Bloc’s methodology, coupled with its mentorship, definitely filled in the blanks where I previously got stuck learning on my own. It enabled me to accomplish my goal of learning to program for iOS.

I was intrigued by Mark Carpenter’s Echo presentation, but when I heard, “Amazon’s Echo can be programmed in Java, JavaScript, and Python” I felt like this:

[Reaction GIF: how I felt]

My Bloc mentor told me most of programming is learning how to learn. Fast forward a couple of weeks: I’ve built three skills for the Amazon Echo, starting from zero with JavaScript. Once you’ve got a language or two you’re proficient in, you discover that new ones are all pretty similar to what you already know in terms of functionality and usage.

Thoughts on Amazon Echo

I think the Amazon Echo family of products is analogous to the early 2000s, when Apple rolled out iTunes and iPods. Just as Apple wasn’t the first company to manufacture an MP3 player, Amazon’s not the first to do voice-controlled computing. Amazon is doing for ambient computing what Apple did for online music distribution: it’s taken something that’s been around for years and addressed the issues that kept it from expanding beyond gearheads to Joe Sixpack.

Siri does speech recognition, but the only time I’ve used her is to see if I can get the same silly answers others get when I yank her chain. Why don’t I use her more? Because it’s a PITA! I’ve gotta find my phone. I’ve gotta hold the button down until Siri beeps. Then there’s a little lag time for Bluetooth in my car, so Siri only hears part of what I’m saying, and I’ve gotta do it all over. Pretty much every implementation of voice-controlled computing, pre-Echo, revolved around pressing a button.

Chances are, if I’m too lazy to press a button, I probably don’t want to lift a finger to use voice control either. Amazon’s built its business addressing people’s lassitude and their demand for convenience. I initially thought, “How does the Echo fit into Amazon’s business?” After a couple of weeks monkeying with Alexa and Lambda, it’s crystal clear.

Amazon’s laid the foundation for an ambient computing ecosystem with the Echo at its center. Consumers buy the hardware. Amazon’s created APIs and the Alexa Portal for developers. Amazon’s also addressed business demands by providing a back end for supplying data through Amazon Web Services (AWS). The product is at a price point where I think it’s going to generate quite a demand cycle, and the ecosystem is there to support its growth. It’s very exciting to see where this technology will lead in the coming years.