Thoughts on the beta Alexa Skill Builder certification
Julien Simon
Posted on January 28, 2019
Earlier this month, we announced a new AWS certification on Alexa skill development. As usual, the exam is first released as a beta exam, which you can only take until February 1st. Well, I took it today (because why not), and although I won't go into specific details, here are some remarks that may help you prepare.
A little context first. I felt pretty good about the beta Machine Learning certification that I took at re:Invent, because whatever my ML skills are worth, I do spend most of my time working on the stuff. Things are different here: my Alexa experience is mostly limited to my Johnny Pi project, with insufficient attempts at keeping up with the new stuff (screens! screens!) and no real effort to prepare for this exam (because… only 24 hours a day?). That's probably important in understanding some of the bewildered comments below. That's also my best excuse for quite likely failing the exam. At least now I know what I don't know, and you can also thank me for lowering the bar :D
The exam has 85 questions and lasts for 3 hours (I was done in 90 minutes or so). Here are the topics listed in the exam guide (PDF).
Domain 1: Voice-First Design Practices and Capabilities (14%)
1.1 Describe how users interact with skills
1.2 Map features and capabilities to use cases
Opinion: funnily enough, I don't remember getting many of these (?). Blame randomization, I guess.
Domain 2: Skill Design (24%)
2.1 Design and develop an interaction model
2.2 Design a multi-turn conversation
2.3 Use built-in intents and slots
2.4 Handle unexpected conversational requests or responses
2.5 Design multi-modal skills using one or more service interfaces (for example, audio, video, and gadgets)
Opinion: lots of questions on the interaction model, obviously. Intents, slots (built-in and custom), synonyms, etc. Unsurprisingly, this is the largest domain and you'll get pounded: leave no stone unturned on interaction models. I had several questions on how to deal with misunderstood intents (one of the critical aspects of building a high-performing skill), so you definitely want to be comfortable with AMAZON.FallbackIntent. Also, make sure you have experience with screen-based devices such as the Echo Spot: I got a lot of questions on designing GUI-enabled skills (cards, etc.). Having spent most of my time with Echos and Echo Dots, I really did have a hard time here (GUI programming has always been where I stop). Having said that, I didn't get any questions on the Alexa Presentation Language, which is probably too new to be featured. Here's a nice re:Invent video if you're curious.
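If you've never looked at one in detail, here's a minimal sketch of what an interaction model could look like, with a custom slot type, synonyms, and AMAZON.FallbackIntent declared (the invocation name, intent and slot names are made up for illustration):

```json
{
  "interactionModel": {
    "languageModel": {
      "invocationName": "coffee shop",
      "intents": [
        {
          "name": "OrderDrinkIntent",
          "slots": [
            { "name": "drink", "type": "DRINK_TYPE" }
          ],
          "samples": [
            "order a {drink}",
            "I would like a {drink}"
          ]
        },
        { "name": "AMAZON.FallbackIntent", "samples": [] },
        { "name": "AMAZON.HelpIntent", "samples": [] },
        { "name": "AMAZON.CancelIntent", "samples": [] },
        { "name": "AMAZON.StopIntent", "samples": [] }
      ],
      "types": [
        {
          "name": "DRINK_TYPE",
          "values": [
            { "name": { "value": "espresso", "synonyms": ["short black"] } },
            { "name": { "value": "latte", "synonyms": ["flat white"] } }
          ]
        }
      ]
    }
  }
}
```

Synonyms are resolved back to the canonical slot value through entity resolution, which is exactly the kind of detail worth being comfortable with.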
Domain 3: Skill Architecture (14%)
3.1 Identify AWS services for extending Alexa skill functionality (Amazon CloudFront, Amazon S3, Amazon CloudWatch, and Amazon DynamoDB)
3.2 Use AWS Lambda to build Alexa skills
3.3 Follow AWS and Alexa security and privacy best practices
Opinion: this one's about connecting your skill to AWS services, so I was back in known territory! I did get questions on all the services mentioned above plus IAM, but nothing nasty: newcomers to AWS with a basic understanding of what they do and why they matter in the Alexa context will do just fine. There was a stronger focus on Lambda for obvious reasons, and I'd recommend spending a little more time with the service itself (timeouts, logs, etc.).
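If Lambda is new to you, this is roughly what a skill backend looks like: a minimal sketch, assuming the ASK SDK for Python (ask-sdk-core) is packaged with the function, with made-up speech text.

```python
# Minimal Alexa skill backend for AWS Lambda (illustrative sketch, not production code).
# Assumes the ASK SDK for Python (ask-sdk-core) is bundled with the function.
from ask_sdk_core.skill_builder import SkillBuilder
from ask_sdk_core.dispatch_components import AbstractRequestHandler
from ask_sdk_core.utils import is_request_type, is_intent_name


class LaunchRequestHandler(AbstractRequestHandler):
    """Handles the LaunchRequest sent when the skill is opened."""
    def can_handle(self, handler_input):
        return is_request_type("LaunchRequest")(handler_input)

    def handle(self, handler_input):
        return (handler_input.response_builder
                .speak("Welcome! What would you like to order?")
                .ask("You can ask for an espresso or a latte.")
                .response)


class FallbackIntentHandler(AbstractRequestHandler):
    """Catches utterances that don't match any intent (AMAZON.FallbackIntent)."""
    def can_handle(self, handler_input):
        return is_intent_name("AMAZON.FallbackIntent")(handler_input)

    def handle(self, handler_input):
        return (handler_input.response_builder
                .speak("Sorry, I didn't get that. You can order a drink.")
                .ask("What would you like?")
                .response)


sb = SkillBuilder()
sb.add_request_handler(LaunchRequestHandler())
sb.add_request_handler(FallbackIntentHandler())

# Lambda entry point: point the function's handler setting at this object.
handler = sb.lambda_handler()
```

Don't forget the operational side either: the function's execution role (IAM), its timeout, and where its logs end up (CloudWatch Logs) are all fair game.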
Domain 4: Skill Development (20%)
4.1 Implement in-skill purchasing and Amazon Pay for Alexa Skills
4.2 Use Speech Synthesis Markup Language for expression and MP3 audio
4.3 Implement state management
4.4 Implement Alexa service interfaces (audio player, video player, and screens)
4.5 Parse Alexa JSON requests and provide responses
Opinion: no Amazon Pay this time, but I did get a few questions on in-skill purchasing (one-shot and subscriptions). SSML popped up too, and my experience with Amazon Polly was definitely useful: play with this stuff and know the major features (prosody, etc.). Of course, state management is important when you want to build elaborate skills (managing sessions, persisting data to an AWS backend). And yes, I got code-level GUI questions (cards, etc.) and I hated every single one of them, although some of them can definitely be figured out by applying sound judgement.
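Here's a rough sketch of how state management and SSML could show up in a single handler, continuing the Python example above (OrderDrinkIntent and the speech text are hypothetical): session attributes cover in-session state, and SSML tags go straight into the speech text.

```python
# Continues the earlier sketch: in-session state plus SSML output in one handler.
class OrderDrinkIntentHandler(AbstractRequestHandler):
    def can_handle(self, handler_input):
        return is_intent_name("OrderDrinkIntent")(handler_input)

    def handle(self, handler_input):
        # Session attributes only live as long as the session; for state that
        # must survive across sessions, use a persistence adapter backed by an
        # AWS store such as DynamoDB.
        session_attr = handler_input.attributes_manager.session_attributes
        session_attr["drinks_ordered"] = session_attr.get("drinks_ordered", 0) + 1

        # SSML controls how Alexa speaks: prosody, pauses, audio clips, etc.
        # (the SDK takes care of the outer <speak> wrapper).
        speech = ('<prosody rate="slow">Your order is confirmed.</prosody>'
                  '<break time="500ms"/> Would you like anything else?')
        return (handler_input.response_builder
                .speak(speech)
                .ask("Would you like anything else?")
                .response)


sb.add_request_handler(OrderDrinkIntentHandler())
```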
Domain 5: Test, Validate and Troubleshoot (18%)
5.1 Debug and troubleshoot using Amazon CloudWatch or other tools
5.2 Use the Alexa developer testing tools
5.3 Perform beta testing
5.4 Troubleshoot errors in the interaction model
Opinion: Lambda, Lambda everywhere! Lots of questions on troubleshooting functions (timeouts, logs, permissions, etc.), so once again, spend some time studying the service on its own. Some questions on interaction models with weird/buggy JSON, but if you spend your time writing skills, you've probably written much worse (I know I did).
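Whatever your runtime, everything your function logs ends up in CloudWatch Logs, so a catch-all exception handler that logs the incoming request is a cheap way to make troubleshooting bearable. A sketch, continuing the Python example above:

```python
import json
import logging

from ask_sdk_core.dispatch_components import AbstractExceptionHandler

logger = logging.getLogger(__name__)
logger.setLevel(logging.INFO)


class CatchAllExceptionHandler(AbstractExceptionHandler):
    """Logs unhandled errors (and the offending request) to CloudWatch Logs."""
    def can_handle(self, handler_input, exception):
        return True

    def handle(self, handler_input, exception):
        logger.error("Unhandled exception: %s", exception, exc_info=True)
        logger.info("Request envelope: %s",
                    json.dumps(handler_input.request_envelope.to_dict(), default=str))
        return (handler_input.response_builder
                .speak("Sorry, something went wrong. Please try again.")
                .ask("Can you say that again?")
                .response)


sb.add_exception_handler(CatchAllExceptionHandler())
```

And if the skill fails without an exception, check the Lambda timeout setting first: the default is only a few seconds, which is easy to blow through if your handler calls other services synchronously.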
Domain 6: Publishing, Operations, and Lifecycle Management (10%)
6.1 Describe the skill publishing process
6.2 Add and remove users in the developer console
6.3 Perform analysis of skill analytics in the developer console
6.4 Differentiate among the statuses/versions of skills (for example, In Development, In Certification, and Live)
Opinion: lots of questions on the skill lifecycle, which you won't have fully experienced unless you've really pushed skills to production (review process, updates to live skills, etc.): you should go through this a couple of times, or you'll be missing some info. A couple of questions on analytics, so again, make sure you have a couple of live skills, or at least spend some time studying that part of the console in detail (statistics on intents, occurrences, etc.).
A couple of blind spots that were not mentioned at all in the exam guide (naughty, naughty): the ASK CLI (nothing bad, just play with it for a bit) and personalization (accessing customer information, account linking, etc.). This is not unusual for a beta certification; I'd expect the exam guide to be more accurate once the certification is final… and these topics are important, so make sure to study them too.
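For the ASK CLI part, a short session like this already covers most of it (exact commands and flags depend on the CLI version, and the invocation text below is just my made-up example, so double-check with ask help):

```bash
# Link the CLI to your Amazon developer account (one-time setup)
ask init

# Create a skill project from a template, then deploy the interaction
# model and the Lambda backend in one command
ask new
ask deploy

# Fire a test utterance at the skill without opening the developer console
ask simulate --text "open coffee shop" --locale en-US
```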
Overall, I think that the beta exam really hits the target (probably more than the ML exam, which felt a little less polished). I had a miserable time with the screen/GUI questions, but that's just me: proper skill developers should do just fine, provided that they spend a little time working with backends too.
Trolling aside, why should voice assistants have screens? Seriously? Now I’ve got a proper reason NOT to support that trend ;)
Did you also take the exam? Please share your comments here, but refrain from sharing detailed information as I’ll have to delete it.
As always, thanks for reading.