The Energy Drink Episodes 3: The Step Function Awakens
Lee Priest
Posted on December 5, 2023
This post is the third part of a ‘Let’s CDK’ series. You can catch up on part 1 here and part 2 here. The series introduces the AWS Cloud Development Kit (CDK) and how it can be used to create an application, covering the core concepts of CDK, how you can deploy to your AWS account, and how services like AWS Lambda Functions, DynamoDB, Step Functions and more can all be used when working with CDK.
Where were we?
In our last episode, we created a logical flow diagram for our app so we could be sure of its purpose and the flow we’ll need to achieve it. We also learned about the AWS CDK constructs for API Gateway and created our own custom L3 construct. We then created a stack and initialised our new custom construct to create our REST API. Feel free to have a read-through of the previous episode if you need a refresher before you continue on.
What are we doing today, Lee?
I’m glad you asked! If we look at the current state of our app, we can see that we’ve got a Stack, a Nested Stack and a REST API. It’s cool that we have those, but if we were to try and call the API, nothing would be invoked, and nothing would really happen. Let’s look at changing that.
Let’s take another look at our logical flow diagram:
At the moment, we are technically able to receive a request, but we don’t have any way of running logic against the incoming payload. Before diving in and throwing TypeScript around to create some handlers, let’s first think about what logic we want to run. Remember, this app we are building is an ‘Energy drink selector’, so we want to ensure we include relevant logic gates. Potential logic includes:
- Payload validation: We want to be sure that the payload we receive is valid before we try to do too much with it.
- Sugar vs. Sugar-free: The user wants to be able to have a drink recommendation based on their preference for sugar or sugar-free.
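The two checks above can be sketched in plain TypeScript to pin down exactly what we mean. This is just an illustration of the logic, not code we’ll deploy — the `DrinkRequest` shape and the `sugar` field are assumptions based on the payload we’ll be working with later:

```typescript
// Hypothetical shape of the incoming payload
type DrinkRequest = {
  sugar: boolean;
};

// Payload validation: the payload must be an object and `sugar` must be a boolean
const isValidDrinkRequest = (payload: unknown): payload is DrinkRequest => {
  return (
    typeof payload === 'object' &&
    payload !== null &&
    typeof (payload as DrinkRequest).sugar === 'boolean'
  );
};

isValidDrinkRequest({ sugar: false }); // → true
isValidDrinkRequest({});               // → false
```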
What Could We Use?
So, we have our logic needs decided. What AWS services could we use to enable us to run our logic checks? We could pass the payload straight from API Gateway to a Lambda function and then run our logic within that function. This would definitely work, but there are a couple of things that we may want to consider about this approach:
Having the payload validation in a Lambda function is not an issue in itself. However, different payloads may have different validation needs, which could grow the Lambda function and cause unnecessary bloat. We could instead shift the validation to a point before the Lambda function is invoked and return early on failure. This would save on unnecessary compute and would also be kinder to our wallets.
The logic for sugar vs. sugar-free would need to be contained within this single Lambda function. Sure, we could add this all in and get it to work, but we need to consider the long-term maintainability of the Lambda function. We should aim to keep our Lambda functions and all of our infrastructure as concise as possible.
Considering the above points, an alternative to a Lambda function could be using AWS Step Functions.
AWS Step Functions
If you have ever spoken to me, read anything I've written or listened to any talks I’ve done in relation to Serverless or infrastructure as code, there is a high likelihood that I have confessed my love for Step Functions. Even when unprompted. Putting my biases aside, however, there are some legitimate reasons we can consider using them in our app. If you are new to Step Functions or just fancy a refresher, have a read about them here.
AWS Step Functions provides serverless orchestration for modern applications. Orchestration centrally manages a workflow by breaking it into multiple steps, adding flow logic, and tracking the inputs and outputs between the steps.
In a nutshell, AWS Step Functions lets us create a workflow that we can configure to run various things. It allows us to run checks using flow steps, run actions using baked-in integrations with services like Dynamo DB and 200+ more AWS services, and use pre-built patterns to process S3 objects.
We could leverage many features baked into Step Functions to tackle the considerations we had above regarding using a single Lambda function.
- Payload validation: A ‘Choice’ step would allow us to perform basic validation on the incoming payload, such as checking that values exist and are of the correct type. For any failed validation, we can route to an error handler and exit early.
- Specific routing: Another use of a ‘Choice’ step would be to route the execution based on part of the Step Function input. For example, if the incoming payload had sugar: false set, we could route down a specific path and eliminate running unnecessary compute and other steps.
- We can still use Lambda Functions: For any logic that is a bit more in-depth, we can invoke a Lambda function and pass in the payload via the Step Function. Combined with the ‘specific routing’ mentioned above, we can create very concise Lambda functions with dedicated logic for each particular route.
- Storage first pattern: We could also easily employ a storage first pattern using the built-in Dynamo DB integration. Jeremy Daly has a great article detailing the storage first pattern. Using this pattern will allow us to capture the raw input to the step function and potentially handle things like retries and a number of other things if needed.
With everything I’ve mentioned above and the article's title, you have probably worked out that we will use a Step Function rather than a single Lambda function to handle the ‘Run logic against request payload’ part of our logical flow.
Something to Consider
Step Functions come in two different ‘flavours’, or for the official term, workflow types. These two different types are standard and express.
- Standard: This type of Step Function workflow is ideal for long-running workflows and can run for up to one year. They are billed by state transition and can use things like task tokens to enable the ‘pausing’ of an execution while another task is completed and a response is received.
- Express: This type of Step Function workflow is ideal for high-volume, event-processing workflows and can run for up to five minutes. Express Step Functions are billed by the number of executions, the duration of each execution and the memory consumed while the execution ran. Express Step Function workflows can return synchronous responses when started with `StartSyncExecution`.
Let’s now consider which type of workflow suits our needs the best. The payload we expect will be small and won't need much processing done to it. We won’t have any other microservices or third-party integrations to contact, so we won’t need a task token. We need to ensure that we can send a synchronous response back to the client that called so we can return their drink recommendation. Given the descriptions above, it looks like an Express workflow is the way to go!
Wait a Minute…
Before we create our Express Step Function, we need to make sure that we have the resources it will need to run, or at least some placeholders that we can flesh out later. As we are going to run a storage-first pattern, we will need to set up a Dynamo DB table. We will also need a couple of Lambda Functions to be able to run some logic against the incoming payload.
Let’s jump back into our code and open up `energy-drink-selector-stack.ts` in the `lib` directory. We currently have our API Gateway stack and a `GET` method set up that’s not really doing much. We can remove that `GET` method, as we’ll be hooking up our Step Function later.
Adding Our Dynamo DB Table
If you aren’t too familiar with Dynamo DB, I would definitely suggest having a read over the documentation. We aren’t going to be getting too in-depth with Dynamo DB, but we will be creating a table and running a `PutItem` call to store our incoming payload.
In order to create a new Dynamo DB table using CDK, we can refer to the documentation and see that there is a `Table` construct that we can use. Let’s use that `Table` construct to create our table:
const energyDrinkTable = new Table(this, 'EnergyDrinkDynamoTable', {
tableName: 'EnergyDrinkTable',
billingMode: BillingMode.PAY_PER_REQUEST,
partitionKey: {
name: 'PK',
type: AttributeType.STRING
},
sortKey: {
name: 'SK',
type: AttributeType.STRING
}
});
In the code snippet above, there are a few things we can see:
- We initialise the `Table` construct and pass it the `this` value as its scope. Remember, `this` refers to our `EnergyDrinkSelectorStack`
- For the second argument, we pass a logical ID
- We set the `tableName` property to ‘EnergyDrinkTable’ to avoid using an autogenerated name provided by CDK. Just make sure that this value is always unique to avoid deployment headaches and other issues.
- We then specify both a partition key and a sort key as `PK` and `SK`
So now our `EnergyDrinkSelectorStack` should look something like this:
import { Stack, StackProps } from 'aws-cdk-lib';
import { Construct } from 'constructs';
import { APIGatewayStack } from '../stacks/APIGatewayStack';
import { AttributeType, Table, BillingMode } from 'aws-cdk-lib/aws-dynamodb';
export class EnergyDrinkSelectorStack extends Stack {
constructor(scope: Construct, id: string, props?: StackProps) {
super(scope, id, props);
const apiGatewayStack = new APIGatewayStack(this, 'EnergyDrinkSelectorAPIGatewayStack');
const energyDrinkTable = new Table(this, 'EnergyDrinkDynamoTable', {
tableName: 'EnergyDrinkTable',
billingMode: BillingMode.PAY_PER_REQUEST,
partitionKey: {
name: 'PK',
type: AttributeType.STRING
},
sortKey: {
name: 'SK',
type: AttributeType.STRING
}
});
}
}
And with that, we now have our Dynamo DB table configured! When we next deploy, CDK will synthesise a CloudFormation template and create the Dynamo DB table for us.
Let’s Add Some Placeholders For Now…
So we have our shiny new Dynamo DB table configured. Let’s now turn our attention to the Lambda Functions. For the time being, we can set up some placeholder functions that won't do much but will allow us to configure our Step Function. We can always come back to the Lambda Functions later and flesh them out with whatever logic we need.
At the root of our project, let’s add a `src` directory and, within that, a `functions` directory. This is where all of our Lambda Functions will live. Let’s also create a directory per Lambda Function. So, in our case, let’s create a `sugar` and a `sugarFree` directory inside our `functions` directory. Our structure should look something like this:
src/
└───functions/
    ├───sugar/
    └───sugarFree/
As we can’t do too much with empty directories, let’s add a file to each one. Add a `sugar.ts` and a `sugarFree.ts` file in their respective directories. For the actual code in the `.ts` files, we are just going to have a very basic placeholder that will return a message. We can use the following code for our handler. Just make sure to update the name (`sugar` to `sugarFree`) and the message to match the file they are in:
export const sugar = async () => {
return {
statusCode: 200,
body: {
message: "Hello, Sugar World!"
},
};
}
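For completeness, here is what the `sugarFree.ts` counterpart would look like after those renames (the exact message wording is up to you — this one is an assumption):

```typescript
// src/functions/sugarFree/sugarFree.ts — placeholder handler
export const sugarFree = async () => {
  return {
    statusCode: 200,
    body: {
      message: "Hello, Sugar Free World!"
    },
  };
}
```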
Okay, we now have our Dynamo DB table and placeholder handler functions. We have everything we need to start building our Step Function!
Let’s Get Stepping!
There are a few different ways that you can build out Step Functions using CDK. One way we could approach it is by taking the JSON generated by the Workflow Studio and passing it as the `definitionBody` to the `StateMachine` construct. There is nothing wrong with this approach, and you can even set up some cool integrations in VS Code to visualise your Step Function.
The way we are going to approach our Step Function creation is to build out the definition ourselves using the various classes and constructs made available to us via the `aws-cdk-lib` library. Our first step on this journey is to create our own L3 construct for our express Step Function.
Create the Construct
Remember our `constructs` directory from the previous episode? Let’s head back there and create a new directory called `StepFunctions`. Inside that directory, create a new file called `ExpressStepFunction.ts`. Within our newly created file, let’s first add the imports we’ll be needing. These are:
import { Stack, Duration } from 'aws-cdk-lib';
import { LogGroup } from 'aws-cdk-lib/aws-logs';
import { Construct } from 'constructs';
import {
StateMachine,
DefinitionBody,
IChainable,
StateMachineType,
LogLevel
} from 'aws-cdk-lib/aws-stepfunctions';
A quick look at the imports above shows that we are importing the `Stack` construct and the `Duration` class. The `Stack` will be used when typing the scope for our custom construct, and `Duration` will be used to set the timeout duration. Importing `LogGroup` will allow us to create a new log group so we have some observability on our Step Function. As we’re creating our own custom construct, we need to extend the base `Construct` provided by the constructs library. Our final line of imports sees us importing all the fun things needed to initialise the `StateMachine` and ensure it has the correct typings and parameters set.
We can re-use the bare-bones template we used when creating our `APIGateway` construct:
export class ExpressStepFunction extends Construct {
constructor(scope: Stack, id: string, props: ExpressStepFunctionProps) {
super(scope, id);
// This is where we're gonna have some fun!
}
}
Before we start having the proper fun of fleshing out the construct, we need to define the `ExpressStepFunctionProps` type. Above the export, let’s add the following type:
type ExpressStepFunctionProps = {
serviceName: string;
stage: string;
definition: IChainable;
timeout?: number;
}
We have seen the `serviceName` and `stage` props before in our `ApiGateway` construct. But we now have two new parameters:
- definition: Typed as `IChainable`. This will be the actual definition of the steps the Step Function should take. As the name suggests, the definition will consist of a bunch of chainable states.
- timeout: Typed as a number. This number will be passed to the `Duration` we imported earlier, allowing us to specify a default or a specific timeout value for a given Step Function.
Now, we can have some fun fleshing out the construct. Let’s first destructure some variables from our `props` object and set up a new `LogGroup` for the Step Function. We can do this as follows:
const {
serviceName,
stage,
definition,
timeout = 5
} = props;
const logGroup = new LogGroup(
this,
`${serviceName}-express-stepfn-logs-${stage}`
);
Using the `LogGroup` class, we created a new CloudWatch log group that we can pass to the Step Function. I find it pretty cool that creating a log group is so easy, and hopefully, you do too! All we need to do is pass the scope, which is our Step Function construct, and a logical ID for the log group, and CDK will work its magic at synth time.
note: If you get an error when trying to deploy relating to the LogGroup name, you may need to prefix the log group name (for Step Functions logging, AWS recommends names starting with `/aws/vendedlogs/`). You can read more here.
Creating the actual state machine is also fairly straightforward. We can use the `StateMachine` construct and a few arguments, and we’re good to go:
this.stateMachine = new StateMachine(
this,
`${serviceName}-express-stepfn-${stage}`,
{
definitionBody: DefinitionBody.fromChainable(definition),
timeout: Duration.minutes(timeout),
stateMachineType: StateMachineType.EXPRESS,
stateMachineName: `${serviceName}-express-stepfn-${stage}`,
logs: {
destination: logGroup,
level: LogLevel.ALL,
includeExecutionData: true
}
}
);
Let’s break down the `StateMachine` call above and the arguments passed to it:
- this: The scope of the `StateMachine`, which, in this case, is our new `Construct`
- ${serviceName}-express-stepfn-${stage}: This is the string we’re passing as the logical ID. We use template strings to allow us to use variables to build the name dynamically. This could be expanded to take another variable like `stepFunctionName` or something similar to allow for more granularity. But for our current needs, it’s fine as is.
- definitionBody: The ‘meat’ of our Step Function. This will be a chain of steps we pass that will be used to build out the steps in the Step Function.
- timeout: The maximum `Duration` of time the Step Function can run for. We default this to 5 minutes, which matches the maximum duration for an Express workflow.
- stateMachineType: This is where we explicitly specify the Step Function type to be express. We do this using the `StateMachineType.EXPRESS` enum.
- stateMachineName: Not to be confused with the logical ID. This is the name of the Step Function, so if we’re looking in the AWS console, we can easily find it, as it won’t be an auto-generated name given by CDK.
- logs: This object allows us to specify our logging configuration for the Step Function. We pass the log group we created earlier as the destination for the logs. We also specify the level to be the enum `LogLevel.ALL`, meaning that everything will get logged, not just errors or nothing at all. Setting `includeExecutionData` to true allows us to see the execution data in our logs.
An important thing to note when setting the logs object in your real-world applications:
Please be careful and don’t include any sensitive or PII data in your logs!
Let’s Get Defining!
We have our shiny new L3 Step Function construct, and now we need some actual step definitions to put it to good use. Let’s create a new directory in our `src` directory called `stepFunctions`. In the future, we may have some other fun things related to Step Functions in this directory, so let’s also create a directory called `definitions` inside the `stepFunctions` directory. With our directories created, let’s create a file called `energyDrinkSelector.ts`.
As we’ve done before, let’s start by looking at the imports we’ll need to create our definition:
import {
Pass,
Choice,
Fail,
JsonPath,
Condition
} from "aws-cdk-lib/aws-stepfunctions";
import {
LambdaInvoke,
DynamoPutItem,
DynamoAttributeValue
} from "aws-cdk-lib/aws-stepfunctions-tasks";
import { Stack } from 'aws-cdk-lib';
import { ITable } from 'aws-cdk-lib/aws-dynamodb';
import { IFunction } from "aws-cdk-lib/aws-lambda";
I won’t go through all of them, but you can see that we are importing various things from `aws-cdk-lib/aws-stepfunctions` and `aws-cdk-lib/aws-stepfunctions-tasks`. These refer to the flow steps (Pass, Choice, Fail, Condition) and integration tasks (LambdaInvoke, DynamoPutItem) we can use within Step Functions.
As we are still within the realm of TypeScript, we need to define a type for the props that our definition creation function will receive:
type energyDrinkSelectorDefinitionProps = {
stack: Stack
energyDrinkTable: ITable,
sugarFreeLambdaFunction: IFunction,
sugarLambdaFunction: IFunction
}
We have the usual `stack` prop so that we can scope things correctly. We are also going to be passing in the Dynamo DB table and the Lambda Functions we created earlier.
The Definition Creator Function
Next, we can set up a function to create and return our definition. Let’s start by defining the function and the props it will receive based on the type we just defined:
export const energyDrinkSelectorDefinition = ({
stack,
energyDrinkTable,
sugarFreeLambdaFunction,
sugarLambdaFunction
}: energyDrinkSelectorDefinitionProps) => {
// Looking mighty empty in here...
};
The above won’t get us too far, so let’s start adding things. The first thing we can do is create a call to Dynamo DB to store the ‘raw’ input received by the Step Function:
const storeRawItem = new DynamoPutItem(
stack,
'StoreRawItem',
{
item: {
PK: DynamoAttributeValue.fromString(JsonPath.uuid()),
SK: DynamoAttributeValue.fromString('RAWITEM'),
body: DynamoAttributeValue.fromString(JsonPath.jsonToString(JsonPath.objectAt('$'))),
},
table: energyDrinkTable,
resultPath: '$.dynamoResult'
}
);
The above may look a little odd, especially in the `item` object, but we can break it down to remove the fear. Initialising the `DynamoPutItem` class, we pass the `stack` to ensure correct scoping, a logical ID and a props object. Within the props object, we can specify the fields of the item we want to create. As this call will be storing the ‘raw’ input, we are creating a ‘body’ parameter on the item object that will store the entire input passed to the Step Function execution.
All that scary-looking code on the body line (looking at you, `JsonPath`) is how we can access the state of the execution and format it to a type that Dynamo DB is happy with. We use the `objectAt` method to find the object at the location `$`. In Step Function land, the `$` represents the root of the execution state, so accessing the object at this location allows us to get the entire execution state. The `jsonToString` method is then used to make sure everything plays nicely with `DynamoAttributeValue` and the item object matches what `DynamoPutItem` expects.
We pass the `energyDrinkTable` reference to the `table` parameter on the props object. This will ensure that when the execution runs, it stores the item in the correct Dynamo DB table. We also use `resultPath` to store the result of the `DynamoPutItem` call in an object on the execution state called `dynamoResult`. This `dynamoResult` object can then be accessed at any point in the execution, as it is made part of the execution state.
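To make the `JsonPath` plumbing concrete, here’s a plain-TypeScript sketch of roughly what ends up stored and how `resultPath` merges the result back into the execution state. This is illustrative only — the real values are produced by Step Functions at runtime, and the `dynamoResult` contents shown are assumptions:

```typescript
import { randomUUID } from 'crypto';

// The input the execution starts with
const executionInput = { sugar: false };

// Roughly what the DynamoPutItem task writes
const item = {
  PK: randomUUID(),                    // JsonPath.uuid() generates this at runtime
  SK: 'RAWITEM',
  body: JSON.stringify(executionInput) // JsonPath.jsonToString(JsonPath.objectAt('$'))
};

// resultPath: '$.dynamoResult' merges the task result into the state rather than
// replacing it — the original input is still present afterwards
const stateAfterPut = {
  ...executionInput,
  dynamoResult: { /* PutItem response metadata would land here */ },
};
```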
Creating the Definition Chain
Now that we have our storage action set up, we can start looking at creating the actual Step Function definition chain. The definition chain is where we will define things like our `Choice`, `Pass` and `Fail` states. We will also set our catch blocks for catching errors and specify when our Lambda Functions should be invoked. We can create our definition chain by doing the following:
const validatePayload = new Choice(stack, 'Validate Payload');
const isSugarFree = new Choice(stack, 'Sugar vs Sugar Free');
const sugarFreePassState = new Pass(stack, 'Sugar Free Error', {});
const sugarPassState = new Pass(stack, 'Sugar Error', {});
const failState = new Fail(stack, 'Fail', {});
const definition = storeRawItem.addCatch(failState).next(
validatePayload
.when(Condition.isPresent('$.sugar'), isSugarFree
.when(Condition.booleanEquals('$.sugar', true), new LambdaInvoke(stack, 'Sugar Logic', {
lambdaFunction: sugarLambdaFunction,
}).addCatch(sugarPassState, { errors: ['States.ALL'] }))
.otherwise(new LambdaInvoke(stack, 'Sugar Free Logic', {
lambdaFunction: sugarFreeLambdaFunction,
}).addCatch(sugarFreePassState, { errors: ['States.ALL'] }))
)
.otherwise(failState)
);
return definition;
We can see in the example above that we create the states that we will be needing for the execution and pass them the `stack` as their scope. We also pass in the name of each state.
After creating the states we need, we create the Step Function definition. The `storeRawItem` step is called first to ensure we follow our storage first pattern, and then a `.next` allows us to chain on the next step in the Step Function. The `validatePayload` `Choice` state runs and checks for the existence of a `sugar` variable in the execution state. If the `sugar` variable is found, the chain continues on to the `isSugarFree` check. The `.otherwise` is used when conditions aren’t met. So, in this case, there is a `.otherwise` to go to the `failState` if the `sugar` variable is not present, and another to go to the `LambdaInvoke` of the sugar-free logic function if the `sugar` variable is found not to be set to `true`.
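As a sanity check, the branching above can be mirrored in plain TypeScript. This is purely for intuition — the real routing happens inside the Step Function — and the returned names match the logical IDs we used for each state:

```typescript
// Mirrors the Choice conditions in the definition chain
type ExecutionState = { sugar?: unknown };

const route = (state: ExecutionState): string => {
  // Condition.isPresent('$.sugar') — otherwise we hit the Fail state
  if (!('sugar' in state)) return 'Fail';
  // Condition.booleanEquals('$.sugar', true)
  if (state.sugar === true) return 'Sugar Logic';
  // .otherwise — anything else routes to the sugar-free Lambda
  return 'Sugar Free Logic';
};

route({ sugar: true });  // → 'Sugar Logic'
route({ sugar: false }); // → 'Sugar Free Logic'
route({});               // → 'Fail'
```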
If all has gone to plan, you should end up with a definition create function file that looks something like this:
import { Pass, Choice, Fail, JsonPath, Condition } from "aws-cdk-lib/aws-stepfunctions";
import { LambdaInvoke, DynamoPutItem, DynamoAttributeValue } from "aws-cdk-lib/aws-stepfunctions-tasks";
import { Stack } from 'aws-cdk-lib';
import { ITable } from 'aws-cdk-lib/aws-dynamodb';
import { IFunction } from "aws-cdk-lib/aws-lambda";
type energyDrinkSelectorDefinitionProps = {
stack: Stack
energyDrinkTable: ITable,
sugarFreeLambdaFunction: IFunction,
sugarLambdaFunction: IFunction
}
export const energyDrinkSelectorDefinition = ({
stack,
energyDrinkTable,
sugarFreeLambdaFunction,
sugarLambdaFunction
}: energyDrinkSelectorDefinitionProps) => {
const storeRawItem = new DynamoPutItem(stack, 'StoreRawItem', {
item: {
PK: DynamoAttributeValue.fromString(JsonPath.uuid()),
SK: DynamoAttributeValue.fromString('RAWITEM'),
body: DynamoAttributeValue.fromString(JsonPath.jsonToString(JsonPath.objectAt('$'))),
},
table: energyDrinkTable,
resultPath: '$.dynamoResult'
});
const validatePayload = new Choice(stack, 'Validate Payload');
const isSugarFree = new Choice(stack, 'Sugar vs Sugar Free');
const sugarFreePassState = new Pass(stack, 'Sugar Free Error', {});
const sugarPassState = new Pass(stack, 'Sugar Error', {});
const failState = new Fail(stack, 'Fail', {});
const definition = storeRawItem.addCatch(failState).next(
validatePayload
.when(Condition.isPresent('$.sugar'), isSugarFree
.when(Condition.booleanEquals('$.sugar', true), new LambdaInvoke(stack, 'Sugar Logic', {
lambdaFunction: sugarLambdaFunction,
}).addCatch(sugarPassState, { errors: ['States.ALL'] }))
.otherwise(new LambdaInvoke(stack, 'Sugar Free Logic', {
lambdaFunction: sugarFreeLambdaFunction,
}).addCatch(sugarFreePassState, { errors: ['States.ALL'] }))
)
.otherwise(failState)
)
return definition;
};
Let’s Stack That Step
Now that we have our Step Function definition, we can turn our attention to creating a stack where our Step Function can live. In the stacks
directory of the project, create a new file called EnergyDrinkSelectorStepFunctionStack.ts
. Continuing the tradition we have established so far, let’s take a look at the imports we’ll need for our EnergyDrinkSelectorStepFunctionStack
:
import path = require("path");
import { Construct } from "constructs";
import { NestedStack, NestedStackProps } from "aws-cdk-lib";
import { StateMachine } from "aws-cdk-lib/aws-stepfunctions";
import { NodejsFunction } from "aws-cdk-lib/aws-lambda-nodejs";
import { Architecture, Runtime } from "aws-cdk-lib/aws-lambda";
import { ITable } from "aws-cdk-lib/aws-dynamodb";
import { ExpressStepFunction } from "../constructs/StepFunctions/ExpressStepFunction";
import { energyDrinkSelectorDefinition } from "../src/stepFunctions/definitions/energyDrinkSelector";
Let’s take a quick look at some of the imports above that we haven’t used yet in this series:
- path: We’ll need this to build out some paths that are needed for our Lambda Functions
- NodejsFunction: This class can be used to build a Node.js Lambda Function bundled with esbuild. There are a few reasons to use this over the Function class; one of the primary ones for us is that the NodejsFunction simplifies the development process when working with TypeScript.
- Architecture & Runtime: These are used to specify what architecture and runtime we want our Lambda Functions to use
Next, let’s create the stack skeleton as we have previously:
type EnergyDrinkSelectorStepFunctionStackProps = NestedStackProps & {
table: ITable;
}
export class EnergyDrinkSelectorStepFunctionStack extends NestedStack {
constructor(scope: Construct, id: string, props: EnergyDrinkSelectorStepFunctionStackProps) {
super(scope, id, props);
const { table } = props;
// fun things live in here
}
};
In the code above, we first create a type for our props object that is passed into the stack. We take the `NestedStackProps` object and extend it to also contain a `table` parameter. Hopefully, this stack skeleton is looking quite familiar. We have the `EnergyDrinkSelectorStepFunctionStack` extending the `NestedStack` and the familiar constructor and super calls.
As we have realised, having the skeleton is great, but we want to have a bit more fun than just building a skeleton stack. Let’s add some meat to this skeleton by defining our express Step Function and any props it may need. Remember those placeholder Lambda Function handler files we created? Now we get to make use of them. Since our express Step Function will need them, and the function we created to build out the Step Function definition requires them as props, we first need to define the Lambda Functions themselves. We can do this as follows:
const sugarFreeLambdaFunction = new NodejsFunction(this, 'SugarFreeLambdaFunction', {
entry: path.join(__dirname, '../src/functions/sugarFree/sugarFree.ts'),
runtime: Runtime.NODEJS_18_X,
architecture: Architecture.ARM_64,
handler: 'sugarFree',
bundling: {
sourceMap: true,
minify: true,
tsconfig: path.join(__dirname, '../tsconfig.json'),
},
});
const sugarLambdaFunction = new NodejsFunction(this, 'SugarLambdaFunction', {
entry: path.join(__dirname, '../src/functions/sugar/sugar.ts'),
runtime: Runtime.NODEJS_18_X,
architecture: Architecture.ARM_64,
handler: 'sugar',
bundling: {
sourceMap: true,
minify: true,
tsconfig: path.join(__dirname, '../tsconfig.json'),
},
});
Breaking down the code above, we can see that we’re initialising the `NodejsFunction` class and passing it the scope (`this`, which refers to our stack) and a logical ID. Then, in the props object, we can see we are passing an entry path. This entry path is the path to the Lambda Function handler we created earlier.
The runtime and architecture parameters allow us to choose the runtime and architecture we want the Lambda Function to run on.
The `handler` parameter finds the actual handler function inside the file we specified in the entry path. So, in our case, we need to specify `sugarFree` or `sugar`, as these are the function names specified in the handler files.
The bundling object allows us to control how the Lambda Function is bundled, and we are able to pass our `tsconfig.json` file so our TypeScript configuration can be understood.
With our Lambda Functions defined, we can now initialise our custom express Step Function construct:
const energyDrinkSelectorStepFunction = new ExpressStepFunction(this, 'EnergyDrinkSelectorExpress', {
serviceName: 'energy-drink-selector',
stage: 'dev',
definition: energyDrinkSelectorDefinition({
stack: this,
energyDrinkTable: table,
sugarLambdaFunction: sugarLambdaFunction,
sugarFreeLambdaFunction: sugarFreeLambdaFunction
})
});
this.stepFunction = energyDrinkSelectorStepFunction.stateMachine;
Initialising our custom ExpressStepFunction construct, we pass it the scope, logical ID, and props object. The `serviceName` and `stage` parameters are pretty straightforward as they’re just strings. We get to have some fun creating the definition as we can leverage our `energyDrinkSelectorDefinition` function. We pass it the stack, the table we destructured from our stack props object and our Lambda Functions. We then assign the `stateMachine` generated from the construct to a public variable in the stack to make it accessible to other things that may need it. As we’re assigning to a public variable, we need to ensure that we have declared it. We can add the following just after the start of the stack definition, before the constructor call:
public readonly stepFunction: StateMachine;
So we should now have an `EnergyDrinkSelectorStepFunctionStack` that looks something like this:
import path = require("path");
import { Construct } from "constructs";
import { NestedStack, NestedStackProps } from "aws-cdk-lib";
import { StateMachine } from "aws-cdk-lib/aws-stepfunctions";
import { NodejsFunction } from "aws-cdk-lib/aws-lambda-nodejs";
import { Architecture, Runtime } from "aws-cdk-lib/aws-lambda";
import { ITable } from "aws-cdk-lib/aws-dynamodb";
import { ExpressStepFunction } from "../constructs/StepFunctions/ExpressStepFunction";
import { energyDrinkSelectorDefinition } from "../src/stepFunctions/definitions/energyDrinkSelector";
type EnergyDrinkSelectorStepFunctionStackProps = NestedStackProps & {
table: ITable;
}
export class EnergyDrinkSelectorStepFunctionStack extends NestedStack {
public readonly stepFunction: StateMachine;
constructor(scope: Construct, id: string, props: EnergyDrinkSelectorStepFunctionStackProps) {
super(scope, id, props);
const { table } = props;
const sugarFreeLambdaFunction = new NodejsFunction(this, 'SugarFreeLambdaFunction', {
entry: path.join(__dirname, '../src/functions/sugarFree/sugarFree.ts'),
runtime: Runtime.NODEJS_18_X,
architecture: Architecture.ARM_64,
handler: 'sugarFree',
bundling: {
sourceMap: true,
minify: true,
tsconfig: path.join(__dirname, '../tsconfig.json'),
},
});
const sugarLambdaFunction = new NodejsFunction(this, 'SugarLambdaFunction', {
entry: path.join(__dirname, '../src/functions/sugar/sugar.ts'),
runtime: Runtime.NODEJS_18_X,
architecture: Architecture.ARM_64,
handler: 'sugar',
bundling: {
sourceMap: true,
minify: true,
tsconfig: path.join(__dirname, '../tsconfig.json'),
},
});
const energyDrinkSelectorStepFunction = new ExpressStepFunction(this, 'EnergyDrinkSelectorExpress', {
serviceName: 'energy-drink-selector',
stage: 'dev',
definition: energyDrinkSelectorDefinition({
stack: this,
energyDrinkTable: table,
sugarLambdaFunction: sugarLambdaFunction,
sugarFreeLambdaFunction: sugarFreeLambdaFunction
}),
});
this.stepFunction = energyDrinkSelectorStepFunction.stateMachine;
}
}
We’re almost ready to deploy
We now have the vast majority of elements we need to successfully deploy our shiny new express Step Function. What's left is to ensure these new elements are added to our EnergyDrinkSelectorStack so they are picked up when we run our deployment.
Let's start by adding our new Step Function stack to the EnergyDrinkSelectorStack. We can do this by adding an import to energy-drink-selector-stack.ts in the lib directory:
import { EnergyDrinkSelectorStepFunctionStack } from '../stacks/EnergyDrinkSelectorStepFunctionStack';
Once we have the import, we can add the following line of code underneath our energyDrinkTable variable:
const stepFunctionStack = new EnergyDrinkSelectorStepFunctionStack(this, 'EnergyDrinkSelectorStepFunctionStack', { table: energyDrinkTable });
With this line in place, we could technically run the cdk deploy command and everything would synthesise and deploy fine. However, there are a few things we still need to consider.
As we want to trigger the Step Function via API Gateway, we need to ensure API Gateway has permission to actually invoke the Step Function execution. So, we need to create an IAM role that defines what API Gateway is allowed to do. Of course, AWS CDK has constructs we can use to do this. Let's get these imported into our energy-drink-selector-stack.ts file:
import { PolicyDocument, PolicyStatement, Role, ServicePrincipal } from 'aws-cdk-lib/aws-iam';
import { AwsIntegration } from "aws-cdk-lib/aws-apigateway";
Using the constructs from the above imports, we can create a new IAM role by adding the following code:
const apiGatewayStepFunctionRole = new Role(this, 'APIGatewayStepFunctionRole', {
assumedBy: new ServicePrincipal('apigateway.amazonaws.com'),
inlinePolicies: {
'StepFunctionsStartSyncExecutionPolicy': new PolicyDocument({
statements: [
new PolicyStatement({
actions: ['states:StartSyncExecution'],
resources: [stepFunctionStack.stepFunction.stateMachineArn],
}),
],
}),
},
});
We can break down the code snippet above as follows:
- new Role: This initialises the construct; we pass the scope (our stack) and the logical ID of 'APIGatewayStepFunctionRole'
- assumedBy: This lets us set the principal that is allowed to assume the IAM role. In our case it's the API Gateway service, so we pass a ServicePrincipal of apigateway.amazonaws.com
- inlinePolicies: This object allows us to create a new IAM policy inline in our role definition. We give the policy the name of ‘StepFunctionsStartSyncExecutionPolicy’ and then set the PolicyDocument and PolicyStatement.
- PolicyDocument: Used to define the permissions and statements within the inline policy
- PolicyStatement: Defines a single permission statement. In our case, it grants the action of ‘states:StartSyncExecution’ to the resources that match the ARN of our express Step Function (stepFunctionStack.stepFunction.stateMachineArn).
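For reference, the inline policy above synthesises into an IAM policy document roughly like the following (the region, account ID, and state machine name are placeholders — yours will come from the deployed Step Function's ARN):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "states:StartSyncExecution",
      "Resource": "arn:aws:states:eu-west-1:123456789012:stateMachine:energy-drink-selector-dev"
    }
  ]
}
```

Note that PolicyStatement defaults to an Effect of Allow, which is why we didn't need to specify it in the CDK code.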
Now that we have the role, we also need to grant it permission to start executions of our Step Function. We can do this using the grantStartExecution method:
stepFunctionStack.stepFunction.grantStartExecution(apiGatewayStepFunctionRole);
The final two things we need to do to allow API Gateway to play nicely with the express Step Function we have created are:
- Add an integration between API Gateway and the Step Function using the StartSyncExecution action.
- Add a POST method to our API so we can post payloads to it and trigger the Step Function.
To create the integration, we can use the AwsIntegration class. We can specify the service and action we want to use in the integration and things like the request and response templates. It looks a little something like this:
const integration = new AwsIntegration({
service: 'states',
action: 'StartSyncExecution',
integrationHttpMethod: 'POST',
options: {
credentialsRole: apiGatewayStepFunctionRole,
requestTemplates: {
'application/json': `
{
"stateMachineArn": "${stepFunctionStack.stepFunction.stateMachineArn}",
"input": "$util.escapeJavaScript($input.json('$'))"
}
`,
},
integrationResponses: [
{
statusCode: '200',
responseTemplates: {
'application/json': `$input.path('$.output')`,
},
},
{
statusCode: '400',
selectionPattern: '4\\d{2}',
},
{
statusCode: '500',
selectionPattern: '5\\d{2}',
},
],
},
});
Within the options object, there are a few things we need to be aware of:
- credentialsRole: This specifies the IAM role we want API Gateway to assume. We pass it apiGatewayStepFunctionRole to ensure it has the necessary permissions for interacting with our Step Function
- requestTemplates: The code here may look a little weird and scary. We are using Velocity Template Language (VTL) to dynamically insert the ARN of our Step Function, and we also escape the JSON content of the incoming request and include it in the 'input' field. This ensures that the Step Function correctly receives our payload
- integrationResponses: This defines how the integration handles the different HTTP response status codes. For the 200 status code, we specify that the response body should be taken from the output field in the JSON response
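If the VTL is hard to picture, this small TypeScript sketch (not part of the stack — just an illustration, with a placeholder ARN) mimics what the request template produces for an incoming payload:

```typescript
// Mimics the VTL request template: the raw JSON body is escaped and nested
// as a string in the "input" field alongside the state machine ARN.
const stateMachineArn =
  'arn:aws:states:eu-west-1:123456789012:stateMachine:placeholder'; // placeholder ARN

function mapRequest(incomingBody: unknown): string {
  return JSON.stringify({
    stateMachineArn,
    // $util.escapeJavaScript($input.json('$')) roughly corresponds to
    // stringifying the body, then embedding that JSON text as a string value
    input: JSON.stringify(incomingBody),
  });
}

console.log(mapRequest({ sugar: true }));
// The Step Function receives {"sugar":true} as its execution input
```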
Adding the new POST method is relatively straightforward, and we can do this in a few lines of code:
apiGatewayStack.restAPI.root.addMethod('POST', integration, {
methodResponses: [
{ statusCode: '200' },
{ statusCode: '400' },
{ statusCode: '500' },
],
});
We access the root of our API and use the addMethod method to create a new POST endpoint at the root of the API. We pass the integration we created to ensure its configuration is used. An array of methodResponses is also specified, telling API Gateway which response status codes to expect from this endpoint.
Go Forth and Deploy!
Everything is now in place for us to deploy and have a working API, a new DynamoDB table, two new Lambda Functions and an express Step Function! Feel free to explore the AWS console to see all the newly created resources.
You should also be able to give your sparkly new resources a test. You can test using your preferred method (e.g. in the AWS console, or with an app like Postman) with a payload as follows:
{
"sugar": true
}
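If you'd rather test from code than from Postman, here's a minimal sketch using Node 18's built-in fetch. The URL is a placeholder — swap in the invoke URL from your cdk deploy output:

```typescript
// Placeholder invoke URL — replace with the endpoint from your deploy output.
const apiUrl = 'https://abc123.execute-api.eu-west-1.amazonaws.com/prod/';

// Build the POST request options for the selector payload.
function buildRequest(sugar: boolean) {
  return {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ sugar }),
  };
}

async function requestDrink(sugar: boolean): Promise<unknown> {
  const res = await fetch(apiUrl, buildRequest(sugar));
  if (!res.ok) throw new Error(`Request failed with status ${res.status}`);
  return res.json();
}
```

Calling requestDrink(true) should return the Step Function's output, since the express workflow is invoked synchronously via StartSyncExecution.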
Aaaaand relax
If you’re still here after everything we’ve covered in this episode, I tip my hat to you. We have covered a ton of stuff, so let’s have a quick recap:
- Storage first!: We created a new DynamoDB table to store the raw payloads we receive. This gives us a great audit trail to visit if we need to debug any issues.
- Lambda placeholders: Two Lambda functions were created; these are just placeholders for now, but we will come back to them! These were then used within our Step Function
- Express Step Function: We created a custom express Step Function construct, a definition of chained steps and then used the StateMachine construct to tie it all together
- Stacks!: A new stack was created containing the Lambda Functions and the Express Step function needed in our app
- API Gateway updates: Updates were made to our API to ensure it had permission to invoke the Step Function using the StartSyncExecution action. This was done by creating a new IAM role and an inline IAM policy.
- Invoke!: Finally, we ran through how to invoke the Step Function by sending requests to our API Gateway through the AWS console or Postman.
I think we have all earned ourselves a break after consuming so much knowledge and having so much fun with CDK in this episode. Go take a well-earned break and come back for the next episode, where we’ll be dealing with those placeholder Lambda Functions.
TL;DR
If you want to peek at the code and don’t want to read my ramblings, navigate to this GitHub repo and spy on everything we’ll be creating in the series.