Creating Function as a Service in Kubernetes with Argo Events


Daniel Quackenbush

Posted on January 10, 2024


Containers have changed how services are built, and with the advent of AWS Lambda, Function as a Service (FaaS) emerged, further reshaping how services are executed. Within Kubernetes, however, single-event processing has remained awkward for teams that want the benefits of FaaS without the burden of building and operating complete services.

Argo Workflows + Argo Events is one solution in this space that runs natively in Kubernetes, providing a robust orchestration framework. One of its key features, Sensors, acts as the link between external systems and the cluster: a sensor subscribes to an event source and fires a trigger in response, letting users exchange data and actions between public cloud infrastructure and Kubernetes, which improves system adaptability and scalability.

Exploring an SQS Workflow

One way to test out Argo Events + Workflows in action is by exploring how SQS messages can be processed. A sensor subscribes to an AWS Simple Queue Service (SQS) queue, connecting public cloud infrastructure to Kubernetes. Each message placed on the queue fans out to a function that transforms the data and then inserts a record into my datastore. Since queue volume is variable, running the work as functions allows the application both to scale horizontally and to scale to zero.

[Workflow diagram]

Argo Components

Argo Events + Argo Workflows introduce a few CRDs that together carry a message from the event source to the pod that processes it. The relationship between the two: Argo Events provisions the listener for a given source and, when an event arrives, uses Argo Workflows to execute. Argo Events' power comes from the fact that it can also invoke other triggers, including custom ones, though this post won't go into those specifics.

EventBus

The event bus is the transport layer from the Event to the Sensor.

apiVersion: argoproj.io/v1alpha1
kind: EventBus
metadata:
  name: default
  namespace: argocd
spec:
  jetstream:
    version: latest

EventSource

Within Kubernetes, configuring a Service Account with IRSA allows us to subscribe to the specific queue. Within AWS, a queue-subscriber role has permission to receive messages from the queue (a sketch of that policy follows the manifest below).

apiVersion: v1
kind: ServiceAccount
metadata:
  name: test-queue
  namespace: argocd
  annotations:
    eks.amazonaws.com/role-arn: arn:aws:iam::0123456789:role/queue-subscriber
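For reference, here is a minimal sketch of what the queue-subscriber policy might grant, expressed as a CloudFormation snippet. The resource names here are assumptions, and the role's trust policy must separately allow the cluster's OIDC provider for IRSA to work.

QueueSubscriberPolicy:
  Type: AWS::IAM::ManagedPolicy
  Properties:
    PolicyDocument:
      Version: "2012-10-17"
      Statement:
        - Effect: Allow
          Action:
            # Receive and delete messages, plus resolve the queue by name
            - sqs:ReceiveMessage
            - sqs:DeleteMessage
            - sqs:GetQueueUrl
            - sqs:GetQueueAttributes
          Resource: arn:aws:sqs:us-east-1:0123456789:test-queue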

The event source is defined with a name, as well as the associated service account. In this example, an event source is provisioned to listen for all messages on the test-queue SQS queue. When the queue listener pod spins up, it uses the assumed role to poll for messages.

apiVersion: argoproj.io/v1alpha1
kind: EventSource
metadata:
  name: aws-sqs
  namespace: argocd
spec:
  template:
    serviceAccountName: test-queue
  sqs:
    test_queue:
      region: "us-east-1"
      queue: "test-queue"
      waitTimeSeconds: 20

Service Account/RBAC

The Sensor will require a service account with permission to create Workflows. To provision one, you can apply the example sensor RBAC manifests from the Argo Events project; a minimal sketch follows.
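The manifest below is a sketch adapted from the Argo Events sensor RBAC example: a service account named operate-workflow-sa (referenced by the Sensor in the next section), bound to a Role that can create Workflow resources in the namespace.

apiVersion: v1
kind: ServiceAccount
metadata:
  name: operate-workflow-sa
  namespace: argocd
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: operate-workflow-role
  namespace: argocd
rules:
  - apiGroups:
      - argoproj.io
    resources:
      - workflows
    verbs:
      - create
      - get
      - list
      - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: operate-workflow-role-binding
  namespace: argocd
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: operate-workflow-role
subjects:
  - kind: ServiceAccount
    name: operate-workflow-sa
    namespace: argocd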

Sensor

A sensor connects the EventSource, through the EventBus, to the destination of choice. In the case of Argo Workflows, the triggers definition specifies a container to invoke.
The aws-sqs sensor resource contains multiple parts. The first is the dependencies section, which we will reference later on.

apiVersion: argoproj.io/v1alpha1
kind: Sensor
metadata:
  name: aws-sqs
  namespace: argocd
spec:
  template:
    serviceAccountName: operate-workflow-sa
  dependencies:
    - name: test-dep
      eventSourceName: aws-sqs
      eventName: test_queue
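Dependencies can also carry filters, so that only matching messages trigger a workflow. As a hedged sketch (the path and value here are assumptions for illustration), a data filter on the message body would look like this:

  dependencies:
    - name: test-dep
      eventSourceName: aws-sqs
      eventName: test_queue
      filters:
        data:
          # Only fire the trigger when the message body matches this value
          - path: body
            type: string
            value:
              - "hello"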

The structure of the event produced from an SQS message is essential, because we will want to pass a few of its fields to our workflow later on.

{
    "context": {
        "type": "type_of_event_source",
        "specversion": "cloud_events_version",
        "source": "name_of_the_event_source",
        "id": "unique_event_id",
        "time": "event_time",
        "datacontenttype": "type_of_data",
        "subject": "name_of_the_configuration_within_event_source"
    },
    "data": {
        "messageId": "message id",
        "messageAttributes": "message attributes", 
        "body": "Body is the message data",
    }
}

With an Argo Workflow trigger, we templatize the creation of a Workflow CRD. The body data key from the dependency we defined above is extracted and passed as the value of the workflow's first input parameter.

...
# Continuation under spec
  triggers:
    - template:
        name: sqs-workflow
        k8s:
          operation: create
          source:
            resource:
              apiVersion: argoproj.io/v1alpha1
              kind: Workflow
              metadata:
                generateName: aws-sqs-workflow-
                namespace: argocd
              spec:
              ...
          parameters:
            - src:
                dependencyName: test-dep
                dataKey: body
              dest: spec.arguments.parameters.0.value

Within the spec, we name that parameter message, then template it into a command-line argument for our container service.

# Continuation under spec.triggers.0.template.spec
                entrypoint: main
                arguments:
                  parameters:
                  - name: message
                    value: overridden
                templates:
                - name: main
                  inputs:
                    parameters:
                    - name: message
                  container:
                    image: docker.io/org/container:latest
                    args: 
                      - "-m"
                      - "{{inputs.parameters.message}}"

End to End Process

When a message is submitted to the SQS queue, the event source pod picks it up and triggers the Sensor. Once triggered, the Sensor creates an Argo Workflow CRD; end to end, a message delivered to SQS is processed inside your cluster.
[End-to-end workflow diagram]

Summary

The integration of Argo Events and Argo Workflows brings FaaS to Kubernetes, providing numerous benefits through this orchestration duo. The power of Sensors as intermediaries that link external systems and Kubernetes environments offers a simple pathway for exchanging data and actions. This connectivity is exemplified by the integration with AWS Simple Queue Service (SQS), which improves adaptability and lets systems scale seamlessly.
