How to Develop an AI Application: Step-by-Step using Orkes Conductor

livw

Posted on November 21, 2024


This is Part 1 of the AI App Development series, which will demonstrate how to build a simple AI application using Conductor. Stay tuned for Part 2 for more complex use cases.


The potential of using AI for enterprise use cases is vast, but building an AI-powered application from scratch involves wrangling a complex technical stack. By leveraging an orchestration platform like Orkes Conductor, you can coordinate these moving parts into a well-governed flow, whether in development, testing, or production.

This introductory tutorial will demonstrate how to develop enterprise-ready AI applications using Conductor. Let's start with a simple article summarizer. While straightforward, the workflow can be abstracted and adapted for practical use cases across industries, such as generating movie synopses for a streaming platform or extracting key highlights from quarterly earnings reports.

Building an AI application with Conductor

As an orchestration engine, Conductor powers code-based flows like cloud infrastructure management, shipping and order tracking, media delivery pipelines, LLM chains, and so on. Conductor oversees workflow execution and manages the plumbing of a distributed environment, such as data flow, timeouts, retries, and compensation flows, so that applications can be brought to an enterprise-ready state more quickly. These capabilities are instrumental for building AI-enabled applications, where velocity and agility are paramount to success.

At a high level, building with Conductor involves three simple steps:

  1. Get access to Conductor. This is where you will build the application flow.
  2. Build the AI-powered application flow.
  3. Write the application frontend and backend. Conductor can easily be integrated with any programming language, allowing you to trigger Conductor flows in your backend using our SDKs.

Get access to Conductor

To begin, create an account in an Orkes Conductor cluster. For this tutorial, you can use the free Orkes Playground to follow along.

Create the LLM-enabled application flow

Conductor provides an out-of-the-box suite of LLM system tasks that are convenient to use in most cases. For more complex AI tasks, developers can opt to create their own task workers in any language.

Step 1: Create your application flow using Orkes’ visual workflow editor

In a straightforward article summarizer, the application flow involves two tasks:

  1. Retrieve the article from a given URL.
  2. Prompt an LLM to provide a summary of the article.

Diagram of user calling an application frontend that runs on a Conductor backend, which orchestrates the two tasks.
Application flow using Conductor as the orchestration engine.

For Task 1, we can use the Get Document task, which can retrieve text from various content types; in this case, text from an HTML page.

Screenshot of Conductor UI, showing the Get Document task configuration.

For Task 2, we can use the Text Complete task to call an LLM with a prompt. Both tasks are system tasks that eliminate the need to write custom code to integrate with LLM providers.

Screenshot of Conductor UI, showing the Text Complete task configuration.

To create the article summarizer flow:

  1. Go to Orkes Playground.
  2. In the left navigation menu, go to Definitions > Workflow.
     Screenshot of Conductor UI, showing the left navigation menu for Definitions > Workflow.
  3. Select Define Workflow in the top right. The visual workflow editor appears.
  4. Select the Code tab on the right and paste the following JSON code:

    {
      "name": "studyPartner",
      "description": "AI application that summarizes an article",
      "version": 1,
      "tasks": [
        {
          "name": "get_article",
          "taskReferenceName": "get_article_ref",
          "inputParameters": {
            "url": "${workflow.input.url}",
            "mediaType": "text/html"
          },
          "type": "GET_DOCUMENT",
          "cacheConfig": {
            "key": "${url}-${mediaType}",
            "ttlInSecond": 360
          }
        },
        {
          "name": "summarize_article",
          "taskReferenceName": "summarize_article_ref",
          "inputParameters": {
            "promptVariables": {
              "text": "${get_article_ref.output.result}"
            },
            "llmProvider": "providerNameHere",
            "model": "modelNameHere",
            "promptName": "promptNameHere",
            "temperature": "${workflow.input.temperature}",
            "topP": "${workflow.input.topP}"
          },
          "type": "LLM_TEXT_COMPLETE"
        }
      ],
      "inputParameters": [
        "url",
        "temperature",
        "topP"
      ],
      "schemaVersion": 2,
      "timeoutPolicy": "ALERT_ONLY",
      "timeoutSeconds": 0
    }
    
  5. Change the workflow Name to something unique.

  6. Select Save > Confirm.

Your workflow should look like this:

Screenshot of article summarizer workflow, containing the Get Document and Text Complete tasks.
Article summarizer workflow.

Now that your workflow is ready, it’s time to get it up and running by adding your LLM integration.

Step 2: Add your preferred LLM integration

Orkes offers dozens of integrations with all major LLM providers—OpenAI, Anthropic, Google, Amazon, Cohere, Mistral, Hugging Face, and so on.

List of AI/LLM and vector database integrations in Orkes Conductor: Azure Open AI, Open AI, Cohere, Google Vertex AI, Google Gemini AI, Anthropic Claude, Hugging Face, AWS Bedrock Anthropic, AWS Bedrock Cohere, AWS Bedrock Llama2, AWS Bedrock Titan, Mistral, Pinecone, Weaviate, Postgres Vector Database, MongoDB.
AI-related integrations in Orkes Conductor.

To add an integration:

  1. Grab your API key from your LLM provider.
  2. In the left navigation menu of Orkes Playground, go to Integrations.
  3. Select New Integration and select your preferred LLM provider.
  4. Enter the required fields, such as the Integration Name, Description, Access Credentials, and API Endpoint. The required fields differ by LLM provider, so you can refer to the Integration Docs for guidance.
  5. Make sure to enter a unique value for the Integration Name, such as “OpenAI_yourNameHere”.
  6. Select Save.

With the LLM integration added, you can start adding the specific models offered by the LLM provider.

Each model has different capabilities or is tuned for a particular kind of task, so which model you choose depends on your use case. For our article summarizer, a general conversational model with text capabilities will suffice.

To add a model:

  1. In the Integrations page, select the + button next to your newly-added integration.
  2. Select New model.
  3. Enter the Model name and Description. Ensure that the Active toggle is switched on.
  4. Select Save.

Done! With the right prompt, you can now start using these LLMs in your workflows. In the next few steps, you will wire the models into a prompt template and then into the workflow itself.

Step 3: Create a prompt template using Orkes’ AI prompt builder

A prompt is necessary to get the model to summarize an article. Since we are building an AI article summarizer and not a general-purpose chatbot, the prompts can be templatized and automatically fire with the necessary context. Orkes’ AI prompt builder allows you to do exactly that: create and test prompt templates with multiple models.

Screenshot of Orkes Conductor's AI Prompt Builder screen.
Create and test prompt templates with any LLM in Orkes.

To create a prompt template:

  1. In Orkes Playground, go to Definitions > AI Prompts.
  2. Select Add AI prompt.
  3. Enter a unique Prompt Name, such as “summarizeText_yourNameHere”.
  4. In Model(s), select the models that the prompt can be used with.
  5. Enter a Description of what the prompt does. For example, “Takes an article content and summarizes it.”
  6. Enter your Prompt Template, which can be as simple as the following:

    Summarize ${text}.
    

    Here, ${text} is a variable input. At runtime, this variable will be replaced with the article content — for example, “Summarize NASA's Europa Clipper spacecraft lifted off Monday from Kennedy Space Center in Florida aboard a SpaceX Falcon Heavy rocket, [...]”.

  7. Once done, select Save > Confirm save.

Now, you can start testing your prompt. To do that, pick a specific model to test and tune the LLM parameters, like temperature, stop words, and topP. Then paste in the variable substitute for ${text} and run the prompt to get the LLM response.

Screenshot of Orkes Conductor's AI Prompt Builder, with the steps for testing prompts highlighted.
Test your prompts with your chosen model, text variables, and prompt variables.

We’ll explore more methods for engineering better responses in an upcoming blog post. For now, let’s put together the finishing touches for your article summarizer flow.

Step 4: Put it all together

Remember the JSON code you pasted to create your workflow? Now that you have added your LLM models and created your prompt, it's time to plug these resources into that JSON code (i.e., your workflow definition).

To put it all together:

  1. In Orkes Playground, go to Definitions > Workflow and select the workflow you have created previously.
  2. In the summarize_article task, replace the following values (a filled-in example follows this list):

    • Replace providerNameHere with your chosen LLM provider.
    • Replace modelNameHere with your chosen model.
    • Replace promptNameHere with your prompt template name. Make sure to add back the following variable for the prompt:

      "text": "${get_article_ref.output.result}"
      
  3. Select Save > Confirm.
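For example, assuming a hypothetical OpenAI integration named OpenAI_demo, a gpt-4o model, and a prompt template named summarizeText_demo, the completed summarize_article task might look like this:

    {
      "name": "summarize_article",
      "taskReferenceName": "summarize_article_ref",
      "inputParameters": {
        "promptVariables": {
          "text": "${get_article_ref.output.result}"
        },
        "llmProvider": "OpenAI_demo",
        "model": "gpt-4o",
        "promptName": "summarizeText_demo",
        "temperature": "${workflow.input.temperature}",
        "topP": "${workflow.input.topP}"
      },
      "type": "LLM_TEXT_COMPLETE"
    }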

Done! Give your workflow a test run:

  1. From the visual workflow editor, select the Run tab.
  2. Enter the Input params and select Run workflow.

    // example input params
    {
     "url": "https://arstechnica.com/space/2024/10/nasa-launches-mission-to-explore-the-frozen-frontier-of-jupiters-moon-europa/",
     "temperature": "0.1",
     "topP": "0"
    }
    

Upon running the workflow, you will be directed to the workflow execution page, where you can track the progress of your application flow. If you select the Workflow Input/Output tab, you should see the summary of the article you requested.

Screenshot of the Workflow Input/Output tab in the workflow execution screen in Conductor.
The article summarizer returns the requested summary.

Write the application frontend and backend

With the application flow created, the next step is to build the application itself. Use any framework (React, Next.js, Angular, and so on) to build the frontend and backend. For the backend, you will also use Conductor’s SDKs to execute and track workflows. Here is an example snippet of a Next.js backend that uses the JavaScript SDK to execute the studyPartner summarizer workflow created earlier.

Example

import {
  orkesConductorClient,
  WorkflowExecutor,
} from "@io-orkes/conductor-javascript";
import getConfig from "next/config";

const { publicRuntimeConfig } = getConfig();

// Authorize the SDK client with the access key credentials set in the
// environment (see "Connecting your application with Conductor" below)
const clientPromise = orkesConductorClient({
  keyId: process.env.CONDUCTOR_AUTH_KEY,
  keySecret: process.env.CONDUCTOR_AUTH_SECRET,
  serverUrl: process.env.CONDUCTOR_SERVER_URL,
});

const getSummary = async (articleUrl, temp, topP) => {
  const client = await clientPromise;
  const executor = new WorkflowExecutor(client); // Create the executor instance

  const executionId = await executor.startWorkflow({ // Start the workflow
    name: publicRuntimeConfig.workflows.studyPartner,
    version: 1,
    input: {
      url: articleUrl,
      temperature: temp,
      topP,
    },
    correlationId: "user123",
  });

  return executionId; // Hand the execution ID back to the caller for tracking
};
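To surface the summary in your application, you can then check the execution until it completes and read the workflow output. Below is a minimal sketch; since the workflow definition declares no explicit output parameters, it assumes Conductor returns the last task's output (the Text Complete result) under a result key.

// Minimal sketch: check the execution status, then read the output.
const getSummaryResult = async (executionId) => {
  const client = await clientPromise;
  const executor = new WorkflowExecutor(client);

  // Fetch the execution state, including task details
  const execution = await executor.getWorkflow(executionId, true);
  if (execution.status === "COMPLETED") {
    // With no explicit outputParameters, the workflow output contains
    // the last task's output (the LLM summary) under "result"
    return execution.output?.result;
  }
  return null; // Still running; poll again later
};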

Connecting your application with Conductor

While writing your backend, make sure to get authorized access to Conductor so that your backend can fire the workflow without any issues.

Step 1: Get access tokens for your application

In the Conductor UI, go to Applications in the left navigation menu to create your application abstraction layer and generate the access tokens. To do so,

  1. In Applications, select (+) Create application and enter a name for your application.
  2. In the Access Keys section, select (+) Create access key to generate a unique Key Id and Key Secret, and note them down.

Important: The Key Secret is shown only once, so make sure to copy and store your credentials securely for future reference.

Step 2: Configure access

Set the Key Id and Secret in your project environment and point to the appropriate Conductor server. If you are using Orkes Playground, the server should be https://play.orkes.io/api.

Example

export CONDUCTOR_SERVER_URL=<SERVER_URL>
export CONDUCTOR_AUTH_KEY=<KEY_ID>
export CONDUCTOR_AUTH_SECRET=<KEY_SECRET>

Step 3: Configure permissions

Finally, configure the permissions for the application layer you have previously created so that your application project can access the necessary resources. To set the required permissions:

  1. In Applications in the Conductor UI, select your application.
  2. In the Permissions section, select + Add Permission.
  3. Add Execute and Read permissions to the following resources:
    • Your article summarizer workflow
    • The LLM models used in your workflow
    • The prompts used in your workflow

Congratulations! You have successfully created an AI article summarizer. Using Orkes’ AI prompt builder, you can optimize the LLM responses to fit your needs by applying prompt engineering techniques and testing your prompts across models and parameters.

Going beyond

With Orkes Conductor, you have created an AI application in no time at all. Now that you have the basics down, you can try your hand at creating more complex workflows, like a document classifier or automatic subtitle generator, or leveling up your summarizer workflow for more advanced uses, like summarizing video or audio content. Custom task workers can be easily built for advanced AI tasks using Conductor’s SDKs.
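For instance, here is a minimal sketch of a custom worker using the JavaScript SDK. It assumes a hypothetical trim_summary task that post-processes the LLM output before it reaches the frontend; the task name and trimming logic are illustrative, not part of the tutorial workflow.

import {
  orkesConductorClient,
  TaskManager,
} from "@io-orkes/conductor-javascript";

// Hypothetical worker that trims the LLM summary to a maximum length
const trimSummaryWorker = {
  taskDefName: "trim_summary", // hypothetical task name
  execute: async ({ inputData }) => {
    const summary = (inputData?.summary ?? "").slice(0, 500);
    return {
      outputData: { summary },
      status: "COMPLETED",
    };
  },
};

const client = await orkesConductorClient({
  keyId: process.env.CONDUCTOR_AUTH_KEY,
  keySecret: process.env.CONDUCTOR_AUTH_SECRET,
  serverUrl: process.env.CONDUCTOR_SERVER_URL,
});

// Register the worker and start polling Conductor for pending tasks
const manager = new TaskManager(client, [trimSummaryWorker]);
manager.startPolling();

Once the worker is polling, any workflow execution that reaches a trim_summary task will be picked up and processed automatically.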

Using Orkes Conductor to build applications spells faster time-to-market, enterprise-grade durability and security, and full governance in a distributed program. From pre-built tasks to fully custom business logic, developers get the best of both worlds: speed and flexibility. Simply brainstorm the high-level flow, pinpoint which tasks are needed, and start building.

Wrap up

As an open-source orchestration platform, Conductor can be used in diverse cases beyond AI orchestration, such as infrastructure automation, data transformation pipelines, digital user journeys, microservice coordination, and more.

Want detailed examples? Check out our other tutorials and use cases, and stay tuned for more AI-based tutorials coming soon.


Orkes Cloud is a fully managed and hosted Conductor service that can scale seamlessly to meet your needs. When you use Conductor via Orkes Cloud, your engineers don’t need to worry about setting up, tuning, patching, and managing high-performance Conductor clusters. Try it out with our 14-day free trial for Orkes Cloud.
