AWS Project: Building with Generative AI on AWS using PartyRock, Amazon Bedrock, and Amazon Q


Asif Khan

Posted on November 10, 2024


Introduction

This project explores three unique applications of Generative AI on AWS by leveraging PartyRock, Amazon Bedrock, and Amazon Q. Through three distinct experiments, I dive into no-code app creation, foundation model integration, and retrieval-augmented generation (RAG) workflows. Each experiment showcases how to use AWS services to build powerful, scalable AI applications. From developing a book recommendation chatbot to building context-aware response systems, this project highlights a versatile approach to harnessing AI on AWS.

Tech Stack

  • PartyRock: Simplifies no-code AI app development with pre-configured widgets.

  • Amazon Bedrock: Provides access to foundation models for text, chat, and image generation, including:

    • Claude 3 Sonnet: Chat functionality.

    • Amazon Titan: Text generation.

    • Titan Image Generator: Image creation based on prompts.

  • FAISS: Supports similarity searches and vector storage for RAG.

  • Amazon Titan Text Embeddings: Converts text to vectorized formats, essential for building document-based AI models.

Prerequisites

  • AWS Account: Required to access Bedrock, PartyRock, and Titan.

  • AWS CLI: For managing resources and configurations.

  • Basic AWS Knowledge: Familiarity with IAM roles, Bedrock, and foundational AI models.

  • No-Code Access: PartyRock simplifies app development, so coding experience is not mandatory.

Problem Statement or Use Case

Problem: Accessing reliable, real-time information and creating interactive applications often requires complex backend integrations. While generative AI enables real-time information generation, context-aware responses, and image creation, integrating these features can be challenging.

Solution: This project demonstrates three distinct implementations of generative AI:

  1. No-code Book Recommendation Chatbot: Built with PartyRock, this chatbot provides personalized book recommendations.

  2. Foundation Model Integration: Amazon Bedrock enables real-time text, chat, and image generation.

  3. Document-based Retrieval-Augmented Generation (RAG): Combines Amazon Titan, FAISS, and Claude 3 Sonnet to deliver contextually relevant answers based on stored knowledge, illustrating how RAG applications can offer effective AI solutions in knowledge-heavy environments.

Real-World Relevance: These approaches are directly applicable to industries like customer service, education, e-commerce, and recommendation systems. The RAG model, in particular, benefits use cases where personalized or context-driven content generation is critical, such as in customer support bots or intelligent virtual assistants.

Step-by-Step Implementation

Project 1: Build Generative AI Applications with PartyRock

In this section, we’ll learn how to use PartyRock to generate AI apps without any code.

What is PartyRock?

PartyRock is a shareable Generative AI app-building playground that allows you to experiment with prompt engineering in a hands-on and fun way. In just a few clicks, you can build, share, and remix apps to get inspired while playing with Generative AI. For example, you can:

  • Build an app to generate dad jokes on the topic of your choice.

  • Create and play a virtual trivia game online with friends from around the world.

  • Create an AI storyteller to guide your next fantasy roleplaying campaign.

By building and playing with PartyRock apps, you learn about the fundamental techniques and capabilities needed to get started with Generative AI, including understanding how a foundation model responds to a given prompt, experimenting with different text-based prompts, and chaining prompts together.

Any builder can experiment with PartyRock by creating a profile using a social login from Amazon.com, Apple, or Google. PartyRock is separate from the AWS console and does not require an AWS account to get started.

Exercise 1: Building a PartyRock Application

To highlight the power of PartyRock, we are going to build an application that can provide book recommendations based on your mood.

  1. To begin, head over to the PartyRock website.

  2. Log in with a social login from Amazon.com, Apple, or Google.

  3. Click on Build your own app and enter the following prompt: Provide book recommendations based on your mood and a chat bot to talk about the books. Then click Generate app.

Using the app

PartyRock was able to create the interface needed to take in user input, provide recommendations, and create a chatbot, all from your prompt. Now play around with the app by entering your mood and then asking the chatbot for more information about a book. Try entering Happy. Afterwards, you can ask the chatbot Can you tell me more about one of the books that was listed.

You can also share your app by clicking the Make public and Share button.

Updating your app

In PartyRock each UI element is a Widget, which displays content, takes in input, connects to other widgets, and creates output. Widgets that take in input allow users to interact with the app. Widgets that create output use prompts and references to other widgets to generate something like an image or text.

Types of widgets

There are 3 different types of AI-powered widgets:

  • Image generation

  • Chatbot

  • Text generation

You can edit AI-powered widgets to connect them to other widgets and make their output change.

There are also 3 other widgets:

  • User input

  • Static text

  • Document upload

The user input widget allows users to change output when you connect it to AI-powered widgets. The static text widget provides a place for text descriptions.

For more details, check out the PartyRock Guide.

Exercise 2: Playtime with PartyRock

Can you update the prompts in your app, play with settings, or chain outputs together? Be creative and explore what PartyRock can do. Try adding a widget that can draw an image from the book.

Remix an application

With PartyRock you can Remix applications, which allows you to make a copy of an app in your account. From there you can build and edit it to make new changes. Remix your own apps to create new variations, or remix public apps from friends or from sample apps. Try remixing one of the apps from the PartyRock Discover page.

Create a snapshot

Got a funny response from an app you’re using? You can share a snapshot with friends. Make sure the app is in public mode, then choose Snapshot in the top right corner of the app page. The URL that includes the current input and output of your app is then copied to your clipboard so you can share it with others.

Wrap Up

With PartyRock, you can demo and propose ideas that leverage Generative AI. When you want to create apps for production, you can implement those ideas with Amazon Bedrock.

Project 2: Use Foundation Models in Amazon Bedrock

Amazon Bedrock is a fully managed service that offers a choice of high-performing foundation models (FMs) from leading AI companies like Stability AI, Anthropic, and Meta, via a single API, along with a broad set of capabilities you need to build Generative AI applications with security, privacy, and responsible AI.

Since Amazon Bedrock is serverless, you don’t have to manage any infrastructure, and you can securely integrate and deploy Generative AI capabilities into your applications using the AWS services you are already familiar with.

In this module we will see how we can use Amazon Bedrock through the console and the API for generating text and images.

Model Access

Before we can start building with Bedrock, we will need to grant model access to our account.

  1. Head to the model access page

  2. Select the Enable specific models button.

  3. Select the checkboxes listed below to activate the models. If running from your own account, there is no cost to activate the models; you only pay for what you use during the labs. See here for more information on supported models.

  • Amazon (select Amazon to automatically select all Amazon Titan models)

  • Anthropic > Claude 3.5 Sonnet, Claude 3 Sonnet, Claude 3 Haiku

  • Meta > Llama 3.1 405B Instruct, Llama 3.1 70B Instruct, Llama 3.1 8B Instruct

  • Mistral AI

  • Stability AI > SDXL 1.0

  4. Click Request model access to activate the models in your account.

  5. Monitor the model access status. It may take a few minutes for the models to move from In Progress to Access granted status. You can use the Refresh button to periodically check for updates.

  6. Verify that the model access status is Access granted for the previously selected models.

Using the Amazon Bedrock Playground

The Amazon Bedrock Playground provides a way to quickly experiment with different foundation models inside the AWS Console. You can compare model outputs, load example prompts, and even export API requests. Currently, three modes are supported:

  1. **Chat:** Experiment on a vast range of language processing tasks in a turn-by-turn interface.

  2. **Text:** Experiment using fast iterations on a vast range of language processing tasks.

  3. **Image:** Easily generate compelling images by providing text prompts to pre-trained models.

You can access the playground from the links above or from the Amazon Bedrock Console under the Playgrounds side menu. Take a few minutes to play around with some examples.

Playground Examples

Here are some examples you can try in each playground.

Chat

  1. To start, click the Select model button to open the model selection popup.

  2. From here, pick Anthropic Claude 3 Sonnet.

  3. Click the Load examples button and select the Advanced Q&A with Citations example.

  4. When the example is loaded, you can click Run to start the chat.

  5. The sidebar has model configurations you can play with. Try changing the temperature to 1. This makes the model more creative in its responses.

Text

In the example below we selected Amazon Titan Text G1 - Express as our model and loaded the JSON creation example. Try changing the model by selecting Change, choosing Mistral -> Mistral Large 2, and running the prompt again after clearing the output.

Notice how the output is very different. It is important to test out different foundation models to see which one fits your use case.

Image

In the example below we selected Titan Image Generator G1 as our model and loaded the Generate images from a text prompt example.

Try changing the prompt strength to 10, and trying different prompts such as:

  • unicorns in a magical forest. Lots of trees and animals around. The mood is bright, and there is lots of natural lighting

  • Downtown City, with lots of skyscrapers. At night time, lots of lights in the buildings.

When you are finished, let's see how we can bring the power of Amazon Bedrock to applications using the API.
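Before moving on, here is a minimal sketch (not the workshop's exact code) of what invoking a model through the API looks like with boto3 and Titan Text Express; the region and generation parameters are assumptions you would adjust for your account:

import json

import boto3

# Region and model ID are assumptions; use a model you enabled in Model Access.
bedrock_runtime = boto3.client("bedrock-runtime", region_name="us-east-1")

body = json.dumps({
    "inputText": "Summarize what Amazon Bedrock does in one sentence.",
    "textGenerationConfig": {"maxTokenCount": 256, "temperature": 0.5},
})

response = bedrock_runtime.invoke_model(
    modelId="amazon.titan-text-express-v1",
    body=body,
)

result = json.loads(response["body"].read())
print(result["results"][0]["outputText"])

The same invoke_model call works for every model in Bedrock; only the request and response JSON shapes change per model family.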

Project 3: Chat with your Documents

The ability to ingest documents and then have an LLM answer questions using relevant context is known as Retrieval Augmented Generation (RAG). This module focuses on building these popular Generative AI solutions, exploring various methods to “chat with your documents”.


Retrieval Augmented Generation with Amazon Bedrock

Before diving into the RAG workflow, it’s crucial to understand embeddings. An embedding is a way of representing documents as vectors in a high-dimensional space. These vectors capture the essence of the document’s content in a form that machines can process. By converting text into embeddings, we enable the computer to ‘understand’ and compare different pieces of text based on their contextual similarities.
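To make this concrete, here is a minimal sketch of generating an embedding with the Amazon Titan Text Embeddings model through boto3; the region is an assumption, and the model ID matches the one used later in the exercises:

import json

import boto3

bedrock_runtime = boto3.client("bedrock-runtime", region_name="us-east-1")

def embed_text(text):
    # Titan Text Embeddings takes a simple {"inputText": ...} request body.
    response = bedrock_runtime.invoke_model(
        modelId="amazon.titan-embed-text-v1",
        body=json.dumps({"inputText": text}),
    )
    return json.loads(response["body"].read())["embedding"]

vector = embed_text("Your dog is so cute.")
print(len(vector))  # Titan Text Embeddings v1 returns a 1536-dimensional vector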

Vector Databases: Organizing and Accessing Embeddings

Once we have embeddings, the next step is to store and organize them for efficient retrieval. This is where vector databases come in. A vector database allows us to store and query embeddings, facilitating quick and relevant retrieval of documents based on their vector representations. In essence, it acts as a bridge between the raw data and the actionable insights we seek from our language models. For this module, we will be using FAISS.
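As a toy illustration of what a vector database does, the sketch below indexes a few small hard-coded vectors directly with FAISS and runs a nearest-neighbour query; in the exercises, the vectors will come from Titan embeddings instead:

import faiss
import numpy as np

# Three toy 4-dimensional vectors standing in for document embeddings.
vectors = np.array([
    [0.1, 0.2, 0.3, 0.4],
    [0.9, 0.8, 0.7, 0.6],
    [0.11, 0.19, 0.31, 0.42],
], dtype="float32")

index = faiss.IndexFlatL2(4)  # exact L2 (Euclidean) search over 4-d vectors
index.add(vectors)

# Search for the two vectors closest to the query.
query = np.array([[0.1, 0.2, 0.3, 0.4]], dtype="float32")
distances, ids = index.search(query, k=2)
print(ids[0], distances[0])

FAISS here runs entirely in memory on your machine; swap the hard-coded vectors for Titan embeddings and you have the core of the RAG retrieval step.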

Leveraging Amazon Bedrock for RAG

With the foundational knowledge of embeddings and vector databases, we're now ready to apply these concepts using Amazon Bedrock. In this part of the module, we will demonstrate how to use large language models (LLMs) to perform the RAG workflow effectively. This involves processing input queries, retrieving relevant document embeddings from the vector database, and using the LLMs to synthesize this information into coherent and contextually relevant responses. We will also leverage LangChain, a framework designed to simplify the creation of applications using LLMs.

Exercise 1: Getting Started with RAG

To begin, open rag_examples/base_rag.py. Let's walk through this code to see how RAG works.

For this example, we will be using sentences as our documents. On line 15, we have an array of sentences defined:

sentences = [
    # Pets
    "Your dog is so cute.",
    "How cute your dog is!",
    "You have such a cute dog!",
    # Cities in the US
    "New York City is the place where I work.",
    "I work in New York City.",
    # Color
    "What color do you like the most?",
    "What is your favorite color?",
]

Now let's take a look at the rag_with_bedrock function (line 60) and walk through how it performs the RAG workflow.

  1. Setting Up Embeddings with Bedrock: First, we initialize our embedding function by calling BedrockEmbeddings. We are using the Amazon Titan Text Embeddings model, which converts text into a format suitable for similarity comparisons.

    embeddings = BedrockEmbeddings(
        client=bedrock_runtime,
        model_id="amazon.titan-embed-text-v1",
    )

  2. Performing a Similarity Search on the Vector Store: We then initialize a local vector store using FAISS.from_texts. This function takes our sentences and uses the embedding function to create a searchable database of vectorized documents. With that, we can take a query, vectorize it, and find similar documents.

    local_vector_store = FAISS.from_texts(sentences, embeddings)
    docs = local_vector_store.similarity_search(query)

  3. Calling the RAG Prompt: We compile the content of the retrieved documents to form a context string. We then create a prompt that includes the context and the query. Finally, we call the call_claude_sonnet function with our prompt to get our answer.

    context = ""

    for doc in docs:
        context += doc.page_content
    
    prompt = f"""Use the following pieces of context to answer the question at the end.
    
    {context}
    
    Question: {query}
    Answer:"""
    
    return call_claude_sonnet(prompt)
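The call_claude_sonnet helper is defined elsewhere in the example code. As a rough sketch, assuming the Claude 3 Sonnet model ID and the Bedrock Messages API request format, it might look like this:

import json

import boto3

bedrock_runtime = boto3.client("bedrock-runtime", region_name="us-east-1")

def call_claude_sonnet(prompt):
    # Claude 3 models on Bedrock use the Messages API request format.
    body = json.dumps({
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": 512,
        "messages": [{"role": "user", "content": prompt}],
    })
    response = bedrock_runtime.invoke_model(
        modelId="anthropic.claude-3-sonnet-20240229-v1:0",
        body=body,
    )
    result = json.loads(response["body"].read())
    # The response contains a list of content blocks; take the text of the first.
    return result["content"][0]["text"]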
    

Now try running the code by entering the following command in the Terminal and pressing Enter:

python3 rag_examples/base_rag.py

You can change the query on line 83. Play around to see how the model uses the context to answer the questions. For example, try What city do I work in?

Exercise 2: Chat with a PDF

There is also an example of how you can chat with a PDF. Inside rag_examples/chat_with_pdf.py, the chunk_doc_to_text function ingests the PDF and chunks it every 1,000 characters for storage in the vector database. This process can take a while depending on the server, so we have already chunked the data, which is stored in the local_index folder.

In this example, we stored all the text from the AWS Well-Architected Framework, which highlights best practices for designing and operating reliable, secure, efficient, cost-effective, and sustainable systems in the cloud.
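For a sense of what chunk_doc_to_text does, here is a simplified, hypothetical sketch that reads a PDF with pypdf and slices the text into 1,000-character chunks; the function name and the pypdf dependency are assumptions, as the workshop's implementation may differ:

# Hypothetical sketch of PDF chunking; assumes the pypdf package is installed.
from pypdf import PdfReader

def chunk_pdf_to_texts(pdf_path, chunk_size=1000):
    reader = PdfReader(pdf_path)
    # Concatenate the extracted text of every page into one string.
    full_text = "".join(page.extract_text() or "" for page in reader.pages)
    # Slice into fixed-size character chunks for the vector store.
    return [full_text[i:i + chunk_size] for i in range(0, len(full_text), chunk_size)]

Each chunk would then be embedded and added to the FAISS index, just as the individual sentences were in Exercise 1.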

Now try running the code by entering the following command in the Terminal and pressing Enter:

python3 rag_examples/chat_with_pdf.py

You can change the query on line 93. Play around to see how the model uses the context to figure out the correct answer. For example, try What are some good use cases for non-SQL databases? You can even try off-topic questions such as What are popular ice cream flavors?

Wrap up

Now that you have gotten a taste of using Amazon Bedrock for RAG, let’s explore how we can create scalable RAG workflows.

Using Amazon Bedrock Knowledge Bases

Knowledge Bases for Amazon Bedrock provides a managed Retrieval Augmented Generation (RAG) service for querying uploaded data. By pointing to the location of the data in Amazon S3, the service automatically fetches the documents, divides them into blocks of text, converts the text into embeddings, and stores the embeddings in a vector database. There is also an API, which allows us to build applications on top of the Knowledge Base.

For this module, we will be creating a Knowledge Base with a subset of the AWS Well-Architected Framework.

Exercise 1: Creating a Knowledge Base in the AWS Console

  1. To begin, navigate to the Knowledge Base Console.

  2. Select the orange Create Knowledge base button.

  3. You can use the default name or enter your own. Then select Next at the bottom right of the screen.

  4. Select the Browse S3 button and select the bucket with awsdocsbucket in its name. Then press Next.

  5. Select the Titan Embeddings model and leave the default selection for Vector store. Then select Next.

  6. On the next screen, scroll down and select Create Knowledge Base.

Creating the Knowledge Base takes a few minutes. Do not leave the page.

Querying a Knowledge Base

When your Knowledge Base is ready you can test it in the console.

  1. Click the Sync button to start the data sync. This will take around 1 minute.

  2. Click the Select model button and choose Claude 3 Sonnet, then press Apply.

  3. From here, you can enter questions in the chat window where it says Enter your message here. For example, we can ask Can you explain what a VPC is?

  4. Click Run. The model will respond, and you can see the sources in the Knowledge Base by selecting Show result details.

Try asking the Knowledge Base other questions!

Exercise 2: Using the Knowledge Base API

You can also query the Knowledge Base through the API. There are two supported methods:

  1. retrieve: Returns documents related to the query.

  2. retrieve_and_generate: Performs the full RAG workflow with the model.

To try them out:

  1. Head back to your IDE and open rag_examples/kb_rag.py.

  2. Update KB_ID with the ID of your Knowledge Base. You can find it in the Knowledge base overview section for the Knowledge Base you created.

  3. Run the code with python3 rag_examples/kb_rag.py.

  4. Try playing with QUERY on line 4 to see what type of responses you get.

The code is performing the RAG workflow by converting the query into an embedding and returning the relevant documents.
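For reference, the two methods map onto boto3's bedrock-agent-runtime client roughly as sketched below; the Knowledge Base ID, region, and model ARN are placeholders to substitute with your own values:

import boto3

KB_ID = "YOUR_KB_ID"  # replace with the ID from the Knowledge base overview
REGION = "us-east-1"  # assumption; use your Knowledge Base's region

client = boto3.client("bedrock-agent-runtime", region_name=REGION)

# retrieve: returns only the matching document chunks.
resp = client.retrieve(
    knowledgeBaseId=KB_ID,
    retrievalQuery={"text": "Can you explain what a VPC is?"},
)
for result in resp["retrievalResults"]:
    print(result["content"]["text"][:120])

# retrieve_and_generate: runs the full RAG workflow and returns an answer.
resp = client.retrieve_and_generate(
    input={"text": "Can you explain what a VPC is?"},
    retrieveAndGenerateConfiguration={
        "type": "KNOWLEDGE_BASE",
        "knowledgeBaseConfiguration": {
            "knowledgeBaseId": KB_ID,
            "modelArn": f"arn:aws:bedrock:{REGION}::foundation-model/anthropic.claude-3-sonnet-20240229-v1:0",
        },
    },
)
print(resp["output"]["text"])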

Wrap up

Now that we have created a Knowledge Base, let's learn how we can embed it inside an Amazon Bedrock Agent.

Debugging Lambda Functions with Amazon Q

AWS Lambda is a serverless compute service that enables you to run applications and services without provisioning or managing servers. It automatically handles the underlying compute resources, allowing you to focus on your code and scale effortlessly.

In this section, you will work with the data_process_action AWS Lambda function, which contains multiple intentional errors. Your goal is to use Amazon Q to debug and fix these issues.

Getting Started

Open the data_process_action function in the Lambda console.

To begin, let's create a test event to invoke the Lambda function:

  1. Click the Test button to invoke the function.

  2. In the Configure test event popup, enter an event name such as test-event.

  3. Use the following test event JSON to mimic the agent calling the function:

    {
      "agent": {
        "alias": "TSTALIASID",
        "name": "Agent-AWS",
        "version": "DRAFT",
        "id": "ADI6ICMMZZ"
      },
      "sessionId": "975786472213626",
      "httpMethod": "GET",
      "sessionAttributes": {},
      "inputText": "Can you get the number of records in the database",
      "promptSessionAttributes": {},
      "apiPath": "/get_num_records",
      "messageVersion": "1.0",
      "actionGroup": "agent_action_group"
    }

  4. Click Test again to invoke the function.

Expect an error on this first run, as we’re missing some dependencies. But don’t worry, Amazon Q in the console is here to help:

  1. Click the Q icon in the right navigation bar to chat with Amazon Q.

  2. Ask, How can I add the official prebuilt AWS pandas lambda layer to my lambda function without using the CLI?

  3. Follow Q’s guidance to integrate the pandas Lambda Layer.

Troubleshoot with Amazon Q

Amazon Q in the console window is good at answering general questions about AWS, but if we need more specific guidance, we can use "Troubleshoot with Amazon Q".

In the Lambda console, under the Test tab, invoke the function again by pressing the Test button. We now have a different error to debug. From here, click the Troubleshoot with Amazon Q button for assistance.

Click the Help me resolve button to prompt Q to provide a solution. Can you follow Q's instructions to update the S3_OBJECT environment variable? The file is clickstream_data.csv. Can you spot and fix the other code issues with Q's help?

After addressing each problem:

  • Test your Lambda function again to check if the error is resolved.

  • Continue using ‘Troubleshoot with Q’ for each subsequent error until all issues are fixed.

  • Successfully running the function should result in correct data processing and no errors.
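The full solution lives in the workshop materials, but for orientation, a handler of the kind being debugged might look roughly like the sketch below; the S3_BUCKET variable name and the return shape are assumptions, while S3_OBJECT, clickstream_data.csv, and the /get_num_records apiPath come from the lab itself:

# Hypothetical sketch of the handler being debugged; not the workshop's code.
import os

import boto3
import pandas as pd  # provided by the AWS pandas Lambda layer

s3 = boto3.client("s3")

def lambda_handler(event, context):
    bucket = os.environ["S3_BUCKET"]  # assumption: bucket name env var
    key = os.environ["S3_OBJECT"]     # e.g. clickstream_data.csv
    obj = s3.get_object(Bucket=bucket, Key=key)
    df = pd.read_csv(obj["Body"])

    if event.get("apiPath") == "/get_num_records":
        return {"num_records": len(df)}
    return {"error": f"unhandled apiPath: {event.get('apiPath')}"}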

Need help? Here’s what to do next

Here’s the full code for reference

Testing the Agent

Now we can go back to the Agent and ask again: Can you help with the data processing task of getting the number of records in the production database? This time, the Agent will be able to provide the correct answer. We can inspect the trace to see how the Agent is able to "think" through how to get the correct answer.

Agents API

You can also invoke your agent through the API.

To try it out:

  1. Head back to the VSCode Server and open rag_examples/agent_rag.py.

  2. Update AGENT_ID with the ID for your Agent. It is in the Agent overview section for the Agent you created.

  3. Click the Hamburger Menu in the top left corner.

  4. Navigate to Terminal -> New Terminal.

  5. Run the code with python3 rag_examples/agent_rag.py.

  6. Try playing with QUERY on line 6 to see what type of responses you get.
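For context, agent_rag.py presumably wraps the invoke_agent API from boto3's bedrock-agent-runtime client, roughly as sketched below; the agent ID, alias, and session values are placeholders (TSTALIASID is the draft test alias that also appears in the Lambda test event):

import boto3

AGENT_ID = "YOUR_AGENT_ID"     # from the Agent overview section
AGENT_ALIAS_ID = "TSTALIASID"  # the draft test alias

client = boto3.client("bedrock-agent-runtime", region_name="us-east-1")

response = client.invoke_agent(
    agentId=AGENT_ID,
    agentAliasId=AGENT_ALIAS_ID,
    sessionId="demo-session-1",  # reusing the same ID keeps turns in one session
    inputText="Can you get the number of records in the database?",
)

# The answer streams back as chunked events.
answer = ""
for event in response["completion"]:
    if "chunk" in event:
        answer += event["chunk"]["bytes"].decode("utf-8")
print(answer)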

Clean up

Note: If you are using AWS provided accounts, you can skip this section.

To avoid incurring unnecessary charges, follow this section to delete the endpoints and resources that you created while running the exercises.

Delete objects in S3

For the awsdocsbucket, openapiBucket, and dataanalysisbucket, delete the objects in the buckets.

Follow the guidance here on how to delete objects. Then you can delete the buckets.

Delete IAM Roles

  1. Open the IAM console

  2. In the navigation pane, choose Roles, and then select the check box next to the role name that you want to delete. We created these roles:

  • AmazonBedrockExecutionRoleForAgents_*

  • AmazonBedrockExecutionRoleForKnowledgeBase_*

  • AWSServiceRoleForAmazonOpenSearchServerless

  3. At the top of the page, choose Delete.

Delete Knowledge Base

  1. Open the Knowledge Base console.

  2. Select the Knowledge Base, then click Delete.

  3. Type delete to confirm.

Delete Agent

  1. Open the Agent console.

  2. Select the Agent, then click Delete.

  3. Type delete to confirm.

Delete Vector Database

  1. Open the OpenSearch Collections Console.

  2. Select the collection, then click Delete.

  3. Type confirm to delete.

Delete the CloudFormation stack

  1. Open the AWS CloudFormation console at https://console.aws.amazon.com/cloudformation

  2. On the Stacks page in the CloudFormation console, select the stack named gen-ai-workshop-cfn.

  3. In the stack details pane, choose Delete.

  4. Select Delete stack when prompted.

Conclusion

This project demonstrated the power and flexibility of AWS Generative AI tools to build diverse AI-driven applications. Whether you’re:

  • Creating No-Code Apps: Using intuitive AWS services to build applications without needing to write extensive code.

  • Generating Dynamic Content: Leveraging powerful models like Amazon Bedrock and Titan to create responsive, context-aware content on demand.

  • Building a Retrieval Augmented Generation (RAG) System: Combining FAISS, Titan embeddings, and Claude to create intelligent systems capable of answering queries based on document context.

Each exercise emphasized how AWS services can enable innovative AI solutions that enhance interactive user experiences, streamline knowledge management, and drive real-time content generation.

This project serves as a solid foundation for building applications that utilize Generative AI for tasks like customer support, knowledge retrieval, and creative content generation, with scalable and efficient workflows. The capabilities of Amazon Bedrock, FAISS, and LangChain give developers the tools needed to create robust AI systems that can continuously learn and improve.

Code Repository

To explore and experiment with the project’s code and documentation, visit the **GitHub Repository**. Here you can access the complete code, follow along with detailed instructions, and customize the applications to fit your own use cases.

Asif Khan - Aspiring Cloud Architect | Weekly Cloud Learning Chronicler

LinkedIn/Twitter/GitHub

