Generative AI Serverless - Build a Travel Policy Assistant using Bedrock Knowledge Base, Lambda and API
Girish Bhatia
Posted on August 20, 2024
Generative AI is rapidly accelerating the pace of innovation in large enterprises. Providers of large language models (LLMs) are continuously developing more advanced foundation models, while cloud providers push the boundaries by offering services that seamlessly integrate with these LLMs.
Amazon Bedrock is at the forefront of these cloud services, providing a streamlined and consistent way to integrate with multiple LLMs.
While LLMs excel at generating generic responses, they often fall short in delivering contextually relevant information. Retrieval-Augmented Generation (RAG) is gaining popularity in the world of Generative AI to address this limitation. RAG enhances LLMs by incorporating contextual data, enabling organizations to build more tailored and accurate AI solutions.
Amazon Bedrock features a knowledge base specifically designed to support RAG functionality. With this feature, you can use your own data, such as documents, as a knowledge source for your Generative AI solution, without needing to set up a vector database.
This knowledge base feature is particularly valuable for building use cases that require quick access to information from large documents. For example, technical support teams can extract information from user manuals to quickly resolve customer inquiries, HR departments can answer questions based on policy documents, developers can reference technical documentation to find information about specific functions, and call center teams can efficiently address customer inquiries using these documents.
In my use case, I am leveraging Amazon Bedrock and its knowledge base feature to create a travel policy assistant that provides contextual responses based on the travel policy document created by the HR team. This policy document, stored in an S3 bucket, outlines travel policies such as allowable classes of service, expense limits for domestic versus international travel, and other relevant information. By using this data as the knowledge base, a Generative AI solution in the form of an API can answer questions related to travel policies, service classes, expense limits, and more.
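To give a sense of what a knowledge base query looks like in code, here is a minimal sketch using boto3's Bedrock Agent Runtime client. The knowledge base ID and the Claude Sonnet model ARN shown are placeholders, not values from this solution; substitute your own.

```python
import boto3

# Bedrock Agent Runtime exposes the RetrieveAndGenerate API used for knowledge base (RAG) queries.
bedrock_agent_runtime = boto3.client("bedrock-agent-runtime", region_name="us-east-1")

response = bedrock_agent_runtime.retrieve_and_generate(
    input={"text": "What is the standard class of service for air travel?"},
    retrieveAndGenerateConfiguration={
        "type": "KNOWLEDGE_BASE",
        "knowledgeBaseConfiguration": {
            # Placeholder knowledge base ID and model ARN -- replace with your own.
            "knowledgeBaseId": "YOUR_KB_ID",
            "modelArn": "arn:aws:bedrock:us-east-1::foundation-model/anthropic.claude-3-sonnet-20240229-v1:0",
        },
    },
)

# The generated, policy-grounded answer is returned in output.text.
print(response["output"]["text"])
```

Behind the scenes, Bedrock retrieves the most relevant passages from the travel policy document and passes them to the model as context, which is what makes the answer specific to your policy rather than generic.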
Here is the architecture diagram for our use case:
The AWS services used in this solution are:
- Amazon Bedrock
- Amazon Bedrock Knowledge Base
- AWS Lambda
- Amazon API Gateway (REST API)
- Amazon S3
- Amazon CloudWatch
The LLM used is Anthropic Claude Sonnet.
This solution can be built using either the AWS Management Console or an Infrastructure as Code (IaC) approach with AWS SAM (Serverless Application Model) and CloudFormation. When working with Lambda, my preferred approach is to utilize AWS SAM; however, the function can also be deployed directly through the AWS Console.
To use the AWS Console, log in to your account, open Amazon Bedrock, select Knowledge Bases, and follow the on-screen prompts.
Select a model and provide the document. You can either upload the document using the AWS Console or point to a document hosted in an S3 bucket.
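If you take the S3 route, uploading the policy document is a short boto3 call. The bucket and file names below are placeholders for illustration.

```python
import boto3

# Upload the HR travel policy document to the bucket that backs the knowledge base.
# "my-travel-policy-bucket" and the file name are placeholders -- use your own.
s3 = boto3.client("s3")
s3.upload_file("travel_policy.pdf", "my-travel-policy-bucket", "travel_policy.pdf")
```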
That's all for the AWS Console-based solution. Once you select the model and upload the document, you are ready to ask questions, and the answers provided will be contextual, based on the data in your document.
Alternatively, you can take a code-based approach with AWS Lambda, API Gateway, S3, and Bedrock, and then integrate the API with your applications. I have outlined this solution in the workshop video linked below.
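As a rough illustration of that approach, here is a sketch of a Lambda handler sitting behind a REST API that forwards the caller's question to the knowledge base. The environment variable names and request format are assumptions for this sketch, not the exact code from the video.

```python
import json
import os

import boto3

# Knowledge base ID and model ARN are assumed to arrive as environment variables (e.g., set by a SAM template).
KNOWLEDGE_BASE_ID = os.environ["KNOWLEDGE_BASE_ID"]
MODEL_ARN = os.environ["MODEL_ARN"]

bedrock_agent_runtime = boto3.client("bedrock-agent-runtime")


def lambda_handler(event, context):
    # With API Gateway proxy integration, the request body arrives as a JSON string.
    body = json.loads(event.get("body") or "{}")
    question = body.get("question", "")
    if not question:
        return {"statusCode": 400, "body": json.dumps({"error": "question is required"})}

    # Ask the knowledge base to retrieve relevant policy passages and generate an answer.
    result = bedrock_agent_runtime.retrieve_and_generate(
        input={"text": question},
        retrieveAndGenerateConfiguration={
            "type": "KNOWLEDGE_BASE",
            "knowledgeBaseConfiguration": {
                "knowledgeBaseId": KNOWLEDGE_BASE_ID,
                "modelArn": MODEL_ARN,
            },
        },
    )

    # Return the policy-grounded answer to the API caller.
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"answer": result["output"]["text"]}),
    }
```

When deploying this way, the function's execution role typically needs permissions such as bedrock:RetrieveAndGenerate, bedrock:Retrieve, and bedrock:InvokeModel for the knowledge base and model it uses.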
Here are a few examples of prompts and the responses provided by this Generative AI solution:
Prompt: According to the MyBankGB Travel Policy, what is the standard class of service for air travel, and under what circumstances might exceptions be made?
Response:
Prompt: What are the dinner allowances for domestic and international travel as stated in the policy, and how do they differ?
Response:
These are just a few example prompts; you can ask this RAG-based Generative AI solution other questions as well.
This solution implements a travel policy assistant that provides contextual responses based on the policy document stored in an S3 bucket.
AWS continues to add new features to its Generative AI services. I am staying connected and will be posting content on some of these new features soon!
Thanks for reading!
Click here to watch the YouTube video for this solution:
https://www.youtube.com/watch?v=U5jEfnEyK2g
Girish Bhatia
AWS Certified Solution Architect & Developer Associate
Cloud Technology Enthusiast