Embedding AWS Bedrock Into Your Workloads


Mohamad Albaker Kawtharani

Posted on October 3, 2023


Introduction

Generative AI is on the rise, and with services like Amazon Bedrock, it's easier than ever to integrate powerful foundation models (FMs) into your workloads. If you're wondering how to take full advantage of this technology, this post is for you. It delves into Amazon Bedrock and guides you through embedding it seamlessly into your running applications.

What is Amazon Bedrock?

Amazon Bedrock is Amazon's fully managed service designed for building and scaling generative AI applications. In simple terms, it's a platform that provides access to a range of top-performing FMs from leading AI companies such as AI21 Labs, Anthropic, Cohere, Meta, Stability AI, and Amazon itself. With Bedrock, developers can easily integrate and experiment with various FMs, customize them with their own data, and even extend their capabilities without writing a single line of code.

Benefits of Using Amazon Bedrock

  • Diverse Range of Foundation Models: Experiment with and deploy FMs from industry leaders, all accessible via a single API.
  • Minimal Code Changes: Because the API is consistent across models, switching between FMs or upgrading to newer versions requires only minimal code changes (see the sketch after this list).
  • Code-free Customization: Enhance FMs with your own data through a visual interface. Link with datasets on Amazon S3 and tweak hyperparameters to achieve optimal performance.
  • Fully Managed Agents: Beyond just generative capabilities, Bedrock offers agents that can execute intricate business tasks, such as managing inventory or processing insurance claims. These agents dynamically interact with company systems and APIs, taking generative AI applications to a new frontier.
  • Knowledge Bases Enhancement: Securely connect FMs to your data sources within Bedrock. This feature augments the model's capabilities, making it more attuned to your domain or organization's specifics.
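To make the "minimal code changes" point concrete, here is a minimal sketch showing the same invoke_model call targeting two different FMs. The region, prompts, and token limits are illustrative; only the model ID and the provider-specific request body change between calls:

import boto3
import json

# The same runtime client and invoke_model call work for any Bedrock model;
# only the modelId and the provider-specific body differ.
client = boto3.client("bedrock-runtime", region_name="us-east-1")


def invoke(model_id: str, body: dict) -> dict:
    """Invoke any Bedrock FM through the single unified API."""
    response = client.invoke_model(
        modelId=model_id,
        contentType="application/json",
        accept="application/json",
        body=json.dumps(body),
    )
    return json.loads(response["body"].read())


# Each provider defines its own body schema, but the call itself is identical.
ai21_result = invoke("ai21.j2-ultra-v1", {"prompt": "Hello", "maxTokens": 50})
claude_result = invoke("anthropic.claude-v2", {
    "prompt": "\n\nHuman: Hello\n\nAssistant:",
    "max_tokens_to_sample": 50,
})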

Embedding Amazon Bedrock in Your Workloads

  • Start with the Playground: Before diving deep, experiment with different FMs in Bedrock's playground. This sandbox environment allows for quick testing and understanding of a model's capabilities.
  • Integrate Using Bedrock's API: Regardless of the FM you choose, Bedrock offers a unified API. This ensures easy integration with your applications and consistency in invoking different models (a discovery sketch follows this list).
  • Customize for Your Needs: Once integrated, use Bedrock's visual interface to enhance the model's performance. Link it with datasets stored in Amazon S3 and adjust hyperparameters to get the desired results.
  • Incorporate Managed Agents: Want to automate complex business tasks? Bedrock's agents can be dynamically called to interact with your systems. Whether it's orchestrating ad campaigns or managing inventories, these agents can significantly optimize processes.
  • Enrich with Knowledge Bases: Extend the model's capabilities by connecting it to your data sources within Bedrock. This allows the FM to be more knowledgeable and attuned to your organization's specific nuances.
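A quick way to see which FMs are available to your account is the ListFoundationModels operation. Note that this sketch uses the "bedrock" control-plane client, which is distinct from the "bedrock-runtime" client used for inference later in this post; the region is illustrative:

import boto3

# The "bedrock" client exposes control-plane operations such as
# list_foundation_models; "bedrock-runtime" handles inference calls.
bedrock = boto3.client("bedrock", region_name="us-east-1")

for model in bedrock.list_foundation_models()["modelSummaries"]:
    print(model["providerName"], "-", model["modelId"])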

Practical Integration: Embedding Amazon Bedrock Models into Your Applications Using Python

Prerequisites

  • Python environment with Python 3.8 or later installed
  • AWS Account
  • boto3 installed
pip install boto3
  • Amazon Bedrock access (request access to the models you plan to use in the Bedrock console)
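Before running the scripts below, it also helps to confirm that boto3 can actually find your credentials. A quick sanity check using STS (this verifies your AWS identity, not Bedrock model access itself):

import boto3

# Raises a credentials error if boto3 cannot find a configured identity.
print("Authenticated as:", boto3.client("sts").get_caller_identity()["Arn"])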

Invoking AI21's J2-Ultra-V1 Model

Here's a Python script that uses the Boto3 library to invoke AI21's J2-Ultra-V1 model via Amazon Bedrock:

import boto3
import json

# Constants
SERVICE_NAME = "bedrock-runtime"
REGION_NAME = "us-east-1"
MODEL_ID = "ai21.j2-ultra-v1"
CONTENT_TYPE = "application/json"
ACCEPT = "*/*"


def create_bedrock_client():
    """Create and return a Bedrock client."""
    return boto3.client(
        service_name=SERVICE_NAME,
        region_name=REGION_NAME
    )


def generate_request_body(prompt: str, max_tokens: int = 200, temperature: float = 0.7) -> dict:
    """Generate and return the body for the request."""
    return {
        "prompt": prompt,
        "maxTokens": max_tokens,
        "temperature": temperature,
        "topP": 1,
        "stopSequences": [],
        "countPenalty": {"scale": 0},
        "presencePenalty": {"scale": 0},
        "frequencyPenalty": {"scale": 0}
    }


def invoke_model(client, prompt: str) -> str:
    """Invoke the model and return the response."""
    body = json.dumps(generate_request_body(prompt))
    kwargs = {
        "modelId": MODEL_ID,
        "contentType": CONTENT_TYPE,
        "accept": ACCEPT,
        "body": body
    }
    response = client.invoke_model(**kwargs)
    content = json.loads(response["body"].read())
    # AI21 responses nest the generated text under completions[0].data.text
    return content["completions"][0]["data"]["text"]


def main():
    prompt_text = "Hello"
    try:
        client = create_bedrock_client()
        response = invoke_model(client, prompt_text)
        print(response)
    except Exception as e:
        print(f"Error occurred: {e}")


if __name__ == '__main__':
    main()

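In a real workload you may want to treat throttling differently from other failures. One possible refinement of the script above is a sketch that catches botocore's ClientError and retries on throttling; the backoff policy here is illustrative:

import time
from botocore.exceptions import ClientError


def invoke_with_retry(client, prompt: str, max_attempts: int = 3) -> str:
    """Call invoke_model (defined above), backing off on throttling."""
    for attempt in range(1, max_attempts + 1):
        try:
            return invoke_model(client, prompt)
        except ClientError as e:
            if (e.response["Error"]["Code"] == "ThrottlingException"
                    and attempt < max_attempts):
                time.sleep(2 ** attempt)  # simple exponential backoff
            else:
                raise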

Invoking Anthropic's Claude-V2 Model with Streamed Response

To get a streamed response from Anthropic's Claude-V2 model, use the following code:

import boto3
import json

# Constants
SERVICE_NAME = "bedrock-runtime"
REGION_NAME = "us-east-1"
MODEL_ID = "anthropic.claude-v2"
CONTENT_TYPE = "application/json"
ACCEPT = "*/*"
ANTHROPIC_VERSION = "bedrock-2023-05-31"


def create_bedrock_client():
    """Create and return a Bedrock client."""
    return boto3.client(
        service_name=SERVICE_NAME,
        region_name=REGION_NAME
    )


def generate_request_body(prompt: str) -> dict:
    """Generate and return the body for the request."""
    return {
        "prompt": f"Human: {prompt}\nAssistant:",
        "max_tokens_to_sample": 300,
        "temperature": 1,
        "top_k": 250,
        "top_p": 0.999,
        "stop_sequences": ["\n\nHuman:"],
        "anthropic_version": ANTHROPIC_VERSION
    }


def invoke_model(client, prompt: str) -> dict:
    """Invoke the model and return the response."""
    body = json.dumps(generate_request_body(prompt))
    kwargs = {
        "modelId": MODEL_ID,
        "contentType": CONTENT_TYPE,
        "accept": ACCEPT,
        "body": body
    }
    return client.invoke_model_with_response_stream(**kwargs)


def extract_and_print_response(response: dict):
    """Extract response and print it."""
    stream = response.get('body')
    if stream:
        for event in stream:
            chunk = event.get('chunk')
            if chunk:
                print(json.loads(chunk.get('bytes')).get('completion'), end="")


def main():
    prompt = "write an article about the fictional planet Foobar"
    try:
        client = create_bedrock_client()
        response = invoke_model(client, prompt)
        extract_and_print_response(response)
    except Exception as e:
        print(f"Error occurred: {e}")


if __name__ == "__main__":
    main()

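If your application also needs the complete text after streaming (for logging or post-processing, say), a small companion to extract_and_print_response can accumulate the chunks instead of only printing them. This sketch assumes the same event shape and the json import from the script above:

def collect_streamed_response(response: dict) -> str:
    """Accumulate streamed chunks into the full completion text."""
    parts = []
    for event in response.get("body", []):
        chunk = event.get("chunk")
        if chunk:
            parts.append(json.loads(chunk["bytes"]).get("completion", ""))
    return "".join(parts)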

Why Go for Amazon Bedrock?

In the era of AI and machine learning, a robust platform is essential for harnessing the full potential of generative models. Amazon Bedrock stands out for its comprehensive capabilities, serverless nature, and seamless integration options. With Bedrock, you get not only access to industry-leading FMs but also the tools to customize and extend their capabilities. And the best part? All of this without the hassle of managing infrastructure or delving deep into code.

In conclusion, Amazon Bedrock is an indispensable tool for businesses and developers looking to leap into the future of generative AI. By embedding it into your workloads, you open doors to unprecedented automation, efficiency, and innovation. Dive into the world of Amazon Bedrock today and reshape how you approach AI in your applications.
