Testing Amazon Bedrock Text G1 Models (Lite vs Express)
Matias Kreder
Posted on January 17, 2024
Intro
Amazon released its own LLMs at re:Invent 2023, including the Titan Text G1 models, Lite and Express. The main difference is context length: Lite supports 4,000 tokens, while Express supports up to 8,000, making it the more capable of the two. The Lite model is cheaper, but the Express version should be able to generate better answers. I gave them both a try.
Pricing
Bedrock pricing is based on how many "tokens" go into and come out of the model. For text models like Titan, the token count roughly corresponds to the number of words sent in and generated.
The Lite model currently costs $0.0003 per 1,000 input tokens and $0.0008 per 1,000 output tokens while the Express model costs $0.0006 per 1,000 input tokens and $0.0016 per 1,000 output tokens.
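As a quick back-of-the-envelope example, here is how those rates translate into dollars for a single request. This is a rough sketch using the prices listed above; actual token counts depend on the tokenizer, and you should always check current AWS pricing.

# Rough cost estimate for a single Titan request, using the per-1,000-token
# prices mentioned above (illustrative only).
PRICES = {
    "lite":    {"input": 0.0003, "output": 0.0008},
    "express": {"input": 0.0006, "output": 0.0016},
}

def estimate_cost(model, input_tokens, output_tokens):
    p = PRICES[model]
    return (input_tokens / 1000) * p["input"] + (output_tokens / 1000) * p["output"]

# Example: a 50-token prompt that produces a 400-token answer
print(f"Lite:    ${estimate_cost('lite', 50, 400):.6f}")
print(f"Express: ${estimate_cost('express', 50, 400):.6f}")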
Access
Before being able to use them, you need to request access. In the "Model Access" section of the Bedrock console, click "Manage Model Access", select the models you need access to, and click the "Save Changes" button.
It can take a couple of minutes to get access to the models. For models not provided by Amazon, it can take several days or weeks, as access needs to be approved by the third-party provider.
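Once access is granted, you can confirm which Titan models are available to your account programmatically. Here is a minimal sketch using boto3, assuming your AWS credentials and region are already configured:

import boto3

# The "bedrock" client exposes control-plane operations such as listing models;
# invoking models is done through the separate "bedrock-runtime" client.
bedrock = boto3.client("bedrock", region_name="us-east-1")

response = bedrock.list_foundation_models(byProvider="Amazon")
for model in response["modelSummaries"]:
    if "titan-text" in model["modelId"]:
        print(model["modelId"], "-", model["modelName"])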
Text Generation
To test the output of both models, I used the following input text: 'Generate a text for a marketing campaign for a new Shampoo called "Lambda"'. This is the output they generated:
Lite:
"Introducing Lambda Shampoo, the ultimate solution for hair that looks and feels its best! Say goodbye to dull, lifeless locks and hello to vibrant, healthy-looking hair with Lambda Shampoo. Formulated with a blend of natural ingredients, Lambda Shampoo is designed to nourish, strengthen, and revitalize your hair from root to tip. Experience the power of botanical extracts, vitamins, and proteins that work in harmony to restore your hair's natural balance and restore its natural shine. Lambda Shampoo is suitable for all hair types, including color-treated hair, and is free from harsh chemicals and sulfates that can strip away natural moisture and damage your hair. It's gentle enough for daily use and leaves your hair feeling soft, smooth, and manageable. But that's not all! Lambda Shampoo comes in an attractive and eco-friendly packaging that reflects our commitment to sustainability. We believe in reducing our environmental footprint and doing our part to protect the planet. Join the Lambda Movement and experience the difference for yourself. Visit our website or find our shampoo at your nearest retailer to get started on your hair care journey today! #LambdaShampoo #HairRevitalization #NourishAndStrength #BotanicalExtracts #HealthyHair #SustainablePackaging"
Express:
"Lambda Shampoo: The Ultimate Hair Transformation Experience Introducing the revolutionary Lambda Shampoo, the game-changer in hair care that will transform your locks into a breathtaking masterpiece. Say goodbye to ordinary hair and hello to extraordinary beauty with Lambda's cutting-edge formula. Unlock the Power of Natural Ingredients: Lambda Shampoo is crafted with a blend of premium, natural ingredients that have been carefully selected to nourish, strengthen, and enhance your hair's natural beauty. Our formula contains a powerful combination of botanical extracts, vitamins, and antioxidants that work synergistically to promote healthy hair growth, improve scalp health, and restore shine and vitality. Revitalize Your Hair from Root to Tip: Lambda Shampoo is designed to revitalize your hair from root to tip. Our deep cleansing formula gently removes impurities, excess oil, and product buildup, leaving your hair feeling clean, refreshed, and revitalized. With regular use, you'll notice a noticeable improvement in hair texture, volume, and overall hair health. Silky Smooth and Frizz-Free: Say goodbye to frizzy, unruly hair and hello to silky, smooth locks. Lambda Shampoo contains nourishing ingredients that help to tame frizz and impart a lustrous shine to your hair. Experience the joy of running your fingers through hair that feels incredibly soft and manageable. Customized Hair Care for Every Need: Lambda Shampoo offers a range of customized hair care solutions to suit your individual needs. Whether you have dry, damaged hair, oily scalp, or color-treated hair, we have a formula that will address your specific concerns and deliver the desired results. Choose from our variety of scents, including refreshing citrus, soothing lavender, and invigorating mint, to add a touch of luxury to your hair care routine. Sustainable and Eco-Friendly: At Lambda, we are committed to sustainability and eco-friendliness. Our shampoo bottles are made from recycled materials, and our packaging is designed to minimize waste. By choosing Lambda Shampoo, you're not only taking care of your hair but also contributing to a greener and more sustainable future. Join the Lambda Hair Transformation Movement: Experience the transformative power of Lambda Shampoo and join the thousands of satisfied customers who have already embraced this hair care revolution. Elevate your hair care routine and unlock your hair's true potential with Lambda. Visit our website or your nearest retailer to discover the full range of Lambda Shampoo and experience the difference for yourself. Get ready to turn heads with your stunning, healthy hair. Lambda Shampoo: Where Science Meets Beauty, and Your Hair Becomes a Work of Art."
Both texts are usable for a marketing campaign. It is worth noting that the Lite text was generated almost immediately, while the Express text took a few seconds. If you don't need long text for these kinds of tasks, the Lite version should be enough.
I'm glad none of the models returned a hallucination about AWS Lambda!
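For anyone who wants to reproduce this kind of test, here is a minimal sketch of how a prompt can be sent to the Titan text models through the Bedrock runtime API with boto3. The request and response fields follow the Titan text format as I understand it; double-check the current documentation before relying on them.

import json
import boto3

bedrock_runtime = boto3.client("bedrock-runtime", region_name="us-east-1")

def generate_text(model_id, prompt, max_tokens=512):
    # Titan text models expect an "inputText" field plus a generation config.
    body = json.dumps({
        "inputText": prompt,
        "textGenerationConfig": {
            "maxTokenCount": max_tokens,
            "temperature": 0.7,
            "topP": 0.9,
        },
    })
    response = bedrock_runtime.invoke_model(
        modelId=model_id,
        body=body,
        contentType="application/json",
        accept="application/json",
    )
    result = json.loads(response["body"].read())
    return result["results"][0]["outputText"]

prompt = 'Generate a text for a marketing campaign for a new Shampoo called "Lambda"'
print(generate_text("amazon.titan-text-lite-v1", prompt))
print(generate_text("amazon.titan-text-express-v1", prompt))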
Code Generation
I tried a few things. First, I tried "Generate a Python function that receives an array of numbers, and returns the biggest number" which worked pretty well for both models, returning this code:
def find_biggest_number(array):
    biggest_number = array[0]
    for number in array:
        if number > biggest_number:
            biggest_number = number
    return biggest_number
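A quick sanity check shows the generated function behaves as expected:

print(find_biggest_number([3, 7, 1, 9, 4]))  # prints 9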
However, when I tried to do something a little more complex like "Generate a Python function that receives an array of numbers, sorts it from min to max and returns the biggest number" I got an error saying "Sorry - this model is unable to respond to this request."
When I tried with the Express version, it generated code (very similar to the previous one) but skipped the sorting part:
def biggest_number(arr):
    biggest = arr[0]
    for num in arr:
        if num > biggest:
            biggest = num
    return biggest
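For reference, the function the prompt actually asked for is not complicated. A version that sorts first might look like this (my own sketch, not model output):

def biggest_number_sorted(arr):
    # Sort a copy from min to max, then return the last (largest) element
    sorted_arr = sorted(arr)
    return sorted_arr[-1]

print(biggest_number_sorted([3, 7, 1, 9, 4]))  # prints 9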
Both models can generate simple code, but for more complex tasks it is better to rely on other services like CodeWhisperer. I tried the same prompt with other LLMs available on Bedrock, like Cohere Command and AI21 Jurassic, but both failed to generate a good function as well.
Conclusion
Text generation tests showed that both models produce compelling marketing campaign content, with the Lite version offering a quicker response and the Express version taking a few seconds longer but delivering a more detailed output. Users aiming for shorter text generation tasks may find the Lite model sufficient for their needs.
Regarding code generation, both models handle simple requests effectively. However, limitations became evident with slightly more complex tasks, such as sorting an array of numbers: the Lite model refused to respond to the request, while the Express version generated code but omitted the sorting step. This suggests that relying on specialized services like CodeWhisperer may be more effective for complex coding tasks.
In essence, the choice between Titan Lite and Express depends on the specific requirements of the task at hand. For shorter text generation tasks with budget considerations, the Lite model may be a suitable choice. However, the Express model would be a better choice for more extensive tasks.