AWS Cost Optimization Parameters and Metrics: Part-2
Devtron
Posted on September 27, 2024
Are you trying to learn more about AWS cloud cost management? Is your monthly AWS bill surging, leaving you wondering whether you are actually using all the resources you pay for? Here, we are back with an extended version of our previous blog on AWS Cost Optimization.
It is well known that moving to Amazon Web Services (AWS) can provide huge benefits in terms of agility, responsiveness, simplified operations, and improved innovation. There is also a common assumption that migrating to the public cloud automatically leads to cost savings. In reality, however, AWS cost optimization is harder than you might think.
To optimize your costs, you need to know exactly what your organization needs, and how and when those resources are used, while adapting to the increasing demands of agile, fast-paced technology.
In this blog, we will cover four indispensable pillars of cost optimization.
Pillar 1: Cost-effective resources
1. Autoscaling
- Automating the deployment of applications requires software tools and frameworks, in addition to proper infrastructure (with enough resources, such as servers and services).
- You can provision test environments manually using AWS APIs or Command Line Interface (CLI) tools. However, manual intervention lowers team productivity, so there is a strong case for automating infrastructure processes (such as provisioning test environments) to improve productivity and efficiency. Some of the ways in which you can automate these processes are:
- Provisioning EC2 instances via Amazon Machine Images (AMI): An AMI encapsulates the OS and other software/configuration files. When an instance starts, all the applications come pre-loaded from the AMI. AMIs enable the launch of standardized instances across multiple regions, since an AMI can be copied from one region to another (see the sketch after this list).
- Deploying platforms using AWS Elastic Beanstalk: With AWS Elastic Beanstalk, you can easily deploy and scale web applications and services developed with Node.js, Python, and other languages on familiar servers such as Apache, Nginx, and IIS. You just upload your code, and Elastic Beanstalk automatically handles deployment, capacity provisioning, load balancing, auto-scaling, and application health monitoring. At the same time, you retain full control over all the cloud resources powering your application.
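For illustration, here is a minimal boto3 sketch of the AMI technique: launching a standardized instance from a pre-baked image and copying that image to another region. The AMI ID, regions, and instance type are placeholders, not values from this article:

```python
import boto3

# Placeholder values for illustration; substitute your own AMI and regions.
AMI_ID = "ami-0123456789abcdef0"   # pre-baked image: OS + applications included

ec2 = boto3.client("ec2", region_name="us-east-1")

# Launch a standardized test instance from the AMI.
response = ec2.run_instances(
    ImageId=AMI_ID,
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
    TagSpecifications=[{
        "ResourceType": "instance",
        "Tags": [{"Key": "Environment", "Value": "test"}],
    }],
)
print("Launched:", response["Instances"][0]["InstanceId"])

# Copy the AMI to another region, so identical instances can launch there too.
ec2_eu = boto3.client("ec2", region_name="eu-west-1")
copy = ec2_eu.copy_image(
    Name="standard-app-image",          # hypothetical name for the copy
    SourceImageId=AMI_ID,
    SourceRegion="us-east-1",
)
print("Copied AMI:", copy["ImageId"])
```

Because everything the instance needs is baked into the AMI, the same script can stamp out identical test environments on demand instead of configuring each one by hand.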
2. Right-sizing of instances
Based on an analysis of OS instances in North America, it was found that about 84% of instances were not correctly sized. It was estimated that right-sizing these instances, by porting them to optimally sized AWS resources, could reduce costs by 36% (USD 55 million).
You can achieve right-sizing by keeping the following in mind:
- Use the correct granularity for the time period of analysis, so that it covers any system cycles. For example, if only a two-week analysis is performed, you might overlook a monthly cycle of high utilization, which could lead to under-provisioning.
- Right-sizing is an iterative process that gets triggered by changes in usage patterns and by external factors like AWS price drops or new AWS resource types. A sketch of a utilization analysis follows this list.
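Here is a minimal sketch of pulling a full month of CPU utilization from CloudWatch to spot right-sizing candidates, so that a monthly peak is not missed. It assumes boto3 with configured credentials; the instance ID and region are placeholders:

```python
import boto3
from datetime import datetime, timedelta, timezone

INSTANCE_ID = "i-0123456789abcdef0"   # placeholder instance to analyze

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

# Analyze a full 30-day window so a monthly utilization cycle is not overlooked.
end = datetime.now(timezone.utc)
start = end - timedelta(days=30)

stats = cloudwatch.get_metric_statistics(
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": INSTANCE_ID}],
    StartTime=start,
    EndTime=end,
    Period=3600,                      # hourly datapoints
    Statistics=["Average", "Maximum"],
)

datapoints = stats["Datapoints"]
if datapoints:
    avg = sum(dp["Average"] for dp in datapoints) / len(datapoints)
    peak = max(dp["Maximum"] for dp in datapoints)
    print(f"30-day avg CPU: {avg:.1f}%, peak: {peak:.1f}%")
    # A low average combined with a modest peak suggests the instance is a
    # candidate for a smaller instance type.
```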
3. Purchasing options to save cost
Amazon EC2 provides a number of purchasing models for instances. Using the Reserved Instance purchasing model can help you save up to 75% over On-Demand capacity. Spot Instances are another phenomenal way to save money for stateless, interruption-tolerant workloads.
Reserved instances
A Reserved Instance is a one- or three-year commitment to a reservation of capacity, in exchange for a significantly lower hourly rate, enabling savings of up to 75% over On-Demand capacity. Moreover, you get the chance to sell unused Reserved Instances.
Spot instances
Spot Instances provide a way to save by letting you use spare compute capacity at a significantly lower cost than On-Demand instances (up to 90%). You can also use Spot Instances to increase your computing scale and throughput for the same budget.
Spot Instances are a good fit when you need large computing capacity, such as for batch processing, scientific research, financial analysis, and testing, and when your workload can tolerate interruptions, provided you have ways to deal with those interruptions.
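As a minimal sketch of requesting Spot capacity for such a workload (the AMI ID and instance type are placeholders; handling the interruption itself is left to your application):

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Request Spot capacity for an interruption-tolerant batch worker.
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # placeholder batch-worker AMI
    InstanceType="c5.large",
    MinCount=1,
    MaxCount=1,
    InstanceMarketOptions={
        "MarketType": "spot",
        "SpotOptions": {
            # One-time request; terminate (rather than stop) on interruption.
            "SpotInstanceType": "one-time",
            "InstanceInterruptionBehavior": "terminate",
        },
    },
)
print("Spot instance:", response["Instances"][0]["InstanceId"])
```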
4. Use of the correct AWS S3 storage class
Storage in AWS is cheap and effectively unlimited, but that's not a good reason to keep your organization's data there forever.
It is necessary to clean up data in S3 from time to time to optimize storage costs. Here are some of the storage options that can be used:
- Amazon S3 Standard-Infrequent Access (S3-IA): Use this storage class for data that is accessed less frequently but can still be retrieved rapidly whenever needed. The major drawback is that you are charged a retrieval fee of $0.01 per GB, and objects are billed at a minimum size of 128 KB. If you have many smaller objects, it can work out more expensive than standard storage.
- Amazon S3 Glacier: This class can be used for archives where a portion of the data might need to be retrieved within minutes. Data stored in S3 Glacier has a minimum storage duration of 90 days and can be retrieved within 1-5 minutes using expedited retrieval.
- Delete policy: For files that you think might not be required anymore, set up a delete (lifecycle expiration) policy, as shown in the sketch after this list.
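These tiering and delete policies can be expressed together as a single S3 lifecycle configuration. Below is a minimal sketch, assuming a hypothetical bucket name and illustrative transition ages; tune the prefixes and day counts to your own access patterns:

```python
import boto3

s3 = boto3.client("s3")

# Hypothetical bucket and ages; adjust to how your data is actually accessed.
s3.put_bucket_lifecycle_configuration(
    Bucket="my-example-logs-bucket",
    LifecycleConfiguration={
        "Rules": [{
            "ID": "tier-then-expire",
            "Status": "Enabled",
            "Filter": {"Prefix": "logs/"},
            "Transitions": [
                {"Days": 30, "StorageClass": "STANDARD_IA"},  # infrequent access
                {"Days": 90, "StorageClass": "GLACIER"},      # archive tier
            ],
            "Expiration": {"Days": 365},  # delete policy: remove after a year
        }],
    },
)
```

Once the rule is in place, S3 moves and expires objects automatically, so stale data stops accumulating charges without any manual cleanup.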
For pricing information on Amazon S3, see the Amazon S3 Pricing page.
5. Geographic selection
- AWS Cloud infrastructure is built around Regions and Availability Zones. As of March 3, 2020, AWS had 16 public regions and 2 non-public regions. Each region operates within local market conditions, and resource pricing can differ from region to region. Choose the specific region in which to architect your solution so that you can run at the lowest price globally.
- Let's consider a sample workload: 5 c5.large instances with 20 GB of gp2 EBS storage each, 1 ELB, and 5.1 TB of data processed. The ELB sends traffic to the 5 c5.large instances running Amazon Linux in the same Availability Zone. Each instance has 20 GB of EBS SSD storage, receives 100 GB/month from the ELB, and sends 1 TB/month back to it, so the ELB processes 5.1 TB/month. For this workload, there is a substantial cost difference across AWS Regions: it costs 52% more to deploy this infrastructure in a location in South America than in a location in North America.
- While you choose a region based on the geographic location that minimizes your cost, it's also a best practice to place computing resources closer to users, to provide lower latency and meet data sovereignty requirements.
Pillar 2: Matching supply and demand
You can deliver services at a low cost when the infrastructure is optimized. This can be done using the following approaches:
1. Demand-based approach
This approach leverages the elasticity of the AWS Cloud: the ability to scale up or down, managing capacity and provisioning resources as demand changes. AWS provides APIs and services for dynamically allocating cloud resources to your application or solution. As per AWS best practices, you should use AWS Auto Scaling, a service that makes scaling simple with recommendations that allow you to optimize performance and cost.
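As one example of the demand-based approach, a target-tracking policy on an EC2 Auto Scaling group keeps average CPU near a target value, scaling out and in as demand changes. A minimal sketch, assuming an existing group (the group name here is a placeholder):

```python
import boto3

autoscaling = boto3.client("autoscaling", region_name="us-east-1")

# Keep the group's average CPU near 50%; instances are added or removed
# automatically as demand rises and falls.
autoscaling.put_scaling_policy(
    AutoScalingGroupName="web-app-asg",   # placeholder group name
    PolicyName="cpu-target-tracking",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization",
        },
        "TargetValue": 50.0,
    },
)
```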
2. Buffer-based approach
A buffer in AWS allows your applications to communicate with each other even when they are running at different rates over time. This approach involves decoupling the components of a cloud application and creating a queue that accepts messages. The buffer queues each request until resources are available to process it.
This approach is suitable if you have workloads that are not predictable or time-sensitive. Some of the key AWS services that enable this approach are Amazon SQS and Amazon Kinesis.
If you have a workload that generates write load that need not be processed immediately, you can use the buffer to smooth out demand on resources, as in the sketch below.
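Here is a minimal sketch of the buffer pattern with Amazon SQS. The queue name and the process() helper are hypothetical placeholders; a producer enqueues write requests immediately, and a consumer drains them at whatever rate the downstream resources can sustain:

```python
import boto3

def process(body: str) -> None:
    # Placeholder for your actual write-processing logic.
    print("processing:", body)

sqs = boto3.client("sqs", region_name="us-east-1")
queue_url = sqs.create_queue(QueueName="write-buffer-queue")["QueueUrl"]

# Producer: enqueue work immediately, regardless of downstream capacity.
sqs.send_message(QueueUrl=queue_url, MessageBody='{"record_id": 42}')

# Consumer: drain the buffer at the pace the backend allows.
messages = sqs.receive_message(
    QueueUrl=queue_url,
    MaxNumberOfMessages=10,
    WaitTimeSeconds=10,   # long polling reduces empty receives
)
for msg in messages.get("Messages", []):
    process(msg["Body"])
    sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=msg["ReceiptHandle"])
```

Because the queue absorbs bursts, the consumers (and the resources behind them) can be sized for the average rate rather than the peak.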
3. Time-based approach
This approach involves aligning resource capacity to demand that is predictable over specified time periods. If you know when resources are going to be required, you can time your system to make the right resources available at the right time. You can implement time-based resource allocation by scheduling your auto scaling (a sketch follows the list below). However, while using auto scaling for this approach, you need to be careful about the following:
- Load-based auto scaling is not appropriate in every situation, for example, cloud deployments for small startups that have fewer than 50 instances and witness unusual traffic patterns. In such cases, closely matching supply to demand may not be optimal.
- Auto scaling can take around 5 minutes to add a new instance, and another 3-5 minutes for that instance to start serving. During this gap there may not be enough instances to handle the load, so the existing instances can become overloaded. This in turn slows down health checks, and the ELB may remove the overloaded instance, which worsens the situation.
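Here is a minimal sketch of scheduled scaling via the EC2 Auto Scaling API. The group name, capacities, and UTC schedules are illustrative placeholders for a workload with a predictable weekday peak:

```python
import boto3

autoscaling = boto3.client("autoscaling", region_name="us-east-1")

# Scale out before the predictable weekday-morning peak (times are UTC).
autoscaling.put_scheduled_update_group_action(
    AutoScalingGroupName="web-app-asg",          # placeholder group name
    ScheduledActionName="weekday-morning-scale-out",
    Recurrence="0 8 * * 1-5",                    # 08:00 UTC, Mon-Fri
    MinSize=4,
    MaxSize=10,
    DesiredCapacity=6,
)

# Scale back in after business hours to stop paying for idle capacity.
autoscaling.put_scheduled_update_group_action(
    AutoScalingGroupName="web-app-asg",
    ScheduledActionName="evening-scale-in",
    Recurrence="0 20 * * 1-5",                   # 20:00 UTC, Mon-Fri
    MinSize=1,
    MaxSize=4,
    DesiredCapacity=2,
)
```

Scheduling the capacity change ahead of the known peak sidesteps the 5-10 minute lag described above, because the instances are already warm when the load arrives.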
Pillar 3: Expenditure monitoring
One of the most crucial drivers of effective decision-making in an organization is having a crystal-clear view of AWS resource metrics. AWS recommends the following approaches to achieve expenditure monitoring:
1. Stakeholders
It is a good practice to involve the necessary stakeholders in expenditure awareness discussions, as it produces better outcomes. It is recommended to involve financial stakeholders such as CFOs, business unit owners, and any third parties directly involved in resource expenditure.
This brings hidden costs to the forefront, provides opportunities for cost optimization, and ensures that costs are correctly allocated to the right business unit.
2. Reserved instance reporting
The RI Utilization Report and RI Coverage Report are the key tools that help you analyze these costs. These reports visualize the percentage of running instance hours that are covered by Reserved Instances, either in aggregate or in detail (by account, instance type, region, Availability Zone, tags, platform, etc.).
What is the Reserved Instance utilization report?
It allows you to visualize RI utilization (the percentage of purchased RI hours consumed by instances during a period of time) and shows how much in savings has accrued from the use of Reserved Instances.
What is the Reserved Instance coverage report?
It allows you to discover how much of your overall instance usage is covered by RIs, so you can make informed decisions about when to modify or purchase RIs to ensure maximum coverage.
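The numbers behind both reports can also be pulled programmatically through the Cost Explorer API. A minimal sketch, assuming Cost Explorer is enabled on the account and using an illustrative date range:

```python
import boto3

# The Cost Explorer API endpoint lives in us-east-1.
ce = boto3.client("ce", region_name="us-east-1")
period = {"Start": "2024-08-01", "End": "2024-09-01"}  # illustrative month

# How much of the purchased RI hours were actually consumed.
utilization = ce.get_reservation_utilization(TimePeriod=period)
print("RI utilization:",
      utilization["Total"]["UtilizationPercentage"], "%")

# How much of the overall instance usage was covered by RIs.
coverage = ce.get_reservation_coverage(TimePeriod=period)
print("RI coverage:",
      coverage["Total"]["CoverageHours"]["CoverageHoursPercentage"], "%")
```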
Pillar 4: Optimizing over time
1. Establishing cost optimization function
- You can establish a dedicated cost optimization function within your organization.
- This function can be performed by an existing team, such as a Cloud Center of Excellence (Cloud COE), or you can create a new team of key stakeholders from the appropriate business units in the organization.
- This function will coordinate and manage all aspects of cost optimization, spanning your technical teams, people, and processes.
2. Monitor, track and analyze your service usage
- AWS recommends establishing strong goals and metrics for the organization to measure itself against. These goals should include costs, but should also surface the business output of your systems to quantify the impact of your improvements.
- AWS suggests using tools like AWS Trusted Advisor and Amazon CloudWatch to monitor your usage and manage workloads accordingly (a monitoring sketch follows this list).
- You can use Consolidated Billing if you have multiple AWS accounts. This feature has no additional charge and gives you a combined view of the charges across all of your AWS accounts.
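As one concrete monitoring sketch, a CloudWatch billing alarm can flag when estimated charges cross a budget line. This assumes billing alerts are enabled on the account (billing metrics appear only in us-east-1); the threshold and SNS topic ARN are placeholders:

```python
import boto3

# Billing metrics are published to CloudWatch in us-east-1 only.
cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

cloudwatch.put_metric_alarm(
    AlarmName="monthly-spend-over-budget",
    Namespace="AWS/Billing",
    MetricName="EstimatedCharges",
    Dimensions=[{"Name": "Currency", "Value": "USD"}],
    Statistic="Maximum",
    Period=21600,                # billing data updates roughly every 6 hours
    EvaluationPeriods=1,
    Threshold=1000.0,            # placeholder monthly budget in USD
    ComparisonOperator="GreaterThanThreshold",
    # Placeholder SNS topic that notifies the stakeholders discussed above.
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:billing-alerts"],
)
```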
Now that you know the four pillars, share your cost optimization experience with us in the comments below, and connect with us on our community Discord server.
Adopting Kubernetes also plays a vital role in optimizing costs if implemented with the right approach. In addition to cost optimization, it also helps reduce the organization's carbon footprint.