AWS concepts from A to Z


Helen Anderson

Posted on December 10, 2018


Getting to grips with AWS services can be quite a challenge. My glossary of terminology was inspired by AWS in Plain English, so I know my Athena from my Aurora and CloudTrail from my CloudFront.



Alarm
Bucket
CLI
DB Snapshot
Edge Location
Firehose
Group
Hosted Zone
Instance Type
Job flow
KMS
Lifecycle
Messages
NAT Gateway
On Demand Instance
Persistent storage
Query
Read replica
Scaling
Tagging
Unit
Virtual Private Cloud
WAF
X.509 certificate
Yobibyte
Zone


Alarm

When you first begin using Amazon Web Services, you may want to add a Billing Alarm. Making sure you don't run into unexpected charges is important since it's so easy to forget something is running.
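
For example, here is a minimal sketch using boto3 (the AWS SDK for Python) that creates a billing alarm, assuming billing alerts are already enabled for the account; the alarm name, threshold, and SNS topic ARN are placeholders.

```python
import boto3

# Billing metrics are only published in the us-east-1 Region.
cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

cloudwatch.put_metric_alarm(
    AlarmName="billing-over-10-usd",
    Namespace="AWS/Billing",
    MetricName="EstimatedCharges",
    Dimensions=[{"Name": "Currency", "Value": "USD"}],
    Statistic="Maximum",
    Period=21600,                 # evaluate the estimate every six hours
    EvaluationPeriods=1,
    Threshold=10.0,               # alert once estimated charges pass $10
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:billing-alerts"],  # placeholder SNS topic
)
```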


Bucket

An S3 bucket stores objects, similar to files and folders on your local machine. Each object consists of:

  • Key - the name of the object
  • Value - the data itself, made up of bytes
  • VersionID - the version identifier, used when versioning is enabled on the bucket
  • Metadata - data about the object, such as content type or user-defined key-value pairs

There are five storage tiers:

  • S3 Standard - the most expensive and most reliable option
  • S3 Standard-IA (Infrequent Access) - for storing data that CANNOT be easily reproduced and needs to be retrieved quickly
  • S3 One Zone-IA - for storing non-critical data that CAN be easily reproduced and needs to be retrieved quickly
  • Glacier - extremely cheap long-term storage
  • Glacier Deep Archive - for long-term storage with a 12-hour retrieval time for 'cold' data
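
As a rough illustration of how keys, values, metadata, and storage classes fit together, here is a boto3 sketch that uploads one object and reads it back; the bucket name and key are placeholders.

```python
import boto3

s3 = boto3.client("s3")

# Upload an object: the key names it, the body is its value (bytes),
# and metadata and storage class travel with it.
s3.put_object(
    Bucket="my-example-bucket",          # placeholder bucket
    Key="reports/2018/summary.csv",      # the object's key
    Body=b"id,total\n1,42\n",            # the object's value
    StorageClass="STANDARD_IA",          # one of the storage tiers above
    Metadata={"owner": "analytics"},     # user-defined metadata
)

# Read it back; the response includes a VersionId if versioning is enabled.
obj = s3.get_object(Bucket="my-example-bucket", Key="reports/2018/summary.csv")
print(obj["Body"].read())
```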

Learn more about S3 with David


CLI

The AWS CLI allows you to issue commands from the command line. It's useful for uploading files to S3 buckets and launching EC2 instances.

Learn more about the CLI:


DB Snapshot

Amazon RDS creates a storage volume snapshot of your entire instance. Creating this snapshot results in a brief I/O suspension that lasts from a few seconds to a few minutes. Multi-AZ DB instances are not affected by this I/O suspension since the backup is taken on the standby.

When you create a DB snapshot, you need to identify which DB instance to back up, and then give your DB snapshot a name so you can restore from it later. You can do this using the AWS Management Console, the AWS CLI, or the RDS API.
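
With the CLI or an SDK this comes down to a single call. A minimal boto3 sketch, with placeholder instance and snapshot names:

```python
import boto3

rds = boto3.client("rds")

# Identify the instance to back up and name the snapshot for later.
rds.create_db_snapshot(
    DBInstanceIdentifier="my-database",               # placeholder instance
    DBSnapshotIdentifier="my-database-2018-12-10",    # placeholder snapshot name
)

# Later, restore a brand-new instance from that snapshot.
rds.restore_db_instance_from_db_snapshot(
    DBInstanceIdentifier="my-database-restored",
    DBSnapshotIdentifier="my-database-2018-12-10",
)
```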

Learn how to automate this process with Jeremy


Edge location

Amazon CloudFront is the AWS CDN. It caches content at edge locations close to your users so that subsequent requests can be served faster. CloudFront can distribute all website content, including dynamic, static, streaming and interactive content, from origins like S3 or your own non-AWS server.

Learn more about CloudFront with Kyle


Firehose

Amazon Kinesis Data Firehose is a reliable way to stream data in near real-time. Data can be streamed to S3, to Redshift (Amazon's data warehousing solution), or to Elasticsearch.

Kinesis allows data to be streamed in real-time from a producer to a processor or storage option. This is a huge change from batch processing, the traditional way to land data from one location to another.
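
A producer can be as simple as a few lines of boto3; this sketch assumes a delivery stream called clickstream-to-s3 already exists and points at S3.

```python
import json

import boto3

firehose = boto3.client("firehose")

# Push one record onto the delivery stream; Firehose buffers records and
# delivers them to the configured destination (S3, Redshift, or Elasticsearch).
firehose.put_record(
    DeliveryStreamName="clickstream-to-s3",   # placeholder delivery stream
    Record={"Data": (json.dumps({"user": "123", "event": "page_view"}) + "\n").encode("utf-8")},
)
```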


Group

AWS Identity and Access Management (IAM) allows you to securely control individual and group access to your resources.

Users have no permissions by default. Permissions are granted by attaching policies, either directly to a user or to a group whose policies every member inherits. Roles define a set of permissions for making AWS service requests and are assumed temporarily by users or by AWS services to perform tasks or access services.
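
A hedged boto3 sketch of the group side of this: create a group, attach a managed policy to it, and add a user. The group name, user name, and policy choice are just examples.

```python
import boto3

iam = boto3.client("iam")

# Permissions attached to the group apply to every user in it.
iam.create_group(GroupName="analysts")                      # placeholder group
iam.attach_group_policy(
    GroupName="analysts",
    PolicyArn="arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess",
)

iam.create_user(UserName="helen")                           # placeholder user
iam.add_user_to_group(GroupName="analysts", UserName="helen")
```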

Learn more about IAM with David's practical example:


Hosted Zone

Amazon Route 53 is Amazon's Domain Name System (DNS) web service. It is designed to give developers a cost-effective way to route end users to Internet applications by translating names like www.example.com into the numeric IP addresses like 192.0.2.1 that computers use to connect.

AWS named the service Route 53 because all DNS requests are handled through port 53.
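
To show what "translating names into IP addresses" looks like in practice, here is a boto3 sketch that upserts an A record into an existing hosted zone; the zone ID is a placeholder.

```python
import boto3

route53 = boto3.client("route53")

# Point www.example.com at an IPv4 address inside an existing hosted zone.
route53.change_resource_record_sets(
    HostedZoneId="Z1234567890ABC",                  # placeholder hosted zone ID
    ChangeBatch={
        "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "www.example.com",
                "Type": "A",
                "TTL": 300,
                "ResourceRecords": [{"Value": "192.0.2.1"}],
            },
        }]
    },
)
```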

Learn more about hosting a static website with Mario


Instance Type

EC2 is a service that provides virtual machines in the cloud. You pay only for the capacity you use and choose from 'families' of instance types suited to different use cases:

  • General Purpose - a balance of compute, memory and networking resources
  • Compute Optimised - ideal for compute-bound applications that benefit from high-performance processors
  • Memory Optimised - fast performance for workloads that process large data sets in memory
  • Accelerated Computing - hardware accelerators, or co-processors
  • Storage Optimised - high, sequential read and write access to very large data sets on local storage
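
Launching an instance is mostly a matter of picking the family and size. A minimal boto3 sketch with a placeholder AMI ID:

```python
import boto3

ec2 = boto3.client("ec2")

# Launch one General Purpose instance; swap InstanceType for a compute-,
# memory-, accelerated-, or storage-optimised size as the workload demands.
ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # placeholder AMI
    InstanceType="t3.micro",           # General Purpose family
    MinCount=1,
    MaxCount=1,
)
```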


Job Flow

Amazon EMR provides a scalable framework to run Spark and Hadoop processes over an S3 data lake. Launching a job flow (for example with the RunJobFlow API) creates an Amazon EMR cluster based on the parameters provided and runs the steps you specify. Once the steps complete, the cluster can be terminated automatically.
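
A hedged boto3 sketch of such a job flow, running one Spark step and shutting down when it finishes; the script location, instance types, and release label are placeholders.

```python
import boto3

emr = boto3.client("emr")

# Launch a cluster, run one Spark step, then terminate when the steps finish.
emr.run_job_flow(
    Name="nightly-spark-job",
    ReleaseLabel="emr-5.20.0",
    Applications=[{"Name": "Spark"}],
    Instances={
        "InstanceGroups": [
            {"InstanceRole": "MASTER", "InstanceType": "m5.xlarge", "InstanceCount": 1},
            {"InstanceRole": "CORE", "InstanceType": "m5.xlarge", "InstanceCount": 2},
        ],
        "KeepJobFlowAliveWhenNoSteps": False,   # terminate once the steps complete
    },
    Steps=[{
        "Name": "process-data-lake",
        "ActionOnFailure": "TERMINATE_CLUSTER",
        "HadoopJarStep": {
            "Jar": "command-runner.jar",
            "Args": ["spark-submit", "s3://my-data-lake/jobs/etl.py"],  # placeholder script
        },
    }],
    JobFlowRole="EMR_EC2_DefaultRole",
    ServiceRole="EMR_DefaultRole",
)
```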

There is a growing list of other services that AWS offers for data science and machine learning. Learn more about them with Julien


KMS

AWS Key Management Service (KMS) makes it easy to create and control encryption keys on AWS, which can then be used to encrypt and decrypt data safely. The service leverages Hardware Security Modules (HSMs) under the hood, which guarantee the security and integrity of the generated keys.
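
The encrypt/decrypt round trip is only a few calls with boto3; this sketch creates a throwaway key purely for illustration.

```python
import boto3

kms = boto3.client("kms")

# Create a key, then use it to encrypt and decrypt a small secret.
key_id = kms.create_key(Description="demo key")["KeyMetadata"]["KeyId"]

ciphertext = kms.encrypt(KeyId=key_id, Plaintext=b"database-password")["CiphertextBlob"]
plaintext = kms.decrypt(CiphertextBlob=ciphertext)["Plaintext"]

print(plaintext)  # b'database-password'
```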

Learn more about how to get started with Lou


Lifecycle

If you want to store objects cost-effectively, configure their lifecycle.

The lifecycle configuration defines the actions that Amazon S3 applies to a group of objects. For instance, you might archive objects in Glacier one year after creating them, or transition them to S3:IA 30 days after creating them.
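
That exact policy looks something like this with boto3; the bucket name and rule ID are placeholders.

```python
import boto3

s3 = boto3.client("s3")

# Move objects to Infrequent Access after 30 days and to Glacier after a year.
s3.put_bucket_lifecycle_configuration(
    Bucket="my-example-bucket",              # placeholder bucket
    LifecycleConfiguration={
        "Rules": [{
            "ID": "archive-old-objects",
            "Filter": {"Prefix": ""},        # apply to every object in the bucket
            "Status": "Enabled",
            "Transitions": [
                {"Days": 30, "StorageClass": "STANDARD_IA"},
                {"Days": 365, "StorageClass": "GLACIER"},
            ],
        }]
    },
)
```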

Learn more in this practical example:


Messages

Amazon SNS allows applications to send time-critical messages to multiple subscribers through a “push” mechanism, eliminating the need to periodically check or “poll” for updates.

Amazon SQS stores messages in a queue. SQS does not push messages to consumers; instead, an external service (Lambda, EC2, etc.) polls the queue and retrieves messages from it.

By using Amazon SNS and Amazon SQS together, messages can be delivered to applications that require immediate notification of an event and also persisted in an Amazon SQS queue for other applications to process later.
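
A rough boto3 sketch of that pattern, assuming a topic with an SQS queue already subscribed to it (the topic ARN and queue URL are placeholders): the producer publishes to SNS, and a consumer polls the queue later.

```python
import boto3

sns = boto3.client("sns")
sqs = boto3.client("sqs")

queue_url = "https://sqs.us-east-1.amazonaws.com/123456789012/order-events"  # placeholder queue

# Producer: push a time-critical message to every subscriber of the topic.
sns.publish(
    TopicArn="arn:aws:sns:us-east-1:123456789012:order-events",  # placeholder topic
    Message="order 42 shipped",
)

# Consumer: poll the subscribed queue and process whatever has arrived.
response = sqs.receive_message(QueueUrl=queue_url, MaxNumberOfMessages=10, WaitTimeSeconds=20)
for message in response.get("Messages", []):
    print(message["Body"])
    sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=message["ReceiptHandle"])
```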

Learn more about how it all fits together with Frank


NAT Gateway

A virtual private cloud (VPC) is a virtual network dedicated to your AWS account. It is logically isolated from other virtual networks in the AWS Cloud. You can launch your AWS resources, such as Amazon EC2 instances, into your VPC.

You can use a NAT device to enable instances in a private subnet to connect to the internet (for example, for software updates) or other AWS services, but prevent the internet from initiating connections with the instances. A NAT device forwards traffic from the instances in the private subnet to the internet or other AWS services and then sends the response back to the instances.
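
A hedged boto3 sketch of the two calls involved: create the NAT gateway in a public subnet, then route the private subnet's internet-bound traffic through it. All of the IDs are placeholders.

```python
import boto3

ec2 = boto3.client("ec2")

# The NAT gateway lives in a public subnet and uses an Elastic IP.
nat = ec2.create_nat_gateway(
    SubnetId="subnet-0123456789abcdef0",         # placeholder public subnet
    AllocationId="eipalloc-0123456789abcdef0",   # placeholder Elastic IP allocation
)

# Send the private subnet's 0.0.0.0/0 traffic through the NAT gateway.
ec2.create_route(
    RouteTableId="rtb-0123456789abcdef0",        # placeholder private route table
    DestinationCidrBlock="0.0.0.0/0",
    NatGatewayId=nat["NatGateway"]["NatGatewayId"],
)
```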


On-Demand Instance

There are multiple ways to pay for Amazon EC2 instances:

  • On-Demand - pay for capacity by the hour or per second, depending on which instances you run.
  • Reserved Instances - provide a capacity reservation at up to 75% off the On-Demand price, giving you confidence in your ability to launch instances when you need them.
  • Spot Instances - request spare Amazon EC2 computing capacity for up to 90% off the On-Demand price.
  • Dedicated Hosts - provide EC2 instance capacity on physical servers dedicated for your use.
  • Savings Plans - provide the benefits of Reserved Instances with more flexibility, such as changing instance type within the same family, while still taking advantage of the savings.

Learn more about pricing with Chris


Persistent storage

Amazon EBS provides persistent block storage volumes that can be attached to a single EC2 instance and used as a file system for databases, application hosting, and storage.

Amazon EFS is a managed network file system that can be shared across multiple Amazon EC2 instances and is scalable depending on workload.
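
On the EBS side, provisioning comes down to creating a volume and attaching it to an instance in the same Availability Zone; this boto3 sketch uses a placeholder instance ID.

```python
import boto3

ec2 = boto3.client("ec2")

# Create a 100 GiB volume, wait until it is available, then attach it.
volume = ec2.create_volume(AvailabilityZone="us-east-1a", Size=100, VolumeType="gp2")
ec2.get_waiter("volume_available").wait(VolumeIds=[volume["VolumeId"]])

ec2.attach_volume(
    VolumeId=volume["VolumeId"],
    InstanceId="i-0123456789abcdef0",   # placeholder instance in us-east-1a
    Device="/dev/sdf",
)
```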

Learn more about provisioning EBS with Ashan


Query

Amazon RDS makes it easy to provision a managed database instance in the cloud. At the time of writing, the following database engines are available:

  • Amazon Aurora for MySQL and PostgreSQL
  • MySQL
  • PostgreSQL
  • MariaDB
  • Oracle
  • MS SQL Server

For cases when a NoSQL database is more appropriate, AWS offers DynamoDB. Netflix uses DynamoDB to run its A/B testing and personalisation experiments.
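
Querying DynamoDB looks quite different from SQL. Here is a boto3 sketch against a hypothetical Experiments table whose partition key is experiment_id.

```python
import boto3
from boto3.dynamodb.conditions import Key

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("Experiments")   # hypothetical table

# Write one item, then fetch every item for a given partition key.
table.put_item(Item={"experiment_id": "ab-test-1", "user_id": "123", "variant": "B"})

response = table.query(KeyConditionExpression=Key("experiment_id").eq("ab-test-1"))
for item in response["Items"]:
    print(item)
```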

Learn more about DynamoDB with Ivan


Read replica

Read replication can be part of your disaster recovery plan. Replication means that a secondary database is online and can be queried. This is good for disaster recovery but can also be useful if you utilise one instance for reporting and one for live queries.

If you are using AWS, setting this up takes just a few clicks. You can promote a read replica if the source database instance fails, or route traffic to it to reduce the load on the source database.
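
Those same few clicks map to two API calls. A boto3 sketch with placeholder instance identifiers:

```python
import boto3

rds = boto3.client("rds")

# Create a read replica of an existing source instance.
rds.create_db_instance_read_replica(
    DBInstanceIdentifier="my-database-replica",
    SourceDBInstanceIdentifier="my-database",   # placeholder source instance
)

# If the source fails, promote the replica into a standalone, writable instance.
rds.promote_read_replica(DBInstanceIdentifier="my-database-replica")
```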


Scaling

Auto Scaling launches and terminates Amazon EC2 instances according to user-defined policies, schedules, and alarms. You can use Auto Scaling to maintain a fleet of AWS EC2 instances that adjust to any load. You can also use Auto Scaling to spin up multiple instances in a group at once.
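
A hedged boto3 sketch of a small fleet that scales to hold average CPU around a target; the group name, launch configuration, and subnet IDs are placeholders.

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Keep between 2 and 10 instances running, starting with 2.
autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="web-fleet",                # placeholder group
    LaunchConfigurationName="web-launch-config",     # placeholder launch configuration
    MinSize=2,
    MaxSize=10,
    DesiredCapacity=2,
    VPCZoneIdentifier="subnet-0123456789abcdef0,subnet-0fedcba9876543210",
)

# Scale out and in automatically to hold average CPU around 50%.
autoscaling.put_scaling_policy(
    AutoScalingGroupName="web-fleet",
    PolicyName="target-50-percent-cpu",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {"PredefinedMetricType": "ASGAverageCPUUtilization"},
        "TargetValue": 50.0,
    },
)
```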

Learn more with this overview from Ashan


Tagging

Using tags in your metadata helps you identify who is using each resource and gain some control over costs. You can then use these tags in conjunction with the Monthly Billing Report.
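
Tagging is a single call per resource. This boto3 sketch tags a placeholder EC2 instance with the kind of keys that make cost reports useful.

```python
import boto3

ec2 = boto3.client("ec2")

# Tag an instance so its costs can be traced back to a team and project.
ec2.create_tags(
    Resources=["i-0123456789abcdef0"],   # placeholder instance
    Tags=[
        {"Key": "team", "Value": "analytics"},
        {"Key": "project", "Value": "reporting"},
        {"Key": "environment", "Value": "production"},
    ],
)
```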

Learn more about what you should consider when trying to manage costs:


Unit

CloudWatch is based on metrics. The metrics represent a set of data points ordered by time and published to CloudWatch. Imagine the metric as a variable to be monitored over time, with the data points representing its value.

Each data point has a timestamp and unit of measurement. When you request statistics, the returned data stream contains namespace, metric name, dimension, and unit information.
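
Publishing a custom data point shows all the pieces together: namespace, metric name, dimension, timestamp, value, and unit. A boto3 sketch using a hypothetical namespace:

```python
from datetime import datetime, timezone

import boto3

cloudwatch = boto3.client("cloudwatch")

# Publish one data point under a custom namespace and dimension.
cloudwatch.put_metric_data(
    Namespace="MyApp",                                           # hypothetical namespace
    MetricData=[{
        "MetricName": "OrdersProcessed",
        "Dimensions": [{"Name": "Environment", "Value": "production"}],
        "Timestamp": datetime.now(timezone.utc),
        "Value": 17,
        "Unit": "Count",
    }],
)
```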

Learn how to put this into practice with Alex by creating a CloudWatch alarm using Lambda, CloudWatch and SNS.


VPC

A Virtual Private Cloud (VPC) is a logically isolated virtual network, much like a virtual data centre, that spans the Availability Zones in a Region. The components of a VPC are Internet Gateways/Virtual Private Gateways, route tables, network access control lists, subnets, and security groups.
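
A minimal boto3 sketch of a few of those components: a VPC, one subnet, and an internet gateway. The CIDR ranges and Availability Zone are just examples.

```python
import boto3

ec2 = boto3.client("ec2")

# Create the VPC, then carve one subnet out of its address range.
vpc = ec2.create_vpc(CidrBlock="10.0.0.0/16")
subnet = ec2.create_subnet(
    VpcId=vpc["Vpc"]["VpcId"],
    CidrBlock="10.0.1.0/24",
    AvailabilityZone="us-east-1a",
)

# Attach an internet gateway so public subnets can reach the internet.
igw = ec2.create_internet_gateway()
ec2.attach_internet_gateway(
    InternetGatewayId=igw["InternetGateway"]["InternetGatewayId"],
    VpcId=vpc["Vpc"]["VpcId"],
)
```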


WAF

AWS Web Application Firewall (WAF) protects web applications from unwanted traffic, like specific user agents, bad bots, or content scrapers, by filtering requests based on rules that you create.

Block IP addresses that exceed request limits
This lets you control access by IP address and country, and block SQL injection, malicious scripts, and overly long requests.

Block IP addresses that submit bad requests
This solution uses Lambda, CloudWatch and AWS WAF to block IP addresses once a threshold of bad requests is reached.
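
One building block of both approaches is an IP set that rules can reference. A hedged sketch using the newer WAFv2 API in boto3; the set name and addresses are placeholders, and CloudFront-scoped WAF resources are managed through us-east-1.

```python
import boto3

# CloudFront-scoped WAF resources are managed through us-east-1.
waf = boto3.client("wafv2", region_name="us-east-1")

# Maintain a set of addresses to block; a web ACL rule can then reference it.
waf.create_ip_set(
    Name="blocked-scrapers",                           # placeholder IP set
    Scope="CLOUDFRONT",
    IPAddressVersion="IPV4",
    Addresses=["203.0.113.7/32", "198.51.100.0/24"],   # placeholder addresses
    Description="Addresses that exceeded the request limit",
)
```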

AWS WAF can be deployed on Amazon CloudFront, protecting your resources and content at the edge locations. Learn more about making it work for you on CloudFront with Rob


X.509 certificate

You can use X.509 certificates in AWS Certificate Manager to identify users, computers, applications, services, servers, and other devices internally.
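
For an internal name, a certificate request issued from ACM Private CA looks roughly like this with boto3; the domain and CA ARN are placeholders.

```python
import boto3

acm = boto3.client("acm")

# Request a certificate for an internal service; passing a private CA ARN
# issues it from ACM Private CA rather than a public CA.
acm.request_certificate(
    DomainName="internal.example.com",               # placeholder internal name
    CertificateAuthorityArn=(
        "arn:aws:acm-pca:us-east-1:123456789012:"
        "certificate-authority/11111111-2222-3333-4444-555555555555"  # placeholder private CA
    ),
)
```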

Learn how this works with Erik


Yobibyte

OK, I cheated here, but this is a really interesting post from Kevin that puts it all together.

Fun fact: a yobibyte is 2^80 or 1,208,925,819,614,629,174,706,176 bytes.


Zones

One of the most important introductory concepts to understand is that AWS hosts its infrastructure in data centres called Availability Zones (AZs). There are multiple AZs in a Region which means that if there is a problem in one AZ another can pick up the slack. For some services, you can host your application in multiple Regions.
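
You can see which AZs your account has in a given Region with a single call; a small boto3 sketch:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# List the Availability Zones available to this account in the Region.
for zone in ec2.describe_availability_zones()["AvailabilityZones"]:
    print(zone["ZoneName"], zone["State"])
```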

Learn why you should consider this when building on AWS with Frank
