Cross-account S3 replication: encrypted and non-encrypted objects
Mariem
Posted on September 11, 2023
If you need to share S3 data between accounts, meet compliance requirements for storage, or bring S3 data closer to your end users, S3 Replication could be your solution. S3 Replication automatically and asynchronously copies objects across buckets in the same or different AWS Regions, and in the same or different AWS accounts. It retains all metadata, such as the original object creation time, object access control lists (ACLs), and version IDs.
For replication, versioning must be enabled on both the source and destination buckets, a replication rule must be configured on the source bucket, and the necessary IAM permissions must be granted.
When setting up the replication of objects from a source bucket to a destination bucket with Amazon S3, there are several options to consider. You can:
Choose to replicate all the objects of the source bucket, or only objects with a given prefix or a specific tag
Set a priority when multiple replication rules apply to the same bucket
Specify the IAM role that Amazon S3 assumes to replicate objects from the source bucket to the destination bucket (this one is required)
Choose to replicate objects that are encrypted by server-side encryption.
Specify a storage class for the object replicas that differs from the source bucket's
Enable or disable the replication. Replication applies only to objects uploaded after the rule is enabled.
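These options map onto the replication configuration document that S3 stores for the source bucket. Here is a minimal sketch in Python of what such a configuration could look like (the role ARN, account ID, bucket names, and prefix are placeholders, not values from this hands-on):

```python
import json

# Hypothetical names -- replace with your own role, account, and buckets.
REPLICATION_ROLE_ARN = "arn:aws:iam::111111111111:role/s3-replication-role"
DEST_BUCKET_ARN = "arn:aws:s3:::my-destination-bucket"

replication_configuration = {
    "Role": REPLICATION_ROLE_ARN,          # IAM role S3 assumes to replicate
    "Rules": [
        {
            "ID": "replicate-logs",
            "Priority": 1,                  # resolves conflicts between rules
            "Status": "Enabled",            # only objects uploaded after enabling
            "Filter": {"Prefix": "logs/"},  # replicate only objects under logs/
            "Destination": {
                "Bucket": DEST_BUCKET_ARN,
                "StorageClass": "STANDARD_IA",  # may differ from the source
            },
            "DeleteMarkerReplication": {"Status": "Disabled"},
        }
    ],
}

print(json.dumps(replication_configuration, indent=2))
```

The console builds this document for you; seeing it spelled out makes it easier to reason about what each option changes.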
In the following, I will demonstrate some of the capabilities of S3 replication through illustrations and a hands-on that will be refined as we progress. We will begin by replicating non-encrypted objects between accounts, then move on to replicating delete markers between accounts, and finally, we will cover the replication of KMS-encrypted objects.
Thanks to Guille Ojeda's amazing newsletter for inspiring this article. I highly recommend following him on LinkedIn and subscribing to his newsletter.
A. S3 Replication of Non-encrypted Objects across accounts
To set up cross-account S3 replication, you'll need two separate accounts, each with its own bucket: one for the source and one for the destination. Versioning must be enabled on both buckets. Next, create a replication rule and specify your options. You'll also need to set permissions on both sides of the relationship:
- In the source account, create a role that the S3 service can assume to replicate objects. This role must be authorized to read from the source bucket and replicate to the destination bucket.
- In the destination account, make sure the destination bucket's policy allows the source account's S3 role to replicate objects.
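As a sketch, the permissions policy attached to the replication role in the source account could look like the following (the bucket names are placeholders; adjust the resources to your setup):

```python
import json

SOURCE_BUCKET = "my-source-bucket"       # hypothetical names
DEST_BUCKET = "my-destination-bucket"

# Permissions policy for the replication role in the source account.
replication_role_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {   # read the replication configuration and list the source bucket
            "Effect": "Allow",
            "Action": ["s3:GetReplicationConfiguration", "s3:ListBucket"],
            "Resource": f"arn:aws:s3:::{SOURCE_BUCKET}",
        },
        {   # read each object version and its metadata in the source bucket
            "Effect": "Allow",
            "Action": [
                "s3:GetObjectVersionForReplication",
                "s3:GetObjectVersionAcl",
                "s3:GetObjectVersionTagging",
            ],
            "Resource": f"arn:aws:s3:::{SOURCE_BUCKET}/*",
        },
        {   # write replicas into the destination bucket
            "Effect": "Allow",
            "Action": [
                "s3:ReplicateObject",
                "s3:ReplicateDelete",
                "s3:ReplicateTags",
            ],
            "Resource": f"arn:aws:s3:::{DEST_BUCKET}/*",
        },
    ],
}

print(json.dumps(replication_role_policy, indent=2))
```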
Permissions can be tricky, so be careful!
Let's try it!
1. Create a bucket in each of the two accounts, and make sure to choose meaningful names to avoid confusion!
2. Enable versioning for both buckets.
3. In the Management tab of the source bucket, create the replication rule.
We'll mainly use the default options.
Choose a name for the replication rule
Choose to replicate all objects in the bucket
Choose to replicate to a bucket in another account
Provide the account ID and name of the destination bucket you would like to transfer the data to
Let Amazon S3 create the IAM role automatically
Keep the other options as default
After you confirm the creation of the replication rule, you will be asked what to do with the objects that already exist in the source bucket, since they are not replicated automatically.
This is particularly helpful when you already have objects in the source bucket!
That's not all! While we have the necessary permissions and configuration in the source account, the destination bucket needs to authorize the created S3 role to replicate objects. Therefore, we must copy the name of the role and update the bucket policy of the destination bucket.
Copy the S3 role name from the details of the replication rule
In the destination account, update the bucket policy of the destination bucket
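For illustration, a destination bucket policy granting the source account's replication role the permissions it needs could look like this (the role ARN and bucket name are placeholders):

```python
import json

DEST_BUCKET = "my-destination-bucket"    # hypothetical names
SOURCE_ROLE_ARN = "arn:aws:iam::111111111111:role/s3-replication-role"

# Bucket policy on the destination bucket, authorizing the source
# account's replication role to write replicas.
destination_bucket_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowReplicationFromSourceAccount",
            "Effect": "Allow",
            "Principal": {"AWS": SOURCE_ROLE_ARN},
            "Action": [
                "s3:ReplicateObject",
                "s3:ReplicateDelete",
                "s3:ReplicateTags",
            ],
            "Resource": f"arn:aws:s3:::{DEST_BUCKET}/*",
        },
        {
            "Sid": "AllowVersioningChecks",
            "Effect": "Allow",
            "Principal": {"AWS": SOURCE_ROLE_ARN},
            "Action": ["s3:GetBucketVersioning", "s3:PutBucketVersioning"],
            "Resource": f"arn:aws:s3:::{DEST_BUCKET}",
        },
    ],
}

print(json.dumps(destination_bucket_policy, indent=2))
```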
We now have everything required for replication. Upload a file into the source bucket and wait for it to appear in the destination bucket. Most objects replicate within 15 minutes.
B. What happens in the destination bucket when we delete an object from the source bucket?
Try it before reading the next part! Don't forget to toggle on "Show versions"!
When versioning is enabled on a bucket and we delete an object, the object isn't deleted permanently. Instead, a delete marker (a placeholder) is added as the current version of the object. The delete marker makes Amazon S3 behave as if the object has been deleted.
Amazon S3 does not replicate delete markers by default. However, you can add delete marker replication to non-tag-based rules. Delete marker replication applies only to delete markers created after it is enabled. Depending on your use case and your goal for the replication, you can choose to replicate delete markers or not.
In our case, the delete marker is added in the source bucket but not replicated to the destination bucket. Let's change our configuration to make that happen.
Update the replication rule in the source account to enable delete marker replication. That's all you need to do.
Now, when you delete an object from the source account, the delete marker is added to the source bucket and replicated to the destination bucket.
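In terms of the replication configuration document, enabling delete marker replication amounts to setting `DeleteMarkerReplication` to `Enabled` on a rule with a non-tag-based filter. A sketch:

```python
import json

# Fragment of a replication rule: a non-tag-based filter (prefix only)
# plus delete marker replication turned on.
rule_update = {
    "Filter": {"Prefix": ""},                          # non-tag-based filter
    "DeleteMarkerReplication": {"Status": "Enabled"},  # replicate delete markers
}

print(json.dumps(rule_update, indent=2))
```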
Remark!
If you delete a specific object version, Amazon S3 permanently deletes that version in the source bucket, but it doesn't replicate the deletion to the destination bucket, even when delete marker replication is enabled. This protects data from malicious deletions.
C. How can I update our configuration to enable the replication of KMS-encrypted objects across accounts (same region)?
AWS Key Management Service (AWS KMS) lets you create, manage, and control cryptographic keys across your applications and AWS services.
When we need to replicate encrypted objects, we need a KMS key in each of the source and destination accounts, and there are more permissions to set up than for unencrypted objects.
The S3 role must now also be allowed to decrypt objects in the source bucket using the source account's KMS key, and to encrypt them in the destination bucket using the destination account's KMS key.
Let's make the changes needed to replicate KMS-encrypted objects with different KMS keys in the two accounts!
In this hands-on, I created one KMS key in the source account and one in the destination account, but if you already have keys, you can use them and jump to step 3. Just pay attention to the key permissions!
Create a KMS key in the source account
Give an alias (name) for your key and keep all other options as default.
You may need to update the key policy. I kept the default key policy because I used an IAM user with AdministratorAccess for this hands-on, which allowed me to let S3 encrypt objects using the created KMS key. If you run into permission trouble in the following steps, check the key policy and your permissions.
Create a KMS key in the destination account.
Give an alias (name) for your key. You must authorize the source account (via the console or a JSON policy) to use it, and keep all other options as default.
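As an illustration, a statement you could add to the destination key's policy to let the source account's replication role use the key might look like this (the role ARN is a placeholder):

```python
import json

# Hypothetical ARN of the replication role in the source account.
SOURCE_ROLE_ARN = "arn:aws:iam::111111111111:role/s3-replication-role"

# Statement for the destination key's policy so the source account's
# replication role can encrypt replicas with this key.
allow_source_encrypt = {
    "Sid": "AllowReplicationRoleToEncrypt",
    "Effect": "Allow",
    "Principal": {"AWS": SOURCE_ROLE_ARN},
    "Action": ["kms:Encrypt", "kms:GenerateDataKey"],
    "Resource": "*",  # in a key policy, "*" means this key itself
}

print(json.dumps(allow_source_encrypt, indent=2))
```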
Update the source bucket's properties to enable default encryption with the source KMS key.
Alternatively, you can specify SSE-KMS encryption when uploading an object to the source bucket, without updating the bucket's default encryption.
Update the replication rule
Allow replicating the encrypted objects and encrypt them in the destination bucket using the destination key
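In the replication configuration document, this corresponds to two additions to the rule: `SourceSelectionCriteria` to opt in SSE-KMS-encrypted objects, and an `EncryptionConfiguration` naming the destination key. A sketch (the ARNs are placeholders):

```python
import json

# Hypothetical destination key and bucket ARNs.
DEST_KEY_ARN = (
    "arn:aws:kms:eu-west-1:222222222222:key/"
    "11111111-2222-3333-4444-555555555555"
)
DEST_BUCKET_ARN = "arn:aws:s3:::my-destination-bucket"

kms_rule_fragment = {
    "SourceSelectionCriteria": {
        # opt in objects encrypted with SSE-KMS
        "SseKmsEncryptedObjects": {"Status": "Enabled"}
    },
    "Destination": {
        "Bucket": DEST_BUCKET_ARN,
        # re-encrypt replicas with the destination account's key
        "EncryptionConfiguration": {"ReplicaKmsKeyID": DEST_KEY_ARN},
    },
}

print(json.dumps(kms_rule_fragment, indent=2))
```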
Update the S3 role
The role must have permission to decrypt with the source account's KMS key and to encrypt with the destination account's key. So, you need to add the corresponding permissions to the S3 role.
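A sketch of the extra statements for the S3 role's policy (the key ARNs are placeholders for your own keys):

```python
import json

# Hypothetical key ARNs in the source and destination accounts.
SOURCE_KEY_ARN = (
    "arn:aws:kms:eu-west-1:111111111111:key/"
    "aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee"
)
DEST_KEY_ARN = (
    "arn:aws:kms:eu-west-1:222222222222:key/"
    "11111111-2222-3333-4444-555555555555"
)

kms_permissions = {
    "Version": "2012-10-17",
    "Statement": [
        {   # decrypt source objects with the source account's key
            "Effect": "Allow",
            "Action": ["kms:Decrypt"],
            "Resource": SOURCE_KEY_ARN,
        },
        {   # re-encrypt replicas with the destination account's key
            "Effect": "Allow",
            "Action": ["kms:Encrypt", "kms:GenerateDataKey"],
            "Resource": DEST_KEY_ARN,
        },
    ],
}

print(json.dumps(kms_permissions, indent=2))
```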
Congratulations! You have successfully configured everything you need to replicate encrypted objects within the same Region across two accounts. You can verify the results!
(The image below is composed from four screenshots to show the final result)
Conclusion
S3 replication is a powerful feature that you can use when you need to replicate data between S3 buckets.
The hands-on I provided was done across different accounts within the same Region. As an exercise, work out what configuration changes are needed to replicate across different Regions, for both encrypted and non-encrypted objects.
Happy learning!