Moving data from Google Drive to an AWS S3 bucket
gerkibz
Posted on March 3, 2022
Introduction.
Recent improvements in the cloud services sector have made data storage and manipulation cheaper, faster, and easier. Even so, uploading data to some of these platforms can take time if the files are not compressed before uploading.
A good example: uploading data to AWS can be about 3 times slower than uploading the same data to a Google Drive folder, because Google Drive compresses the data beforehand.
At times you may want to upload data to a site or model hosted on AWS from local storage or from Google Drive.
In this tutorial I'll take you through uploading to Google Drive from local storage, uploading to AWS S3 from local storage, and sharing files between AWS S3 and Google Drive.
Setup.
Install rclone
1.1 Head over to rclone to download the installer.
1.2 To install rclone on Linux/macOS/BSD systems, run:
curl https://rclone.org/install.sh | sudo bash
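To confirm the installation worked, print the installed version:
rclone version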
Set up Google Drive
run:
rclone config
Choose option n: New remote.
Enter remote name: google-drive
For storage option choose 16: Google Drive (the option numbers can differ between rclone versions).
Leave client_id and client_secret blank to use rclone's default application credentials.
For drive access choose scope 1: full access to all files for now; you can edit this later.
Leave root_folder_id blank to use the root of the drive.
Leave service_account_file blank and choose n: No for the edit advanced config option.
Choose y: Yes for the use auto config option; this will open a link in your default browser, prompting you to sign in to your Google Drive account and grant rclone access. Quit the config.
To test whether the connection was successful, run:
rclone lsd remote:
In my case I ran
rclone lsd google-drive:
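If you prefer to skip the interactive prompts, rclone can also create the remote in a single command; a minimal sketch (config key names may differ slightly between rclone versions):
rclone config create google-drive drive scope drive
- creates a Google Drive remote named google-drive; rclone will still open the browser once so you can authorize the token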
Set up AWS S3
run:
rclone config
Choose option n: New remote.
Enter remote name: aws-s3
For storage option choose 4: Amazon S3 Compliant Storage Providers including AWS, Alibaba, Ceph, Digital Ocean, Dreamhost, IBM COS, Minio, SeaweedFS, and Tencent COS.
For your S3 provider, pick option 1: Amazon Web Services (AWS) S3.
For the option to get AWS credentials from the runtime environment, pick 1: Enter AWS credentials in the next step.
Navigate to the AWS IAM console to create a user with programmatic access and obtain the access key ID and secret access key to enter below.
Enter the access_key_id and secret_access_key for the user created above.
Choose the region your bucket is located in; in my case, option 11: eu-central-1.
For the endpoint for the S3 API, pick the option matching your bucket region; in my case, option 11: EU region.
For the canned ACL used when creating buckets and/or storing objects in S3, choose option 1: Owner gets FULL_CONTROL. No one else has access rights (default).
Choose option 1 (none) for the server side encryption.
Choose the default storage class, option 1, then quit the config.
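As with the drive remote, the S3 remote can be created non-interactively; a minimal sketch with placeholder credentials (replace YOUR_KEY_ID and YOUR_SECRET with the values from your IAM user):
rclone config create aws-s3 s3 provider AWS access_key_id YOUR_KEY_ID secret_access_key YOUR_SECRET region eu-central-1
- creates an S3 remote named aws-s3; verify it with rclone lsd aws-s3: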
Running the commands.
Local storage to Google Drive
rclone copy /local/path google-drive:path
- copies /local/path to the google-drive remote
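Before running a large copy it's worth previewing it; --dry-run shows what would be transferred without copying anything, and --progress prints live transfer stats:
rclone copy /local/path google-drive:path --progress --dry-run
- previews the transfer; drop --dry-run to actually copy the files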
Local storage to AWS S3
rclone copy /local/path aws-s3:path
- copies /local/path to the aws-s3 remote
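Once the copy finishes, you can verify that both sides match:
rclone check /local/path aws-s3:path
- compares the files in the source and destination and reports any that differ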
Google Drive to S3
rclone copy google-drive:path aws-s3:path
- copies the path on the google-drive remote to the aws-s3 remote
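Note that rclone streams the data through the machine it runs on; there is no server-side copy between Google Drive and S3, so throughput depends on that machine's bandwidth. Raising the number of parallel transfers can speed things up:
rclone copy google-drive:path aws-s3:path --transfers 8 --progress
- runs 8 file transfers in parallel and shows progress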