Adrien.S
Posted on November 3, 2020
I recently stumbled upon MinIO while reading the Shrine docs when setting up tests related to file uploads.
One could say MinIO is a self-hosted S3 object storage. It can be used on production systems as an alternative to Amazon S3 (or other providers) to store objects.
Another interesting use is in development and test environments when you already use a cloud provider in production. This lets you test end-to-end file operations extensively without having to mock operations or network queries.
In my case, I use Scaleway Object Storage (S3-compatible) with the Shrine gem. The configuration looks like this:
require "shrine"
require "shrine/storage/s3"

s3_options = {
  bucket: ENV['S3_BUCKET'],
  access_key_id: ENV['S3_KEY_ID'],
  secret_access_key: ENV['S3_SECRET_KEY'],
  region: ENV['S3_REGION'],
  endpoint: ENV['S3_ENDPOINT'],
  # Important for MinIO: use path-style URLs (http://host:9000/bucket/key)
  # instead of virtual-hosted ones (http://bucket.host:9000/key)
  force_path_style: true
}

Shrine.storages = {
  cache: Shrine::Storage::S3.new(prefix: "cache", **s3_options),
  store: Shrine::Storage::S3.new(prefix: "invoices", **s3_options)
}
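To make sure everything is wired up, a quick sanity check with a throwaway uploader works well. This is just a sketch: the InvoiceUploader class and the invoice.pdf file are placeholders, not part of the original setup.

# Hypothetical uploader relying on the storages configured above
class InvoiceUploader < Shrine
end

# Upload straight to the :store storage (the "invoices" prefix)
uploaded_file = InvoiceUploader.upload(File.open("invoice.pdf", "rb"), :store)
uploaded_file.id  # => the generated object key within the storage
uploaded_file.url # => a full URL on Scaleway, or on MinIO in dev/test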
Installing MinIO
I use docker-compose for my database and Redis, so adding MinIO was a breeze.
First, add a dedicated volume:
volumes:
  pg:
  redis:
  minio:
Then add the service:
minio:
  restart: always
  image: minio/minio:latest
  ports:
    - "9000:9000"
  volumes:
    - minio:/data
  entrypoint: minio server /data
If you're not using Docker, you can find instructions for installing MinIO locally in the docs.
There we go: we can browse to http://localhost:9000/ to make sure our MinIO instance is running, and create our bucket.
Integration in dev
In the development environment, all I had to do was update my env variables for MinIO. If you keep the default credentials, it looks like this:
S3_KEY_ID=minioadmin
S3_SECRET_KEY=minioadmin
S3_BUCKET=bucket-name
S3_ENDPOINT=http://localhost:9000
# MinIO doesn't enforce regions, but the aws-sdk client still expects one
S3_REGION=us-east-1
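If you'd rather create the bucket from code than through the web UI, Shrine's S3 storage exposes the underlying Aws::S3::Bucket, so something like this in a console should do (assuming the aws-sdk-s3 gem, which Shrine's S3 storage already depends on):

bucket = Shrine.storages[:store].bucket
bucket.create unless bucket.exists?
bucket.url # => "http://localhost:9000/bucket-name" when pointing at MinIO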
If you use direct upload, you may need to tweak the JavaScript code that extracts the path of the uploaded object, to make sure it works with both MinIO and whatever cloud provider you use, since the addresses may be formatted differently.
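To illustrate the difference in Ruby (the same logic applies on the JavaScript side): MinIO with force_path_style serves path-style URLs where the bucket is part of the path, while most cloud providers use virtual-hosted URLs where the bucket is part of the hostname. A hypothetical helper normalizing both might look like this:

require "uri"

# Hypothetical helper: extract the object key from an upload URL.
# Path-style (MinIO):     http://localhost:9000/bucket-name/cache/abc123
# Virtual-hosted (cloud): https://bucket-name.s3.fr-par.scw.cloud/cache/abc123
def object_key(url, bucket)
  path = URI.parse(url).path.delete_prefix("/")
  path.delete_prefix("#{bucket}/") # the bucket only appears in path-style URLs
end

object_key("http://localhost:9000/bucket-name/cache/abc123", "bucket-name")
# => "cache/abc123"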
Integration in test
The same goes for the environment variables in the test environment. You will also need to create a bucket for your test env.
This works, but every new developer setting up their machine will have to do this task by hand, and it becomes tedious with a CI. Moreover, every upload performed by your tests makes the storage grow.
To avoid these issues, we can programmatically create and clear the storages before and after the test suite.
Here it is with Shrine, but you can adapt this code to your favorite adapter instead:
# Guard: only touch the bucket when the endpoint points at our local MinIO
if ENV['S3_ENDPOINT'].to_s.include?('localhost')
  RSpec.configure do |config|
    config.before(:all) do
      # Create the test bucket on first run
      Shrine.storages[:store].bucket.create unless Shrine.storages[:store].bucket.exists?
    end

    config.after(:all) do
      # Remove every object uploaded during the tests
      Shrine.storages[:cache].clear!
      Shrine.storages[:store].clear!
    end
  end
else
  puts("It doesn't seem like S3 is mocked in test, skipping auto clearing of bucket")
end
As you can see, I'm a bit paranoid about clearing the wrong bucket, so I added a guard to make sure we only clear an endpoint containing localhost.
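Note that before(:all) and after(:all) hooks run once per example group; if you'd rather create and clear the bucket a single time for the whole run, RSpec's before(:suite) and after(:suite) hooks can be used the same way.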
Thanks for reading!