Using Amazon EFS file system as a persistent volume for an EKS cluster

k5trismegistus

Keigo Yamamoto

Posted on September 23, 2022


This post was moved from Medium. I wrote this in 2019, so content may be out of date.

Introduction

I have been struggling with EKS / Kubernetes for the past several months.

When an application is containerized, it is best practice to keep the container as stateless as possible and store state in a database or object storage. Still, there are cases where you want to save files on a file system.

In such cases, you can mount a durable file system into a Pod using Kubernetes' Persistent Volumes feature.
However, you have to prepare the file system to be mounted yourself.

There are several choices for what to mount, but an NFS file system seems the better option, because multiple Pods can read and write it at the same time and share state.

Therefore, I tried to use an Amazon EFS file system, which is a managed NFS file system, as a persistent volume for an EKS cluster.
Since EKS and EFS are both AWS managed services, I expected this to be easy. However, I had to spend a whole day to achieve it. That is why I wrote this article; I hope it helps you.

Steps

Create an EFS file system

Create an EFS file system from the AWS console.

At this time, add mount targets in all subnets where the EKS worker nodes exist, and set up security groups so that the worker nodes can reach them.

https://aws.amazon.com/jp/about-aws/whats-new/2019/02/deploy-a-kubernetes-cluster-using-amazon-eks-with-new-quick-start/

If you used this Quick Start, you will have a security group named xxx-NodeSecurityGroup-yyy.
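If you prefer to define this as code instead of clicking through the console, the same setup can be sketched in CloudFormation (the Quick Start linked above is itself CloudFormation-based). All IDs below are placeholders; the point is simply that each mount target sits in a worker-node subnet and its security group allows NFS (TCP port 2049) from the node security group.

Resources:
  EfsSecurityGroup:
    Type: AWS::EC2::SecurityGroup
    Properties:
      GroupDescription: Allow NFS from EKS worker nodes
      VpcId: vpc-12345678 # placeholder: the VPC of the EKS cluster
      SecurityGroupIngress:
        - IpProtocol: tcp
          FromPort: 2049 # NFS
          ToPort: 2049
          SourceSecurityGroupId: sg-0123456789abcdef0 # xxx-NodeSecurityGroup-yyy
  MountTargetA:
    Type: AWS::EFS::MountTarget
    Properties:
      FileSystemId: fs-12345678 # placeholder EFS file system ID
      SubnetId: subnet-0aaa1111 # repeat this resource for each worker-node subnet
      SecurityGroups:
        - !Ref EfsSecurityGroup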

Worker node IAM role

Add a policy for the EFS file system created in step 1 to the IAM role of the EKS worker nodes. With the Quick Start, this role is named xxx-NodeInstanceRole-yyy.

Granting this role "list, read, write" permissions on the EFS file system created earlier is enough.

efs-provisioner settings

To use EFS as a Kubernetes persistent volume, you need to use efs-provisioner.

https://github.com/kubernetes-retired/external-storage/tree/master/aws/efs

However, there is a trap: this does not work if you simply follow the README.
This is because the sample manifests in the repository are incorrect. 😡

I found a solution in this issue.

https://github.com/kubernetes-incubator/external-storage/issues/1209

If you just want to mount an EFS file system in your Pods, you have to apply two YAML files:

https://github.com/kubernetes-incubator/external-storage/blob/master/aws/efs/deploy/rbac.yaml

and
https://github.com/kubernetes-incubator/external-storage/blob/master/aws/efs/deploy/manifest.yaml

Both need to be fixed.

First, rbac.yaml. It defines both a Role and a ClusterRole, but I unified them into a single ClusterRole. You also need to add a definition of the ServiceAccount.

---

kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: efs-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["list", "watch", "create", "update", "patch"]
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]

---

kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-efs-provisioner
subjects:
  - kind: ServiceAccount
    name: efs-provisioner
    namespace: development # Set namespace
roleRef:
  kind: ClusterRole
  name: efs-provisioner-runner
  apiGroup: rbac.authorization.k8s.io

---

apiVersion: v1
kind: ServiceAccount
metadata:
  namespace: development # Set namespace
  name: efs-provisioner
Next, manifest.yaml. Set the namespace and make sure the Deployment uses the ServiceAccount defined above; the Deployment also needs the usual template structure:

kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  namespace: development # Set namespace
  name: efs-provisioner
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: efs-provisioner
    spec:
      serviceAccount: efs-provisioner
      containers:
        - name: efs-provisioner
          image: quay.io/external_storage/efs-provisioner:latest
          # the env section (file system ID, region, provisioner name) and the
          # NFS volume from the upstream manifest.yaml continue here
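manifest.yaml in the repository also contains a ConfigMap that tells efs-provisioner which file system to manage, and a StorageClass whose provisioner field must match the provisioner name from that ConfigMap. A minimal sketch, assuming a file system fs-12345678 in us-east-1 and the provisioner name example.com/aws-efs (replace these with your own values):

apiVersion: v1
kind: ConfigMap
metadata:
  namespace: development # Set namespace
  name: efs-provisioner
data:
  file.system.id: fs-12345678 # your EFS file system ID
  aws.region: us-east-1 # region where the file system lives
  provisioner.name: example.com/aws-efs

---

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: aws-efs
provisioner: example.com/aws-efs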

Mount EFS filesystem

Then, you can use the EFS file system from your Pods.
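The test Pod below mounts a PersistentVolumeClaim named efs, so that claim has to exist first. A minimal sketch against the aws-efs StorageClass from the previous section:

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  namespace: development # Set namespace
  name: efs
spec:
  storageClassName: aws-efs
  accessModes:
    - ReadWriteMany # NFS allows multiple Pods to mount the same claim read-write
  resources:
    requests:
      storage: 1Mi # EFS is elastic, so the requested size is effectively a placeholder

Once the claim is bound, a test Pod can mount it: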

kind: Pod
apiVersion: v1
metadata:
  name: test-pod
spec:
  containers:
  - name: test-pod
    image: gcr.io/google_containers/busybox:1.24
    command:
      - "/bin/sh"
    args:
      - "-c"
      - "touch /mnt/SUCCESS && exit 0 || exit 1"
    volumeMounts:
      - name: efs-pvc
        mountPath: "/mnt"
  restartPolicy: "Never"
  volumes:
    - name: efs-pvc
      persistentVolumeClaim:
        claimName: efs

Directory splitting

If you launch an EC2 instance and mount the EFS file system directly, you can see that a separate directory is created for each PersistentVolumeClaim, and each Pod can only see the contents of its own directory.
So, if you want to use one EFS file system for multiple Pods, you can define multiple PersistentVolumeClaims sharing one StorageClass, as in the example below.
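For example, a second claim against the same StorageClass (the name app2-data is just an illustration) gets its own directory on the same file system:

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  namespace: development # Set namespace
  name: app2-data # hypothetical second claim
spec:
  storageClassName: aws-efs
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Mi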

Conclusion

Now the EFS file system can be used as a persistent volume for Pods running on an EKS cluster.
Since EKS and EFS are both AWS products, I assumed that combining them would be a common use case, but unexpectedly I could not find good information about it. I hope this article is helpful.
