Using Kubernetes with NFS Storage
upinder sujlana
Posted on January 27, 2020
In this article, I want to show how a Kubernetes cluster can use
an external NFS server for storage.
The code for this article is here:
https://github.com/upinder-sujlana/K8S-Volumes/blob/master/README.md
Topology
--------------
kmaster - 192.168.1.80
knode1 - 192.168.1.81
knode2 - 192.168.1.82
The three nodes form a K8S cluster:
kmaster@kmaster:~$ kubectl get nodes
NAME      STATUS   ROLES    AGE    VERSION
kmaster   Ready    master   233d   v1.14.2
knode1    Ready    <none>   233d   v1.14.2
knode2    Ready    <none>   233d   v1.14.2
kmaster@kmaster:~$
The OS details are the same on all three nodes:
kmaster@kmaster:~$ lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 18.04 LTS
Release: 18.04
Codename: bionic
kmaster@kmaster:~$
Additionally, I have a 4th node outside the cluster, but on the same
LAN, that I am using as an NFS server:
minikube - 192.168.1.85 (NFS Server running here)
On the NFS server, I have exposed three directories to the above
cluster (permit all); the directory names are gold, silver, and bronze.
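The NFS server setup itself is out of scope for this article, but on Ubuntu it can be sketched roughly as follows. The gold path matches the PV definition below; the silver/bronze paths and the export options (`rw,sync,no_subtree_check` with a world-open `*` client spec, i.e. "permit all") are my assumptions, so adjust them to your environment:

```shell
# On the NFS server (192.168.1.85):
sudo apt-get install -y nfs-kernel-server
mkdir -p /home/minikube/NFSShare/gold /home/minikube/NFSShare/silver /home/minikube/NFSShare/bronze

# Export the three directories to everyone (options are assumed, not from the article)
cat <<'EOF' | sudo tee -a /etc/exports
/home/minikube/NFSShare/gold   *(rw,sync,no_subtree_check)
/home/minikube/NFSShare/silver *(rw,sync,no_subtree_check)
/home/minikube/NFSShare/bronze *(rw,sync,no_subtree_check)
EOF
sudo exportfs -ra

# Each cluster node needs the NFS client utilities to mount the share:
sudo apt-get install -y nfs-common
```

Without `nfs-common` on the worker nodes, the kubelet cannot mount the share and the pod will stay in ContainerCreating.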
Time to create a PersistentVolume.
kmaster@kmaster:~/dockerimagemaker/NFS$ cat nfs-pv.yml
apiVersion: v1
kind: PersistentVolume
metadata:
  # any PV name
  name: nfs-pv
  labels:
    volume: nfs-pv-volume
spec:
  capacity:
    # storage size
    storage: 5Gi
  accessModes:
    # ReadWriteMany (RW from multiple nodes), ReadWriteOnce (RW from one node), ReadOnlyMany (R from multiple nodes)
    - ReadWriteMany
  # Retain keeps the volume (and its data) even after the claim is deleted
  persistentVolumeReclaimPolicy: Retain
  nfs:
    # the NFS server's definition
    path: /home/minikube/NFSShare/gold
    server: 192.168.1.85
    readOnly: false
kmaster@kmaster:~/dockerimagemaker/NFS$
kmaster@kmaster:~/dockerimagemaker/NFS$ kubectl create -f nfs-pv.yml
persistentvolume/nfs-pv created
kmaster@kmaster:~/dockerimagemaker/NFS$
kmaster@kmaster:~/dockerimagemaker/NFS$ kubectl get pv --show-labels -o wide
NAME     CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE    LABELS
nfs-pv   5Gi        RWX            Retain           Available                                   100s   volume=nfs-pv-volume
kmaster@kmaster:~/dockerimagemaker/NFS$
kmaster@kmaster:~/dockerimagemaker/NFS$ kubectl describe pv nfs-pv
Name: nfs-pv
Labels: volume=nfs-pv-volume
Annotations: <none>
Finalizers: [kubernetes.io/pv-protection]
StorageClass:
Status: Available
Claim:
Reclaim Policy: Retain
Access Modes: RWX
VolumeMode: Filesystem
Capacity: 5Gi
Node Affinity: <none>
Message:
Source:
Type: NFS (an NFS mount that lasts the lifetime of a pod)
Server: 192.168.1.85
Path: /home/minikube/NFSShare/gold
ReadOnly: false
Events: <none>
kmaster@kmaster:~/dockerimagemaker/NFS$
Next, create a PersistentVolumeClaim.
kmaster@kmaster:~/dockerimagemaker/NFS$ cat nfs-pvc.yml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  # any PVC name
  name: nfs-pvc
spec:
  selector:
    matchLabels:
      volume: nfs-pv-volume
  accessModes:
    # ReadWriteMany (RW from multiple nodes), ReadWriteOnce (RW from one node), ReadOnlyMany (R from multiple nodes)
    - ReadWriteMany
  resources:
    requests:
      # storage size to request
      storage: 1Gi
kmaster@kmaster:~/dockerimagemaker/NFS$
kmaster@kmaster:~/dockerimagemaker/NFS$ kubectl create -f nfs-pvc.yml
persistentvolumeclaim/nfs-pvc created
kmaster@kmaster:~/dockerimagemaker/NFS$
kmaster@kmaster:~/dockerimagemaker/NFS$ kubectl get pvc --show-labels
NAME      STATUS   VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE   LABELS
nfs-pvc   Bound    nfs-pv   5Gi        RWX                           32s   <none>
kmaster@kmaster:~/dockerimagemaker/NFS$
Note that although the claim requested only 1Gi, the reported capacity is 5Gi: a PVC binds to the entire PV, so the claim gets the whole volume.
kmaster@kmaster:~/dockerimagemaker/NFS$ kubectl describe pvc nfs-pvc
Name: nfs-pvc
Namespace: default
StorageClass:
Status: Bound
Volume: nfs-pv
Labels: <none>
Annotations: pv.kubernetes.io/bind-completed: yes
pv.kubernetes.io/bound-by-controller: yes
Finalizers: [kubernetes.io/pvc-protection]
Capacity: 5Gi
Access Modes: RWX
VolumeMode: Filesystem
Events: <none>
Mounted By: <none>
kmaster@kmaster:~/dockerimagemaker/NFS$
Time to test: I am going to create a test deployment (busybox) and
see if it works. The pod will mount the gold directory at its /tmp
folder, and I will simply append the output of the date command to a file there.
kmaster@kmaster:~/dockerimagemaker/NFS$ cat nfstester.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfstester
  labels:
    type: nfstester
spec:
  replicas: 1
  selector:
    matchLabels:
      type: nfstester
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        type: nfstester
    spec:
      volumes:
        - name: nfstester
          persistentVolumeClaim:
            claimName: nfs-pvc
      containers:
        - name: nfstester
          image: busybox
          command: ['sh', '-c', 'while true; do date; sleep 10; done >> /tmp/hellopod.txt']
          volumeMounts:
            - name: nfstester
              mountPath: /tmp
kmaster@kmaster:~/dockerimagemaker/NFS$
All this test pod does is wake up every 10 seconds and append the output of "date" to the mounted NFS share.
kmaster@kmaster:~$ kubectl create -f nfstester.yml
deployment.apps/nfstester created
kmaster@kmaster:~$
Then I went to the NFS server's directory and started a tail on the newly created file:
minikube@ubuntu:~/NFSShare/gold$ tail -f hellopod.txt
Wed Jan 15 20:50:56 UTC 2020
Wed Jan 15 20:51:06 UTC 2020
Wed Jan 15 20:51:16 UTC 2020
Wed Jan 15 20:51:26 UTC 2020
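You can also verify the same file from inside the cluster. A sketch (the label selector matches the deployment above; the jsonpath lookup is just one way to grab the generated pod name):

```shell
# Find the pod created by the nfstester deployment and read the file it writes
POD=$(kubectl get pods -l type=nfstester -o jsonpath='{.items[0].metadata.name}')
kubectl exec "$POD" -- tail -n 3 /tmp/hellopod.txt
```

If the timestamps here match what `tail -f` shows on the NFS server, the share is wired up correctly.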
Back on the cluster, the PVC and PV both show as Bound:
kmaster@kmaster:~/dockerimagemaker/NFS$ kubectl get pvc
NAME      STATUS   VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS         AGE
nfs-pvc   Bound    nfs-pv   5Gi        RWX            manualstorageclass   24d
kmaster@kmaster:~/dockerimagemaker/NFS$
kmaster@kmaster:~/dockerimagemaker/NFS$ kubectl get pv
NAME     CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM             STORAGECLASS         REASON   AGE
nfs-pv   5Gi        RWX            Retain           Bound    default/nfs-pvc   manualstorageclass   24d
kmaster@kmaster:~/dockerimagemaker/NFS$
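One last note on cleanup: because the reclaim policy is Retain, deleting the claim does not scrub the share. The PV moves to a Released state and the files stay on the NFS server. A teardown sketch, assuming the names used in this article:

```shell
kubectl delete deployment nfstester
kubectl delete pvc nfs-pvc
kubectl get pv nfs-pv      # STATUS is now Released, not Available
kubectl delete pv nfs-pv   # hellopod.txt remains under NFSShare/gold on the server
```

A Released PV is not automatically reusable; delete and recreate it (or clear its claimRef) before binding a new claim.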