Configuring GlusterFS in Linux Server

Waji (waji97)

Posted on February 15, 2023

Introduction

According to the Gluster documentation, Gluster is a scalable, distributed file system that aggregates disk storage resources from multiple servers into a single global namespace.

šŸ’” More documentation on Gluster can be found here

We need at least 3 nodes for this configuration. Two of them will work as the gluster servers and one as the client.

šŸ‘‰ I have used 3 nodes myself for the GlusterFS configuration. All of these nodes are virtual machines running CentOS 7 inside VMware Workstation.

Creating and formatting partition

šŸ‘‰ I have added an extra 1GB HDD to both server-side Linux systems so that I can create partitions for testing Gluster.

Using fdisk, I created 4 partitions on the extra disk of each server

fdisk -l /dev/sdb

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1            2048      514047      256000   83  Linux
/dev/sdb2          514048     1026047      256000   83  Linux
/dev/sdb3         1026048     1538047      256000   83  Linux
/dev/sdb4         1538048     2097151      279552   83  Linux

šŸ’” It is up to you how many partitions you want for this hands-on. I needed a few more as I had some other testing to do.
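šŸ’” If you'd rather script the partitioning than step through the fdisk prompts, something along these lines with parted should give a similar layout (a sketch, assuming /dev/sdb is the spare 1GB disk and you want four primary partitions):

parted -s /dev/sdb mklabel msdos
parted -s /dev/sdb mkpart primary 1MiB 251MiB
parted -s /dev/sdb mkpart primary 251MiB 501MiB
parted -s /dev/sdb mkpart primary 501MiB 751MiB
parted -s /dev/sdb mkpart primary 751MiB 100%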

Now, we will format the partition with XFS on both servers

mkfs.xfs /dev/sdb1
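Since I created four partitions, a small loop saves some typing (assuming all of /dev/sdb1 through /dev/sdb4 should be XFS):

for part in /dev/sdb{1..4}; do
    mkfs.xfs -f "$part"   # -f overwrites any existing filesystem signature
done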

After a successful format, we need a directory where we can mount this partition

mkdir /gluster1

# Mounting our partition
mount /dev/sdb1 /gluster1

# Setting up auto-mount
echo '/dev/sdb1 /gluster1 xfs defaults 0 0' >> /etc/fstab
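Before moving on, it is worth verifying that the fstab entry actually mounts cleanly:

umount /gluster1
mount -a          # remounts everything listed in /etc/fstab
df -hT /gluster1  # should show an xfs filesystem on /gluster1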

Installing the GlusterFS server package

We can directly use

yum -y install glusterfs-server

šŸ‘‰ If yum isn't able to find this package, run yum -y install centos-release-gluster first to add the Gluster repository, then retry the command above

Now we can just start the service

systemctl start glusterd
systemctl enable glusterd

Confirming the status

systemctl status glusterd
ā— glusterd.service - GlusterFS, a clustered file-system server
   Loaded: loaded (/usr/lib/systemd/system/glusterd.service; disabled; vendor preset: disabled)
   Active: active (running) since Wed 2023-02-15 10:09:19 KST; 12h ago

Firewall settings

We can either turn off the firewall service

systemctl stop firewalld
systemctl disable firewalld

Or we can just add the glusterfs service

firewall-cmd --permanent --add-service=glusterfs
success

# Or add the port number (24007 is the default but we can add 24008 or 24009 as well)
firewall-cmd --permanent --add-port=24007/tcp 
success

firewall-cmd --reload
success

šŸ‘‰ I strongly recommend adding the service instead of disabling the firewall, especially if you aren't in a testing environment
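šŸ’” If you go the port route, keep in mind that 24007/tcp only covers the glusterd management daemon; each brick process listens on its own port (49152 and up on recent GlusterFS releases), so you may also need to open a brick port range, for example:

firewall-cmd --permanent --add-port=49152-49200/tcp
firewall-cmd --reload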


Configuring the peer pool

We can use hostnames to probe the other server, but I will be using the IP address to probe the second server

šŸ‘‰ When probing by hostname, it is advised to probe server1 back from server2 (and from server3 up to the nth server, depending on the cluster size) so that every peer knows server1 by hostname; see the sketch after the probe command below

Probing from the first server

gluster peer probe <Your-Second-Server-IP>
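šŸ’” If you prefer hostnames, a sketch assuming gluster1 and gluster2 are resolvable on every node (for example via /etc/hosts) would look like this; note the probe back so the first server is also recorded by hostname:

# On the first server
gluster peer probe gluster2

# On the second server, probe back so server1 is stored by hostname
gluster peer probe gluster1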

Checking the peer status

gluster peer status
Number of Peers: 1
.
.
State: Peer in Cluster (Connected)

If we check the status from the second server, we should see a matching entry for the first server


Setting up a GlusterFS volume

A GlusterFS volume is made up of 'bricks'. Basically, a brick is simply a directory on a server's filesystem that is exported as part of a GlusterFS volume

Before creating the volume, we need a brick directory: one inside /gluster1 on the first server and one inside /gluster2 on the second server. These directories will hold the volume's data

On both of the servers,

# In Linux 1
mkdir /gluster1/gv0

# In Linux 2
mkdir /gluster2/gv0

Now from any server,

gluster volume create gv0 <Your-First-Server-IP>:/gluster1/gv0 <Your-Second-Server-IP>:/gluster2/gv0

Upon success, we should be able to see

volume create: gv0: success: please start the volume to access data

Starting the volume

gluster volume start gv0
volume start: gv0: success

We can check the information of our gluster volume

gluster volume info gv0
Volume Name: gv0
Type: Distribute
Volume ID: 42307952-b960-4f9d-85b5-00d8bbed7acf
Status: Started
Snapshot Count: 0
Number of Bricks: 2
Transport-type: tcp
Bricks:
Brick1: <Your-First-Server-IP>:/gluster1/gv0
Brick2: <Your-Second-Server-IP>:/gluster2/gv0

šŸ‘‰ We can check the Gluster log files, such as /var/log/glusterfs/glusterd.log, to troubleshoot any issues
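We can also quickly confirm that every brick process is online and see which ports the bricks are using:

gluster volume status gv0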


Testing the volume

On the client Linux machine, we need to install the GlusterFS client packages

# The centos-release-gluster repository may be needed here as well
yum -y install glusterfs glusterfs-fuse

After installation, we need to create a mount point on our client

mkdir /gluster

Now, we just need to mount the gluster volume

mount -t glusterfs <First/Second-server-IP>:/gv0 /gluster

You can check the mount status using df -h, and that's it!
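To see the distributed layout in action, create a handful of files from the client and then look inside the brick directories on the servers; the files should be spread roughly evenly across the two bricks:

# On the client
touch /gluster/file{1..10}

# On the first server
ls /gluster1/gv0

# On the second server
ls /gluster2/gv0

If you also want the client mount to survive reboots, an fstab entry along these lines should work (a sketch; backupvolfile-server is a native-client mount option that lets the client fetch the volume file from the other node if the first one is down):

echo '<Your-First-Server-IP>:/gv0 /gluster glusterfs defaults,_netdev,backupvolfile-server=<Your-Second-Server-IP> 0 0' >> /etc/fstab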


Conclusion

āœØ There are different types of volumes in GlusterFS. By default, when we create a volume without extra options, a distributed volume is created, as we did here. We can also choose between replicated, distributed replicated, dispersed, and distributed dispersed volumes; a replicated example follows below
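For example, a two-way replicated volume, where every file is stored on both bricks, could be created roughly like this (a sketch assuming the same two servers with fresh brick directories; Gluster will warn that replica 2 volumes are prone to split-brain and ask for confirmation):

# On the first server
mkdir /gluster1/rv0

# On the second server
mkdir /gluster2/rv0

# From either server
gluster volume create rv0 replica 2 <Your-First-Server-IP>:/gluster1/rv0 <Your-Second-Server-IP>:/gluster2/rv0
gluster volume start rv0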

For more details on the different types of Gluster volumes, do check out the architecture section of the Gluster documentation here
