
Deploying GlusterFS using Ansible

Over the last few weeks I've been playing with two new technologies and I thought it might be useful to mix them together: Ansible & GlusterFS.

The most annoying part of a GlusterFS engagement is the nodes' setup & configuration: RHN registration, subscription management, package installation, and so on.

Just think about repeating all these activities on multiple nodes: a real waste of time!

 

So I decided to start playing with Ansible to automate the whole process and take full advantage of this tool.

I'll describe two ways: a manual one, writing a custom playbook from scratch, and an automatic one, using gdeploy.

 

Please note: for all the tests I'm using Vagrant virtual machines and the Vagrant dynamic inventory script.

For more information you can refer to: Ansible and Vagrant: setup your testing lab in minutes!
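Before writing any playbook, a quick sanity check (just a suggestion, assuming the dynamic inventory is already in place) verifies that Ansible can reach all the Vagrant VMs:

[alex@freddy test_gluster]$ ansible vagrant -m ping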

 

Manual: Writing a custom playbook

Just for testing Ansible features I wrote two test playbooks: the first one registers the Vagrant nodes to RHN and updates them.

And then a second one that prepares the storage, installs the GlusterFS packages and configures the volume.

 

This is the first playbook, which we'll use to register the Vagrant instances to RHN and to update the systems.

-> Please don't forget to replace RHN_username with your own.

[alex@freddy test_gluster]$ cat init.yml

- hosts: vagrant

  vars:

        RHN_username: user@domain.com # Change this to your RHN username!

        RHN_poolname: "RH SKU"

  vars_prompt:

        - name: "RHN_password"

          prompt: Please input RHN password

  tasks:

        - name: Checking if it's already registered to RHN

          become: yes

          ignore_errors: True

          shell: "subscription-manager status"

          register: result

 

        - name: Subscribe to RHN

          become: yes

          redhat_subscription: username={{ RHN_username }} password={{ RHN_password }} pool={{ RHN_poolname }}

          when: result.rc|int > 0

 

        - name: Enable chosen repositories

          become: yes

          command: "subscription-manager repos --disable '*' --enable rhel-7-server-rpms --enable rhel-7-server-optional-rpms --enable rhel-7-server-extras-rpms --enable rh-gluster-3-for-rhel-7-server-rpms"

 

        - name: Update the system

          become: yes

          yum: state=latest name='*'

 

We are now ready to test the registration playbook.

Note: in this case I'll use the default Vagrant dynamic inventory script, configured in /etc/ansible/ansible.cfg.
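For reference, this is roughly how the relevant setting looks in /etc/ansible/ansible.cfg (the path to the vagrant.py dynamic inventory script is just an example, adjust it to your setup):

[defaults]
inventory = /path/to/vagrant.py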

[alex@freddy test_gluster]$ ansible-playbook init.yml

Please input RHN password:

 

PLAY [vagrant] *****************************************************************

 

TASK [setup] *******************************************************************

ok: [gluster3]

ok: [gluster4]

ok: [gluster2]

ok: [gluster1]

 

TASK [Checking if it's already registered to RHN] ******************************

fatal: [gluster3]: FAILED! => {"changed": true, "cmd": "subscription-manager status", "delta": "0:00:03.126296", "end": "2016-06-24 09:42:05.565593", "failed": true, "rc": 1, "start": "2016-06-24 09:42:02.439297", "stderr": "", "stdout": "+-------------------------------------------+\n   System Status Details\n+-------------------------------------------+\nOverall Status: Unknown", "stdout_lines": ["+-------------------------------------------+", "   System Status Details", "+-------------------------------------------+", "Overall Status: Unknown"], "warnings": []}

...ignoring

fatal: [gluster4]: FAILED! => {"changed": true, "cmd": "subscription-manager status", "delta": "0:00:03.431978", "end": "2016-06-24 09:42:05.685565", "failed": true, "rc": 1, "start": "2016-06-24 09:42:02.253587", "stderr": "", "stdout": "+-------------------------------------------+\n   System Status Details\n+-------------------------------------------+\nOverall Status: Unknown", "stdout_lines": ["+-------------------------------------------+", "   System Status Details", "+-------------------------------------------+", "Overall Status: Unknown"], "warnings": []}

...ignoring

fatal: [gluster1]: FAILED! => {"changed": true, "cmd": "subscription-manager status", "delta": "0:00:04.199137", "end": "2016-06-24 09:42:06.260388", "failed": true, "rc": 1, "start": "2016-06-24 09:42:02.061251", "stderr": "", "stdout": "+-------------------------------------------+\n   System Status Details\n+-------------------------------------------+\nOverall Status: Unknown", "stdout_lines": ["+-------------------------------------------+", "   System Status Details", "+-------------------------------------------+", "Overall Status: Unknown"], "warnings": []}

...ignoring

fatal: [gluster2]: FAILED! => {"changed": true, "cmd": "subscription-manager status", "delta": "0:00:05.895824", "end": "2016-06-24 09:42:08.195838", "failed": true, "rc": 1, "start": "2016-06-24 09:42:02.300014", "stderr": "", "stdout": "+-------------------------------------------+\n   System Status Details\n+-------------------------------------------+\nOverall Status: Unknown", "stdout_lines": ["+-------------------------------------------+", "   System Status Details", "+-------------------------------------------+", "Overall Status: Unknown"], "warnings": []}

...ignoring

 

TASK [Subscribe to RHN] ********************************************************

changed: [gluster1]

changed: [gluster3]

changed: [gluster2]

changed: [gluster4]

 

TASK [Enable chosen repositories] **********************************************

changed: [gluster2]

changed: [gluster4]

changed: [gluster3]

changed: [gluster1]

 

TASK [Update the system] *******************************************************

ok: [gluster2]

ok: [gluster3]

changed: [gluster1]

changed: [gluster4]

 

PLAY RECAP *********************************************************************

gluster1                   : ok=5    changed=3    unreachable=0    failed=0

gluster2                   : ok=5    changed=2    unreachable=0    failed=0

gluster3                   : ok=5    changed=2    unreachable=0    failed=0

gluster4                   : ok=5    changed=3    unreachable=0    failed=0
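Before moving on, you can double-check the registration on all the nodes with a simple ad-hoc command (just a quick verification, not part of the playbook):

[alex@freddy test_gluster]$ ansible vagrant -b -m command -a "subscription-manager status"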

 

We are now ready to set up our Gluster environment using the following playbook.

I've tried to give each task in the playbook a name that describes what it does:

[alex@freddy test_gluster]$ cat gluster.yml

---

- hosts: vagrant

  become: yes

 

  vars:

    config_lvm: true

    create: true

    create_vgname: vg_gluster

    create_lvname: lv_gluster

    create_lvsize: 100%FREE

    new_disk: /dev/vdb

    filesystem: xfs

    gluster_mount_dir: /mnt/gluster

    gluster_mount_brick_dir: /srv/gluster

    gluster_brick_dir: "/brick01"

    gluster_brick_name: gluster

 

  tasks:

 

    - name: installing system-storage-manager

      yum: name=system-storage-manager state=present

      when: config_lvm and ansible_os_family == "RedHat"

 

    - name: installing lvm2

      yum: name=lvm2 state=present

      when: config_lvm and ansible_os_family == "RedHat"

 

    - name: installing sg3_utils

      yum: name=sg3_utils state=present

      when: config_lvm and ansible_os_family == "RedHat"

 

    - name: creating new LVM volume group

      lvg: vg={{ create_vgname }} pvs={{ new_disk }} state=present

      when: create and config_lvm

 

    - name: checking if we need to create logical volume

      shell: lvs | grep -c {{ create_lvname }}

      ignore_errors: yes

      register: lv_result

      when: create and config_lvm

 

    - name: creating new LVM logical volume

      lvol: vg={{ create_vgname }} lv={{ create_lvname }} size={{ create_lvsize }}

      when: create and config_lvm and lv_result.stdout|int == 0

 

    - name: creating new filesystem on new LVM logical volume

      filesystem: fstype={{ filesystem }} dev=/dev/{{ create_vgname }}/{{ create_lvname }}

      when: create and config_lvm

 

    - name: Ensure GlusterFS is installed.

      yum:

        name: ""

        state: installed

      with_items:

        - glusterfs-server

        - glusterfs-client

 

    - name: Ensure GlusterFS daemon is enabled and running.

      service:

        name: ""

        state: started

        enabled: yes

      with_items:

        - glusterd

 

    - name: Ensure Gluster mount client and brick directories exist.

      file: "path= state=directory mode=0775"

      with_items:

        - ""

        - ""

 

    - name: mounting new filesystem

      mount: name={{ gluster_mount_brick_dir }} src=/dev/{{ create_vgname }}/{{ create_lvname }} fstype={{ filesystem }} state=mounted

      when: create and config_lvm

 

    - name: Ensure Gluster brick directory exists.

      file: "path= state=directory mode=0775"

      with_items:

        - ""

 

    - name: enabling firewall tcp communication for every gluster node

      firewalld: port={{ item }}/tcp permanent=True state=enabled immediate=True zone=public

      with_items: [ 111, 139, 445, 965, 2049, 24007, 24009, 38465, 38466, 38468, 38469, 39543, 49152, 55863 ]

 

    - name: enabling firewall udp communication for every gluster node

      firewalld: port={{ item }}/udp permanent=true state=enabled immediate=True zone=public

      with_items: [ 111, 963 ]

 

    - name: "Build hosts file"

      lineinfile: dest=/etc/hosts regexp='.*{{ item }}$' line="{{ hostvars[item].ansible_default_ipv4.address }} {{ item }}" state=present

      when: hostvars[item].ansible_default_ipv4.address is defined

      with_items: groups['all']

 

    - name: Configure Gluster volume.

      gluster_volume:

        state: present

        name: ""

        brick: ""

        replicas: 4

        cluster: ""

        host: ""

      run_once: true

 

    - name: Ensure Gluster volume is mounted.

      mount:

        name: ""

        src: ":/"

        fstype: glusterfs

        opts: "defaults,_netdev"

        state: mounted

 

Let's try it:

[alex@freddy test_gluster]$ ansible-playbook gluster.yml

 

PLAY [vagrant] *****************************************************************

 

TASK [setup] *******************************************************************

ok: [gluster2]

ok: [gluster3]

ok: [gluster1]

ok: [gluster4]

 

 

...

That's all!
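If you want to double-check the result, you can run a couple of quick ad-hoc commands against any of the nodes (gluster1 here) to see the volume status and the mounted filesystem:

[alex@freddy test_gluster]$ ansible gluster1 -b -m command -a "gluster volume info"

[alex@freddy test_gluster]$ ansible gluster1 -b -m command -a "df -hT /mnt/gluster"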

 

Automatic: Using gdeploy

Looking at the previous paragraph, you can imagine the effort involved in creating this kind of playbook: write, test, fix, rewrite, test, fix, and so on.

Luckily we have a very powerful tool from the GlusterFS project: gdeploy.

 

Just for quoting the gdeploy official description:

What is gdeploy?

 

gdeploy is a tool to set-up and deploy GlusterFS using ansible over multiple hosts. gdeploy is written to be modular, it can be used to deploy any software depending on how the configuration file is written.

 

gdeploy can be used to set-up bricks for GlusterFS, create a GlusterFS volume and mount it on one or more clients from an ansible installed machine. The framework reads a configuration file and applies on the hosts listed in the configuration file.

 

gdeploy can also be used directly from your notebook; it doesn't need to be installed on one of the Gluster nodes. It only depends on Ansible, but unfortunately it cannot be integrated with Ansible dynamic inventories.

For that reason you have to use _only_ its cluster configuration file, defining the inventory and all the parameters inside it.

Another annoying fact is that you cannot use Ansible's standard way of defining hosts: the gdeploy hosts section is just a list of addresses. This implies that you will be able to use _only_ the root user (through your default SSH key) to deploy GlusterFS with gdeploy.

 

For more information on gdeploy you can take a look at: GitHub - gluster/gdeploy: Tool to deploy glusterfs

 

The latest RPM version of gdeploy can be downloaded at: Index of /pub/gluster/gdeploy/LATEST

You can also use the community RPM version to deploy the Red Hat Gluster Storage packages on RHEL machines.
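Once downloaded, installing it is a one-liner; the exact RPM file name depends on the version you download:

[alex@freddy test_gluster]$ sudo yum install -y ./gdeploy-<version>.noarch.rpm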

 

Let's take a look at the cluster configuration file I've set up for configuring four Gluster VMs from scratch:

[alex@freddy test_gluster]$ cat 2x2-volume-create.conf

#

# Usage:

#       gdeploy -c 2x2-volume-create.conf

#

# This does backend setup first and then create the volume using the

# setup bricks.

#

#

 

 

[hosts]

192.168.121.84

192.168.121.89

192.168.121.231

192.168.121.77

 

[RH-subscription]

action=register

username=user@domain.com                   # You should change this

password=MY_PASSWORD                            # You should change this

pool=8a8f981407da00b997cb2     # You should change this

repos=rhel-7-server-rpms,rhel-7-server-optional-rpms,rhel-7-server-extras-rpms,rh-gluster-3-for-rhel-7-server-rpms

 

[yum]

action=install

packages=lvm2,glusterfs-server,glusterfs-client

 

[firewalld]

action=add

ports=111/tcp,139/tcp,445/tcp,965/tcp,2049/tcp,24007/tcp,24009/tcp,38465/tcp,38466/tcp,38468/tcp,38469/tcp,39543/tcp,49152/tcp,55863/tcp,111/udp,963/udp

permanent=true

zone=public

 

 

# Common backend setup for the hosts.

[backend-setup]

devices=vdb

vgs=vg_gluster

pools=pool_gluster

lvs=lv_gluster

mountpoints=/mnt/data

brick_dirs=/mnt/data/brick1

 

[volume]

action=create

volname=sample_gluster_vol

replica=yes

replica_count=2

force=yes

 

 

[clients]

action=mount

volname=sample_gluster_vol

hosts=192.168.121.77

fstype=glusterfs

client_mount_points=/mnt/mountGluster

 

As you can see, I started by defining the hosts involved in the installation/configuration process; after that I defined the RHN subscription details and the yum and firewalld configuration.

The backend-setup, volume and clients sections refer to the actual GlusterFS configuration: in this example I configure a volume with replica 2 across the four hosts and then use one of those hosts to mount the final GlusterFS volume.

 

The process is not particularly fast, but at least it's automated!

 

To test the configuration file after defining all the needed values, you can just run (assuming you installed gdeploy from the RPM):

[alex@freddy test_gluster]$ gdeploy -c 2x2-volume-create.conf

...

 

In my case, using Vagrant machines, I had to run it twice: lvm2 is not installed in my VMs and the yum install step is executed only after the LVM setup, which therefore fails on the first run because of the missing lvm2 package.

 

Please note: to quickly obtain the IP addresses from a multi-VM Vagrant setup, you can just run:

[alex@freddy test_gluster]$ vagrant ssh-config | grep HostName | egrep -o '[0-9.]+'

192.168.121.84

192.168.121.89

192.168.121.231

192.168.121.77

One more thing: as I said before, gdeploy expects to use the root user during the Ansible deployment, so you should deploy your default SSH key under root's home directory on every node to make gdeploy work properly.

You can do this by running a simple Ansible ad-hoc command like:

[alex@freddy test_gluster]$ ansible all -b -m authorized_key -a "user=root key=\"{{ lookup('file', '~/.ssh/id_rsa.pub') }}\""

gluster3 | SUCCESS => {

    "changed": true,

...
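With the key in place, a quick check (again, just a suggestion) confirms that Ansible can now log in as root on every node:

[alex@freddy test_gluster]$ ansible all -u root -m ping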

 

If everything worked fine, the run should end without any relevant errors, and connecting to the chosen client you should find your GlusterFS volume mounted and ready to be used!
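For example, a quick way to verify it from your workstation (192.168.121.77 is the client I chose in the configuration above, adjust it to yours):

[alex@freddy test_gluster]$ ssh root@192.168.121.77 df -hT /mnt/mountGluster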

 

References

For more information take a look at the following links.

  1. GitHub - gluster/gdeploy: Tool to deploy glusterfs
  2. gdeploy/examples at master · gluster/gdeploy · GitHub
  3. gluster_volume - Manage GlusterFS volumes — Ansible Documentation

If you have any comments or questions, please leave a comment!