Ansible and Vagrant: setup your testing lab in minutes!


Looking at the upcoming sunset of the OS1 platform, and moving around different customers' sites with all the network troubles you may expect, having a fast and reliable way to create a test lab can be very helpful.

As you know, Vagrant (from HashiCorp) has established itself as one of the best tools for handling VMs in development/testing environments.

In this document I'll describe the few steps necessary to enable the use of Ansible with a Vagrant box (also with multiple boxes).


Please note: I won't introduce Ansible provisioning via the Vagrantfile; instead I'll show the usage of Ansible's Vagrant Dynamic Inventory.


Prerequisites: Choose a Vagrant box

The first step is to choose the right Vagrant box. No matter what you're trying to set up/test/troubleshoot/configure, you have to use a valid, small, reliable Vagrant box.

Unfortunately we (at Red Hat) don't ship any RHEL Vagrant box (apart from the OSE all-in-one). For that matter you may use CentOS, freely available through the HashiCorp cloud store:

Vagrant box centos/7 | Atlas by HashiCorp


By the way, you may also use RHEL boxes; many colleagues (me included) have written articles on getting a RHEL Vagrant box set up in minutes. Please take a look at the following article for more information:

How to create RHEL based Vagrant boxes (authored by me)



*) In the next step I'll use a RHEL 7 box created using my tutorial.

*) I'm assuming you'll use a Fedora workstation, but I bet this should also work on OS X if you managed to get Vagrant and VirtualBox working.


Vagrant configuration and setup

First of all we need to create a Vagrantfile for starting one or multiple instances.

I've written an example Vagrantfile for a multi-instance setup:

[alex@freddy test_vagrant]$ cat Vagrantfile

# -*- mode: ruby -*-
# vi: set ft=ruby :

# All Vagrant configuration is done below. The "2" in Vagrant.configure
# configures the configuration version (we support older styles for
# backwards compatibility). Please don't change it unless you know what
# you're doing.
Vagrant.configure(2) do |config|
  # The most common configuration options are documented and commented below.
  # For a complete reference, please see the online documentation at

  # Every Vagrant development environment requires a box. You can search for
  # boxes at

  config.vm.define "webserver" do |webserver|
    webserver.vm.box = "rhel72"
    webserver.vm.provider :libvirt do |libvirt|
      libvirt.memory = 1024
    end
  end

  config.vm.define "database" do |database|
    database.vm.box = "rhel72"
    database.vm.provider :libvirt do |libvirt|
      libvirt.memory = 1024
    end
  end
end

I won't focus on the details of the Vagrant keywords; anyway, as you can see, I've defined two instances: one will be a webserver and the other a database.

As base box for the two instances I'm using the "rhel72" box [1], with the "libvirt" provider and 1GB of memory.


After defining the Vagrantfile, we can check that all is OK by running:

[alex@freddy test_vagrant]$ vagrant status

Current machine states:


webserver                  not created (libvirt)

database                    not created (libvirt)


This environment represents multiple VMs. The VMs are all listed

above with their current state. For more information about a specific

VM, run `vagrant status NAME`.


And then try starting the instances by running:

[alex@freddy test_vagrant]$ vagrant up

Bringing machine 'webserver' up with 'libvirt' provider...

Bringing machine 'database' up with 'libvirt' provider...

==> database: Creating image (snapshot of base box volume).

==> webserver: Creating image (snapshot of base box volume).

==> database: Creating domain with the following settings...

==> webserver: Creating domain with the following settings...

==> database:  -- Name:              test_vagrant_database

==> webserver:  -- Name:              test_vagrant_webserver

==> database:  -- Domain type:       kvm

==> webserver:  -- Domain type:       kvm



After running some "vagrant ssh" against the single instances to check that all is working as expected, we can now focus on the Ansible integration.

First of all we need to download the latest Ansible Vagrant dynamic inventory script [2]:

[alex@freddy test_vagrant]$ wget

Please note that what I'll introduce is not Ansible provisioning via the Vagrantfile [4], but the usage of Ansible's Vagrant Dynamic Inventory [3].

Using this Dynamic Inventory you can test your existing playbooks without any modification, and use your vagrant instances like any other VM/bare-metal hosts.


Give the script execute permission and try it! You should run it inside your current Vagrant folder, i.e. the path where you stored your Vagrantfile and where you run the "vagrant *" commands.

[alex@freddy test_vagrant]$ chmod +x

[alex@freddy test_vagrant]$ --list all

{"vagrant": ["webserver", "database"], "_meta": {"hostvars": {"webserver": {"ansible_ssh_host": "", "ansible_ssh_port": "22", "ansible_ssh_user": "vagrant", "ansible_ssh_private_key_file": "/home/alex/Projects/Ansible/test_vagrant/.vagrant/machines/webserver/libvirt/private_key"}, "database": {"ansible_ssh_host": "", "ansible_ssh_port": "22", "ansible_ssh_user": "vagrant", "ansible_ssh_private_key_file": "/home/alex/Projects/Ansible/test_vagrant/.vagrant/machines/database/libvirt/private_key"}}}}


As you can see, the two machines are automatically placed under the "vagrant" group; remember that when we try to run an Ansible playbook.

Pay attention that Ansible expects the Dynamic Inventory's output to be a JSON blob. So in case you're thinking of writing a custom dynamic inventory, keep that in mind!
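
To make the idea concrete, here is a minimal sketch of what a custom dynamic inventory script could look like in Python. The group, host names and hostvars below are sample data for illustration only, not tied to any real environment:

```python
#!/usr/bin/env python
# Minimal sketch of a custom Ansible dynamic inventory: when called with
# --list it must print a single JSON blob on stdout.
import json
import sys


def build_inventory():
    # Sample static data for illustration; a real script would query
    # Vagrant, a cloud API, a CMDB, etc.
    return {
        "vagrant": ["webserver", "database"],
        "_meta": {
            "hostvars": {
                "webserver": {"ansible_ssh_user": "vagrant"},
                "database": {"ansible_ssh_user": "vagrant"},
            }
        },
    }


if __name__ == "__main__":
    if len(sys.argv) > 1 and sys.argv[1] == "--list":
        print(json.dumps(build_inventory()))
    else:
        # Ansible may also call the script with --host <name>; returning an
        # empty dict is valid when all hostvars live under _meta.
        print(json.dumps({}))
```

Putting the hostvars under "_meta" (as the Vagrant script does) spares Ansible from calling the script once per host.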


If your output looks like the one shown above, you can consider the script working.

To execute the dynamic inventory script easily, you can place it in one of the directories in your PATH.


Let's try it with Ansible:

[alex@freddy test_vagrant]$ ansible all -i ./ -m ping

database | SUCCESS => {
    "changed": false,
    "ping": "pong"
}
webserver | SUCCESS => {
    "changed": false,
    "ping": "pong"
}


As you can see, we've chosen to run Ansible's "ping" module against "all" hosts using our dynamic inventory script.


Or we can just run one command on a single host:

[alex@freddy test_vagrant]$ ansible webserver -i ./ -m command -a "uptime"

webserver | SUCCESS | rc=0 >>

09:34:20 up 7 min,  1 user,  load average: 0.00, 0.03, 0.05



Default inventory:

Setting the default dynamic inventory can be done by editing /etc/ansible/ansible.cfg:

[alex@freddy test_vagrant]$ cat /etc/ansible/ansible.cfg |grep inventory

#inventory      = /etc/ansible/hosts

inventory    = /home/alex/bin/

# if inventory variables overlap, does the higher precedence one win

# These values may be set per host via the ansible_module_compression inventory
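
As a side note, ansible.cfg is plain INI, so you can check programmatically which inventory is configured with a couple of lines of Python. The fragment below is parsed from a string so the example doesn't depend on a real /etc/ansible/ansible.cfg, and the inventory filename is made up for illustration (the listing above has it cut off):

```python
import configparser

# Sample ansible.cfg fragment; the inventory path/filename is a made-up
# example, not the real script name.
cfg_text = """
[defaults]
inventory = /home/alex/bin/my_dynamic_inventory
"""

cfg = configparser.ConfigParser()
cfg.read_string(cfg_text)
print(cfg["defaults"]["inventory"])  # -> /home/alex/bin/my_dynamic_inventory
```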


That's all. Now let's try an Ansible playbook!


Test: Download and execute a playbook

Just for testing the Dynamic Inventory script, I wrote a sample playbook.

You'll find it below; we'll use it for registering the vagrant instances to RHN and for updating the systems.

[Note: Don't forget to change RHN_username to your RHN user!]

[alex@freddy test_vagrant]$ cat init.yml

- hosts: vagrant
  vars:
      RHN_username: your_rhn_username
      RHN_poolname: .*Employee.*
  vars_prompt:
      - name: "RHN_password"
        prompt: Please input RHN password
  tasks:
      - name: Checking if it's already registered to RHN
        become: yes
        ignore_errors: True
        shell: "subscription-manager status"
        register: result

      - name: Subscribe to RHN
        become: yes
        redhat_subscription: username={{ RHN_username }} password={{ RHN_password }} pool={{ RHN_poolname }}
        when: result.rc|int > 0

      - name: Enable chosen repositories
        become: yes
        command: "subscription-manager repos --disable '*' --enable rhel-7-server-rpms --enable rhel-7-server-optional-rpms --enable rhel-7-server-extras-rpms"

      - name: Update the system
        become: yes
        yum: state=latest name='*'


Let's execute the playbook for registering our vagrant instances:

[alex@freddy test_vagrant]$ ansible-playbook init.yml

Please input RHN password:


PLAY [vagrant] *****************************************************************


TASK [setup] *******************************************************************

ok: [database]

ok: [webserver]


TASK [Checking if it's already registered to RHN] ******************************

fatal: [database]: FAILED! => {"changed": true, "cmd": "subscription-manager status", "delta": "0:00:02.471373", "end": "2016-06-23 11:37:50.194596", "failed": true, "rc": 1, "start": "2016-06-23 11:37:47.723223", "stderr": "", "stdout": "+-------------------------------------------+\n   System Status Details\n+-------------------------------------------+\nOverall Status: Unknown", "stdout_lines": ["+-------------------------------------------+", "   System Status Details", "+-------------------------------------------+", "Overall Status: Unknown"], "warnings": []}


fatal: [webserver]: FAILED! => {"changed": true, "cmd": "subscription-manager status", "delta": "0:00:02.465741", "end": "2016-06-23 11:37:50.790684", "failed": true, "rc": 1, "start": "2016-06-23 11:37:48.324943", "stderr": "", "stdout": "+-------------------------------------------+\n   System Status Details\n+-------------------------------------------+\nOverall Status: Unknown", "stdout_lines": ["+-------------------------------------------+", "   System Status Details", "+-------------------------------------------+", "Overall Status: Unknown"], "warnings": []}



TASK [Subscribe to RHN] ********************************************************

changed: [webserver]

changed: [database]


TASK [Enable chosen repositories] **********************************************

changed: [webserver]

changed: [database]


TASK [Update the system] *******************************************************

changed: [webserver]

changed: [database]


PLAY RECAP *********************************************************************

database                   : ok=5    changed=3    unreachable=0    failed=0  

webserver                  : ok=5    changed=3    unreachable=0    failed=0 


If all has gone OK, we can proceed to the next step: downloading a sample Ansible playbook and running it!

We can clone the "ansible-examples" GitHub project:

[alex@freddy test_vagrant]$ git clone

Cloning into 'ansible-examples'...

remote: Counting objects: 2235, done.

remote: Total 2235 (delta 0), reused 0 (delta 0), pack-reused 2235

Receiving objects: 100% (2235/2235), 3.81 MiB | 360.00 KiB/s, done.

Resolving deltas: 100% (641/641), done.

Checking connectivity... done.


[alex@freddy test_vagrant]$ ll ansible-examples/lamp_simple_rhel7/

total 24

drwxrwxr-x. 2 alex alex 4096 Jun 23 15:29 group_vars

-rw-rw-r--. 1 alex alex   59 Jun 23 15:29 hosts

-rw-rw-r--. 1 alex alex  237 Jun 23 15:29

-rw-rw-r--. 1 alex alex 1163 Jun 23 15:29

drwxrwxr-x. 5 alex alex 4096 Jun 23 15:29 roles

-rw-rw-r--. 1 alex alex  411 Jun 23 15:29 site.yml


This example playbook uses two groups of hosts: "webservers" and "dbservers", so we need to set up a mixed dynamic/static inventory.

To do so, we can create a directory and place our dynamic inventory script inside it, together with the hosts file provided by the example playbook:

[alex@freddy test_vagrant]$ mkdir inventory

[alex@freddy test_vagrant]$ cp ansible-examples/lamp_simple_rhel7/hosts inventory/

[alex@freddy test_vagrant]$ cp inventory/

[alex@freddy test_vagrant]$ ll inventory/

total 8

-rw-rw-r--. 1 alex alex   59 Jun 23 16:01 hosts

-rwxrwxr-x. 1 alex alex 3960 Jun 23 16:01



After that, you need to edit the hosts file to map the example's groups to our vagrant instance names:

[alex@freddy test_vagrant]$ cat inventory/hosts

[webservers]
webserver

[dbservers]
database

Finally we can test it!

As you can see from the command below, we invoke ansible-playbook specifying the "inventory" folder, the username for the connection ("-u vagrant") and the "-b" option to let Ansible become root via sudo:

[alex@freddy test_vagrant]$ ansible-playbook -i inventory ansible-examples/lamp_simple_rhel7/site.yml -u vagrant -b


PLAY [apply common configuration to all nodes] *********************************


TASK [setup] *******************************************************************

ok: [database]

ok: [webserver]


TASK [common : Install ntp] ****************************************************

changed: [webserver]

changed: [database]


TASK [common : Configure ntp file] *********************************************

changed: [webserver]

changed: [database]


TASK [common : Start the ntp service] ******************************************

changed: [webserver]

changed: [database]


RUNNING HANDLER [common : restart ntp] *****************************************

changed: [database]

changed: [webserver]




PLAY RECAP *********************************************************************

database                   : ok=16   changed=14   unreachable=0    failed=0  

webserver                  : ok=14   changed=10   unreachable=0    failed=0  



If all has gone OK, we can grab the IP address of our webserver:

[alex@freddy test_vagrant]$ --list all | json_reformat

{
    "vagrant": [
        "webserver",
        "database"
    ],
    "_meta": {
        "hostvars": {
            "webserver": {
                "ansible_ssh_host": "",
                "ansible_ssh_port": "22",
                "ansible_ssh_user": "vagrant",
                "ansible_ssh_private_key_file": "/home/alex/Projects/Ansible/test_vagrant/.vagrant/machines/webserver/libvirt/private_key"
            },
            "database": {
                "ansible_ssh_host": "",
                "ansible_ssh_port": "22",
                "ansible_ssh_user": "vagrant",
                "ansible_ssh_private_key_file": "/home/alex/Projects/Ansible/test_vagrant/.vagrant/machines/database/libvirt/private_key"
            }
        }
    }
}
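
If you prefer pulling that address out programmatically instead of eyeballing the JSON, a few lines of Python are enough. The snippet below parses a sample blob shaped like the output above (the IP addresses are made up for the example):

```python
import json

# Sample inventory blob as the dynamic inventory script would print it;
# the IP addresses here are invented for illustration.
raw = """
{"vagrant": ["webserver", "database"],
 "_meta": {"hostvars": {
     "webserver": {"ansible_ssh_host": "192.168.121.10", "ansible_ssh_user": "vagrant"},
     "database":  {"ansible_ssh_host": "192.168.121.11", "ansible_ssh_user": "vagrant"}}}}
"""

inventory = json.loads(raw)
webserver_ip = inventory["_meta"]["hostvars"]["webserver"]["ansible_ssh_host"]
print(webserver_ip)  # the address you can point curl or a browser at
```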






And then we can test whether the webserver is running fine.

Quoting the ansible-examples GitHub page:

Once done, you can check the results by browsing to http://localhost/index.php.
You should see a simple test page and a list of databases retrieved from the database server.

[alex@freddy test_vagrant]$ curl

  <title>Ansible Application</title>

  <a href=>Homepage</a>

Hello, World! I am a web server configured using Ansible and I am : localhost.localdomain</BR>List of Databases: </BR>information_schema









That's all!



[1]  How to create RHEL based Vagrant boxes

[2]  ansible/ at devel · ansible/ansible · GitHub

[3]  Using Vagrant and Ansible — Ansible Documentation

[4]  GitHub - ansible/ansible-examples: A few starter examples of ansible playbooks, to show features and how they work together