It’s well known to every software developer that when you have been working on one or several projects, written in different languages, with external dependencies (libraries, modules, packages, etc.), for a long time on the same development machine, eventually you’ll need to format it and reinstall everything. The mess becomes so complicated that it’s easier to just “let it go”.
Because the computer’s OS is usually managed by the company you work for (programs, security setups, VPNs, etc.), I prefer to create my own virtual machines to work in. Over time I’ve polished the tools and scripts intended for this purpose. This entry will show you how I configure and spawn one or several VMs on my laptop with just one command.
First of all, the repository: https://github.com/ralonsoh/infra_deploy
This is a collection of bash and Ansible scripts I’ve implemented, based on the deployment scripts of Yardstick, which I helped to create.
The only requirements you need installed on your host OS are:
- Python. Because we are now in 2020, you’ll need at least 3.6.
- Ansible. The recommended version is defined in the description of the project; today it should be >=2.9.2. An example installation is shown below.
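For instance, one way to install a suitable Ansible version on the host (assuming pip is available) is:

# Install Ansible for the current user only.
python3 -m pip install --user 'ansible>=2.9.2'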
I’ve developed and tested these scripts with Fedora and Ubuntu cloud images. All versions can be found here.
Configuration file.
An example of this file can be found in /etc/config_01.yaml. Two main sections are defined: nodes and networks.
A node is a virtual machine, defined with a name, e.g. VM1 as in the example file. The interfaces section should have at least the management network; this network must always be defined in order to access the machine via SSH.
The user and password parameters are mandatory too; this will be the main user to access the VM. There is also a root_password parameter that allows SSH access to the VM as the root user. Although this is not recommended for security reasons, remember that this is going to be a development machine, not a production one, and it is usually very useful.
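To give you an idea, a node definition could look something like this (a hypothetical sketch; the key names and values here are illustrative, so check /etc/config_01.yaml for the canonical example):

nodes:
  VM1:                       # hypothetical node name
    user: devuser            # mandatory: main user of the VM
    password: devpass        # mandatory: password of the main user
    root_password: rootpass  # optional: enables root SSH login (development only)
    interfaces:
      - management           # mandatory network, used for SSH access
      - data                 # hypothetical additional network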
The networks section contains the network definitions. Remember that management is always mandatory. The scripts will create an XML definition per network, e.g.:
<network>
  <name>management</name>
  <uuid>9fe5f247-381b-4664-8d1b-a2254a1ec6ef</uuid>
  <forward mode='nat'>
    <nat>
      <port start='1024' end='65535'/>
    </nat>
  </forward>
  <bridge name='management' stp='on' delay='0'/>
  <mac address='52:54:00:84:db:f8'/>
  <ip address='192.168.110.1' netmask='255.255.255.0'>
    <dhcp>
      <range start='192.168.110.254' end='192.168.110.254'/>
      <host mac='52:54:00:5d:7d:00' ip='192.168.110.10'/>
    </dhcp>
  </ip>
</network>
Executing the scripts.
There are two bash scripts, located in /scripts, that call the corresponding Ansible ones. The first one is destroy.sh. This script deletes any previous configuration; that includes any network definition (remember, virsh networks implemented as Linux bridges) and any running VM.
The second one is deploy.sh. This script reads, by default, the configuration file /etc/config_01.yaml. It accepts three tags (see /ansible/deploy.yml; an example invocation is shown after this list):
- destroy_nodes: this will wipe out any previous deployment. It has the same effect as executing /scripts/destroy.sh before building a new deployment.
- build_network: if present, the script will create the network infrastructure again.
- build_nodes: if present, the script will define and start the VMs described in the configuration file.
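Since these are standard Ansible tags, the playbook can also be run directly with ansible-playbook and its usual --tags option (a hypothetical invocation; check the repository for the exact syntax the wrapper scripts use):

# Tear down any previous deployment and rebuild everything from scratch.
ansible-playbook ansible/deploy.yml \
    --tags 'destroy_nodes,build_network,build_nodes'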
The result.
And finally, once the script is finished, what do we have? One or several VMs, each with at least one “management” interface. If configured, the other interfaces will be switched via a Linux bridge. Each network (each Linux bridge) has a host interface. That means the traffic from the VMs can be NATed to the internet through the connected host interface.
Several packages are installed by default: Python 3, git, screen, sshd, Docker and libvirt. In addition, PyCharm is also installed (I usually develop in Python).
The created VMs have a main user (with a defined password). As mentioned before, another password is explicitly defined for the root user: insecure, but useful for development. The SSH daemon is also configured to accept root login with a password.
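You can verify the result with the standard libvirt tools; for instance (the IP address below is the static DHCP assignment from the network XML shown earlier):

virsh list --all          # the VMs defined in the configuration file
virsh net-list --all      # one virsh network (Linux bridge) per network
ssh root@192.168.110.10   # root login with password is enabled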
Limitations.
Well, of course there are. The main one I’ve detected: you can’t change the VMs’ network configuration and re-execute the script without first tearing down the network infrastructure (that means deleting the VMs and the Linux bridges created by virsh). When the network XMLs are created, one per network, the DHCP server gets one static assignment per VM. At the moment, the network definition can’t be changed afterwards; that means the VMs must keep the same IP addresses.
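So, if you need a different network layout, the workaround is a full teardown followed by a fresh deployment, something like:

./scripts/destroy.sh   # remove the VMs and the virsh networks
./scripts/deploy.sh    # redeploy with the modified configuration file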
Next steps.
As you can see in the configuration file, there is a parameter called openstack_node. This will be the OpenStack tag defining the role of the VM: for example, a controller, a network node or a compute node. The script will eventually be able to deploy a simple OpenStack configuration (maybe using DevStack, but this is not defined yet) on the spawned VMs.
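Something like the following could end up expressing those roles (purely illustrative; the final syntax is not defined yet):

nodes:
  VM1:
    openstack_node: controller   # hypothetical role value
  VM2:
    openstack_node: compute      # hypothetical role value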