
Automating PostgreSQL provisioning using Ansible

Written by Ajit Gadge | Dec 14, 2017


Ansible describes itself as “simple, agentless and powerful open source IT automation”, to quote the Ansible docs. Ansible is your go-to friend when you want to automate everything in your environment. It is agentless, so you can manage your infrastructure (e.g. servers, load balancers, switches, firewalls) simply over SSH. It is a tool that has long been used for automating admin tasks, provisioning infrastructure and more. In this blog, let’s delve a little deeper into how you can automate PostgreSQL provisioning using Ansible.
In this fast-moving IT world, we need to launch servers/nodes in different environments such as public cloud, private cloud or on-premises virtual environments, provision infrastructure for different applications very frequently, automate patching and updates for hundreds or thousands of servers, and so on. These tasks are tedious and time consuming. System engineers, network engineers and DBAs have to manage them every day. Honestly, having been a DBA myself for years, it is not exactly interesting work for most engineers; they would rather focus on new, challenging work with new technologies. And that is why I think Ansible solves the problem for them.
Here I am going to explain how you can create a simple Ansible playbook and then an Ansible role for PostgreSQL, so that it can be provisioned automatically in the cloud and on premises.
So let’s begin. I presume you already know how to install Ansible, so I am skipping the installation part in this blog. If not, you can refer to this page, which covers the installation of Ansible on various Linux systems.
Once you install Ansible, you will be working with three main files: ansible.cfg, the hosts inventory file, and your playbook (here, ansible_playbook.yml).
ansible.cfg is the configuration file where you define environment settings such as the location of the inventory file of remote hosts, the roles_path, the default and remote system users, and even the SSH key used to access the host environment. Below is a snapshot of my ansible.cfg file, for your reference:
[Screenshot: ansible.cfg]
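A minimal ansible.cfg along those lines might look something like this (the inventory path, roles path, remote user and key file below are illustrative, not the exact values from my environment):

[defaults]
inventory         = ./hosts
roles_path        = ./roles
remote_user       = ubuntu
private_key_file  = ~/.ssh/id_rsa
host_key_checking = False

[privilege_escalation]
become      = True
become_user = root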
The hosts file is a simple inventory file where you define the IP addresses of the remote hosts to which you would like to apply Ansible playbooks/roles. If you have different types of servers/nodes, such as system servers, app servers and DB servers, you can create a group for each and list the respective servers/nodes under it, as in the sketch below.
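For example, a hosts file with a couple of groups (the group names and IP addresses here are made up) could look like this:

[dbservers]
192.168.1.51
192.168.1.52

[appservers]
192.168.1.61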
ansible_playbook.yml is your starting point, where you define the host groups, the remote user and the roles that will be applied to those host servers/nodes. Below is an example.
[Screenshot: ansible_playbook.yml]
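A simple playbook of that shape, assuming the dbservers group from the inventory above and the createdb role we build later, might look like this:

---
- hosts: dbservers
  remote_user: ubuntu
  become: yes
  roles:
    - createdb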
Remember, Ansible requires Python both on the control machine from which you run Ansible and on the servers/nodes on which the roles are applied.
Now let’s try a simple example: pinging a remote machine with Ansible. Simply edit the hosts file and add the IP addresses of the hosts on which you would like to run the Ansible role.
[Screenshot: Ansible ping test]
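With the inventory in place, the ping test is a one-liner; each reachable host should answer with “pong”:

ansible dbservers -m ping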
Ansible roles are where you define the tasks you would like to automate. A role has a standard directory and file structure, with folders such as tasks, handlers, vars and defaults, and this structure can be generated easily.
ansible-galaxy is a command you can use to create an Ansible role and its directory structure. In our case, we are going to create a role for the PostgreSQL database called createdb. Below is an example: ansible-galaxy creates the createdb role and, inside that folder, all the required files and directories.
[Screenshot: ansible-galaxy creating the createdb role]
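The command itself is simply:

ansible-galaxy init createdb

which creates a skeleton roughly like the following (the exact contents can vary slightly between Ansible versions):

createdb/
    defaults/main.yml
    files/
    handlers/main.yml
    meta/main.yml
    tasks/main.yml
    templates/
    tests/
    vars/main.yml
    README.md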
Now, let’s build out the role that provisions PostgreSQL. Under the createdb role, the important folders are tasks and handlers.
Under the tasks folder, you define a main.yml in which you write your automation tasks. Below is my example of a main.yml file for the installation and provisioning of PostgreSQL.
In this main.yml file, I am installing PostgreSQL using apt-get, as I am using Ubuntu on the remote server. Along with PostgreSQL, I am installing its required packages such as libpq, contrib and so on. Before installing PostgreSQL, I also check for and install openssl and the devel packages as prerequisites. I also provide PostgreSQL setup parameters such as locale and template, and set up the default “postgres” user and database.
[Screenshot: roles/createdb/tasks/main.yml]
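A tasks/main.yml in that spirit might look like the sketch below; the exact package list, database name and credentials are illustrative placeholders rather than my production values:

---
# roles/createdb/tasks/main.yml
- name: Install prerequisite packages (openssl, dev headers, psycopg2 for the postgresql_* modules)
  apt:
    name: "{{ item }}"
    state: present
    update_cache: yes
  with_items:
    - openssl
    - libssl-dev
    - python-psycopg2

- name: Install PostgreSQL server, contrib extensions and libpq
  apt:
    name: "{{ item }}"
    state: present
  with_items:
    - postgresql
    - postgresql-contrib
    - libpq-dev
  notify: restart postgresql

- name: Create an application database with explicit locale and template
  become: yes
  become_user: postgres
  postgresql_db:
    name: appdb            # placeholder database name
    encoding: UTF-8
    lc_collate: en_US.UTF-8
    lc_ctype: en_US.UTF-8
    template: template0
    state: present

- name: Create a database user and grant it privileges on that database
  become: yes
  become_user: postgres
  postgresql_user:
    name: appuser          # placeholder user name
    password: changeme     # placeholder password
    db: appdb
    priv: ALL
    state: present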
Under the handlers folder, you define the PostgreSQL service states, such as start and restart, in its main.yml. Below is an example.
[Screenshot: roles/createdb/handlers/main.yml]
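A minimal handlers/main.yml matching the notify used in the tasks above could be:

---
# roles/createdb/handlers/main.yml
- name: start postgresql
  service:
    name: postgresql
    state: started

- name: restart postgresql
  service:
    name: postgresql
    state: restarted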
Now, let’s run ansible-playbook to provision PostgreSQL on the given hosts with the createdb Ansible role we have written.
[Screenshot: ansible-playbook run output]
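Assuming the file names used earlier, the run itself is a single command; you can add --check first if you want a dry run:

ansible-playbook -i hosts ansible_playbook.yml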
If you look at the output above, you can see that it has executed each of the tasks we defined in main.yml under the role. You are done: you have installed PostgreSQL in just a few minutes, and in the same way you can configure PostgreSQL replication or tune the postgresql.conf file in a few minutes.
There is another way: you can pull roles from the Ansible Galaxy website, where many pre-defined roles are available, and use them in the way shown below.
Just go to your roles directory and type the command below.
ansible-galaxy install zzet.postgresql
This will download and install the zzet.postgresql role from https://galaxy.ansible.com. You can find pre-defined roles there for many technologies.
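Once installed, you reference a Galaxy role in your playbook just like a role you wrote yourself; for example (the host group here is illustrative):

---
- hosts: dbservers
  become: yes
  roles:
    - zzet.postgresql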
So basically, a tool like Ansible allows you to manage and maintain changes by treating configuration and infrastructure as code. This was just a simple example of how you can automate one of the administration/configuration tasks around PostgreSQL. I have used the concept of roles to demonstrate my example; defining roles helps you ensure the re-usability of granular components, and you can also get pre-defined roles for various tasks. Automation in general, and Ansible in particular, can be a very strong companion for DBAs and system administrators working with PostgreSQL. It can help you ensure uniform deployment, faster rollout and easy configuration management across multiple environments.
Not many enterprises are able to realize the full value of automation. At the recent Tech Insights event organized by Ashnik, only 1 out of 75 attendees said that they are able to automate to the extent where creating a new environment along with the database setup takes 20-30 minutes. This is where we can come in handy. We help many customers automate mundane tasks like backup testing as well as challenging tasks requiring precision, like recovering a failed node. Feel free to reach out to us if you are looking for consulting on modernizing your database deployment, configuration management and administrative tasks.
Ajit Gadge | Senior Database Consultant, Ashnik


Ajit brings over 16 years of solid experience in solution architecting and implementation. He has successfully delivered solutions on various database technologies including PostgreSQL, MySQL and SQL Server. He derives his strength from his passion for database technologies and his love of pre-sales engagement with customers. Ajit has successfully engaged with customers from across the globe, including South East Asia, the USA and India. His commitment and ability to find solutions have earned him great respect from customers. Prior to Ashnik, he worked with EnterpriseDB for 7 years and helped it grow in South East Asia and India.


