<?xml version="1.0" encoding="utf-8"?><rss version="2.0"
    xmlns:content="http://purl.org/rss/1.0/modules/content/"
    xmlns:wfw="http://wellformedweb.org/CommentAPI/"
    xmlns:dc="http://purl.org/dc/elements/1.1/"
    xmlns:atom="http://www.w3.org/2005/Atom"
    xmlns:media="http://search.yahoo.com/mrss/"
    xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
    xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
>
<channel>
  <title>Srijan Choudhary, all posts tagged: ansible</title>
  <link>https://srijan.ch/feed/all/tag:ansible</link>
  <lastBuildDate>Tue, 21 Nov 2023 06:55:00 +0000</lastBuildDate>
  <image>
    <url>https://srijan.ch/assets/favicon/favicon-32x32.png</url>
    <title>Srijan Choudhary, all posts tagged: ansible</title>
    <link>https://srijan.ch/feed/all/tag:ansible</link>
  </image>
  <sy:updatePeriod>daily</sy:updatePeriod>
  <sy:updateFrequency>1</sy:updateFrequency>
  <generator>Kirby</generator>
  <atom:link href="https://srijan.ch/feed/all.xml/tag:ansible" rel="self" type="application/rss+xml" />
  <description>Srijan Choudhary&#039;s Articles and Notes Feed for tag: ansible</description>
  <item>
    <title>Testing ansible playbooks against multiple targets using vagrant</title>
    <description><![CDATA[How to test your ansible playbooks against multiple target OSes and versions using Vagrant]]></description>
    <link>https://srijan.ch/testing-ansible-playbooks-using-vagrant</link>
    <guid isPermaLink="false">tag:srijan.ch:/testing-ansible-playbooks-using-vagrant</guid>
    <category><![CDATA[ansible]]></category>
    <category><![CDATA[vagrant]]></category>
    <category><![CDATA[devops]]></category>
    <dc:creator>Srijan Choudhary</dc:creator>
    <pubDate>Tue, 21 Nov 2023 06:55:00 +0000</pubDate>
    <media:content url="https://srijan.ch/media/pages/blog/testing-ansible-playbooks-using-vagrant/9f989c7a78-1700550017/kvistholt-photography-ozpwn40zck4-unsplash.jpg" medium="image" />
    <content:encoded><![CDATA[<figure data-ratio="auto">
    <img src="https://srijan.ch/media/pages/blog/testing-ansible-playbooks-using-vagrant/9f989c7a78-1700550017/kvistholt-photography-ozpwn40zck4-unsplash.jpg" alt="">
  
  </figure>
<p>I recently updated my <a href="https://srijan.ch/install-docker-and-docker-compose-using-ansible">Install docker and docker-compose using ansible</a> post and wanted to test it against multiple target OSes and OS versions. Here's a way I found to do it easily using Vagrant.</p>
<p>Here's the Vagrantfile:</p>
<pre><code class="language-ruby"># -*- mode: ruby -*-
# vi: set ft=ruby :

targets = [
  "debian/bookworm64",
  "debian/bullseye64",
  "debian/buster64",
  "ubuntu/jammy64",
  "ubuntu/bionic64",
  "ubuntu/focal64"
]

Vagrant.configure("2") do |config|
  targets.each_with_index do |target, index|
    config.vm.define "machine#{index}" do |machine|
      machine.vm.hostname = "machine#{index}"
      machine.vm.box = target
      machine.vm.synced_folder ".", "/vagrant", disabled: true

      if index == targets.count - 1
        machine.vm.provision "ansible" do |ansible|
          ansible.playbook = "playbook.yml"
          ansible.limit = "all"
          ansible.compatibility_mode = "2.0"
          # ansible.verbose = "v"
        end
      end
    end
  end
end</code></pre>
<p>The <code>targets</code> variable defines which Vagrant boxes to target. The list of available boxes can be found here: <a href="https://app.vagrantup.com/boxes/search">https://app.vagrantup.com/boxes/search</a></p>
<p>In the <code>Vagrant.configure</code> section, I've defined a machine with an auto-generated name (<code>machine0</code>, <code>machine1</code>, and so on) for each target.</p>
<p>The <code>machine.vm.synced_folder</code> line disables the default vagrant share to keep things fast.</p>
<p>Then, I've run the ansible provisioning once at the end instead of for each box separately (from: <a href="https://developer.hashicorp.com/vagrant/docs/provisioning/ansible#tips-and-tricks">https://developer.hashicorp.com/vagrant/docs/provisioning/ansible#tips-and-tricks</a>).</p>
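<p>If you later want to run different plays per distro family, the Vagrant ansible provisioner also supports assigning machines to inventory groups via its documented <code>ansible.groups</code> option. A sketch (the group names and machine lists here are illustrative, assuming the six targets above):</p>

```ruby
machine.vm.provision "ansible" do |ansible|
  ansible.playbook = "playbook.yml"
  ansible.limit = "all"
  ansible.compatibility_mode = "2.0"
  # Illustrative grouping: Debian boxes vs. Ubuntu boxes, so plays
  # can target "hosts: debian" or "hosts: ubuntu" in the playbook.
  ansible.groups = {
    "debian" => ["machine0", "machine1", "machine2"],
    "ubuntu" => ["machine3", "machine4", "machine5"]
  }
end
```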
<p>The test can be run using:</p>
<pre><code class="language-shell-session">$ vagrant up</code></pre>
<p>If the boxes are already up, to re-run provisioning, run:</p>
<pre><code class="language-shell-session">$ vagrant provision</code></pre>
<p>This code can also be found on GitHub: <a href="https://github.com/srijan/ansible-install-docker">https://github.com/srijan/ansible-install-docker</a></p>]]></content:encoded>
    <comments>https://srijan.ch/testing-ansible-playbooks-using-vagrant#comments</comments>
    <slash:comments>2</slash:comments>
  </item><item>
    <title>Advanced PostgreSQL monitoring using Telegraf, InfluxDB, Grafana</title>
    <description><![CDATA[My experience with advanced monitoring for PostgreSQL database using Telegraf, InfluxDB, and Grafana, using a custom postgresql plugin for Telegraf.]]></description>
    <link>https://srijan.ch/advanced-postgresql-monitoring-using-telegraf</link>
    <guid isPermaLink="false">603cefe38527ef00014f776d</guid>
    <category><![CDATA[devops]]></category>
    <category><![CDATA[postgresql]]></category>
    <category><![CDATA[monitoring]]></category>
    <category><![CDATA[telegraf]]></category>
    <category><![CDATA[influxdb]]></category>
    <category><![CDATA[ansible]]></category>
    <dc:creator>Srijan Choudhary</dc:creator>
    <pubDate>Thu, 11 Mar 2021 15:30:00 +0000</pubDate>
    <media:content url="https://srijan.ch/media/pages/blog/advanced-postgresql-monitoring-using-telegraf/d28e269c6f-1699621096/grafana-postgresql-monitoring.png" medium="image" />
    <content:encoded><![CDATA[<figure data-ratio="auto">
    <img src="https://srijan.ch/media/pages/blog/advanced-postgresql-monitoring-using-telegraf/54e29f97da-1699621096/photo-1564760055775-d63b17a55c44.jpeg" alt="Advanced PostgreSQL monitoring using Telegraf, InfluxDB, Grafana">
  
  </figure>
<h2>Introduction</h2>
<p>This post will go through my experience with setting up some advanced monitoring for a PostgreSQL database using Telegraf, InfluxDB, and Grafana (also known as the TIG stack), the problems I faced, and what I ended up doing.</p> <p>What do I mean by advanced? I liked <a href="https://www.datadoghq.com/blog/postgresql-monitoring/#key-metrics-for-postgresql-monitoring" rel="noreferrer">this Datadog article</a> about some key metrics for PostgreSQL monitoring. Also, this <a href="https://git.zabbix.com/projects/ZBX/repos/zabbix/browse/templates/db/postgresql" rel="noreferrer">PostgreSQL monitoring template for Zabbix</a> has some good pointers. I didn’t need everything mentioned in these links, but they acted as a good reference. I also prioritized monitoring for issues that I’ve faced myself in the past.</p> <p>Some key things that I planned to monitor:</p><ul><li>Active (and idle) connections vs. max connections configured</li><li>Size of databases and tables</li><li><a href="https://www.datadoghq.com/blog/postgresql-monitoring/#read-query-throughput-and-performance" rel="noreferrer">Read query throughput and performance</a> (sequential vs. index scans, rows fetched vs. returned, temporary data written to disk)</li><li><a href="https://www.datadoghq.com/blog/postgresql-monitoring/#write-query-throughput-and-performance" rel="noreferrer">Write query throughput and performance</a> (rows inserted/updated/deleted, locks, deadlocks, dead rows)</li></ul><p>There are a lot of resources online about setting up the data collection pipeline from Telegraf to InfluxDB, and creating dashboards on Grafana. So, I’m not going into too much detail on this part. This is what the pipeline looks like:</p><figure data-ratio="auto">
    <img src="https://srijan.ch/media/pages/blog/advanced-postgresql-monitoring-using-telegraf/7c74d1bdcd-1699621096/pg_telegraf_influx_grafana.png" alt="PostgreSQL to Telegraf to InfluxDB to Grafana">
  
    <figcaption class="text-center">
    PostgreSQL to Telegraf to InfluxDB to Grafana. <a href="https://www.planttext.com/?text=TP9RRu8m5CVV-oawdp2PCfCzBTkY8d4cA0OmcqzD1nqsmPRqacc6ttr5A7Etyz2UzlpE_vnUnb9XeVI-05UKfONEY1O5t2bLoZlN5VXzc5ErqwzQ4f5ofWXJmvJltOYcM6HyHKb92jUx7QmBpDHc6RY250HBueu6DsOVUIO9KqR4iAoh19Djk4dGyo9vGe4_zrSpfm_0b6kMON5qkBo6lJ3kzU47WCRYerHaZ_o3SfJHpGL-Cq3IkXtsXJgKbLePPb7FS5tedB9U_oT53YJD3ENNCrmBdX8fkVYNvrerik7P-SrrJaGADBDTs3BmWco0DjBfMk84EhMBiwVbo32UbehlRRTjGYqNMRc6go2KAgCCmke22XeLsr9b45FT4k04WBbKmZ8eQBvJe7g0tyoiasD9O0Mg-tWR9_uIJUV82uCmUgp3q3vAUpTdq7z9_6Wr2T0V6UUaCBR7CRmfthG0ncOml-KJ" target="_blank" rel="noreferrer">View Source</a>  </figcaption>
  </figure>
<p>And here’s what my final Grafana dashboard looks like</p><figure data-ratio="auto">
    <img src="https://srijan.ch/media/pages/blog/advanced-postgresql-monitoring-using-telegraf/d28e269c6f-1699621096/grafana-postgresql-monitoring.png" alt="Grafana dashboard sample for postgresql monitoring">
  
    <figcaption class="text-center">
    Grafana dashboard sample for PostgreSQL monitoring  </figcaption>
  </figure>
<h2>Research on existing solutions</h2>
<p>I found several solutions and articles online about monitoring PostgreSQL using Telegraf:</p><h3>1. Telegraf PostgreSQL input plugin</h3>
<p>Telegraf has a <a href="https://github.com/influxdata/telegraf/tree/master/plugins/inputs/postgresql" rel="noreferrer">PostgreSQL input plugin</a> which provides some built-in metrics from the <code>pg_stat_database</code> and <code>pg_stat_bgwriter</code> views. But this plugin cannot be configured to run custom SQL to gather the data that we want, and the built-in metrics, while a good starting point, are not enough. So, I rejected it.</p><h3>2. Telegraf postgresql_extensible input plugin</h3>
<p>Telegraf has another PostgreSQL input plugin called <a href="https://github.com/influxdata/telegraf/tree/master/plugins/inputs/postgresql_extensible" rel="noreferrer">postgresql_extensible</a>.
 At first glance, this looks promising: it can run any custom query, and
 multiple queries can be defined in its configuration file.</p> <p>However, there is an <a href="https://github.com/influxdata/telegraf/issues/5009" rel="noreferrer">open issue</a>
 due to which this plugin does not run the specified query against all 
databases, but only against the database name specified in the 
connection string.</p> <p>One way this can still work is to specify multiple input blocks in the Telegraf config file, one for each database.</p><figure>
  <pre><code class="language-toml">[[inputs.postgresql_extensible]]
  address = &quot;host=localhost user=postgres dbname=database1&quot;
  [[inputs.postgresql_extensible.query]]
    script=&quot;db_stats.sql&quot;

[[inputs.postgresql_extensible]]
  address = &quot;host=localhost user=postgres dbname=database2&quot;
  [[inputs.postgresql_extensible.query]]
    script=&quot;db_stats.sql&quot;</code></pre>
  </figure>
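<p>One way to cope would be to generate those repeated blocks from a list (a throwaway sketch, not something Telegraf supports natively); but the database list still has to be maintained by hand, which is exactly the problem:</p>

```python
# Throwaway sketch: generate one postgresql_extensible input block
# per database. The list still has to be kept in sync manually.
databases = ["database1", "database2"]

template = '''[[inputs.postgresql_extensible]]
  address = "host=localhost user=postgres dbname={db}"
  [[inputs.postgresql_extensible.query]]
    script="db_stats.sql"
'''

config = "\n".join(template.format(db=db) for db in databases)
print(config)
```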
<p>But <strong>configuring this does not scale</strong>, especially if the database names are dynamic or we don’t want to hardcode them in the config.</p> <p>That said, I really liked the configuration method of this plugin, and I think it will work very well for my use case once the <a href="https://github.com/influxdata/telegraf/issues/5009" rel="noreferrer">associated Telegraf issue</a> gets resolved.</p><h3>3. Using a monitoring package like pgwatch2</h3>
<p>Another method I found was to use a package like <a href="https://github.com/cybertec-postgresql/pgwatch2" rel="noreferrer">pgwatch2</a>. This is a self-contained solution for PostgreSQL monitoring and includes dashboards as well.</p> <p>Its main components are:</p><ol><li><u>A metrics collector service</u>.
 This can either be run centrally and “pull” metrics from one or more 
PostgreSQL instances, or alongside each PostgreSQL instance (like a 
sidecar) and “push” metrics to a metrics storage backend.</li><li><u>Metrics storage backend</u>. pgwatch2 supports multiple metrics storage backends like bare PostgreSQL, TimescaleDB, InfluxDB, Prometheus, and Graphite.</li><li><u>Grafana dashboards</u></li><li><u>A configuration layer</u> and associated UI to configure all of the above.</li></ol><p>I
 really liked this tool as well, but felt like this might be too complex
 for my needs. For example, it monitors a lot more than what I want to 
monitor, and it has some complexity to handle multiple PostgreSQL 
versions and multiple deployment configurations.</p> <p>But I will definitely keep this in mind for a more “batteries included” approach to PostgreSQL monitoring for future projects.</p><h2>My solution: custom Telegraf plugin</h2>
<p>Telegraf supports writing an external custom plugin, and running it via the <a href="https://github.com/influxdata/telegraf/tree/master/plugins/inputs/execd" rel="noreferrer">execd plugin</a>. The <code>execd</code> plugin runs an external program as a long-running daemon.</p> <p>This
 approach enabled me to build the exact features I wanted, while also 
keeping things simple enough to someday revert to using the Telegraf 
built-in plugin for PostgreSQL.</p> <p>The custom plugin code can be found at <a href="https://github.com/srijan/telegraf-execd-pg-custom" rel="noreferrer">this GitHub repo</a>. Note that I’ve also included the <code>line_protocol.py</code> file from the InfluxDB Python SDK so that I would not have to install the whole SDK just for line protocol encoding.</p> <p>What this plugin (and included configuration) does:</p><ol><li>Runs as a daemon using Telegraf’s execd plugin.</li><li>When
 Telegraf asks for data (by sending a newline on STDIN), it runs the 
queries defined in the plugin’s config file (against the configured 
databases), converts the results into Influx line format, and sends it 
to Telegraf.</li><li>Queries can be defined to run either on a single database, or on all databases that the configured pg user has access to.</li></ol><p>This
 plugin solves the issue with Telegraf’s postgresql_extensible plugin 
for me—I don’t need to manually define the list of databases to be able 
to run queries against all of them.</p> <p>This is what the custom plugin configuration looks like</p><figure>
  <pre><code class="language-toml">[postgresql_custom]
address=&quot;&quot;

[[postgresql_custom.query]]
sqlquery=&quot;select pg_database_size(current_database()) as size_b;&quot;
per_db=true
measurement=&quot;pg_db_size&quot;

[[postgresql_custom.query]]
script=&quot;queries/backends.sql&quot;
per_db=true
measurement=&quot;pg_backends&quot;

[[postgresql_custom.query]]
script=&quot;queries/db_stats.sql&quot;
per_db=true
measurement=&quot;pg_db_stats&quot;

[[postgresql_custom.query]]
script=&quot;queries/table_stats.sql&quot;
per_db=true
tagvalue=&quot;table_name,schema&quot;
measurement=&quot;pg_table_stats&quot;</code></pre>
  </figure>
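<p>To make the execd contract described above concrete, here is a minimal sketch of such a plugin's main loop (my simplification, not the actual repository code: the real plugin reads its TOML config and queries PostgreSQL, which is stubbed out here):</p>

```python
import sys

def to_line_protocol(measurement, tags, fields):
    # Simplified InfluxDB line protocol encoder: integer fields only,
    # no escaping of special characters.
    tag_part = "".join(",{}={}".format(k, v) for k, v in sorted(tags.items()))
    field_part = ",".join("{}={}i".format(k, v) for k, v in sorted(fields.items()))
    return "{}{} {}".format(measurement, tag_part, field_part)

def run_queries(db):
    # Stub standing in for the real SQL queries the plugin runs.
    return [("pg_db_size", {"db": db}, {"size_b": 8192})]

def main():
    # execd contract: Telegraf writes a newline to STDIN once per
    # collection interval; we reply with line-protocol rows on STDOUT.
    for _ in sys.stdin:
        for db in ("postgres",):  # illustrative database list
            for measurement, tags, fields in run_queries(db):
                print(to_line_protocol(measurement, tags, fields))
        sys.stdout.flush()

if __name__ == "__main__":
    main()
```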
<p>Any queries defined with <code>per_db=true</code> will be run against all databases. Queries can be specified either inline, or using a separate file.</p> <p>The <a href="https://github.com/srijan/telegraf-execd-pg-custom" rel="noreferrer">repository for this plugin</a>
 has the exact queries configured above. It also has the Grafana 
dashboard JSON which can be imported to get the same dashboard as above.</p><h2>Future optimizations</h2>
<ul><li>Monitoring related to replication is not added yet, but can be added easily</li><li>No need to use a superuser account in PostgreSQL 10+</li><li>This does not yet support running different queries depending on the version of the target PostgreSQL server</li></ul><hr />
<p>Let me know in the comments below if you have any doubts or suggestions to make this better.</p>]]></content:encoded>
    <comments>https://srijan.ch/advanced-postgresql-monitoring-using-telegraf#comments</comments>
    <slash:comments>3</slash:comments>
  </item><item>
    <title>Install docker and docker-compose using Ansible</title>
    <description><![CDATA[Optimized way to install docker and docker-compose using Ansible]]></description>
    <link>https://srijan.ch/install-docker-and-docker-compose-using-ansible</link>
    <guid isPermaLink="false">6030d3dab5e0920001f557cd</guid>
    <category><![CDATA[devops]]></category>
    <category><![CDATA[docker]]></category>
    <category><![CDATA[ansible]]></category>
    <dc:creator>Srijan Choudhary</dc:creator>
    <pubDate>Thu, 11 Jun 2020 14:30:00 +0000</pubDate>
    <media:content url="https://srijan.ch/media/pages/blog/install-docker-and-docker-compose-using-ansible/b62b609bf9-1699621096/photo-1584444707186-b7831c11014f.jpg" medium="image" />
    <content:encoded><![CDATA[<figure data-ratio="auto">
    <img src="https://srijan.ch/media/pages/blog/install-docker-and-docker-compose-using-ansible/b62b609bf9-1699621096/photo-1584444707186-b7831c11014f.jpg" alt="">
  
  </figure>
<p>Updated for 2023: I've updated this post with the following changes:</p><ol><li>Added a top-level sample playbook</li><li>Used the ansible apt module's <a href="https://docs.ansible.com/ansible/latest/collections/ansible/builtin/apt_module.html#parameter-cache_valid_time" title="cache_valid_time" rel="noreferrer">cache_valid_time</a> parameter to prevent repeated apt-get updates</li><li>Installed <code>docker-compose-plugin</code> using apt (provides docker compose v2)</li><li>Made installing docker compose v1 optional</li><li>Various fixes as suggested in comments</li><li>Tested against Debian 10, 11, 12 and Ubuntu 18.04 (bionic), 20.04 (focal), 22.04 (jammy) using Vagrant</li></ol><p>I've published a <a href="https://srijan.ch/testing-ansible-playbooks-using-vagrant" rel="noreferrer">new post on how I've done this testing</a>.</p><hr />
<p>I wanted a simple, but optimal (and fast) way to install docker and docker-compose using Ansible. I found a few ways online, but I was not satisfied.</p> <p>My requirements were:</p><ul><li>Support Debian and Ubuntu</li><li>Install docker and docker compose v2 using apt repositories</li><li>Prevent unnecessary <code>apt-get update</code> if it has been run recently (to make it fast)</li><li>Optionally install docker compose v1 by downloading from GitHub releases<ul><li>But don’t download it if the current version is &gt;= the minimum version required</li></ul></li></ul><p>Working through these requirements gave me a very good idea of how powerful ansible can be.</p> <p>The final role and vars files can be seen in <a href="https://gist.github.com/srijan/2028af568459195cb9a3dae8d111e754">this gist</a>. But I’ll go through each section below to explain what makes this better and faster.</p><h2>File structure</h2>
<figure>
  <pre><code class="language-treeview">playbook.yml
roles/
├── docker/
│    ├── defaults/
│    │   ├── main.yml
│    ├── tasks/
│    │   ├── main.yml
│    │   ├── docker_setup.yml</code></pre>
    <figcaption class="text-center">File structure</figcaption>
  </figure>
<h2>Playbook</h2>
<p>This is the top-level playbook. Any default vars mentioned below can be overridden here.</p><figure>
  <pre><code class="language-yaml">---
- hosts: all
  vars:
    - docker_compose_install_v1: true
    - docker_compose_version_v1: &quot;1.29.2&quot;
  tasks:
    - name: Docker setup
      block:
        - import_role: name=docker</code></pre>
    <figcaption class="text-center">playbook.yml</figcaption>
  </figure>
<h2>Variables</h2>
<p>First, we’ve defined some variables in <code>defaults/main.yml</code>. These will control which release channel of docker will be used and whether to install docker compose v1.</p><figure>
  <pre><code class="language-yaml">---
docker_apt_release_channel: stable
docker_apt_arch: amd64
docker_apt_repository: &quot;deb [arch={{ docker_apt_arch }}] https://download.docker.com/linux/{{ ansible_distribution | lower }} {{ ansible_distribution_release }} {{ docker_apt_release_channel }}&quot;
docker_apt_gpg_key: https://download.docker.com/linux/{{ ansible_distribution | lower }}/gpg
docker_compose_install_v1: false
docker_compose_version_v1: &quot;1.29.2&quot;</code></pre>
    <figcaption class="text-center">roles/docker/defaults/main.yml</figcaption>
  </figure>
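<p>For a concrete idea of what <code>docker_apt_repository</code> expands to, here is the substitution done in plain Python (an illustrative stand-in for ansible's Jinja templating; the facts shown are what ansible would gather on a Debian 12 host):</p>

```python
# Stand-in for the Jinja expansion of docker_apt_repository,
# using facts from a hypothetical Debian 12 (bookworm) host.
facts = {
    "docker_apt_arch": "amd64",
    "ansible_distribution": "Debian",
    "ansible_distribution_release": "bookworm",
    "docker_apt_release_channel": "stable",
}
repo = "deb [arch={}] https://download.docker.com/linux/{} {} {}".format(
    facts["docker_apt_arch"],
    facts["ansible_distribution"].lower(),
    facts["ansible_distribution_release"],
    facts["docker_apt_release_channel"],
)
print(repo)  # deb [arch=amd64] https://download.docker.com/linux/debian bookworm stable
```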
<h2>Role main.yml</h2>
<p>The <code>tasks/main.yml</code> file imports tasks from <code>tasks/docker_setup.yml</code> and turns on <a href="https://docs.ansible.com/ansible/latest/playbook_guide/playbooks_privilege_escalation.html#using-become" rel="noreferrer">become</a> for the whole task.</p><figure>
  <pre><code class="language-yaml">---
- import_tasks: docker_setup.yml
  become: true</code></pre>
    <figcaption class="text-center">roles/docker/tasks/main.yml</figcaption>
  </figure>
<h2>Docker Setup</h2>
<p>This task is divided into the following sections:</p><h3>Install dependencies</h3>
<figure>
  <pre><code class="language-yaml">- name: Install packages using apt
  apt:
    name:
      - apt-transport-https
      - ca-certificates
      - curl
      - gnupg2
      - software-properties-common
    state: present
    cache_valid_time: 86400</code></pre>
  </figure>
<p>Here the <code>state: present</code> makes sure that these packages are only installed if not already installed. I've set <code>cache_valid_time</code> to 1 day so that <code>apt-get update</code> is not run if it has already run recently.</p><h3>Add docker repository</h3>
<figure>
  <pre><code class="language-yaml">- name: Add Docker GPG apt Key
  apt_key:
    url: &quot;{{ docker_apt_gpg_key }}&quot;
    state: present

- name: Add Docker Repository
  apt_repository:
    repo: &quot;{{ docker_apt_repository }}&quot;
    state: present
    update_cache: true</code></pre>
  </figure>
<p>Here, the <code>state: present</code> and <code>update_cache: true</code> make sure that the cache is only updated if this state was changed. So, <code>apt-get update</code> is not run if the docker repo is already present.</p><h3>Install and enable docker and docker compose v2</h3>
<figure>
  <pre><code class="language-yaml">- name: Install docker-ce
  apt:
    name: docker-ce
    state: present
    cache_valid_time: 86400

- name: Run and enable docker
  service:
    name: docker
    state: started
    enabled: true

- name: Install docker compose
  apt:
    name: docker-compose-plugin
    state: present
    cache_valid_time: 86400</code></pre>
  </figure>
<p>Again, due to <code>state: present</code> and <code>cache_valid_time: 86400</code>, there are no extra cache fetches if docker and docker-compose-plugin are already installed.</p><h2>Docker Compose V1 Setup</h2>
<p>WARNING: docker-compose v1 is end-of-life, please keep that in mind and only install/use it if absolutely required.</p><p>This task is wrapped in an <a href="https://docs.ansible.com/ansible/latest/playbook_guide/playbooks_blocks.html" rel="noreferrer">ansible block</a> that checks if <code>docker_compose_install_v1</code> is true.</p><figure>
  <pre><code class="language-yaml">- name: Install docker-compose v1
  when:
    - docker_compose_install_v1 is defined
    - docker_compose_install_v1
  block:</code></pre>
  </figure>
<p>Inside the block, there are two sections:</p><h3>Check if docker-compose is installed and its version</h3>
<figure>
  <pre><code class="language-yaml">- name: Check current docker-compose version
  command: docker-compose --version
  register: docker_compose_vsn
  changed_when: false
  failed_when: false
  check_mode: no

- set_fact:
    docker_compose_current_version: &quot;{{ docker_compose_vsn.stdout | regex_search(&#039;(\\d+(\\.\\d+)+)&#039;) }}&quot;
  when:
    - docker_compose_vsn.stdout is defined</code></pre>
  </figure>
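<p>The <code>regex_search</code> pattern above can be tried in isolation; a quick Python check of what it extracts from typical docker-compose output:</p>

```python
import re

output = "docker-compose version 1.26.0, build d4451659"
# Same pattern as the regex_search filter in the set_fact above.
match = re.search(r"(\d+(\.\d+)+)", output)
version = match.group(1) if match else ""
print(version)  # 1.26.0
```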
<p>The first task saves the output of <code>docker-compose --version</code> into a variable <code>docker_compose_vsn</code>. The <code>failed_when: false</code> ensures that this does not count as a failure even if the command fails to execute (see <a href="https://docs.ansible.com/ansible/latest/user_guide/playbooks_error_handling.html">error handling in ansible</a>).</p> <p>Sample output when docker-compose is installed: <code>docker-compose version 1.26.0, build d4451659</code></p> <p>The second task parses this output and extracts the version number using a regex (see <a href="https://docs.ansible.com/ansible/latest/user_guide/playbooks_filters.html">ansible filters</a>). Its <code>when</code> condition skips the parsing if the first task produced no registered output (see <a href="https://docs.ansible.com/ansible/latest/user_guide/playbooks_conditionals.html">playbook conditionals</a>).</p><h3>Install or upgrade docker-compose if required</h3>
<figure>
  <pre><code class="language-yaml">- name: Install or upgrade docker-compose
  get_url:
    url: &quot;https://github.com/docker/compose/releases/download/{{ docker_compose_version_v1 }}/docker-compose-Linux-x86_64&quot;
    dest: /usr/local/bin/docker-compose
    mode: &#039;a+x&#039;
    force: yes
  when: &gt;
    docker_compose_current_version == &quot;&quot;
    or docker_compose_current_version is version(docker_compose_version_v1, &#039;&lt;&#039;)</code></pre>
  </figure>
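<p>The intent of that <code>when</code> condition can be mimicked in plain Python (illustrative only; ansible's <code>version()</code> test has its own comparison semantics):</p>

```python
def version_tuple(v):
    # "1.26.0" -> (1, 26, 0); a crude stand-in for ansible's version() test
    return tuple(int(part) for part in v.split("."))

current, required = "1.26.0", "1.29.2"
# Download if docker-compose is absent or older than required.
needs_install = current == "" or version_tuple(current) < version_tuple(required)
print(needs_install)  # True
```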
<p>This just downloads the required docker-compose binary and saves it to <code>/usr/local/bin/docker-compose</code>,
 but it has a conditional that this will only be done if either 
docker-compose is not already installed, or if the installed version is 
less than the required version. To do version comparison, it uses 
ansible’s built-in <a href="https://docs.ansible.com/ansible/latest/user_guide/playbooks_tests.html#version-comparison">version comparison function</a>.</p> <p>So,
 we used a few ansible features to achieve what we wanted. I’m sure 
there are a lot of other things we can do to make this even better and 
more fool-proof. Maybe a post for another day.</p>]]></content:encoded>
    <comments>https://srijan.ch/install-docker-and-docker-compose-using-ansible#comments</comments>
    <slash:comments>10</slash:comments>
  </item></channel>
</rss>
