<?xml version="1.0" encoding="utf-8"?><rss version="2.0"
    xmlns:content="http://purl.org/rss/1.0/modules/content/"
    xmlns:wfw="http://wellformedweb.org/CommentAPI/"
    xmlns:dc="http://purl.org/dc/elements/1.1/"
    xmlns:atom="http://www.w3.org/2005/Atom"
    xmlns:media="http://search.yahoo.com/mrss/"
    xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
    xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
>
<channel>
  <title>Srijan Choudhary, all posts tagged: postgresql</title>
  <link>https://srijan.ch/feed/all/tag:postgresql</link>
  <lastBuildDate>Thu, 11 Mar 2021 15:30:00 +0000</lastBuildDate>
  <image>
    <url>https://srijan.ch/assets/favicon/favicon-32x32.png</url>
    <title>Srijan Choudhary, all posts tagged: postgresql</title>
    <link>https://srijan.ch/feed/all/tag:postgresql</link>
  </image>
  <sy:updatePeriod>daily</sy:updatePeriod>
  <sy:updateFrequency>1</sy:updateFrequency>
  <generator>Kirby</generator>
  <atom:link href="https://srijan.ch/feed/all.xml/tag:postgresql" rel="self" type="application/rss+xml" />
  <description>Srijan Choudhary&#039;s Articles and Notes Feed for tag: postgresql</description>
  <item>
    <title>Advanced PostgreSQL monitoring using Telegraf, InfluxDB, Grafana</title>
    <description><![CDATA[My experience with advanced monitoring for PostgreSQL database using Telegraf, InfluxDB, and Grafana, using a custom postgresql plugin for Telegraf.]]></description>
    <link>https://srijan.ch/advanced-postgresql-monitoring-using-telegraf</link>
    <guid isPermaLink="false">603cefe38527ef00014f776d</guid>
    <category><![CDATA[devops]]></category>
    <category><![CDATA[postgresql]]></category>
    <category><![CDATA[monitoring]]></category>
    <category><![CDATA[telegraf]]></category>
    <category><![CDATA[influxdb]]></category>
    <category><![CDATA[ansible]]></category>
    <dc:creator>Srijan Choudhary</dc:creator>
    <pubDate>Thu, 11 Mar 2021 15:30:00 +0000</pubDate>
    <media:content url="https://srijan.ch/media/pages/blog/advanced-postgresql-monitoring-using-telegraf/d28e269c6f-1699621096/grafana-postgresql-monitoring.png" medium="image" />
    <content:encoded><![CDATA[<figure data-ratio="auto">
    <img src="https://srijan.ch/media/pages/blog/advanced-postgresql-monitoring-using-telegraf/54e29f97da-1699621096/photo-1564760055775-d63b17a55c44.jpeg" alt="Advanced PostgreSQL monitoring using Telegraf, InfluxDB, Grafana">
  
  </figure>
<h2>Introduction</h2>
<p>This post will go through my experience setting up some advanced monitoring for a PostgreSQL database using Telegraf, InfluxDB, and Grafana (also known as the TIG stack), the problems I faced, and what I ended up doing.</p> <p>What do I mean by advanced? I liked <a href="https://www.datadoghq.com/blog/postgresql-monitoring/#key-metrics-for-postgresql-monitoring" rel="noreferrer">this Datadog article</a> about some key metrics for PostgreSQL monitoring. Also, this <a href="https://git.zabbix.com/projects/ZBX/repos/zabbix/browse/templates/db/postgresql" rel="noreferrer">PostgreSQL monitoring template for Zabbix</a>
 has some good pointers. I didn’t need everything mentioned in these links, but they acted as a good reference. I also prioritized monitoring for issues which I myself have faced in the past.</p> <p>Some key things that I planned to monitor:</p><ul><li>Active (and idle) connections vs. the configured max connections</li><li>Size of databases and tables</li><li><a href="https://www.datadoghq.com/blog/postgresql-monitoring/#read-query-throughput-and-performance" rel="noreferrer">Read query throughput and performance</a> (sequential vs. index scans, rows fetched vs. returned, temporary data written to disk)</li><li><a href="https://www.datadoghq.com/blog/postgresql-monitoring/#write-query-throughput-and-performance" rel="noreferrer">Write query throughput and performance</a> (rows inserted/updated/deleted, locks, deadlocks, dead rows)</li></ul><p>There
 are a lot of resources online about setting up the data collection 
pipeline from Telegraf to InfluxDB, and creating dashboards on Grafana. 
So, I’m not going into too much detail on this part. This is what the 
pipeline looks like:</p><figure data-ratio="auto">
    <img src="https://srijan.ch/media/pages/blog/advanced-postgresql-monitoring-using-telegraf/7c74d1bdcd-1699621096/pg_telegraf_influx_grafana.png" alt="PostgreSQL to Telegraf to InfluxDB to Grafana">
  
    <figcaption class="text-center">
    PostgreSQL to Telegraf to InfluxDB to Grafana. <a href="https://www.planttext.com/?text=TP9RRu8m5CVV-oawdp2PCfCzBTkY8d4cA0OmcqzD1nqsmPRqacc6ttr5A7Etyz2UzlpE_vnUnb9XeVI-05UKfONEY1O5t2bLoZlN5VXzc5ErqwzQ4f5ofWXJmvJltOYcM6HyHKb92jUx7QmBpDHc6RY250HBueu6DsOVUIO9KqR4iAoh19Djk4dGyo9vGe4_zrSpfm_0b6kMON5qkBo6lJ3kzU47WCRYerHaZ_o3SfJHpGL-Cq3IkXtsXJgKbLePPb7FS5tedB9U_oT53YJD3ENNCrmBdX8fkVYNvrerik7P-SrrJaGADBDTs3BmWco0DjBfMk84EhMBiwVbo32UbehlRRTjGYqNMRc6go2KAgCCmke22XeLsr9b45FT4k04WBbKmZ8eQBvJe7g0tyoiasD9O0Mg-tWR9_uIJUV82uCmUgp3q3vAUpTdq7z9_6Wr2T0V6UUaCBR7CRmfthG0ncOml-KJ" target="_blank" rel="noreferrer">View Source</a>  </figcaption>
  </figure>
<p>And here’s what my final Grafana dashboard looks like:</p><figure data-ratio="auto">
    <img src="https://srijan.ch/media/pages/blog/advanced-postgresql-monitoring-using-telegraf/d28e269c6f-1699621096/grafana-postgresql-monitoring.png" alt="Grafana dashboard sample for postgresql monitoring">
  
    <figcaption class="text-center">
    Grafana dashboard sample for PostgreSQL monitoring  </figcaption>
  </figure>
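<p>The panels above are ultimately driven by plain SQL against PostgreSQL's statistics views. As a flavour of what sits behind two of them, here is a sketch (illustrative queries only, not the exact ones from my dashboard):</p>

```python
# Sketch only: illustrative SQL of the kind behind two dashboard panels,
# not the exact queries used in my dashboard.

PANEL_QUERIES = {
    # Active/idle connections vs. the configured max_connections
    "connections": (
        "SELECT state, count(*) AS backends, "
        "current_setting('max_connections')::int AS max_connections "
        "FROM pg_stat_activity GROUP BY state"
    ),
    # Size of the current database in bytes
    "db_size": "SELECT pg_database_size(current_database()) AS size_b",
}

for panel, sql in PANEL_QUERIES.items():
    print(f"{panel}: {sql}")
```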
<h2>Research on existing solutions</h2>
<p>I found several solutions and articles online about monitoring PostgreSQL using Telegraf:</p><h3>1. Telegraf PostgreSQL input plugin</h3>
<p>Telegraf has a <a href="https://github.com/influxdata/telegraf/tree/master/plugins/inputs/postgresql" rel="noreferrer">PostgreSQL input plugin</a> which provides some built-in metrics from the <code>pg_stat_database</code> and <code>pg_stat_bgwriter</code>
 views. But this plugin cannot be configured to run any custom SQL 
script to gather the data that we want. And the built-in metrics are a 
good starting point, but not enough. So, I rejected it.</p><h3>2. Telegraf postgresql_extensible input plugin</h3>
<p>Telegraf has another PostgreSQL input plugin called <a href="https://github.com/influxdata/telegraf/tree/master/plugins/inputs/postgresql_extensible" rel="noreferrer">postgresql_extensible</a>.
 At first glance, this looks promising: it can run any custom query, and
 multiple queries can be defined in its configuration file.</p> <p>However, there is an <a href="https://github.com/influxdata/telegraf/issues/5009" rel="noreferrer">open issue</a>
 due to which this plugin does not run the specified query against all 
databases, but only against the database name specified in the 
connection string.</p> <p>One way this can still work is to specify multiple input blocks in the Telegraf config file, one for each database.</p><figure>
  <pre><code class="language-toml">[[inputs.postgresql_extensible]]
  address = &quot;host=localhost user=postgres dbname=database1&quot;
  [[inputs.postgresql_extensible.query]]
    script=&quot;db_stats.sql&quot;

[[inputs.postgresql_extensible]]
  address = &quot;host=localhost user=postgres dbname=database2&quot;
  [[inputs.postgresql_extensible.query]]
    script=&quot;db_stats.sql&quot;</code></pre>
  </figure>
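<p>If the list of databases is known up front, the repetition above can at least be generated rather than hand-written. A sketch (my own workaround idea, not from the plugin itself), assuming the database names come from something like <code>SELECT datname FROM pg_database WHERE NOT datistemplate</code>:</p>

```python
# Sketch: generate one [[inputs.postgresql_extensible]] block per database.
# In practice the list would come from e.g.
#   psql -Atc "SELECT datname FROM pg_database WHERE NOT datistemplate"
# here it is just a hardcoded example list.

BLOCK_TEMPLATE = """\
[[inputs.postgresql_extensible]]
  address = "host=localhost user=postgres dbname={db}"
  [[inputs.postgresql_extensible.query]]
    script="db_stats.sql"
"""

def render_config(databases):
    """Return a Telegraf config snippet with one input block per database."""
    return "\n".join(BLOCK_TEMPLATE.format(db=db) for db in databases)

if __name__ == "__main__":
    print(render_config(["database1", "database2"]))
```

This still has to be re-run whenever databases are added or removed, so it only papers over the problem.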
<p>But, <strong>configuring this does not scale</strong>, especially if the database names are dynamic or we don’t want to hardcode them in the config.</p> <p>But I really liked the configuration method of this plugin, and I think this will work very well for my use case once the <a href="https://github.com/influxdata/telegraf/issues/5009" rel="noreferrer">associated Telegraf issue</a> gets resolved.</p><h3>3. Using a monitoring package like pgwatch2</h3>
<p>Another method I found was to use a package like <a href="https://github.com/cybertec-postgresql/pgwatch2" rel="noreferrer">pgwatch2</a>. This is a self-contained solution for PostgreSQL monitoring and includes dashboards as well.</p> <p>Its main components are:</p><ol><li><u>A metrics collector service</u>.
 This can either be run centrally and “pull” metrics from one or more 
PostgreSQL instances, or alongside each PostgreSQL instance (like a 
sidecar) and “push” metrics to a metrics storage backend.</li><li><u>Metrics storage backend</u>. pgwatch2 supports multiple metrics storage backends like bare PostgreSQL, TimescaleDB, InfluxDB, Prometheus, and Graphite.</li><li><u>Grafana dashboards</u></li><li><u>A configuration layer</u> and associated UI to configure all of the above.</li></ol><p>I
 really liked this tool as well, but felt like this might be too complex
 for my needs. For example, it monitors a lot more than what I want to 
monitor, and it has some complexity to handle multiple PostgreSQL 
versions and multiple deployment configurations.</p> <p>But I will definitely keep this in mind for a more “batteries included” approach to PostgreSQL monitoring for future projects.</p><h2>My solution: custom Telegraf plugin</h2>
<p>Telegraf supports writing an external custom plugin, and running it via the <a href="https://github.com/influxdata/telegraf/tree/master/plugins/inputs/execd" rel="noreferrer">execd plugin</a>. The <code>execd</code> plugin runs an external program as a long-running daemon.</p> <p>This
 approach enabled me to build the exact features I wanted, while also 
keeping things simple enough to someday revert to using the Telegraf 
built-in plugin for PostgreSQL.</p> <p>The custom plugin code can be found at <a href="https://github.com/srijan/telegraf-execd-pg-custom" rel="noreferrer">this GitHub repo</a>. Note that I’ve also included the <code>line_protocol.py</code> file from the InfluxDB Python SDK so that I would not have to install the whole SDK just for line protocol encoding.</p> <p>What this plugin (and its included configuration) does:</p><ol><li>Runs as a daemon using the Telegraf execd plugin.</li><li>When Telegraf asks for data (by sending a newline on STDIN), it runs the queries defined in the plugin’s config file (against the configured databases), converts the results into InfluxDB line protocol, and sends them to Telegraf.</li><li>Queries can be defined to run either on a single database, or on all databases that the configured PostgreSQL user has access to.</li></ol><p>This plugin solves the issue with Telegraf’s postgresql_extensible plugin for me—I don’t need to manually define the list of databases to be able to run queries against all of them.</p> <p>This is what the custom plugin configuration looks like:</p><figure>
  <pre><code class="language-toml">[postgresql_custom]
address=&quot;&quot;

[[postgresql_custom.query]]
sqlquery=&quot;select pg_database_size(current_database()) as size_b;&quot;
per_db=true
measurement=&quot;pg_db_size&quot;

[[postgresql_custom.query]]
script=&quot;queries/backends.sql&quot;
per_db=true
measurement=&quot;pg_backends&quot;

[[postgresql_custom.query]]
script=&quot;queries/db_stats.sql&quot;
per_db=true
measurement=&quot;pg_db_stats&quot;

[[postgresql_custom.query]]
script=&quot;queries/table_stats.sql&quot;
per_db=true
tagvalue=&quot;table_name,schema&quot;
measurement=&quot;pg_table_stats&quot;</code></pre>
  </figure>
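<p>For reference, the core of such an execd-based plugin is just a read-evaluate-print loop: block on STDIN, and answer each newline from Telegraf with InfluxDB line protocol on STDOUT. Below is a simplified sketch of that loop (not the actual plugin code; <code>run_queries</code> here is a hypothetical stand-in for the query and config handling in the repo):</p>

```python
import sys
import time

def _fmt_field(v):
    """Format one field value per InfluxDB line protocol rules."""
    if isinstance(v, bool):  # check bool before int: bool is an int subclass
        return "true" if v else "false"
    if isinstance(v, int):
        return f"{v}i"       # integer fields carry a trailing 'i'
    if isinstance(v, float):
        return str(v)
    return '"' + str(v).replace('"', '\\"') + '"'

def to_line_protocol(measurement, tags, fields, ts_ns=None):
    """Minimal line protocol encoder (no escaping of commas/spaces in names)."""
    tag_str = "".join(f",{k}={v}" for k, v in sorted(tags.items()))
    field_str = ",".join(f"{k}={_fmt_field(v)}" for k, v in sorted(fields.items()))
    ts = ts_ns if ts_ns is not None else time.time_ns()
    return f"{measurement}{tag_str} {field_str} {ts}"

def run_queries():
    # Hypothetical stand-in: the real plugin runs the configured SQL against
    # each database and yields (measurement, tags, fields) tuples.
    yield ("pg_db_size", {"db": "database1"}, {"size_b": 123456789})

def main():
    # Telegraf's execd input (with signal = "STDIN") writes a newline to our
    # STDIN once per collection interval; we answer with line protocol.
    for _ in sys.stdin:
        for measurement, tags, fields in run_queries():
            print(to_line_protocol(measurement, tags, fields))
        sys.stdout.flush()

# Under Telegraf, this would end with: main()
```

Line protocol is just <code>measurement,tag=value field=value timestamp</code>; the actual plugin uses the bundled <code>line_protocol.py</code> encoder rather than a toy one like this.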
<p>Any queries defined with <code>per_db=true</code> will be run against all databases. Queries can be specified either inline, or using a separate file.</p> <p>The <a href="https://github.com/srijan/telegraf-execd-pg-custom" rel="noreferrer">repository for this plugin</a>
 has the exact queries configured above. It also has the Grafana 
dashboard JSON which can be imported to get the same dashboard as above.</p><h2>Future optimizations</h2>
<ul><li>Monitoring related to replication is not added yet, but can be added easily.</li><li>Switch to a non-superuser account (PostgreSQL 10+ provides the <code>pg_monitor</code> role for this).</li><li>This does not yet support running different queries depending on the version of the target PostgreSQL system.</li></ul><hr />
<p>Let me know in the comments below if you have any doubts or suggestions to make this better.</p>]]></content:encoded>
    <comments>https://srijan.ch/advanced-postgresql-monitoring-using-telegraf#comments</comments>
    <slash:comments>3</slash:comments>
  </item><item>
    <title>PostgreSQL replication using Bucardo</title>
    <description><![CDATA[Keeping a live replica of selected PostgreSQL tables using Bucardo]]></description>
    <link>https://srijan.ch/postgresql-replication-using-bucardo</link>
    <guid isPermaLink="false">6030d3dab5e0920001f557cf</guid>
    <category><![CDATA[devops]]></category>
    <category><![CDATA[postgresql]]></category>
    <dc:creator>Srijan Choudhary</dc:creator>
    <pubDate>Tue, 15 Sep 2015 18:05:00 +0000</pubDate>
    <media:content url="https://srijan.ch/media/pages/blog/postgresql-replication-using-bucardo/71791f08a7-1699621096/photo-1551356277-dbb545a2d493.jpg" medium="image" />
    <content:encoded><![CDATA[<figure data-ratio="auto">
    <img src="https://srijan.ch/media/pages/blog/postgresql-replication-using-bucardo/71791f08a7-1699621096/photo-1551356277-dbb545a2d493.jpg" alt="PostgreSQL Replication using Bucardo">
  
  </figure>
<p>There are many different ways to use replication in PostgreSQL, whether for high<br />
availability (with failover), or load balancing (for scaling), or just for<br />
keeping a backup. Among the various tools I found online, I thought Bucardo was<br />
the best for my use case: keeping a live backup of a few important tables.</p>
<p>I've assumed the following databases:</p>
<ul>
<li>Primary: Hostname = <code>host_a</code>, Database = <code>btest</code></li>
<li>Backup: Hostname = <code>host_b</code>, Database = <code>btest</code></li>
</ul>
<p>We will install Bucardo on the primary database host (it requires its own database<br />
to keep track of things).</p>
<ol>
<li>
<p>Install PostgreSQL</p>
<pre><code class="language-shell-session"> sudo apt-get install postgresql-9.4</code></pre>
</li>
<li>
<p>Install dependencies on <code>host_a</code></p>
<pre><code class="language-shell-session"> sudo apt-get install libdbix-safe-perl libdbd-pg-perl libboolean-perl build-essential postgresql-plperl-9.4</code></pre>
</li>
<li>
<p>On <code>host_a</code>, download and extract the Bucardo source</p>
<pre><code class="language-shell-session"> wget https://github.com/bucardo/bucardo/archive/5.4.0.tar.gz
 tar xvfz 5.4.0.tar.gz</code></pre>
</li>
<li>
<p>On <code>host_a</code>, build and install (from inside the extracted directory)</p>
<pre><code class="language-shell-session"> perl Makefile.PL
 make
 sudo make install
 sudo mkdir /var/run/bucardo
 sudo mkdir /var/log/bucardo</code></pre>
</li>
<li>
<p>Create the <code>bucardo</code> user and database on all hosts</p>
<pre><code class="language-sql"> CREATE USER bucardo SUPERUSER PASSWORD 'random_password';
 CREATE DATABASE bucardo;
 GRANT ALL ON DATABASE bucardo TO bucardo;</code></pre>
<p>Note: All commands from now on are to be run on <code>host_a</code> only.</p>
</li>
<li>
<p>On <code>host_a</code>, set a password for the <code>postgres</code> user:</p>
<pre><code class="language-sql"> ALTER USER postgres PASSWORD 'random_password';</code></pre>
</li>
<li>
<p>On <code>host_a</code>, add this to the installation user's <code>~/.pgpass</code> file:</p>
<pre><code class="language-ini"> host_a:5432:*:postgres:random_password
 host_a:5432:*:bucardo:random_password</code></pre>
<p>Also add entries for the other hosts for which users were created in step 5.</p>
<p>Note: It is also a good idea to chmod the <code>~/.pgpass</code> file to <code>0600</code>.</p>
</li>
<li>
<p>Run the bucardo install command:</p>
<pre><code class="language-shell-session"> bucardo -h host_a install</code></pre>
</li>
<li>
<p>Copy schema from A to B:</p>
<pre><code class="language-shell-session"> psql -h host_b -U bucardo template1 -c "drop database if exists btest;"
 psql -h host_b -U bucardo template1 -c "create database btest;"
 pg_dump -U bucardo --schema-only -h host_a btest | psql -U bucardo -h host_b btest</code></pre>
</li>
<li>
<p>Add databases to bucardo config</p>
<pre><code class="language-shell-session"> bucardo -h host_a -U bucardo add db main db=btest user=bucardo pass=host_a_pass host=host_a
 bucardo -h host_a -U bucardo add db bak1 db=btest user=bucardo pass=host_b_pass host=host_b</code></pre>
<p>This will save the database details (host, port, user, password) to the bucardo<br />
database.</p>
</li>
<li>
<p>Add tables to be synced</p>
<p>To add all tables:</p>
<pre><code class="language-shell-session"> bucardo -h host_a -U bucardo add all tables db=main relgroup=btest_relgroup</code></pre>
<p>To add one table:</p>
<pre><code class="language-shell-session"> bucardo -h host_a -U bucardo add table table_name db=main relgroup=btest_relgroup</code></pre>
<p>Note: Only tables which have a primary key can be added here. This is a<br />
limitation of Bucardo.</p>
</li>
<li>
<p>Add db group</p>
<pre><code class="language-shell-session"> bucardo -h host_a -U bucardo add dbgroup btest_dbgroup main:source bak1:target</code></pre>
</li>
<li>
<p>Create sync</p>
<pre><code class="language-shell-session"> bucardo -h host_a -U bucardo add sync btest_sync dbgroup=btest_dbgroup relgroup=btest_relgroup conflict_strategy=bucardo_source onetimecopy=2 autokick=0</code></pre>
</li>
<li>
<p>Start the bucardo service</p>
<pre><code class="language-shell-session"> sudo bucardo -h host_a -U bucardo -P random_password start</code></pre>
<p>Note that this command requires passing the password because it uses sudo,<br />
and the root user's <code>.pgpass</code> file does not have the credentials saved for the bucardo<br />
user.</p>
</li>
<li>
<p>Run sync once</p>
<pre><code class="language-shell-session"> bucardo -h host_a -U bucardo kick btest_sync 0</code></pre>
</li>
<li>
<p>Set auto-kick on any changes</p>
<pre><code class="language-shell-session"> bucardo -h host_a -U bucardo update sync btest_sync autokick=1
 bucardo -h host_a -U bucardo reload config</code></pre>
</li>
</ol>
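<p>Once the sync is running, it is worth verifying that <code>host_b</code> is actually keeping up. One quick sanity check (my own sketch, not part of Bucardo) is to fetch per-table row counts from both hosts, e.g. with <code>SELECT relname, n_live_tup FROM pg_stat_user_tables;</code>, and diff the results:</p>

```python
# Sketch: diff per-table row counts fetched from primary and backup.
# n_live_tup from pg_stat_user_tables is an estimate; run SELECT count(*)
# per table if you need exact numbers.

def diff_counts(primary, backup):
    """Return {table: (primary_count, backup_count)} for tables that differ."""
    mismatches = {}
    for table in sorted(set(primary) | set(backup)):
        p, b = primary.get(table), backup.get(table)
        if p != b:
            mismatches[table] = (p, b)
    return mismatches

# Example with hardcoded counts:
primary = {"users": 100, "orders": 2500}
backup = {"users": 100, "orders": 2400}
print(diff_counts(primary, backup))  # {'orders': (2500, 2400)}
```

A small mismatch right after setup may just mean a sync is still in flight; a persistent one means the sync needs investigating.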
<p>That's it. Now, the tables specified in step 11 will be replicated from <code>host_a</code><br />
to <code>host_b</code>.</p>
<p>I also plan to write soon about the other alternatives I've tried.</p>]]></content:encoded>
    <comments>https://srijan.ch/postgresql-replication-using-bucardo#comments</comments>
    <slash:comments>6</slash:comments>
  </item></channel>
</rss>
