<?xml version="1.0" encoding="utf-8"?><rss version="2.0"
    xmlns:content="http://purl.org/rss/1.0/modules/content/"
    xmlns:wfw="http://wellformedweb.org/CommentAPI/"
    xmlns:dc="http://purl.org/dc/elements/1.1/"
    xmlns:atom="http://www.w3.org/2005/Atom"
    xmlns:media="http://search.yahoo.com/mrss/"
    xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
    xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
>
<channel>
  <title>Srijan Choudhary, all posts</title>
  <link>https://srijan.ch/feed/all</link>
  <lastBuildDate>Mon, 29 Dec 2025 22:20:00 +0000</lastBuildDate>
  <image>
    <url>https://srijan.ch/assets/favicon/favicon-32x32.png</url>
    <title>Srijan Choudhary, all posts</title>
    <link>https://srijan.ch/feed/all</link>
  </image>
  <sy:updatePeriod>daily</sy:updatePeriod>
  <sy:updateFrequency>1</sy:updateFrequency>
  <generator>Kirby</generator>
  <atom:link href="https://srijan.ch/feed/all.xml" rel="self" type="application/rss+xml" />
  <description>Srijan Choudhary&#039;s Articles and Notes Feed</description>
  <item>
    <title>2025-12-29-001</title>
    <description><![CDATA[Faced a failing disk in my raidz2 ZFS pool today. Recovery was pretty simple: Asked the service provider to replace the disk Found the new disk ID using: lsblk -o NAME,SIZE,MODEL,SERIAL,LABEL,FSTYPE ls -ltrh /dev/disk/by-id/ata-* Started the resilver using: sudo zpool replace lake &lt;old_disk_id&gt; &lt;new_disk_id&gt; Watched the status using: watch zpool status -v Resilvering is still ongoing, but hopefully …]]></description>
    <link>https://srijan.ch/notes/2025-12-29-001</link>
    <guid isPermaLink="false">tag:srijan.ch:/notes/2025-12-29-001</guid>
    <category><![CDATA[linux]]></category>
    <dc:creator>Srijan Choudhary</dc:creator>
    <pubDate>Mon, 29 Dec 2025 22:20:00 +0000</pubDate>
    <content:encoded><![CDATA[<p>Faced a failing disk in my raidz2 ZFS pool today.</p>
<p>Recovery was pretty simple:</p>
<ol>
<li>Asked the service provider to replace the disk</li>
<li>Found the new disk ID using:<pre><code>lsblk -o NAME,SIZE,MODEL,SERIAL,LABEL,FSTYPE
ls -ltrh /dev/disk/by-id/ata-*</code></pre>
</li>
<li>Started the resilver using:<pre><code>sudo zpool replace lake &lt;old_disk_id&gt; &lt;new_disk_id&gt;</code></pre>
</li>
<li>Watched the status using:<pre><code>watch zpool status -v</code></pre>
</li>
</ol>
<p>Resilvering is still ongoing, but will hopefully complete without issues. I'll run a manual <code>zpool scrub</code> at the end to make sure everything is okay.</p>]]></content:encoded>
    <comments>https://srijan.ch/notes/2025-12-29-001#comments</comments>
    <slash:comments>0</slash:comments>
  </item><item>
    <title>2025-12-15-001</title>
    <description><![CDATA[A small elisp snippet that I found useful. I often switch between terminals and #Emacs, and they have slightly different behaviors for C-w. This makes it behave the same in Emacs as it does in bash/zsh/fish, etc.: it deletes the previous word. It retains the kill-region behavior if a region is actually selected. (defun kill-region-or-backward-word () "If the region is active and non-empty, call …]]></description>
    <link>https://srijan.ch/notes/2025-12-15-001</link>
    <guid isPermaLink="false">tag:srijan.ch:/notes/2025-12-15-001</guid>
    <category><![CDATA[emacs]]></category>
    <dc:creator>Srijan Choudhary</dc:creator>
    <pubDate>Mon, 15 Dec 2025 06:55:00 +0000</pubDate>
    <content:encoded><![CDATA[<p>A small elisp snippet that I found useful. I often switch between terminals and <a href="https://srijan.ch/tags/emacs" class="p-category">#Emacs</a>, and they have slightly different behaviors for <code>C-w</code>. This makes it behave the same in Emacs as it does in bash/zsh/fish, etc.: it deletes the previous word. It retains the <code>kill-region</code> behavior if a region is actually selected.</p>
<pre><code class="language-emacs-lisp">(defun kill-region-or-backward-word ()
  "If the region is active and non-empty, call `kill-region'.
Otherwise, call `backward-kill-word'."
  (interactive)
  (call-interactively
   (if (use-region-p) 'kill-region 'backward-kill-word)))
(global-set-key (kbd "C-w") 'kill-region-or-backward-word)</code></pre>
<p>Ref: <a href="https://stackoverflow.com/questions/13844453/how-do-i-make-c-w-behave-the-same-as-bash">https://stackoverflow.com/questions/13844453/how-do-i-make-c-w-behave-the-same-as-bash</a></p>]]></content:encoded>
    <comments>https://srijan.ch/notes/2025-12-15-001#comments</comments>
    <slash:comments>4</slash:comments>
  </item><item>
    <title>2025-12-09-002</title>
    <description><![CDATA[tramp-hlo looks interesting. Anything that can make tramp on #Emacs snappier is a good thing in my book.]]></description>
    <link>https://srijan.ch/notes/2025-12-09-002</link>
    <guid isPermaLink="false">tag:srijan.ch:/notes/2025-12-09-002</guid>
    <category><![CDATA[emacs]]></category>
    <dc:creator>Srijan Choudhary</dc:creator>
    <pubDate>Tue, 09 Dec 2025 22:50:00 +0000</pubDate>
    <content:encoded><![CDATA[<p><a href="https://github.com/jsadusk/tramp-hlo">tramp-hlo</a> looks interesting. Anything that can make tramp on <a href="https://srijan.ch/tags/emacs" class="p-category">#Emacs</a> snappier is a good thing in my book.</p>]]></content:encoded>
    <comments>https://srijan.ch/notes/2025-12-09-002#comments</comments>
    <slash:comments>1</slash:comments>
  </item><item>
    <title>2025-07-21-001</title>
    <description><![CDATA[gcloud_ssh A simple script that finds a Google Cloud compute VM by IP address across all projects of an organization and runs gcloud compute ssh to connect to it. #!/bin/bash GCLOUD_SSH_FLAGS="--internal-ip" # Get organization ID dynamically get_org_id() { gcloud organizations list --format="value(name)" --limit=1 2&gt;/dev/null | sed 's|organizations/||' } search_and_connect() { local ip_address=$1 echo "Searching …]]></description>
    <link>https://srijan.ch/notes/2025-07-21-001</link>
    <guid isPermaLink="false">tag:srijan.ch:/notes/2025-07-21-001</guid>
    <category><![CDATA[scripts]]></category>
    <dc:creator>Srijan Choudhary</dc:creator>
    <pubDate>Mon, 21 Jul 2025 16:45:00 +0000</pubDate>
    <content:encoded><![CDATA[<h2>gcloud_ssh</h2><p>A simple script that finds a Google Cloud compute VM by IP address across all projects of an organization and runs <code>gcloud compute ssh</code> to connect to it.</p>
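<p>The trickiest part of the script below is pulling the instance name, project, and zone out of the Cloud Asset Inventory search result. As a standalone illustration, here is the same parsing step in Python, run against a mocked response (the resource paths and values here are hypothetical examples, not real gcloud output):</p>
<pre><code class="language-python"># Sketch of the jq/sed parsing step, using a mocked asset-search result.
# The resource paths below are hypothetical examples.
import json

mock_result = json.loads("""[{
  "name": "//compute.googleapis.com/projects/demo-project/zones/us-central1-a/instances/web-1",
  "parentFullResourceName": "//cloudresourcemanager.googleapis.com/projects/demo-project",
  "location": "us-central1-a",
  "state": "RUNNING"
}]""")

first = mock_result[0]
# Equivalent of: jq -r '.[0].name' | sed 's|.*/||'  (keep text after the last '/')
instance_name = first["name"].rsplit("/", 1)[-1]
project = first["parentFullResourceName"].rsplit("/", 1)[-1]
zone = first["location"].rsplit("/", 1)[-1]

print(instance_name, project, zone, first["state"])
</code></pre>
<p>The <code>sed 's|.*/||'</code> calls in the script are just this "keep everything after the last slash" operation.</p>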
<pre><code class="language-bash">#!/bin/bash

GCLOUD_SSH_FLAGS="--internal-ip"

# Get organization ID dynamically
get_org_id() {
    gcloud organizations list --format="value(name)" --limit=1 2&gt;/dev/null | sed 's|organizations/||'
}

search_and_connect() {
    local ip_address=$1

    echo "Searching for IP: $ip_address across organization..."

    # Get organization ID
    ORG_ID=$(get_org_id)
    if [ -z "$ORG_ID" ]; then
        echo "Failed to get organization ID. Make sure you have organization-level access."
        exit 1
    fi

    # Search for instance with this IP address
    RESULT=$(gcloud asset search-all-resources \
        --scope=organizations/$ORG_ID \
        --query="$ip_address" \
        --asset-types='compute.googleapis.com/Instance' \
        --format=json 2&gt;/dev/null)

    if [ -z "$RESULT" ] || [ "$RESULT" = "[]" ]; then
        echo "IP address $ip_address not found in organization."
        exit 1
    fi

    # Parse JSON to extract instance details
    INSTANCE_NAME=$(echo "$RESULT" | jq -r '.[0].name' | sed 's|.*/||')
    PROJECT=$(echo "$RESULT" | jq -r '.[0].parentFullResourceName' | sed 's|.*/||')
    ZONE=$(echo "$RESULT" | jq -r '.[0].location' | sed 's|.*/||')
    STATE=$(echo "$RESULT" | jq -r '.[0].state')

    if [ "$INSTANCE_NAME" = "null" ] || [ "$PROJECT" = "null" ] || [ "$ZONE" = "null" ]; then
        echo "Failed to parse instance details from search result."
        echo "Raw result: $RESULT"
        exit 1
    fi

    # Check if instance is running
    if [ "$STATE" != "RUNNING" ]; then
        echo "Instance $INSTANCE_NAME is not running (state: $STATE)."
        echo "Cannot connect to a non-running instance."
        exit 1
    fi

    echo "Found instance: $INSTANCE_NAME in zone: $ZONE (project: $PROJECT)"

    # Generate and display the SSH command
    SSH_COMMAND="gcloud compute ssh $INSTANCE_NAME --zone=$ZONE --project=$PROJECT ${GCLOUD_SSH_FLAGS}"
    echo "SSH command: $SSH_COMMAND"

    # Execute the SSH command
    echo "Connecting to $INSTANCE_NAME..."
    exec $SSH_COMMAND
}

# Main script logic
case "${1:-}" in
    "")
        echo "Usage: $0 &lt;IP_ADDRESS&gt;"
        echo "  &lt;IP_ADDRESS&gt;  - Connect to instance with this IP"
        exit 1
        ;;
    *)
        search_and_connect "$1"
        ;;
esac
</code></pre>]]></content:encoded>
    <comments>https://srijan.ch/notes/2025-07-21-001#comments</comments>
    <slash:comments>1</slash:comments>
  </item><item>
    <title>2025-06-11-001</title>
    <description><![CDATA[Quick note for me to generate #Emacs TAGS file for an #Erlang project: find {src,apps,_build/default,$(dirname $(which erl))/../lib} -name "*.[he]rl" | xargs realpath --relative-to="$(pwd)" | etags.emacs -o TAGS - The relative path ensures that this works over tramp as well.]]></description>
    <link>https://srijan.ch/notes/2025-06-11-001</link>
    <guid isPermaLink="false">tag:srijan.ch:/notes/2025-06-11-001</guid>
    <category><![CDATA[erlang]]></category>
    <category><![CDATA[emacs]]></category>
    <dc:creator>Srijan Choudhary</dc:creator>
    <pubDate>Wed, 11 Jun 2025 02:35:00 +0000</pubDate>
    <content:encoded><![CDATA[<p>Quick note for me to generate <a href="/tags/emacs" class="p-category">#Emacs</a> TAGS file for an <a href="/tags/erlang" class="p-category">#Erlang</a> project:</p>
<pre><code>find {src,apps,_build/default,$(dirname $(which erl))/../lib} -name "*.[he]rl" | xargs realpath --relative-to="$(pwd)" | etags.emacs -o TAGS -</code></pre>
<p>The relative path ensures that this works over tramp as well.</p>]]></content:encoded>
    <comments>https://srijan.ch/notes/2025-06-11-001#comments</comments>
    <slash:comments>4</slash:comments>
  </item><item>
    <title>2025-05-02-001</title>
    <description><![CDATA[I didn't know that Aloe Vera can have flowers!]]></description>
    <link>https://srijan.ch/notes/2025-05-02-001</link>
    <guid isPermaLink="false">tag:srijan.ch:/notes/2025-05-02-001</guid>
    <dc:creator>Srijan Choudhary</dc:creator>
    <pubDate>Fri, 02 May 2025 16:50:00 +0000</pubDate>
    <media:content url="https://srijan.ch/media/pages/notes/2025-05-02-001/4b4f6252c9-1746204641/20250502_093306.jpg" medium="image" />
    <content:encoded><![CDATA[<p>I didn't know that Aloe Vera can have flowers!</p>
<figure><picture><source sizes="(min-width: 768px) 704px, calc(93.6vw - 45px)" srcset="https://srijan.ch/media/pages/notes/2025-05-02-001/4b4f6252c9-1746204641/20250502_093306-300x.avif 300w, https://srijan.ch/media/pages/notes/2025-05-02-001/4b4f6252c9-1746204641/20250502_093306-600x.avif 600w, https://srijan.ch/media/pages/notes/2025-05-02-001/4b4f6252c9-1746204641/20250502_093306-704x.avif 704w, https://srijan.ch/media/pages/notes/2025-05-02-001/4b4f6252c9-1746204641/20250502_093306-900x.avif 900w, https://srijan.ch/media/pages/notes/2025-05-02-001/4b4f6252c9-1746204641/20250502_093306-1200x.avif 1200w, https://srijan.ch/media/pages/notes/2025-05-02-001/4b4f6252c9-1746204641/20250502_093306-1800x.avif 1800w" type="image/avif"><source sizes="(min-width: 768px) 704px, calc(93.6vw - 45px)" srcset="https://srijan.ch/media/pages/notes/2025-05-02-001/4b4f6252c9-1746204641/20250502_093306-300x.webp 300w, https://srijan.ch/media/pages/notes/2025-05-02-001/4b4f6252c9-1746204641/20250502_093306-600x.webp 600w, https://srijan.ch/media/pages/notes/2025-05-02-001/4b4f6252c9-1746204641/20250502_093306-704x.webp 704w, https://srijan.ch/media/pages/notes/2025-05-02-001/4b4f6252c9-1746204641/20250502_093306-900x.webp 900w, https://srijan.ch/media/pages/notes/2025-05-02-001/4b4f6252c9-1746204641/20250502_093306-1200x.webp 1200w, https://srijan.ch/media/pages/notes/2025-05-02-001/4b4f6252c9-1746204641/20250502_093306-1800x.webp 1800w" type="image/webp"><img alt="A photo of an Aloe Vera plant with a long flower stalk rising out of the center with yellow flowers" class="u-photo" height="704" sizes="(min-width: 768px) 704px, calc(93.6vw - 45px)" src="https://srijan.ch/media/pages/notes/2025-05-02-001/4b4f6252c9-1746204641/20250502_093306-704x.jpg" srcset="https://srijan.ch/media/pages/notes/2025-05-02-001/4b4f6252c9-1746204641/20250502_093306-300x.jpg 300w, https://srijan.ch/media/pages/notes/2025-05-02-001/4b4f6252c9-1746204641/20250502_093306-600x.jpg 600w, 
https://srijan.ch/media/pages/notes/2025-05-02-001/4b4f6252c9-1746204641/20250502_093306-704x.jpg 704w, https://srijan.ch/media/pages/notes/2025-05-02-001/4b4f6252c9-1746204641/20250502_093306-900x.jpg 900w, https://srijan.ch/media/pages/notes/2025-05-02-001/4b4f6252c9-1746204641/20250502_093306-1200x.jpg 1200w, https://srijan.ch/media/pages/notes/2025-05-02-001/4b4f6252c9-1746204641/20250502_093306-1800x.jpg 1800w" title="A photo of an Aloe Vera plant with a long flower stalk rising out of the center with yellow flowers" width="528"></picture></figure>]]></content:encoded>
    <comments>https://srijan.ch/notes/2025-05-02-001#comments</comments>
    <slash:comments>0</slash:comments>
  </item><item>
    <title>2025-03-24-002</title>
    <description><![CDATA[Read Jeremy&#039;s post on quickly switching the default browser. I had a shell script to do this as well. Doing it from Emacs makes more sense because I can have a completion UI. So, here's my modified version for Linux: (defun sj/default-browser (&amp;optional name) "Set the default browser based on the given NAME." (interactive (list (completing-read "Browser: " (split-string …]]></description>
    <link>https://srijan.ch/notes/2025-03-24-002</link>
    <guid isPermaLink="false">tag:srijan.ch:/notes/2025-03-24-002</guid>
    <category><![CDATA[emacs]]></category>
    <category><![CDATA[linux]]></category>
    <dc:creator>Srijan Choudhary</dc:creator>
    <pubDate>Mon, 24 Mar 2025 20:55:00 +0000</pubDate>
    <content:encoded><![CDATA[<p>Read <a href="https://takeonrules.com/2025/02/05/quick-switch-default-browser/">Jeremy&#039;s post on quickly switching the default browser</a>.</p>
<p>I had a shell script to do this as well. Doing it from Emacs makes more sense because I can have a completion UI.</p>
<p>So, here's my modified version for Linux:</p>
<pre><code class="language-elisp">(defun sj/default-browser (&amp;optional name)
  "Set the default browser based on the given NAME."
  (interactive
   (list
    (completing-read
     "Browser: "
     (split-string
      (shell-command-to-string
       "find /usr/share/applications ~/.local/share/applications -name \"*.desktop\" -exec grep -l \"Categories=.*WebBrowser\" {} \\;")
      "\n" t))))
  (let ((browser-desktop (file-name-nondirectory name)))
    (shell-command (format "xdg-mime default %s text/html" browser-desktop))
    (shell-command (format "xdg-mime default %s application/xhtml+xml" browser-desktop))
    (shell-command (format "xdg-mime default %s application/x-extension-html" browser-desktop))
    (shell-command (format "xdg-settings set default-web-browser %s" browser-desktop))))</code></pre>
<p>As a plus, it automatically lists the installed browsers based on <code>.desktop</code> files on your system.</p>]]></content:encoded>
    <comments>https://srijan.ch/notes/2025-03-24-002#comments</comments>
    <slash:comments>2</slash:comments>
  </item><item>
    <title>2025-01-15-001</title>
    <description><![CDATA[I had been facing an issue in #Emacs on my work Mac system: C-S-&lt;tab&gt; was somehow being translated to C-&lt;tab&gt;. I tried to look into key-translation-map to figure out the issue, but could not find anything. Finally, it turned out that I had bound C-&lt;tab&gt; to tab-line-switch-to-next-tab and C-&lt;iso-lefttab&gt; to tab-line-switch-to-prev-tab, but the actual C-S-&lt;tab&gt; was …]]></description>
    <link>https://srijan.ch/notes/2025-01-15-001</link>
    <guid isPermaLink="false">tag:srijan.ch:/notes/2025-01-15-001</guid>
    <category><![CDATA[emacs]]></category>
    <category><![CDATA[TIL]]></category>
    <dc:creator>Srijan Choudhary</dc:creator>
    <pubDate>Wed, 15 Jan 2025 02:20:00 +0000</pubDate>
    <content:encoded><![CDATA[<p>I had been facing an issue in <a href="https://srijan.ch/tags/emacs" class="p-category">#Emacs</a> on my work Mac system: <code>C-S-&lt;tab&gt;</code> was somehow being translated to <code>C-&lt;tab&gt;</code>. I tried to look into <code>key-translation-map</code> to figure out the issue, but could not find anything.</p>
<p>Finally, it turned out that I had bound <code>C-&lt;tab&gt;</code> to <code>tab-line-switch-to-next-tab</code> and <code>C-&lt;iso-lefttab&gt;</code> to <code>tab-line-switch-to-prev-tab</code>, but the actual <code>C-S-&lt;tab&gt;</code> was unbound. <code>C-&lt;iso-lefttab&gt;</code> only works on Linux: it has something to do with how X11 sends the event to the application (and probably some compatibility mode due to which Wayland does the same).</p>
<p>On Mac, once I explicitly bound <code>C-S-&lt;tab&gt;</code> in my Emacs config, it started working correctly.</p>]]></content:encoded>
    <comments>https://srijan.ch/notes/2025-01-15-001#comments</comments>
    <slash:comments>3</slash:comments>
  </item><item>
    <title>Triggering Orgzly sync on Android when Org file changes</title>
    <description><![CDATA[Event based orgzly sync using tasker to prevent conflicts]]></description>
    <link>https://srijan.ch/triggering-orgzly-sync-on-android-when-org-file-changes</link>
    <guid isPermaLink="false">tag:srijan.ch:/triggering-orgzly-sync-on-android-when-org-file-changes</guid>
    <category><![CDATA[emacs]]></category>
    <category><![CDATA[orgmode]]></category>
    <category><![CDATA[android]]></category>
    <category><![CDATA[tasker]]></category>
    <dc:creator>Srijan Choudhary</dc:creator>
    <pubDate>Mon, 13 Jan 2025 18:20:00 +0000</pubDate>
    <content:encoded><![CDATA[<h3>Introduction</h3>
<p>To use my org GTD system on the go, I use the excellent <a href="https://www.orgzlyrevived.com/">Orgzly Revived</a> mobile app for Android. To sync the org files between my laptop and my phone, I use <a href="https://syncthing.net/">syncthing</a> (specifically, the <a href="https://github.com/Catfriend1/syncthing-android">Syncthing Android Fork by Catfriend1</a>).</p>
<p>This allows me to quickly capture things from my phone, get reminder notifications of time-sensitive tasks, and mark tasks complete from my phone. The widgets are also useful for quickly looking at some contexts like errands or calls.</p>
<h3>Sync Issues</h3>
<p>However, Orgzly works by constructing a copy of the contents in its own database and synchronizing it against the files periodically or when some action is taken in the app. Right now, it does not support synchronizing the data when a file changes.</p>
<p>For me, this has sometimes led to a conflict between the Orgzly database and the actual files in the org folder. This only happens if an org file is edited on my laptop and something is also edited in the Orgzly app before syncing.</p>
<p>But the Orgzly app <a href="https://github.com/orgzly-revived/documentation/blob/master/android/public-receiver.org">supports intents</a> that can be used from Tasker, Automate, etc. to trigger an event-based sync.</p>
<h3>Tasker Profile</h3>
<p>So, I created a tasker profile to do this. It was surprisingly easy (I've used tasker before, though not too much). It can be found here: <a href="https://taskernet.com/?public&amp;tags=orgzly&amp;time=AllTime">https://taskernet.com/?public&amp;tags=orgzly&amp;time=AllTime</a>, under the name "Run Orgzly Sync When Org Files Change".</p>
<p>Here's the basic flow of the profile:</p>
<pre><code class="language-less">Profile: Run Orgzly Sync When Org Files Change
Settings: Notification: no
Variables: [ %orgfolder:has value ]
    Event: File Modified [ File:%orgfolder Event:* ]



Enter Task: Orgzly Sync
Settings: Run Both Together

A1: If [ %evtprm1 ~ *.org &amp; %evtprm2 ~ ClosedWrite/MovedTo ]

    A2: Send Intent [
         Action: com.orgzly.intent.action.SYNC_START
         Cat: None
         Package: com.orgzlyrevived
         Class: com.orgzly.android.ActionReceiver
         Target: Broadcast Receiver ]

A3: End If</code></pre>
<h3>How it works</h3>
<ol>
<li>When importing this profile, it will ask for your org folder in the local Android filesystem.</li>
<li>Once selected and the profile is activated, it will start monitoring this folder for inotify events.</li>
<li>When any file is modified (or created, deleted, or moved to/from the folder), this profile is triggered. It receives two parameters: the full path of the affected file and the actual event.</li>
<li>Then, it checks whether the affected file is an org file (ending in <code>.org</code>) AND whether the event is one of ClosedWrite or MovedTo. I filtered to these events because they are usually the last events received for an edit.</li>
<li>If yes, then Orgzly sync is triggered using an intent.</li>
</ol>
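<p>The check in step 4 is simple enough to restate outside Tasker. Here is a Python sketch of the same condition (the paths and event names are illustrative; Tasker passes them in as <code>%evtprm1</code> and <code>%evtprm2</code>):</p>
<pre><code class="language-python"># Sketch of the A1 condition: sync only for .org files on ClosedWrite/MovedTo.
def should_sync(path, event):
    return path.endswith(".org") and event in ("ClosedWrite", "MovedTo")

print(should_sync("/sdcard/org/inbox.org", "ClosedWrite"))   # True
print(should_sync("/sdcard/org/inbox.org.tmp", "ClosedWrite"))  # False: not a .org file
print(should_sync("/sdcard/org/inbox.org", "Open"))          # False: intermediate event
</code></pre>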
<p>I've been using this for the last several months, and it works well as long as both devices are online when making edits. Conflicts can still happen if, for example, I make some edits on my laptop and subsequently on my phone while the phone is offline or the syncthing app is not running because data saver or battery saver is active. In those cases, syncthing eventually creates a conflicted file that I can manually resolve.</p>
<h3>Limitations</h3>
<ol>
<li>It does not watch files in subfolders. Tasker currently does not support recursively watching a folder, but if that is added, it would be a good addition, especially because Orgzly Revived supports nested files.</li>
<li>Since this watches the files in the folder, it also triggers a sync when the Orgzly app itself changes a file. I'm not sure if this can be filtered out somehow. Maybe based on the foreground app? But that seems flaky.</li>
</ol>
<h3>Alternatives</h3>
<ol>
<li>Orgzly Revived has a git sync backend in beta. This might work better with auto-commit &amp; push.</li>
<li>Using Emacs on Android instead of Orgzly is also an option, but I felt it did not work very well without an external keyboard. Also, it does not have widgets.</li>
</ol>]]></content:encoded>
    <comments>https://srijan.ch/triggering-orgzly-sync-on-android-when-org-file-changes#comments</comments>
    <slash:comments>21</slash:comments>
  </item><item>
    <title>My Default Apps at the End of 2024</title>
    <description><![CDATA[I saw a few blog posts with people sharing their default apps for the year, and I wanted to share mine as well. Here's the list: 📨 Mail Service: Fastmail 📮 Mail Client: Fastmail web, mu4e (Emacs), FairEmail (Android) 📝 Notes: Markdown and Org files in denote (Emacs), Markor (Android) ✅ To-Do: GTD using Orgmode (Emacs), Orgzly Revived (Android) 📆 Calendar: Google Calendar 🙍🏻‍♂️ Contacts: Google …]]></description>
    <link>https://srijan.ch/my-default-apps-at-the-end-of-2024</link>
    <guid isPermaLink="false">tag:srijan.ch:/my-default-apps-at-the-end-of-2024</guid>
    <category><![CDATA[Tech]]></category>
    <category><![CDATA[Fun]]></category>
    <dc:creator>Srijan Choudhary</dc:creator>
    <pubDate>Tue, 31 Dec 2024 15:10:00 +0000</pubDate>
    <content:encoded><![CDATA[<p>I saw a few blog posts with people sharing their default apps for the year, and I wanted to share mine as well.</p>
<p>Here's the list:</p>
<ul>
<li>📨 Mail Service: Fastmail</li>
<li>📮 Mail Client: Fastmail web, mu4e (Emacs), FairEmail (Android)</li>
<li>📝 Notes: Markdown and Org files in denote (Emacs), Markor (Android)</li>
<li>✅ To-Do: GTD using Orgmode (Emacs), Orgzly Revived (Android)</li>
<li>📆 Calendar: Google Calendar</li>
<li>🙍🏻‍♂️ Contacts: Google Contacts</li>
<li>📖 RSS Service: Miniflux</li>
<li>🗞️ RSS Client: Miniflux WebUI, Miniflutt (Android), Elfeed (Emacs)</li>
<li>⌨️ Launcher: Krunner (KDE) and Alfred (Mac)</li>
<li>☁️ Cloud storage and Sync: Google Drive, Syncthing</li>
<li>🌅 Photo library: Google Photos</li>
<li>🌐 Web Browser: Zen and Firefox</li>
<li>💬 Chat: WhatsApp and Slack</li>
<li>🔖 Bookmarks: Linkding</li>
<li>📑 Read Later: Linkding</li>
<li>📚 Reading: Kindle Oasis</li>
<li>📜 Word Processing: NA</li>
<li>📈 Spreadsheets: Google Sheets</li>
<li>📊 Presentations: NA</li>
<li>🛒 Shopping Lists: Orgmode</li>
<li>💰 Personal Finance: YNAB</li>
<li>🎵 Music: Roon</li>
<li>🎤 Podcasts: Pocketcasts</li>
<li>🔐 Password Management: 1Password</li>
<li>🤦‍♂️ Social Media: Mastodon</li>
<li>🌤️ Weather: Today Weather</li>
<li>🔎 Search: Kagi</li>
<li>🧮 Code Editor: Emacs and VSCode</li>
<li>🏡 Home Automation: HomeAssistant</li>
</ul>
<p>I realized that I didn't need Word Processing or Presentations at all this year.</p>]]></content:encoded>
    <comments>https://srijan.ch/my-default-apps-at-the-end-of-2024#comments</comments>
    <slash:comments>2</slash:comments>
  </item><item>
    <title>Capturing slack messages directly into Emacs orgmode inbox</title>
    <description><![CDATA[Learn how to seamlessly capture Slack messages into your Emacs Orgmode (GTD) inbox using a custom browser userscript.]]></description>
    <link>https://srijan.ch/capturing-slack-messages-directly-into-emacs-orgmode-inbox</link>
    <guid isPermaLink="false">tag:srijan.ch:/capturing-slack-messages-directly-into-emacs-orgmode-inbox</guid>
    <category><![CDATA[emacs]]></category>
    <category><![CDATA[orgmode]]></category>
    <category><![CDATA[slack]]></category>
    <category><![CDATA[gtd]]></category>
    <dc:creator>Srijan Choudhary</dc:creator>
    <pubDate>Fri, 27 Dec 2024 11:50:00 +0000</pubDate>
    <media:content url="https://srijan.ch/media/pages/blog/capturing-slack-messages-directly-into-emacs-orgmode-inbox/d0e1ffec00-1735275760/screenshot-2024-12-27-at-12.01.31am.png" medium="image" />
    <content:encoded><![CDATA[<figure data-ratio="auto">
    <img src="https://srijan.ch/media/pages/blog/capturing-slack-messages-directly-into-emacs-orgmode-inbox/d0e1ffec00-1735275760/screenshot-2024-12-27-at-12.01.31am.png" alt="Screenshot of a custom org capture button in Slack message popup">
  
    <figcaption class="text-center">
    Org capture button in Slack message popup  </figcaption>
  </figure>
<p>Ever since switching to orgmode as my GTD system, I've been trying to (slowly) optimize my capture system for things coming to me from different places. One of the things that I was not satisfied with was capturing and processing inputs from Slack.</p>
<p>Slack messages can easily get lost in the fast-paced stream of conversations. Manually copying and pasting messages introduces too much friction, and also removes some context (backlinks), unless a link to the original message is also manually copied. There are ways to save messages in Slack itself, but it just makes it another Inbox to check, and I'm trying to reduce that.</p>
<p>I started by using the saved messages feature to save messages in a single place, and later either shifting them to my org inbox manually, or directly working on them and completing them. Then, I shifted to using the <a href="https://todoist.com/integrations/apps/slack">Slack Todoist integration</a> to save slack messages to Todoist, and <a href="https://srijan.ch/todoist-cloud-inbox-for-gtd-in-emacs-orgmode">pulling them from Todoist into my org Inbox</a>.</p>
<p>I've now found a better mechanism that allows me to seamlessly capture a Slack message with its context into orgmode. Here's a demo:</p><figure>
  <video controls muted preload="auto"><source src="https://srijan.ch/media/pages/blog/capturing-slack-messages-directly-into-emacs-orgmode-inbox/4ee876ac11-1728494126/slack-to-org-demo.webm" type="video/webm"></video>    <figcaption>Demo video showing a custom slack button to trigger org capture with the selected message's contents and a link back to the message</figcaption>
  </figure>
<h2>Demo breakdown</h2>
<p>This method uses userscripts to add a custom button to the hover menu that comes up for any message in Slack, and triggers org protocol capture with the message details when the button is clicked. The message details include the sender, message text, and a direct link back to the message. I've set up this protocol handler to ask me to enter the heading of the captured item, but it can just as easily be set up to capture the message directly without user input.</p>
<h2>Setup and Implementation</h2>
<h3>Prerequisites</h3>
<ol>
<li>Slack running in a browser (instead of a desktop app)</li>
<li>Browser extension for userscripts (Tampermonkey, Violentmonkey, Greasemonkey, etc)</li>
<li>Emacs with orgmode installed</li>
<li>Org protocol setup</li>
</ol>
<p>I didn't find a good way to inject userscripts into the Slack desktop app, so for now, this method requires using Slack in a browser. It also works when Slack is installed using the "Install Page as App" feature of the browser.</p>
<p>Update: I got a comment that using this in the Slack app is also possible, though I've not evaluated this yet. You can <a href="#komment_6503f91cf9943cf408e47743f863bc12">check this comment for details</a>.</p>
<h3>Setting up Org Protocol Capture</h3>
<p>Setting up org protocol capture involves two steps: configuring your OS to use Emacs to open <code>org-protocol://</code> links, and configuring Emacs to save the captured data as you want.</p>
<p>For OS setup, please check the guides for your OS here: <a href="https://orgmode.org/worg/org-contrib/org-protocol.html#orge00964c">https://orgmode.org/worg/org-contrib/org-protocol.html#orge00964c</a></p>
<p>On Emacs side, this is the minimal config required:</p>
<pre><code class="language-emacs-lisp">(server-start)
(setq-default org-agenda-files '("~/org"))
(setq-default my-org-inbox
    (expand-file-name "inbox.org" "~/org"))
(setq-default org-capture-templates
      '(("i" "Inbox" entry (file my-org-inbox)
         "* %?\n%i\n%U"
         :kill-buffer t)
        ("l" "Inbox with link" entry (file my-org-inbox)
         "* %?\n%i\n%a\n%U"
         :kill-buffer t)))
(setq-default org-protocol-default-template-key "l")
(require 'org-protocol)</code></pre>
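<p>Under the hood, the userscript simply opens an <code>org-protocol://capture</code> URL whose percent-encoded query parameters carry the captured data. Here is a Python sketch of building such a URL (the message link and text are made-up placeholders; template key "l" is the "Inbox with link" template):</p>
<pre><code class="language-python"># Sketch: building an org-protocol capture URL with percent-encoded parameters.
from urllib.parse import urlencode, parse_qs, urlsplit

params = {
    "template": "l",  # the "Inbox with link" capture template
    "url": "https://example.slack.com/archives/C123/p456",  # hypothetical message link
    "title": "Message from Alice",
    "body": "Can you look at the deploy failure?",
}
capture_url = "org-protocol://capture?" + urlencode(params)
print(capture_url)
</code></pre>
<p>When the OS handler passes this URL to emacsclient, org-protocol decodes the parameters and fills the capture template's <code>%a</code>, <code>%i</code>, etc.</p>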
<p>An optional enhancement that can be seen in the demo is: open a new Emacs frame to capture the message, then close it automatically after the capture is done. For this, I use the config snippet from prot's excellent post: <a href="https://protesilaos.com/codelog/2024-09-19-emacs-command-popup-frame-emacsclient/">https://protesilaos.com/codelog/2024-09-19-emacs-command-popup-frame-emacsclient/</a></p>
<p>To run emacsclient with the popup frame parameter, I use:</p>
<pre><code class="language-shell-session">emacsclient --create-frame -F \
    '((prot-window-popup-frame . t) (name . "org-protocol-capture") (width . 80))' \
    -- %u</code></pre>
<p>Emacsclient's args can be set in the desktop entry file (Linux) or the org-protocol.app file (macOS) when setting up org-protocol.</p>
<h3>The userscript</h3>
<p>For the userscript, I searched for an existing published userscript for Slack that <a href="https://greasyfork.org/en/scripts/500127-slack-quick-edit-button">adds a button</a>, then tweaked it a bit to configure the button according to my needs.</p>
<p>I've published the userscript here: <a href="https://greasyfork.org/en/scripts/521908-slack-org-protocol-capture">https://greasyfork.org/en/scripts/521908-slack-org-protocol-capture</a></p>
<p>From this, I learned about an interesting API called <a href="https://developer.mozilla.org/en-US/docs/Web/API/MutationObserver">MutationObserver</a> that seems very useful for userscripts.</p>
<h2>Notes</h2>
<h3>Using emacs-slack</h3>
<p>Another simple approach for this can be to use the <a href="https://github.com/emacs-slack/emacs-slack">emacs-slack</a> package to directly use Slack from Emacs. I tried this, and it does not work very well for me because:</p>
<ol>
<li>My org limits Slack session authentication to 1 week, and authenticating in emacs-slack is a little cumbersome.</li>
<li>We also use Slack huddles, which do not work well with emacs-slack.</li>
</ol>
<h3>Possible Improvements</h3>
<ol>
<li>Make org capture template configurable</li>
<li>Add tooltip to the button</li>
<li>Pass even more metadata like message timestamp, channel name, emojis, etc.</li>
<li>Maybe some keyboard shortcuts to trigger the capture?</li>
</ol>
<h3>Limitations / Drawbacks</h3>
<p>Since this depends on the HTML structure of the Slack web app, it is fragile and can break whenever the app changes.</p>
<p>This userscript uses the MutationObserver API to observe DOM changes in the whole Slack workspace, so it has some performance impact. However, I've been using it daily for the last several months and have not noticed any issues.</p>
    <comments>https://srijan.ch/capturing-slack-messages-directly-into-emacs-orgmode-inbox#comments</comments>
    <slash:comments>20</slash:comments>
  </item><item>
    <title>2024-10-08-002</title>
    <description><![CDATA[Read an interesting set of posts today: https://lethain.com/extract-the-kernel/ and https://lethain.com/executive-translation/ . The basic concept is: ... executives are generally directionally correct but specifically wrong, and it’s your job to understand the overarching direction without getting distracted by the narrow errors in their idea. This resonates well with my experience. I have been …]]></description>
    <link>https://srijan.ch/notes/2024-10-08-002</link>
    <guid isPermaLink="false">tag:srijan.ch:/notes/2024-10-08-002</guid>
    <category><![CDATA[management]]></category>
    <dc:creator>Srijan Choudhary</dc:creator>
    <pubDate>Tue, 08 Oct 2024 07:40:00 +0000</pubDate>
    <content:encoded><![CDATA[<p>Read an interesting set of posts today: <a href="https://lethain.com/extract-the-kernel/">https://lethain.com/extract-the-kernel/</a> and <a href="https://lethain.com/executive-translation/">https://lethain.com/executive-translation/</a> . The basic concept is:</p>
<blockquote>
<p>... executives are generally directionally correct but specifically wrong, and it’s your job to understand the overarching direction without getting distracted by the narrow errors in their idea.</p>
</blockquote>
<p>This resonates well with my experience. I have been doing this unconsciously, but it's good to put it in these words.</p><p>Syndicated to:</p><ul><li><a href="https://bsky.app/profile/srijan4.bsky.social/post/3l5yd3rfbbn2j">https://bsky.app/profile/srijan4.bsky.social/post/3l5yd3rfbbn2j</a></li></ul>]]></content:encoded>
    <comments>https://srijan.ch/notes/2024-10-08-002#comments</comments>
    <slash:comments>1</slash:comments>
  </item><item>
    <title>2024-10-08-001</title>
    <description><![CDATA[Tried using X11 on #Linux the last few days due to some issues with Zoom screensharing in Wayland with the latest pipewire, and I already miss #Wayland. Issues I faced with X11: Smooth scrolling broken Apps work noticeably slower Screen tearing This bug in Emacs GTK build: https://debbugs.gnu.org/cgi/bugreport.cgi?bug=67654 (To be fair, this is a GTK-specific issue, not X11 specific) I will go …]]></description>
    <link>https://srijan.ch/notes/2024-10-08-001</link>
    <guid isPermaLink="false">tag:srijan.ch:/notes/2024-10-08-001</guid>
    <category><![CDATA[wayland]]></category>
    <category><![CDATA[linux]]></category>
    <dc:creator>Srijan Choudhary</dc:creator>
    <pubDate>Tue, 08 Oct 2024 03:45:00 +0000</pubDate>
    <content:encoded><![CDATA[<p>Tried using X11 on <a href="/tags/linux" class="p-category">#Linux</a> the last few days due to some issues with Zoom screensharing in Wayland with the latest pipewire, and I already miss <a href="/tags/wayland" class="p-category">#Wayland</a>.</p>
<p>Issues I faced with X11:</p>
<ol>
<li>Smooth scrolling broken</li>
<li>Apps work noticeably slower</li>
<li>Screen tearing</li>
<li>This bug in Emacs GTK build: <a href="https://debbugs.gnu.org/cgi/bugreport.cgi?bug=67654">https://debbugs.gnu.org/cgi/bugreport.cgi?bug=67654</a> (To be fair, this is a GTK-specific issue, not X11 specific)</li>
</ol>
<p>I will go back to Wayland as soon as Zoom fixes this: <a href="https://community.zoom.com/t5/Zoom-Meetings/share-screen-linux-wayland-broken/m-p/203624/highlight/true#M112235">https://community.zoom.com/t5/Zoom-Meetings/share-screen-linux-wayland-broken/m-p/203624/highlight/true#M112235</a></p><p>Syndicated to:</p><ul><li><a href="https://bsky.app/profile/srijan4.bsky.social/post/3l5ycxn4gak27">https://bsky.app/profile/srijan4.bsky.social/post/3l5ycxn4gak27</a></li></ul>]]></content:encoded>
    <comments>https://srijan.ch/notes/2024-10-08-001#comments</comments>
    <slash:comments>7</slash:comments>
  </item><item>
    <title>2024-10-01-002</title>
    <description><![CDATA[I have been using #karousel on #KDE for several weeks, and yesterday shifted to #PaperWM on #GNOME. Took some time to configure things like I wanted, but it's much smoother than karousel (and fancier). Overall, I like the scrolling tiling pane paradigm. I realized I've been manually doing something like this using workspaces with 1-2 windows per workspace with two keybindings - one to change …]]></description>
    <link>https://srijan.ch/notes/2024-10-01-002</link>
    <guid isPermaLink="false">tag:srijan.ch:/notes/2024-10-01-002</guid>
    <category><![CDATA[linux]]></category>
    <dc:creator>Srijan Choudhary</dc:creator>
    <pubDate>Tue, 01 Oct 2024 22:50:00 +0000</pubDate>
    <content:encoded><![CDATA[<p>I have been using #karousel on #KDE for several weeks, and yesterday shifted to #PaperWM on #GNOME. Took some time to configure things like I wanted, but it's much smoother than karousel (and fancier).</p>
<p>Overall, I like the scrolling tiling pane paradigm. I realized I've been manually doing something like this using workspaces with 1-2 windows per workspace with two keybindings - one to change workspace and one to switch windows inside a workspace. So, this window management model really clicks for me.</p>
<p>I switched from GNOME to KDE several years ago due to getting burnt by extensions breaking too frequently, but hopefully things are better now.</p><p>Syndicated to:</p><ul><li><a href="https://bsky.app/profile/srijan4.bsky.social/post/3l5icw34klr2e">https://bsky.app/profile/srijan4.bsky.social/post/3l5icw34klr2e</a></li></ul>]]></content:encoded>
    <comments>https://srijan.ch/notes/2024-10-01-002#comments</comments>
    <slash:comments>0</slash:comments>
  </item><item>
    <title>2024-09-24-001</title>
    <description><![CDATA[#Emacs #TIL : I learned about save-interprogram-paste-before-kill - which saves the existing system clipboard text into the kill ring before replacing it. This ensures that Emacs kill operations do not irrevocably overwrite existing clipboard text. A common workflow for me is to copy some text from a different application and paste it inside Emacs. But, if I want to first delete a word or region …]]></description>
    <link>https://srijan.ch/notes/2024-09-24-001</link>
    <guid isPermaLink="false">tag:srijan.ch:/notes/2024-09-24-001</guid>
    <category><![CDATA[emacs]]></category>
    <category><![CDATA[TIL]]></category>
    <dc:creator>Srijan Choudhary</dc:creator>
    <pubDate>Tue, 24 Sep 2024 17:20:00 +0000</pubDate>
    <content:encoded><![CDATA[<p><a href="https://srijan.ch/tags/emacs" class="p-category">#Emacs</a> <a href="https://srijan.ch/tags/TIL" class="p-category">#TIL</a> : I learned about <code>save-interprogram-paste-before-kill</code> - which saves the existing system clipboard text into the kill ring before replacing it. This ensures that Emacs kill operations do not irrevocably overwrite existing clipboard text.</p>
<p>A common workflow for me is to copy some text from a different application and paste it inside Emacs. But, if I want to first delete a word or region to replace, the deleted word or region goes to the system clipboard and replaces my copied text. This config saves the previous entry in the system clipboard so I can do a <code>C-p</code> after paste to choose the previous paste.</p><p>Syndicated to:</p><ul><li><a href="https://bsky.app/profile/srijan4.bsky.social/post/3l4w53avhjw2b">https://bsky.app/profile/srijan4.bsky.social/post/3l4w53avhjw2b</a></li></ul>]]></content:encoded>
    <comments>https://srijan.ch/notes/2024-09-24-001#comments</comments>
    <slash:comments>17</slash:comments>
  </item><item>
    <title>Emacs 30.1 highlight - intuitive tab line</title>
    <description><![CDATA[Tabs in Emacs 30.1 behave similarly to other common desktop applications]]></description>
    <link>https://srijan.ch/emacs-30-1-highlight-intuitive-tab-line</link>
    <guid isPermaLink="false">tag:srijan.ch:/emacs-30-1-highlight-intuitive-tab-line</guid>
    <category><![CDATA[emacs]]></category>
    <dc:creator>Srijan Choudhary</dc:creator>
    <pubDate>Sun, 15 Sep 2024 04:45:00 +0000</pubDate>
    <media:content url="https://srijan.ch/media/pages/blog/emacs-30-1-highlight-intuitive-tab-line/50039cbaae-1726378038/screenshot_20240915_012612.png" medium="image" />
    <content:encoded><![CDATA[<figure data-ratio="auto">
    <img src="https://srijan.ch/media/pages/blog/emacs-30-1-highlight-intuitive-tab-line/50039cbaae-1726378038/screenshot_20240915_012612.png" alt="Emacs NEWS file for 30.1">
  
    <figcaption class="text-center">
    Emacs NEWS file for 30.1  </figcaption>
  </figure>
<p>The first pretest (30.0.91) of what will become the Emacs 30.1 release was <a href="https://lists.gnu.org/archive/html/emacs-devel/2024-09/msg00305.html">announced on the mailing list</a> a few days ago. I was going through <a href="https://git.savannah.gnu.org/cgit/emacs.git/tree/etc/NEWS?h=emacs-30">the NEWS file</a> and found something that I've wanted in Emacs for a while now.</p>
<p>One of the niggles I had with Emacs was its tab behavior. It worked differently from the other applications I'm used to (Firefox, kitty, etc.).</p>
<p>In Emacs, a tab per buffer can be achieved using <a href="https://www.gnu.org/software/emacs/manual/html_node/emacs/Tab-Line.html">tab-line-mode</a>. But, before now, it had the following problems:</p>
<ol>
<li>tab-line-mode listed buffers sorted by most recently visited, so the order of the tabs kept changing</li>
<li>There was no wrap-around when trying to go to the next tab from the last tab</li>
</ol>
<p>Here's a video showing the old behavior:</p><figure>
  <video controls muted preload="auto"><source src="https://srijan.ch/media/pages/blog/emacs-30-1-highlight-intuitive-tab-line/f1188c76da-1726375650/screencast_20240915_002940.webm" type="video/webm"></video>    <figcaption>A video showing tab-line-mode behavior when switching to next/prev tabs. The tab selection does not wrap around and behaves in an unexpected manner. It also starts showing buffers that have not previously been shown in this window.</figcaption>
  </figure>
<p>To solve this, there is a package called <a href="https://github.com/thread314/intuitive-tab-line-mode">intuitive-tab-line-mode</a> that solves the above two problems, but it did not work well with the <a href="https://protesilaos.com/emacs/beframe">beframe</a> package that I also use.</p>
<p>Now, with Emacs 30.1, this behavior is what comes out-of-the-box with Emacs. The only config needed is to enable <code>global-tab-line-mode</code>. And because it's built-in, it works with other modes like beframe.</p><figure>
  <video controls muted preload="auto"><source src="https://srijan.ch/media/pages/blog/emacs-30-1-highlight-intuitive-tab-line/1fdc9c1be4-1726375650/screencast_20240915_003650.webm" type="video/webm"></video>    <figcaption>A video showing an intuitive tab-line-mode behavior. Tab selection wraps around and only cycles between buffers already showing in current window.</figcaption>
  </figure>
<p>Here's my config for minimal intuitive per-buffer tabs in Emacs &gt;= 30.0.91:</p>
<pre><code class="language-emacs-lisp">(use-package tab-line
  :demand t
  :bind
  (("C-&lt;iso-lefttab&gt;" . tab-line-switch-to-prev-tab)
   ("C-&lt;tab&gt;" . tab-line-switch-to-next-tab))
  :config
  (global-tab-line-mode 1)
  (setq
   tab-line-new-button-show nil
   tab-line-close-button-show nil))</code></pre>
<p>Here, I've also added keybindings to switch to next/prev tab using <code>C-TAB</code> and <code>C-S-TAB</code> keys.</p>
<p>More details about this change can be found in <a href="https://debbugs.gnu.org/cgi/bugreport.cgi?bug=69993">its bug report mail thread</a>.</p>
<h3>Aside: why show buffers as tabs</h3>
<p>A common question is why show buffers as tabs at all. After all, in a long-running Emacs session there might be hundreds of buffers, and showing all of them as tabs becomes useless.</p>
<p>For me, I find that I usually work on 2 to 5 buffers for a "purpose" at a time. These might be some files in a project I'm working on, or org-agenda plus a couple of org files, or mu4e-main + headers + one or two emails and a reply. In this mode, looking at all of these buffers in the tab bar gives me a good understanding of where I am in the project and makes it easy to switch to the next or previous buffer.</p>
<p>I also use frames and beframe to make sure that only a single "project" is showing in a frame at a time. So, even if there are hundreds of buffers in my Emacs session, only a handful are shown as tabs. This makes it useful to me without overwhelming me with too many tabs.</p>]]></content:encoded>
    <comments>https://srijan.ch/emacs-30-1-highlight-intuitive-tab-line#comments</comments>
    <slash:comments>22</slash:comments>
  </item><item>
    <title>2024-09-02-001</title>
    <description><![CDATA[Something precious stolen by magic. #WitchHatAtelier #Manga #Magic Syndicated to: https://bsky.app/profile/srijan4.bsky.social/post/3l36vuzxpbp24]]></description>
    <link>https://srijan.ch/notes/2024-09-02-001</link>
    <guid isPermaLink="false">tag:srijan.ch:/notes/2024-09-02-001</guid>
    <category><![CDATA[WitchHatAtelier]]></category>
    <category><![CDATA[Manga]]></category>
    <category><![CDATA[Magic]]></category>
    <dc:creator>Srijan Choudhary</dc:creator>
    <pubDate>Mon, 02 Sep 2024 15:15:00 +0000</pubDate>
    <media:content url="https://srijan.ch/media/pages/notes/2024-09-02-001/2796624257-1725289466/witch_hat_atelier_5_1.png" medium="image" />
    <content:encoded><![CDATA[<p>Something precious stolen by magic.<br />
<a href="https://srijan.ch/tags/WitchHatAtelier" class="p-category">#WitchHatAtelier</a> <a href="https://srijan.ch/tags/Manga" class="p-category">#Manga</a> <a href="https://srijan.ch/tags/Magic" class="p-category">#Magic</a></p>
<figure><picture><source sizes="(min-width: 768px) 704px, calc(93.6vw - 45px)" srcset="https://srijan.ch/media/pages/notes/2024-09-02-001/2796624257-1725289466/witch_hat_atelier_5_1-300x.avif 300w, https://srijan.ch/media/pages/notes/2024-09-02-001/2796624257-1725289466/witch_hat_atelier_5_1-600x.avif 600w, https://srijan.ch/media/pages/notes/2024-09-02-001/2796624257-1725289466/witch_hat_atelier_5_1-704x.avif 704w, https://srijan.ch/media/pages/notes/2024-09-02-001/2796624257-1725289466/witch_hat_atelier_5_1-900x.avif 900w, https://srijan.ch/media/pages/notes/2024-09-02-001/2796624257-1725289466/witch_hat_atelier_5_1-1200x.avif 1200w, https://srijan.ch/media/pages/notes/2024-09-02-001/2796624257-1725289466/witch_hat_atelier_5_1-1800x.avif 1800w" type="image/avif"><source sizes="(min-width: 768px) 704px, calc(93.6vw - 45px)" srcset="https://srijan.ch/media/pages/notes/2024-09-02-001/2796624257-1725289466/witch_hat_atelier_5_1-300x.webp 300w, https://srijan.ch/media/pages/notes/2024-09-02-001/2796624257-1725289466/witch_hat_atelier_5_1-600x.webp 600w, https://srijan.ch/media/pages/notes/2024-09-02-001/2796624257-1725289466/witch_hat_atelier_5_1-704x.webp 704w, https://srijan.ch/media/pages/notes/2024-09-02-001/2796624257-1725289466/witch_hat_atelier_5_1-900x.webp 900w, https://srijan.ch/media/pages/notes/2024-09-02-001/2796624257-1725289466/witch_hat_atelier_5_1-1200x.webp 1200w, https://srijan.ch/media/pages/notes/2024-09-02-001/2796624257-1725289466/witch_hat_atelier_5_1-1800x.webp 1800w" type="image/webp"><img alt="A black and white manga panel from Witch Hat Atelier shows a young girl in a cloak standing before a tall, ethereal figure with flowing hair and a crown. The girl tries to answer a riddle: &#039;Something you seek but cannot be granted. A thing which nobody down here possesses. Something precious that was taken from you by magic.&#039; The girl&#039;s speech bubble reads: &#039;Is the answer... 
comfort?&#039;" class="u-photo" height="711" sizes="(min-width: 768px) 704px, calc(93.6vw - 45px)" src="https://srijan.ch/media/pages/notes/2024-09-02-001/2796624257-1725289466/witch_hat_atelier_5_1-704x.png" srcset="https://srijan.ch/media/pages/notes/2024-09-02-001/2796624257-1725289466/witch_hat_atelier_5_1-300x.png 300w, https://srijan.ch/media/pages/notes/2024-09-02-001/2796624257-1725289466/witch_hat_atelier_5_1-600x.png 600w, https://srijan.ch/media/pages/notes/2024-09-02-001/2796624257-1725289466/witch_hat_atelier_5_1-704x.png 704w, https://srijan.ch/media/pages/notes/2024-09-02-001/2796624257-1725289466/witch_hat_atelier_5_1-900x.png 900w, https://srijan.ch/media/pages/notes/2024-09-02-001/2796624257-1725289466/witch_hat_atelier_5_1-1200x.png 1200w, https://srijan.ch/media/pages/notes/2024-09-02-001/2796624257-1725289466/witch_hat_atelier_5_1-1800x.png 1800w" title="A black and white manga panel from Witch Hat Atelier shows a young girl in a cloak standing before a tall, ethereal figure with flowing hair and a crown. The girl tries to answer a riddle: &#039;Something you seek but cannot be granted. A thing which nobody down here possesses. Something precious that was taken from you by magic.&#039; The girl&#039;s speech bubble reads: &#039;Is the answer... comfort?&#039;" width="704"></picture></figure><p>Syndicated to:</p><ul><li><a href="https://bsky.app/profile/srijan4.bsky.social/post/3l36vuzxpbp24">https://bsky.app/profile/srijan4.bsky.social/post/3l36vuzxpbp24</a></li></ul>]]></content:encoded>
    <comments>https://srijan.ch/notes/2024-09-02-001#comments</comments>
    <slash:comments>1</slash:comments>
  </item><item>
    <title>2024-09-01-002</title>
    <description><![CDATA[My small #emacs #orgmode #gtd customization of the day: org-edna is a plugin that can be used to setup auto triggers (and blockers) when completing a task. org-gtd uses it to auto-forward the next TODO item in a project to NEXT when a task in the project is marked as DONE. The #orgedna trigger it uses is: relatives(forward-no-wrap todo-only 1 no-sort) todo!(NEXT). This works okay for me, but also …]]></description>
    <link>https://srijan.ch/notes/2024-09-01-002</link>
    <guid isPermaLink="false">tag:srijan.ch:/notes/2024-09-01-002</guid>
    <category><![CDATA[emacs]]></category>
    <category><![CDATA[orgmode]]></category>
    <category><![CDATA[gtd]]></category>
    <dc:creator>Srijan Choudhary</dc:creator>
    <pubDate>Sun, 01 Sep 2024 21:45:00 +0000</pubDate>
    <content:encoded><![CDATA[<p>My small <a href="https://srijan.ch/tags/emacs" class="p-category">#emacs</a> <a href="https://srijan.ch/tags/orgmode" class="p-category">#orgmode</a> <a href="https://srijan.ch/tags/gtd" class="p-category">#gtd</a> customization of the day:</p>
<p><a href="https://www.nongnu.org/org-edna-el/">org-edna</a> is a plugin that can be used to setup auto triggers (and blockers) when completing a task. <a href="https://github.com/Trevoke/org-gtd.el">org-gtd</a> uses it to auto-forward the next TODO item in a project to NEXT when a task in the project is marked as DONE. The #orgedna trigger it uses is: <code>relatives(forward-no-wrap todo-only 1 no-sort) todo!(NEXT)</code>.</p>
<p>This works okay for me, but it also causes tickler tasks configured as repeating tasks to go to the NEXT state instead of TODO when they are completed, which makes them show up in the org agenda before they are due.</p>
<p>To fix this, I had to add this property to the top-level headings of the tickler file:</p>
<pre><code class="language-org">:PROPERTIES:
:TRIGGER: self todo!(TODO)
:END:</code></pre>
<p>This overrides the global triggers configured by org-gtd for these org subtrees.</p><p>Syndicated to:</p><ul><li><a href="https://bsky.app/profile/srijan4.bsky.social/post/3l35iccfebi2o">https://bsky.app/profile/srijan4.bsky.social/post/3l35iccfebi2o</a></li></ul>]]></content:encoded>
    <comments>https://srijan.ch/notes/2024-09-01-002#comments</comments>
    <slash:comments>5</slash:comments>
  </item><item>
    <title>2024-08-26-001</title>
    <description><![CDATA[Note to followers of my site using RSS feeds - I've removed the microblog replies/likes etc kind of posts from the "All Posts" feed. I feel social interaction posts like that should not be part of the default feed of my website. There is always the notes feed that includes all microblog posts including reactions / interactions. A list of feeds available can be found here: https://srijan.ch/feed/ …]]></description>
    <link>https://srijan.ch/notes/2024-08-26-001</link>
    <guid isPermaLink="false">tag:srijan.ch:/notes/2024-08-26-001</guid>
    <category><![CDATA[feeds]]></category>
    <category><![CDATA[indieweb]]></category>
    <dc:creator>Srijan Choudhary</dc:creator>
    <pubDate>Mon, 26 Aug 2024 07:25:00 +0000</pubDate>
<content:encoded><![CDATA[<p>Note to followers of my site using RSS feeds - I've removed microblog replies, likes, and other social-interaction posts from the "All Posts" feed. I feel posts like that should not be part of the default feed of my website.</p>
<p>There is always the notes feed that includes all microblog posts including reactions / interactions.</p>
<p>A list of feeds available can be found here: <a href="https://srijan.ch/feed/">https://srijan.ch/feed/</a></p>
<p><a href="/tags/indieweb" class="p-category">#IndieWeb</a> <a href="/tags/feeds" class="p-category">#Feeds</a></p>]]></content:encoded>
    <comments>https://srijan.ch/notes/2024-08-26-001#comments</comments>
    <slash:comments>3</slash:comments>
  </item><item>
    <title>2024-08-22-001</title>
    <description><![CDATA[I have been reading books mostly on Kindle for the last 10 years or so. Visited a nearby library today. I didn't realize I was missing the experience of browsing shelves, stumbling upon unexpected gems, getting lost in the recommendations section, and choosing something physical to checkout. Syndicated to: https://bsky.app/profile/did:plc:6koasqt256b6jwfrn74vwbg5/post/3l2br4drpjc2r]]></description>
    <link>https://srijan.ch/notes/2024-08-22-001</link>
    <guid isPermaLink="false">tag:srijan.ch:/notes/2024-08-22-001</guid>
    <category><![CDATA[books]]></category>
    <dc:creator>Srijan Choudhary</dc:creator>
    <pubDate>Thu, 22 Aug 2024 03:20:00 +0000</pubDate>
    <content:encoded><![CDATA[<p>I have been reading books mostly on Kindle for the last 10 years or so. Visited a nearby library today.</p>
<p>I didn't realize I was missing the experience of browsing shelves, stumbling upon unexpected gems, getting lost in the recommendations section, and choosing something physical to checkout.</p><p>Syndicated to:</p><ul><li><a href="https://bsky.app/profile/did:plc:6koasqt256b6jwfrn74vwbg5/post/3l2br4drpjc2r">https://bsky.app/profile/did:plc:6koasqt256b6jwfrn74vwbg5/post/3l2br4drpjc2r</a></li></ul>]]></content:encoded>
    <comments>https://srijan.ch/notes/2024-08-22-001#comments</comments>
    <slash:comments>10</slash:comments>
  </item><item>
    <title>2024-08-20-001</title>
    <description><![CDATA[Webmention rocks tests Redoing these tests with indieConnector v2.1.0 Discovery Tests 1-22: PASS Discovery Test 23: FAIL Update Test 1: PASS Update Test 2: FAIL Delete Test 1: Not Tested Receiver Tests 1-2: PASS #IndieWeb #Webmention]]></description>
    <link>https://srijan.ch/notes/2024-08-20-001</link>
    <guid isPermaLink="false">tag:srijan.ch:/notes/2024-08-20-001</guid>
    <category><![CDATA[indieweb]]></category>
    <category><![CDATA[webmention]]></category>
    <dc:creator>Srijan Choudhary</dc:creator>
    <pubDate>Tue, 20 Aug 2024 16:25:00 +0000</pubDate>
    <content:encoded><![CDATA[<h2>Webmention rocks tests</h2><p>Redoing these tests with <a href="https://github.com/mauricerenck/indieConnector">indieConnector</a> v2.1.0</p>
<p>Discovery Tests 1-22: PASS<br />
Discovery Test 23: FAIL<br />
Update Test 1: PASS<br />
Update Test 2: FAIL<br />
Delete Test 1: Not Tested<br />
Receiver Tests 1-2: PASS</p>
<p><a href="/tags/indieweb" class="p-category">#IndieWeb</a> <a href="/tags/webmention" class="p-category">#Webmention</a></p>]]></content:encoded>
    <comments>https://srijan.ch/notes/2024-08-20-001#comments</comments>
    <slash:comments>2</slash:comments>
  </item><item>
    <title>2024-04-29-001</title>
    <description><![CDATA[Using sysrq on my laptop - documenting mostly for myself. My laptop has started freezing sometimes, not sure why. Usually, I can just force power off using the power button and start it again, but it has happened twice that I had to recover the system by booting via a USB drive, chrooting, and recovering the damaged files using fsck or pacman magic. The linux kernel has: a ‘magical’ key combo you …]]></description>
    <link>https://srijan.ch/notes/2024-04-29-001</link>
    <guid isPermaLink="false">tag:srijan.ch:/notes/2024-04-29-001</guid>
    <category><![CDATA[linux]]></category>
    <category><![CDATA[devops]]></category>
    <dc:creator>Srijan Choudhary</dc:creator>
    <pubDate>Mon, 29 Apr 2024 03:10:00 +0000</pubDate>
    <content:encoded><![CDATA[<p>Using sysrq on my laptop - documenting mostly for myself.</p>
<p>My laptop has started freezing sometimes, not sure why. Usually, I can just force power off using the power button and start it again, but it has happened twice that I had to recover the system by booting via a USB drive, chrooting, and recovering the damaged files using fsck or pacman magic.</p>
<p>The linux kernel has:</p>
<blockquote>
<p>a ‘magical’ key combo you can hit which the kernel will respond to regardless of whatever else it is doing, unless it is completely locked up.</p>
</blockquote>
<p>(More details on <a href="https://wiki.archlinux.org/title/keyboard_shortcuts#Kernel_(SysRq)">archwiki</a> and <a href="https://docs.kernel.org/admin-guide/sysrq.html">kernel doc</a>)</p>
<p>To enable it, I did the following (244 is a bitmask that allows the reboot/poweroff, process signalling, remount read-only, sync, and keyboard control functions, while leaving out logging and debug dumps):</p>
<pre><code>echo "kernel.sysrq = 244" | sudo tee /etc/sysctl.d/sysreq.conf
sudo sysctl --system</code></pre>
<p>However, I could not find the right key combination for SysRq on my laptop's built-in keyboard. I was able to make it work using an external keyboard that has PrintScreen bound on a layer, with the following sequence:</p>
<p>Press Alt and keep it pressed for the whole sequence: PrintScreen - R - E - I - S - U - B (R: take the keyboard out of raw mode, E: SIGTERM to all processes, I: SIGKILL to all processes, S: sync disks, U: remount filesystems read-only, B: reboot).</p>
<p>Currently, PrintScreen on my external keyboard is bound to Caps lock long press + Up arrow.</p>]]></content:encoded>
    <comments>https://srijan.ch/notes/2024-04-29-001#comments</comments>
    <slash:comments>3</slash:comments>
  </item><item>
    <title>Using Todoist as a cloud inbox for GTD in Emacs orgmode</title>
    <description><![CDATA[Using todoist as a cloud inbox for GTD in Emacs orgmode for better integration with services like Slack and Google Assistant]]></description>
    <link>https://srijan.ch/todoist-cloud-inbox-for-gtd-in-emacs-orgmode</link>
    <guid isPermaLink="false">tag:srijan.ch:/todoist-cloud-inbox-for-gtd-in-emacs-orgmode</guid>
    <category><![CDATA[emacs]]></category>
    <category><![CDATA[orgmode]]></category>
    <category><![CDATA[gtd]]></category>
    <dc:creator>Srijan Choudhary</dc:creator>
    <pubDate>Tue, 16 Jan 2024 12:10:00 +0000</pubDate>
    <media:content url="https://srijan.ch/media/pages/blog/todoist-cloud-inbox-for-gtd-in-emacs-orgmode/5e26d0c015-1705406577/austin-distel-guij0yszpig-unsplash.jpg" medium="image" />
    <content:encoded><![CDATA[<figure data-ratio="auto">
    <img src="https://srijan.ch/media/pages/blog/todoist-cloud-inbox-for-gtd-in-emacs-orgmode/5e26d0c015-1705406577/austin-distel-guij0yszpig-unsplash.jpg" alt="Person using phone and laptop">
  
    <figcaption class="text-center">
    Photo by <a href="https://unsplash.com/@austindistel">Austin Distel</a> on <a href="https://unsplash.com/">Unsplash</a>  </figcaption>
  </figure>
<p>I've been using <a href="https://www.gnu.org/software/emacs/">Emacs</a>' <a href="https://orgmode.org/">orgmode</a> as my <a href="https://gettingthingsdone.com/">GTD</a> system for the last several months. I migrated from <a href="https://todoist.com/">Todoist</a>, and one of the things I missed was integration with other services that makes it easy to capture tasks into the inbox.</p>
<p>There are ways to <a href="https://srijan.ch/notes/2023-11-30-001">capture org data via email</a>, and this can be a good enough alternative, because most (though not all) services allow some kind of email forwarding to capture tasks. But, this depends on a complex email setup and would probably only work on a single machine.</p>
<p>The main integrations/features I wanted to use were:</p>
<ol>
<li><a href="https://slack.com/">Slack</a>: Todoist has a native app for Slack using which any Slack message can be captured into Todoist as a task.</li>
<li><a href="https://todoist.com/help/articles/how-to-use-todoist-for-google-assistant-qD7srG0c">Google Assistant</a>: Todoist has integration with Google Assistant which can be used to capture tasks by talking to the google assistant.</li>
<li><a href="https://todoist.com/help/articles/task-quick-add-va4Lhpzz">Todoist quick entry</a> on mobile with date recognition: The Todoist apps have a more polished mobile capture system that can be triggered from a widget and can recognize dates using natural language entry.</li>
</ol>
<p>The workflow would look something like this:</p>
<figure><picture><source sizes="(min-width: 768px) 704px, calc(93.6vw - 45px)" srcset="https://srijan.ch/media/pages/blog/todoist-cloud-inbox-for-gtd-in-emacs-orgmode/3cf2fe7e71-1705405025/todoist-emacs-inbox-300x.avif 300w, https://srijan.ch/media/pages/blog/todoist-cloud-inbox-for-gtd-in-emacs-orgmode/3cf2fe7e71-1705405025/todoist-emacs-inbox-600x.avif 600w, https://srijan.ch/media/pages/blog/todoist-cloud-inbox-for-gtd-in-emacs-orgmode/3cf2fe7e71-1705405025/todoist-emacs-inbox-704x.avif 704w, https://srijan.ch/media/pages/blog/todoist-cloud-inbox-for-gtd-in-emacs-orgmode/3cf2fe7e71-1705405025/todoist-emacs-inbox-900x.avif 900w, https://srijan.ch/media/pages/blog/todoist-cloud-inbox-for-gtd-in-emacs-orgmode/3cf2fe7e71-1705405025/todoist-emacs-inbox-1200x.avif 1200w, https://srijan.ch/media/pages/blog/todoist-cloud-inbox-for-gtd-in-emacs-orgmode/3cf2fe7e71-1705405025/todoist-emacs-inbox-1800x.avif 1800w" type="image/avif"><source sizes="(min-width: 768px) 704px, calc(93.6vw - 45px)" srcset="https://srijan.ch/media/pages/blog/todoist-cloud-inbox-for-gtd-in-emacs-orgmode/3cf2fe7e71-1705405025/todoist-emacs-inbox-300x.webp 300w, https://srijan.ch/media/pages/blog/todoist-cloud-inbox-for-gtd-in-emacs-orgmode/3cf2fe7e71-1705405025/todoist-emacs-inbox-600x.webp 600w, https://srijan.ch/media/pages/blog/todoist-cloud-inbox-for-gtd-in-emacs-orgmode/3cf2fe7e71-1705405025/todoist-emacs-inbox-704x.webp 704w, https://srijan.ch/media/pages/blog/todoist-cloud-inbox-for-gtd-in-emacs-orgmode/3cf2fe7e71-1705405025/todoist-emacs-inbox-900x.webp 900w, https://srijan.ch/media/pages/blog/todoist-cloud-inbox-for-gtd-in-emacs-orgmode/3cf2fe7e71-1705405025/todoist-emacs-inbox-1200x.webp 1200w, https://srijan.ch/media/pages/blog/todoist-cloud-inbox-for-gtd-in-emacs-orgmode/3cf2fe7e71-1705405025/todoist-emacs-inbox-1800x.webp 1800w" type="image/webp"><img alt="" height="502" sizes="(min-width: 768px) 704px, calc(93.6vw - 45px)" 
src="https://srijan.ch/media/pages/blog/todoist-cloud-inbox-for-gtd-in-emacs-orgmode/3cf2fe7e71-1705405025/todoist-emacs-inbox-704x.png" srcset="https://srijan.ch/media/pages/blog/todoist-cloud-inbox-for-gtd-in-emacs-orgmode/3cf2fe7e71-1705405025/todoist-emacs-inbox-300x.png 300w, https://srijan.ch/media/pages/blog/todoist-cloud-inbox-for-gtd-in-emacs-orgmode/3cf2fe7e71-1705405025/todoist-emacs-inbox-600x.png 600w, https://srijan.ch/media/pages/blog/todoist-cloud-inbox-for-gtd-in-emacs-orgmode/3cf2fe7e71-1705405025/todoist-emacs-inbox-704x.png 704w, https://srijan.ch/media/pages/blog/todoist-cloud-inbox-for-gtd-in-emacs-orgmode/3cf2fe7e71-1705405025/todoist-emacs-inbox-900x.png 900w, https://srijan.ch/media/pages/blog/todoist-cloud-inbox-for-gtd-in-emacs-orgmode/3cf2fe7e71-1705405025/todoist-emacs-inbox-1200x.png 1200w, https://srijan.ch/media/pages/blog/todoist-cloud-inbox-for-gtd-in-emacs-orgmode/3cf2fe7e71-1705405025/todoist-emacs-inbox-1800x.png 1800w" title="" width="322"></picture></figure>
<p>I found an <a href="https://github.com/abrochard/emacs-todoist">existing Todoist integration for emacs</a>, but it's more suitable for using Todoist as the source of truth for tasks, and keeping a local buffer for operations on it in Emacs.</p>
<p>But I was able to use its functions to achieve what I wanted. Here's my elisp:</p>
<pre><code class="language-elisp">(use-package todoist)

(defun fetch-todoist-inbox ()
  (interactive)
  (let ((tasks (todoist--query "GET" "/tasks?project_id=&lt;project_id&gt;")))
    (mapcar (lambda (task)
              (todoist--insert-task task 1 t)
              (todoist--query
                "DELETE"
                (format "/tasks/%s" (todoist--task-id task))))
            tasks)))</code></pre>
<p>Here, <code>project_id</code> is the id of the Todoist project from which tasks are to be imported. It can be found by opening the project in the Todoist web app: the project id is the last part of the URL.</p>
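<p>The project ids can also be listed via the Todoist REST API (a sketch, assuming the v2 REST API and the token exported as <code>TODOIST_TOKEN</code>; this is not part of the original setup):</p>
<pre><code class="language-shell-session">$ curl -s -H &quot;Authorization: Bearer $TODOIST_TOKEN&quot; \
    https://api.todoist.com/rest/v2/projects</code></pre>
<p>The response is a JSON array of projects, each with an <code>id</code> field.</p>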
<p>The elisp function <code>fetch-todoist-inbox</code> can be called from any org buffer (or any buffer, actually). It fetches all tasks in the specified project, inserts them into the current buffer, and deletes them from Todoist. It can be bound to a keybinding for easy access. Note that it requires setting the Todoist token using elisp or an environment variable.</p>
<h2>Improvement Ideas</h2>
<ul>
<li>Show number of un-fetched items in status bar</li>
<li>Fetch comments and attachments</li>
<li>Fetch task labels and show as orgmode tags</li>
</ul>
<p><a class="p-category" href="https://srijan.ch/tags/gtd">#GTD</a> <a class="p-category" href="https://srijan.ch/tags/emacs">#Emacs</a> <a class="p-category" href="https://srijan.ch/tags/orgmode">#OrgMode</a></p>]]></content:encoded>
    <comments>https://srijan.ch/todoist-cloud-inbox-for-gtd-in-emacs-orgmode#comments</comments>
    <slash:comments>8</slash:comments>
  </item><item>
    <title>2023-11-30-001</title>
    <description><![CDATA[Found Samuel's nice post on capturing data for org via email. This is very close to what I was looking for to be able to do GTD capture on-the-go either from phone apps like Braintoss or from any email app. One addition I would like to make is handling attachments in the email by downloading them and attaching to the org entry. This would be useful for voice notes from Braintoss - it does …]]></description>
    <link>https://srijan.ch/notes/2023-11-30-001</link>
    <guid isPermaLink="false">tag:srijan.ch:/notes/2023-11-30-001</guid>
    <category><![CDATA[emacs]]></category>
    <category><![CDATA[gtd]]></category>
    <category><![CDATA[orgmode]]></category>
    <dc:creator>Srijan Choudhary</dc:creator>
    <pubDate>Thu, 30 Nov 2023 18:10:00 +0000</pubDate>
    <content:encoded><![CDATA[<p>Found Samuel's nice post on <a href="http://web.archive.org/web/20240101143355/https://samuelwflint.com/posts/2017/03/13/capturing-data-for-org-via-email/">capturing data for org via email</a>.</p>
<p>This is very close to what I was looking for to be able to do GTD capture on-the-go either from phone apps like <a href="https://braintoss.com/">Braintoss</a> or from any email app.</p>
<p>One addition I would like to make is handling attachments in the email by downloading them and attaching to the org entry.<br />
This would be useful for voice notes from Braintoss - it does transcription of the audio and adds it to the email body, but sometimes it doesn't work so well and I have to fall back to listening to the audio. It will also be useful for forwarded emails containing attachments.</p>
<p><a class="p-category" href="https://srijan.ch/tags/gtd">#GTD</a> <a class="p-category" href="https://srijan.ch/tags/emacs">#Emacs</a> <a class="p-category" href="https://srijan.ch/tags/orgmode">#OrgMode</a></p>]]></content:encoded>
    <comments>https://srijan.ch/notes/2023-11-30-001#comments</comments>
    <slash:comments>2</slash:comments>
  </item><item>
    <title>Testing ansible playbooks against multiple targets using vagrant</title>
    <description><![CDATA[How to test your ansible playbooks against multiple target OSes and versions using Vagrant]]></description>
    <link>https://srijan.ch/testing-ansible-playbooks-using-vagrant</link>
    <guid isPermaLink="false">tag:srijan.ch:/testing-ansible-playbooks-using-vagrant</guid>
    <category><![CDATA[ansible]]></category>
    <category><![CDATA[vagrant]]></category>
    <category><![CDATA[devops]]></category>
    <dc:creator>Srijan Choudhary</dc:creator>
    <pubDate>Tue, 21 Nov 2023 06:55:00 +0000</pubDate>
    <media:content url="https://srijan.ch/media/pages/blog/testing-ansible-playbooks-using-vagrant/9f989c7a78-1700550017/kvistholt-photography-ozpwn40zck4-unsplash.jpg" medium="image" />
    <content:encoded><![CDATA[<figure data-ratio="auto">
    <img src="https://srijan.ch/media/pages/blog/testing-ansible-playbooks-using-vagrant/9f989c7a78-1700550017/kvistholt-photography-ozpwn40zck4-unsplash.jpg" alt="">
  
  </figure>
<p>I recently updated my <a href="https://srijan.ch/install-docker-and-docker-compose-using-ansible">Install docker and docker-compose using ansible</a> post and wanted to test it against multiple target OSes and OS versions. Here's a way I found to do it easily using Vagrant.</p>
<p>Here's the Vagrantfile:</p>
<pre><code class="language-ruby"># -*- mode: ruby -*-
# vi: set ft=ruby :

targets = [
  "debian/bookworm64",
  "debian/bullseye64",
  "debian/buster64",
  "ubuntu/jammy64",
  "ubuntu/bionic64",
  "ubuntu/focal64"
]

Vagrant.configure("2") do |config|
  targets.each_with_index do |target, index|
    config.vm.define "machine#{index}" do |machine|
      machine.vm.hostname = "machine#{index}"
      machine.vm.box = target
      machine.vm.synced_folder ".", "/vagrant", disabled: true

      if index == targets.count - 1
        machine.vm.provision "ansible" do |ansible|
          ansible.playbook = "playbook.yml"
          ansible.limit = "all"
          ansible.compatibility_mode = "2.0"
          # ansible.verbose = "v"
        end
      end
    end
  end
end</code></pre>
<p>The <code>targets</code> variable defines which Vagrant boxes to target. The list of available boxes can be found here: <a href="https://app.vagrantup.com/boxes/search">https://app.vagrantup.com/boxes/search</a></p>
<p>In the <code>Vagrant.configure</code> section, I've defined a machine with an auto-generated machine ID for each target.</p>
<p>The <code>machine.vm.synced_folder</code> line disables the default vagrant share to keep things fast.</p>
<p>Then, I've run the ansible provisioning once at the end instead of for each box separately (from: <a href="https://developer.hashicorp.com/vagrant/docs/provisioning/ansible#tips-and-tricks">https://developer.hashicorp.com/vagrant/docs/provisioning/ansible#tips-and-tricks</a>).</p>
<p>The test can be run using:</p>
<pre><code class="language-shell-session">$ vagrant up</code></pre>
<p>If the boxes are already up, to re-run provisioning, run:</p>
<pre><code class="language-shell-session">$ vagrant provision</code></pre>
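<p>The standard Vagrant CLI also accepts a machine name, which is handy when only one box needs a re-run (a hedged example; machine names follow the <code>machine#{index}</code> scheme from the Vagrantfile above):</p>
<pre><code class="language-shell-session">$ vagrant provision machine0</code></pre>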
<p>This code can also be found on GitHub: <a href="https://github.com/srijan/ansible-install-docker">https://github.com/srijan/ansible-install-docker</a></p>]]></content:encoded>
    <comments>https://srijan.ch/testing-ansible-playbooks-using-vagrant#comments</comments>
    <slash:comments>2</slash:comments>
  </item><item>
    <title>2023-11-20-001</title>
    <description><![CDATA[@rogerlipscombe@hachyderm.io has a nice post on using git with multiple identities. His recommended way (using includeIf to include different config files for different parent folders) also makes sense to me the most.]]></description>
    <link>https://srijan.ch/notes/2023-11-20-001</link>
    <guid isPermaLink="false">tag:srijan.ch:/notes/2023-11-20-001</guid>
    <category><![CDATA[git]]></category>
    <dc:creator>Srijan Choudhary</dc:creator>
    <pubDate>Mon, 20 Nov 2023 06:45:00 +0000</pubDate>
    <content:encoded><![CDATA[<p><a href="https://hachyderm.io/@rogerlipscombe">@rogerlipscombe@hachyderm.io</a> has a nice post on <a href="https://blog.differentpla.net/blog/2023/11/17/multiple-git-identities/">using git with multiple identities</a>. His recommended way (using <code>includeIf</code> to include different config files for different parent folders) also makes sense to me the most.</p>]]></content:encoded>
    <comments>https://srijan.ch/notes/2023-11-20-001#comments</comments>
    <slash:comments>2</slash:comments>
  </item><item>
    <title>2023-11-10-001</title>
    <description><![CDATA[My new #IndieWeb enabled website is now live! You can follow my microblog posts or blog articles by entering @srijan.ch@srijan.ch in your Fediverse app search bar or reply and interact with any post using #Webmention or your Mastodon/Fediverse app. Site Features: Indieauth Webmentions Federation with the fediverse (via Bridgy Fed) Structured author, posts, and feeds using microformats Microsub …]]></description>
    <link>https://srijan.ch/notes/2023-11-10-001</link>
    <guid isPermaLink="false">tag:srijan.ch:/notes/2023-11-10-001</guid>
    <category><![CDATA[indieweb]]></category>
    <category><![CDATA[webmention]]></category>
    <dc:creator>Srijan Choudhary</dc:creator>
    <pubDate>Fri, 10 Nov 2023 13:55:00 +0000</pubDate>
    <content:encoded><![CDATA[<p>My new <a href="/tags/indieweb" class="p-category">#IndieWeb</a> enabled website is now live!<br />
You can follow my microblog posts or blog articles by entering @srijan.ch@srijan.ch in your Fediverse app search bar or reply and interact with any post using <a href="/tags/webmention" class="p-category">#Webmention</a> or your Mastodon/Fediverse app.</p>
<p>Site Features:</p>
<ul>
<li><a href="https://indieauth.net/">Indieauth</a></li>
<li><a href="https://indieweb.org/Webmention">Webmentions</a></li>
<li><a href="https://indieweb.org/federation">Federation</a> with the fediverse (via <a href="https://indieweb.org/Bridgy_Fed">Bridgy Fed</a>)</li>
<li>Structured author, posts, and feeds using <a href="https://indieweb.org/microformats">microformats</a></li>
<li><a href="https://indieweb.org/Microsub">Microsub</a></li>
</ul><p>Syndicated to:</p><ul><li><a href="https://news.indieweb.org/en">IndieNews</a></li></ul>]]></content:encoded>
    <comments>https://srijan.ch/notes/2023-11-10-001#comments</comments>
    <slash:comments>6</slash:comments>
  </item><item>
    <title>2023-11-09-001</title>
    <description><![CDATA[New #Coffee #BlueTokai This new packaging from BlueTokai looks nice. And the coffee tastes amazing. Chocolatey flavour with very little bitterness.]]></description>
    <link>https://srijan.ch/notes/2023-11-09-001</link>
    <guid isPermaLink="false">tag:srijan.ch:/notes/2023-11-09-001</guid>
    <category><![CDATA[coffee]]></category>
    <dc:creator>Srijan Choudhary</dc:creator>
    <pubDate>Thu, 09 Nov 2023 19:25:00 +0000</pubDate>
    <media:content url="https://srijan.ch/media/pages/notes/2023-11-09-001/0f8686998f-1699621097/20231109_104527.jpg" medium="image" />
    <content:encoded><![CDATA[<p>New <a href="/tags/coffee" class="p-category">#Coffee</a> <a href="/tags/BlueTokai" class="p-category">#BlueTokai</a></p>
<p>This new packaging from BlueTokai looks nice. And the coffee tastes amazing. Chocolatey flavour with very little bitterness.</p>
<figure><picture><source sizes="(min-width: 768px) 704px, calc(93.6vw - 45px)" srcset="https://srijan.ch/media/pages/notes/2023-11-09-001/0f8686998f-1699621097/20231109_104527-300x.avif 300w, https://srijan.ch/media/pages/notes/2023-11-09-001/0f8686998f-1699621097/20231109_104527-600x.avif 600w, https://srijan.ch/media/pages/notes/2023-11-09-001/0f8686998f-1699621097/20231109_104527-704x.avif 704w, https://srijan.ch/media/pages/notes/2023-11-09-001/0f8686998f-1699621097/20231109_104527-900x.avif 900w, https://srijan.ch/media/pages/notes/2023-11-09-001/0f8686998f-1699621097/20231109_104527-1200x.avif 1200w, https://srijan.ch/media/pages/notes/2023-11-09-001/0f8686998f-1699621097/20231109_104527-1800x.avif 1800w" type="image/avif"><source sizes="(min-width: 768px) 704px, calc(93.6vw - 45px)" srcset="https://srijan.ch/media/pages/notes/2023-11-09-001/0f8686998f-1699621097/20231109_104527-300x.webp 300w, https://srijan.ch/media/pages/notes/2023-11-09-001/0f8686998f-1699621097/20231109_104527-600x.webp 600w, https://srijan.ch/media/pages/notes/2023-11-09-001/0f8686998f-1699621097/20231109_104527-704x.webp 704w, https://srijan.ch/media/pages/notes/2023-11-09-001/0f8686998f-1699621097/20231109_104527-900x.webp 900w, https://srijan.ch/media/pages/notes/2023-11-09-001/0f8686998f-1699621097/20231109_104527-1200x.webp 1200w, https://srijan.ch/media/pages/notes/2023-11-09-001/0f8686998f-1699621097/20231109_104527-1800x.webp 1800w" type="image/webp"><img alt="Photo of a coffee pouch with a beautiful design containing roasted coffee from Sandalwood Estate Coorg India" class="u-photo" height="939" sizes="(min-width: 768px) 704px, calc(93.6vw - 45px)" src="https://srijan.ch/media/pages/notes/2023-11-09-001/0f8686998f-1699621097/20231109_104527-704x.jpg" srcset="https://srijan.ch/media/pages/notes/2023-11-09-001/0f8686998f-1699621097/20231109_104527-300x.jpg 300w, https://srijan.ch/media/pages/notes/2023-11-09-001/0f8686998f-1699621097/20231109_104527-600x.jpg 600w, 
https://srijan.ch/media/pages/notes/2023-11-09-001/0f8686998f-1699621097/20231109_104527-704x.jpg 704w, https://srijan.ch/media/pages/notes/2023-11-09-001/0f8686998f-1699621097/20231109_104527-900x.jpg 900w, https://srijan.ch/media/pages/notes/2023-11-09-001/0f8686998f-1699621097/20231109_104527-1200x.jpg 1200w, https://srijan.ch/media/pages/notes/2023-11-09-001/0f8686998f-1699621097/20231109_104527-1800x.jpg 1800w" title="Photo of a coffee pouch with a beautiful design containing roasted coffee from Sandalwood Estate Coorg India" width="704"></picture></figure>]]></content:encoded>
    <comments>https://srijan.ch/notes/2023-11-09-001#comments</comments>
    <slash:comments>3</slash:comments>
  </item><item>
    <title>2023-10-27-001</title>
    <description><![CDATA[#Patterns]]></description>
    <link>https://srijan.ch/notes/2023-10-27-001</link>
    <guid isPermaLink="false">tag:srijan.ch:/notes/2023-10-27-001</guid>
    <category><![CDATA[photography]]></category>
    <dc:creator>Srijan Choudhary</dc:creator>
    <pubDate>Fri, 27 Oct 2023 16:05:00 +0000</pubDate>
    <media:content url="https://srijan.ch/media/pages/notes/2023-10-27-001/4423882a3c-1699621096/20231027_210151.jpg" medium="image" />
    <content:encoded><![CDATA[<p><a href="/tags/patterns" class="p-category">#Patterns</a></p>
<figure><picture><source sizes="(min-width: 768px) 704px, calc(93.6vw - 45px)" srcset="https://srijan.ch/media/pages/notes/2023-10-27-001/4423882a3c-1699621096/20231027_210151-300x.avif 300w, https://srijan.ch/media/pages/notes/2023-10-27-001/4423882a3c-1699621096/20231027_210151-600x.avif 600w, https://srijan.ch/media/pages/notes/2023-10-27-001/4423882a3c-1699621096/20231027_210151-704x.avif 704w, https://srijan.ch/media/pages/notes/2023-10-27-001/4423882a3c-1699621096/20231027_210151-900x.avif 900w, https://srijan.ch/media/pages/notes/2023-10-27-001/4423882a3c-1699621096/20231027_210151-1200x.avif 1200w, https://srijan.ch/media/pages/notes/2023-10-27-001/4423882a3c-1699621096/20231027_210151-1800x.avif 1800w" type="image/avif"><source sizes="(min-width: 768px) 704px, calc(93.6vw - 45px)" srcset="https://srijan.ch/media/pages/notes/2023-10-27-001/4423882a3c-1699621096/20231027_210151-300x.webp 300w, https://srijan.ch/media/pages/notes/2023-10-27-001/4423882a3c-1699621096/20231027_210151-600x.webp 600w, https://srijan.ch/media/pages/notes/2023-10-27-001/4423882a3c-1699621096/20231027_210151-704x.webp 704w, https://srijan.ch/media/pages/notes/2023-10-27-001/4423882a3c-1699621096/20231027_210151-900x.webp 900w, https://srijan.ch/media/pages/notes/2023-10-27-001/4423882a3c-1699621096/20231027_210151-1200x.webp 1200w, https://srijan.ch/media/pages/notes/2023-10-27-001/4423882a3c-1699621096/20231027_210151-1800x.webp 1800w" type="image/webp"><img alt="Photo of a lamp shade from below with beautiful patterns highlighted" class="u-photo" height="528" sizes="(min-width: 768px) 704px, calc(93.6vw - 45px)" src="https://srijan.ch/media/pages/notes/2023-10-27-001/4423882a3c-1699621096/20231027_210151-704x.jpg" srcset="https://srijan.ch/media/pages/notes/2023-10-27-001/4423882a3c-1699621096/20231027_210151-300x.jpg 300w, https://srijan.ch/media/pages/notes/2023-10-27-001/4423882a3c-1699621096/20231027_210151-600x.jpg 600w, 
https://srijan.ch/media/pages/notes/2023-10-27-001/4423882a3c-1699621096/20231027_210151-704x.jpg 704w, https://srijan.ch/media/pages/notes/2023-10-27-001/4423882a3c-1699621096/20231027_210151-900x.jpg 900w, https://srijan.ch/media/pages/notes/2023-10-27-001/4423882a3c-1699621096/20231027_210151-1200x.jpg 1200w, https://srijan.ch/media/pages/notes/2023-10-27-001/4423882a3c-1699621096/20231027_210151-1800x.jpg 1800w" title="Photo of a lamp shade from below with beautiful patterns highlighted" width="704"></picture></figure>]]></content:encoded>
    <comments>https://srijan.ch/notes/2023-10-27-001#comments</comments>
    <slash:comments>3</slash:comments>
  </item><item>
    <title>2023-10-26-001</title>
    <description><![CDATA[Test webmention + fed.brid.gy: Two naked tags walk into a bar. The bartender exclaims, "Hey, you can't come in here without microformats, this is a classy joint!"]]></description>
    <link>https://srijan.ch/notes/2023-10-26-001</link>
    <guid isPermaLink="false">tag:srijan.ch:/notes/2023-10-26-001</guid>
    <category><![CDATA[indieweb]]></category>
    <dc:creator>Srijan Choudhary</dc:creator>
    <pubDate>Thu, 26 Oct 2023 06:05:00 +0000</pubDate>
    <content:encoded><![CDATA[<p>Test webmention + fed.brid.gy:</p>
<blockquote>
<p>Two naked tags walk into a bar. The bartender exclaims, "Hey, you can't come in here without microformats, this is a classy joint!"</p>
</blockquote>]]></content:encoded>
    <comments>https://srijan.ch/notes/2023-10-26-001#comments</comments>
    <slash:comments>4</slash:comments>
  </item><item>
    <title>2023-10-25-001</title>
    <description><![CDATA[Added an RSS feed for notes]]></description>
    <link>https://srijan.ch/notes/2023-10-25-001</link>
    <guid isPermaLink="false">tag:srijan.ch:/notes/2023-10-25-001</guid>
    <category><![CDATA[feeds]]></category>
    <dc:creator>Srijan Choudhary</dc:creator>
    <pubDate>Wed, 25 Oct 2023 06:20:00 +0000</pubDate>
    <content:encoded><![CDATA[<p>Added an RSS feed for notes</p>]]></content:encoded>
    <comments>https://srijan.ch/notes/2023-10-25-001#comments</comments>
    <slash:comments>0</slash:comments>
  </item><item>
    <title>2023-10-21-001</title>
    <description><![CDATA[I've been working on an Indieweb-enabled site redesign using Kirby CMS + TailwindCSS.]]></description>
    <link>https://srijan.ch/notes/2023-10-21-001</link>
    <guid isPermaLink="false">tag:srijan.ch:/notes/2023-10-21-001</guid>
    <category><![CDATA[indieweb]]></category>
    <dc:creator>Srijan Choudhary</dc:creator>
    <pubDate>Sat, 21 Oct 2023 07:45:00 +0000</pubDate>
    <content:encoded><![CDATA[<p>I've been working on an Indieweb-enabled site redesign using <a href="https://getkirby.com">Kirby CMS</a> + TailwindCSS.</p>]]></content:encoded>
    <comments>https://srijan.ch/notes/2023-10-21-001#comments</comments>
    <slash:comments>1</slash:comments>
  </item><item>
    <title>Exploring conflicting oneshot services in systemd</title>
    <description><![CDATA[Exploring ways to make two systemd services using a shared resource work with each other]]></description>
    <link>https://srijan.ch/exploring-conflicting-oneshot-services-in-systemd</link>
    <guid isPermaLink="false">64807b30f6b0810001fa0d01</guid>
    <category><![CDATA[linux]]></category>
    <category><![CDATA[devops]]></category>
    <category><![CDATA[emacs]]></category>
    <category><![CDATA[systemd]]></category>
    <dc:creator>Srijan Choudhary</dc:creator>
    <pubDate>Thu, 08 Jun 2023 19:20:00 +0000</pubDate>
    <media:content url="https://srijan.ch/media/pages/blog/exploring-conflicting-oneshot-services-in-systemd/0c15993753-1699621096/systemd-conflicts-01.png" medium="image" />
    <content:encoded><![CDATA[<figure data-ratio="auto">
    <img src="https://srijan.ch/media/pages/blog/exploring-conflicting-oneshot-services-in-systemd/0c15993753-1699621096/systemd-conflicts-01.png" alt="Exploring conflicting oneshot services in systemd">
  
    <figcaption class="text-center">
    Midjourney: two systemd services fighting over who will start first  </figcaption>
  </figure>
<h2>Background</h2>
<p>I use <a href="https://isync.sourceforge.io/mbsync.html" rel="noreferrer">mbsync</a> to sync my mailbox from my online provider (<a href="https://ref.fm/u12054901" rel="noreferrer">FastMail</a> - referral link) to my local system to eventually use with <a href="https://djcbsoftware.nl/code/mu/mu4e.html" rel="noreferrer">mu4e</a> (on Emacs).</p> <p>For periodic sync, I have a systemd service file called <code>mbsync.service</code> defining a oneshot service and a timer file called <code>mbsync.timer</code> that runs this service periodically. I can also activate the same service using a keybinding from inside mu4e.</p><figure>
  <pre><code class="language-ini">[Unit]
Description=Mailbox synchronization service
After=network-online.target

[Service]
Type=oneshot
ExecStart=/usr/bin/mbsync fastmail-all
ExecStartPost=bash -c &quot;emacsclient -s srijan -n -e &#039;(mu4e-update-index)&#039; || mu index&quot;

[Install]
WantedBy=default.target</code></pre>
    <figcaption class="text-center">mbsync.service</figcaption>
  </figure>
<figure>
  <pre><code class="language-ini">[Unit]
Description=Mailbox synchronization timer
BindsTo=graphical-session.target
After=graphical-session.target

[Timer]
OnBootSec=2m
OnUnitActiveSec=5m
Unit=mbsync.service

[Install]
WantedBy=graphical-session.target</code></pre>
    <figcaption class="text-center">mbsync.timer</figcaption>
  </figure>
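<p>Since these are user units (they hook into <code>graphical-session.target</code> and <code>default.target</code>), the timer can be enabled with the usual systemd user commands (an assumption based on the unit files above; the exact commands aren't spelled out in this post):</p>
<pre><code class="language-shell-session">$ systemctl --user daemon-reload
$ systemctl --user enable --now mbsync.timer</code></pre>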
<p>Also, for instant download of new mail, I have another service called <a href="https://gitlab.com/shackra/goimapnotify" rel="noreferrer">goimapnotify</a> configured that listens for new/updated/deleted messages on the remote mailbox using IMAP IDLE, and calls the above <code>mbsync.service</code> when there are changes.</p><p>This has worked well for me for several years.</p><h2>The Problem</h2>
<p>I
 recently split my (huge) archive folder into yearly archives so that I 
can keep/sync only the recent years on my phone. [ Aside: <a href="https://fedi.srijan.dev/notice/AVGV5TuD1cOEWQ8iQa" rel="noreferrer">yearly refile in mu4e snippet</a>
 ]. This led to an increase in the number of folders that mbsync has to sync, which increased the total sync time, because mbsync syncs the folders one by one.</p> <p>mbsync does support syncing a subset of folders, so I created a second systemd service called <code>mbsync-quick.service</code>
 and only synced my Inbox from this service. Then I updated the 
goimapnotify config to trigger this quick service instead of the full 
service when it detects changes.</p> <p>But, this caused a problem: these
 two services can run at the same time, and hence can cause corruption 
or sync conflicts in the mail files. So, I wanted a way to make sure 
that these two services don't run at the same time.</p> <p>Ideally, whenever either of these services is triggered while the other is already running, it should wait for the other service to stop before starting, essentially forming a queue.</p><h2>Solution 1: Using systemd features</h2>
<p>Systemd has a <a href="https://www.freedesktop.org/software/systemd/man/systemd.unit.html#Conflicts=" rel="noreferrer">way to specify conflicts</a> in the unit section. From the docs:</p><blockquote>
  If a unit has a <code>Conflicts=</code> setting on another unit, starting the former will stop the latter and vice versa.<br>[...] to ensure that the conflicting unit is stopped before the other unit is started, an <code>After=</code> or <code>Before=</code> dependency must be declared.  </blockquote>
<p>This is different from our requirement that the conflicting service should be allowed to finish before the triggered service starts, but it may be a good-enough way to at least prevent both from running at the same time.</p> <p>To test this, I added <code>Conflicts=</code> in both services, each naming the other as the conflicting service, and it works. The only problem is that when a service is triggered, the other service is <code>SIGTERM</code>ed. This by itself might not cause a corruption issue, but if it happens to the mbsync-quick service, then there might be a delay in getting the mail.</p> <p>This is the best way
 I found that uses built-in systemd features without any workarounds or 
hacks. Other solutions below involve some workarounds.</p><h2>Solution 2: Conflict + stop after sync complete</h2>
<p>This is a variation on Solution 1: add a wrapper script that traps the SIGTERM and only exits when the sync is complete. This also worked.</p> <p>But the drawback of this method is that anything calling stop on these services (like the system shutting down) will have to wait for the sync to finish (or until the 90s timeout). This can cause slowdowns in system shutdown that are hard to debug, so I don't prefer this solution.</p><h2>Solution 3: Delay start until the other service is finished</h2>
<p>This is also a hacky solution - use <code>ExecStartPre</code> to check if the other service is running, and busywait for it to stop before starting ourselves.</p><figure>
  <pre><code class="language-ini">[Unit]
Description=Mailbox synchronization service (quick)
After=network-online.target

[Service]
Type=oneshot
ExecStartPre=/bin/sh -c &#039;while systemctl --user is-active mbsync.service | grep -q activating; do sleep 0.5; done&#039;
ExecStart=/usr/bin/mbsync fastmail-inbox
ExecStartPost=bash -c &quot;emacsclient -s srijan -n -e &#039;(mu4e-update-index)&#039; || mu index&quot;</code></pre>
    <figcaption class="text-center">mbsync-quick.service</figcaption>
  </figure>
<p>Here, we use <code>systemctl is-active</code> to query the status of the other service, and wait until the other service is no longer in the <code>activating</code> state. The state is called <code>activating</code> instead of <code>active</code> because these are oneshot services that go from <code>inactive</code> to <code>activating</code> to <code>inactive</code> without ever reaching <code>active</code>.</p><p>To avoid an actual busywait on the CPU, I added a sleep of 0.5s.</p><p>This worked best for my use case. When one of the services is triggered, it checks if the other service is running and waits for it to stop before running itself. It also does not have the drawback of Solution 2 of trapping exits and delaying a stop command.</p><p>But, after using it for a day, I found there is a race condition (!) that can cause a deadlock between these two services, leaving neither of them able to start.</p><p>The reason for the race condition was:</p><ul><li>A service is marked as <code>activating</code> when its <code>ExecStartPre</code> command starts</li><li>I added a sleep of 0.5 seconds</li></ul><p>So, if the other service is triggered again within those 0.5 seconds, both services will be marked as <code>activating</code> and they will wait for each other indefinitely. This is what I get for using workarounds.</p><h2>Solution 4: One-way conflict, other way delay</h2>
<p>The final good-enough solution I came up with was to break this cyclic dependency with a hybrid of Solution 1 and Solution 3. I was okay with <code>mbsync.service</code> being stopped in favour of the (higher priority) <code>mbsync-quick.service</code>.</p> <p>So, I added <code>mbsync.service</code> to the <code>Conflicts=</code> section of <code>mbsync-quick.service</code>, and used the <code>ExecStartPre</code> method in <code>mbsync.service</code>.</p> <p>💡Let me know if you know a better way to achieve this.</p><h2>References</h2>
<ul><li><a href="https://unix.stackexchange.com/questions/503719/how-to-set-a-conflict-in-systemd-in-one-direction-only" rel="noreferrer">https://unix.stackexchange.com/questions/503719/how-to-set-a-conflict-in-systemd-in-one-direction-only</a></li><li><a href="https://unix.stackexchange.com/questions/465794/is-it-possible-to-make-a-systemd-unit-wait-until-all-its-conflicts-are-stopped/562959" rel="noreferrer">https://unix.stackexchange.com/questions/465794/is-it-possible-to-make-a-systemd-unit-wait-until-all-its-conflicts-are-stopped/562959</a></li></ul>]]></content:encoded>
    <comments>https://srijan.ch/exploring-conflicting-oneshot-services-in-systemd#comments</comments>
    <slash:comments>0</slash:comments>
  </item><item>
    <title>Graphical password prompt for disk decryption on ArchLinux</title>
    <description><![CDATA[Enabling a graphical password prompt for disk decryption on ArchLinux]]></description>
    <link>https://srijan.ch/graphical-password-prompt-for-disk-decryption</link>
    <guid isPermaLink="false">63f7984c4c3c0f000109a2e0</guid>
    <category><![CDATA[linux]]></category>
    <dc:creator>Srijan Choudhary</dc:creator>
    <pubDate>Thu, 23 Feb 2023 17:30:00 +0000</pubDate>
<content:encoded><![CDATA[<p>In my last post, I described how I <a href="https://srijan.ch/encrypting-an-existing-linux-systems-root-partition" rel="noreferrer">enabled encryption on my Linux root partition</a>. However, during boot up, it asked for the password using a plain-text prompt. I was not satisfied with the design and found that there's a better way: <a href="https://wiki.archlinux.org/title/plymouth" rel="noreferrer">Plymouth</a>.</p><p>Plymouth is a package that provides a themeable graphical boot process / splash screen all the way up to the login manager. This includes a graphical password prompt as well. Here are the steps I took to set this up:</p><p>1. First, I installed <a href="https://aur.archlinux.org/packages/plymouth-git/" rel="noreferrer">plymouth-git</a> from the AUR. ArchWiki suggests plymouth-git instead of plymouth because it is actually less likely to cause problems for most users than the stable package.</p><p>2. Next, I updated the <code>HOOKS</code> section in my <code>/etc/mkinitcpio.conf</code> to include the <code>plymouth</code> hook:</p><figure>
  <pre><code class="language-ini">HOOKS=(base systemd plymouth autodetect modconf kms keyboard sd-vconsole block sd-encrypt filesystems fsck)</code></pre>
  </figure>
<p>3. And regenerated the initramfs:</p><figure>
  <pre><code class="language-shellsession"># mkinitcpio -P</code></pre>
  </figure>
<p>4. Next, I added the following kernel parameters:</p><figure>
  <pre><code class="language-text">quiet splash</code></pre>
  </figure>
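<p>Where these parameters go depends on the bootloader. With GRUB, for example, they can be added to <code>GRUB_CMDLINE_LINUX_DEFAULT</code> in <code>/etc/default/grub</code>, followed by regenerating the config (a sketch assuming GRUB; the post doesn't mention which bootloader is in use):</p>
<pre><code class="language-shellsession"># grep CMDLINE_LINUX_DEFAULT /etc/default/grub
GRUB_CMDLINE_LINUX_DEFAULT=&quot;quiet splash&quot;
# grub-mkconfig -o /boot/grub/grub.cfg</code></pre>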
<p>ArchWiki also suggests adding <code>vt.global_cursor_default=0</code>, but my experience was better without it. With this option, the cursor in TTY terminals becomes hidden, not just during the boot sequence but afterwards as well.</p> <p>With the above changes, after reboot, a nice password prompt is shown with a spinner image. But, this hid the beautiful OEM ROG logo that is shown first at boot up. So, here are further tweaks I did to make it look the way I wanted.</p> <p>5. First, I tried using the built-in BGRT theme. This is a variation of the spinner theme that keeps the OEM logo if available (BGRT stands for Boot Graphics Resource Table).</p><figure>
  <pre><code class="language-shellsession"># plymouth-set-default-theme -R bgrt</code></pre>
  </figure>
<p>This did not show the spinner, but it still hid the OEM logo when asking for the decryption password, although it did show the logo again after the password was entered. So, I guessed it just needed a little customization.</p> <p>6. So, I made a copy of the bgrt theme to hold my customizations.</p><figure>
  <pre><code class="language-shellsession"># cd /usr/share/plymouth/themes
# cp -r bgrt bgrt-custom
# cd bgrt-custom
# mv bgrt.plymouth bgrt-custom.plymouth</code></pre>
  </figure>
<p>7. These are the changes I had to make in <code>bgrt-custom.plymouth</code> to make it show the prompt like I wanted:</p><figure>
  <pre><code class="language-diff">diff --git a/../bgrt/bgrt.plymouth b/bgrt-custom.plymouth
index e8e9713..ca7a293 100644
--- a/../bgrt/bgrt.plymouth
+++ b/bgrt-custom.plymouth
@@ -30,8 +30,8 @@ Name[he]=BGRT
 Name[fa]=BGRT
 Name[fi]=BGRT
 Name[ie]=BGRT
-Name=BGRT
-Description=Jimmac&#039;s spinner theme using the ACPI BGRT graphics as background
+Name=BGRT-Custom
+Description=Customized Jimmac&#039;s spinner theme using the ACPI BGRT graphics as background
 ModuleName=two-step

 [two-step]
@@ -39,9 +39,9 @@ Font=Cantarell 12
 TitleFont=Cantarell Light 30
 ImageDir=/usr/share/plymouth/themes//spinner
 DialogHorizontalAlignment=.5
-DialogVerticalAlignment=.382
+DialogVerticalAlignment=.75
 TitleHorizontalAlignment=.5
-TitleVerticalAlignment=.382
+TitleVerticalAlignment=.75
 HorizontalAlignment=.5
 VerticalAlignment=.7
 WatermarkHorizontalAlignment=.5
@@ -52,7 +52,7 @@ BackgroundStartColor=0x000000
 BackgroundEndColor=0x000000
 ProgressBarBackgroundColor=0x606060
 ProgressBarForegroundColor=0xffffff
-DialogClearsFirmwareBackground=true
+DialogClearsFirmwareBackground=false
 MessageBelowAnimation=true

 [boot-up]</code></pre>
  </figure>
<p>Basically, I tweaked <code>DialogClearsFirmwareBackground</code>, <code>DialogVerticalAlignment</code>, and <code>TitleVerticalAlignment</code> to my liking. To set this custom theme, I ran:</p><figure>
  <pre><code class="language-shellsession"># plymouth-set-default-theme -R bgrt-custom</code></pre>
  </figure>
<p>8. This looked perfect. But, I noticed that it increased my boot-up time considerably. Plymouth was taking a long time before displaying the password prompt. On further digging, I found a parameter called <code>DeviceTimeout</code> in <code>/etc/plymouth/plymouthd.conf</code> with a default value of 8 seconds.</p> <p>According to <a href="https://gitlab.freedesktop.org/plymouth/plymouth/-/merge_requests/58" rel="noreferrer">this merge request</a>, this was needed to keep support for certain AMD GPUs. I don't have an AMD GPU, and in any case I think Plymouth is using the EFI framebuffer for this splash screen, not the GPU. So, I reduced it to 2 seconds to make things faster.</p>]]></content:encoded>
    <comments>https://srijan.ch/graphical-password-prompt-for-disk-decryption#comments</comments>
    <slash:comments>2</slash:comments>
  </item><item>
    <title>Encrypting an existing Linux system&#039;s root partition</title>
    <description><![CDATA[Encrypt an unencrypted root partition on an Arch Linux system]]></description>
    <link>https://srijan.ch/encrypting-an-existing-linux-systems-root-partition</link>
    <guid isPermaLink="false">63f61c905e8d350001eda64a</guid>
    <category><![CDATA[linux]]></category>
    <category><![CDATA[security]]></category>
    <dc:creator>Srijan Choudhary</dc:creator>
    <pubDate>Wed, 22 Feb 2023 19:45:00 +0000</pubDate>
    <media:content url="https://srijan.ch/media/pages/blog/encrypting-an-existing-linux-systems-root-partition/27f278c9de-1699621096/partitions-summary.excalidraw.png" medium="image" />
    <content:encoded><![CDATA[<h2>Introduction</h2>
<p>I have an Arch Linux 
system with an unencrypted root partition that I wanted to encrypt. I've
 documented the steps I followed to achieve this here.</p> <p>I selected the "LUKS on a partition" option <a href="https://wiki.archlinux.org/title/dm-crypt/Encrypting_an_entire_system" rel="noreferrer">from here</a>. I don't have an LVM setup on this system and didn't need to encrypt the boot partition.</p><figure data-ratio="auto">
    <img src="https://srijan.ch/media/pages/blog/encrypting-an-existing-linux-systems-root-partition/27f278c9de-1699621096/partitions-summary.excalidraw.png" alt="">
  
  </figure>
<p>The
 first step was to have a backup so that if something failed, I could at
 least recover my critical files. I don't have filesystem-level backups 
configured, so I used <a href="https://github.com/kopia/kopia" rel="noreferrer">kopia</a> to back up my home folder. Details on this might be in a future blog post.</p><h2>Process</h2>
<p>1. To begin, I set up a <a href="https://wiki.archlinux.org/title/USB_flash_installation_medium" rel="noreferrer">USB flash installation medium</a>
 so that I could boot into a live environment to perform the actual 
actions. Since I needed to encrypt the root partition, this could not be
 performed from inside the system running off that partition.</p> <p>2. 
After booting into the live environment using the above USB medium, I 
first shrank the existing filesystem by 32MiB to make space for the LUKS
 encryption header, which is always stored at the beginning of the 
device. My filesystem size is exactly 500GiB, so I set the new size to <code>511968M</code>.</p><figure>
  <pre><code class="language-shellsession"># echo &quot;Check the filesystem&quot;
# e2fsck -f /dev/nvme0n1p7

# echo &quot;Resize&quot;
# resize2fs -p /dev/nvme0n1p7 511968M</code></pre>
  </figure>
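<p>The new size is simple arithmetic: 500 GiB expressed in MiB, minus the 32 MiB kept aside for the encryption header:</p><figure>
  <pre><code class="language-shell"># 500 GiB in MiB, minus 32 MiB reserved for the LUKS header
echo $((500 * 1024 - 32))
# prints: 511968</code></pre>
  </figure>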
<p>3. Now, I encrypted it using the default cipher. This took 37 minutes on my 500GiB partition, which was about 55% full.</p><figure>
  <pre><code class="language-shellsession"># cryptsetup reencrypt --encrypt --reduce-device-size 16M /dev/nvme0n1p7

WARNING!

========

This will overwrite data on LUKS2-temp-12345678-9012-3456-7890-123456789012.new irrevocably.

Are you sure? (Type &#039;yes&#039; in capital letters): YES
Enter passphrase for LUKS2-temp-12345678-9012-3456-7890-123456789012.new: 
Verify passphrase:</code></pre>
  </figure>
<p>4. Next, I extended the original ext4 file system to occupy all available space again on the now encrypted partition:</p><figure>
  <pre><code class="language-shellsession"># cryptsetup open /dev/nvme0n1p7 root
Enter passphrase for /dev/nvme0n1p7: 

# resize2fs /dev/mapper/root</code></pre>
  </figure>
<p>5. Now, I mounted the filesystem and chrooted into it:</p><figure>
  <pre><code class="language-shellsession"># mount /dev/mapper/root /mnt
# mount /dev/nvme0n1p1 /mnt/boot
# arch-chroot /mnt</code></pre>
  </figure>
<p>6. Since I have a systemd-based initramfs, I added <code>keyboard</code>, <code>sd-vconsole</code>, and <code>sd-encrypt</code> hooks in the <code>HOOKS</code> section of <code>/etc/mkinitcpio.conf</code>:</p><figure>
  <pre><code class="language-ini">HOOKS=(base systemd autodetect modconf kms keyboard sd-vconsole block sd-encrypt filesystems fsck)</code></pre>
    <figcaption class="text-center">/etc/mkinitcpio.conf</figcaption>
  </figure>
<p>7. Next, I regenerated the initramfs (<code>-P</code> regenerates it for all presets):</p><figure>
  <pre><code class="language-shellsession"># mkinitcpio -P</code></pre>
  </figure>
<p>8. Next, I configured the boot loader by adding to kernel parameters:</p><figure>
  <pre><code class="language-ini">rd.luks.name=&lt;device-UUID&gt;=root root=/dev/mapper/root</code></pre>
  </figure>
<p>I found the device UUID using: <code>sudo blkid -s UUID -o value /dev/nvme0n1p7</code>. Surprisingly (for me), the UUID had changed after encrypting the partition.</p> <p>My final bootloader conf file looked like this:</p><figure>
  <pre><code class="language-text">title    Arch Linux
linux    /vmlinuz-linux
initrd   /amd-ucode.img
initrd   /initramfs-linux.img
options  rd.luks.name=1df8ea89-4274-4ef9-a670-76c13e612901=root root=/dev/mapper/root rw</code></pre>
    <figcaption class="text-center">/boot/loader/entries/arch.conf</figcaption>
  </figure>
<p>9. Lastly, I updated <code>/etc/fstab</code>:</p><figure>
  <pre><code class="language-text">/dev/mapper/root  /  ext4  rw,relatime  0 1</code></pre>
    <figcaption class="text-center">/etc/fstab</figcaption>
  </figure>
<p>10. All done. To test it out, I logged out of the chroot environment and rebooted the system.</p><p>It asked me for the disk encryption password. After entering the password selected in step 3, the system booted up as usual, and everything looked to be working.</p><h2>Final Thoughts</h2>
<p>This was surprisingly easy to do and did not take much time. The <a href="https://wiki.archlinux.org/" rel="noreferrer">ArchWiki</a> was helpful, even if the information was spread over multiple pages/sections. Taking a backup before starting also made me feel safe about the process.</p><p>I did not like the design of the decryption password prompt at bootup. Maybe there's a way to customize it to look better. Update: I found a way. <a href="https://srijan.ch/graphical-password-prompt-for-disk-decryption" rel="noreferrer">Details here</a>.</p>]]></content:encoded>
    <comments>https://srijan.ch/encrypting-an-existing-linux-systems-root-partition#comments</comments>
    <slash:comments>1</slash:comments>
  </item><item>
    <title>Download a file securely from GCS on an untrusted system</title>
    <description><![CDATA[Download files from google cloud storage using temporary credentials or time-limited access URLs]]></description>
    <link>https://srijan.ch/secure-gcs-download</link>
    <guid isPermaLink="false">632920ea8948d20001269e4e</guid>
    <category><![CDATA[cloud]]></category>
    <category><![CDATA[security]]></category>
    <category><![CDATA[devops]]></category>
    <dc:creator>Srijan Choudhary</dc:creator>
    <pubDate>Sun, 27 Nov 2022 19:30:00 +0000</pubDate>
    <content:encoded><![CDATA[<h2>The Problem</h2>
<p>We publish some of our build artifacts to <a href="https://cloud.google.com/storage" rel="noreferrer">Google Cloud Storage</a>,
 and users need to download these to the target installation system. 
But, this target system is not always trusted and can have shared local 
users, so we don't want to store long-lived credentials.</p> <p>As a 
user, I can download the artifact on my (secure) laptop and transfer it 
to the target system. But, the artifact can be large (several GBs), so downloading it and then uploading it again is cumbersome and slow.</p><h2>Option 1: use <a href="https://cloud.google.com/sdk/docs/install" rel="noreferrer">gcloud CLI</a> on the target system</h2>
<p>Log in to the target system, install gcloud CLI, authenticate, and then download the file:</p><figure>
  <pre><code class="language-shellsession">$ gcloud storage cp gs://$BUCKET/$FILE ./</code></pre>
  </figure>
<p>This has two problems:</p><ol><li>The user must install (and maybe update) gcloud CLI on the target system.</li><li>The
 user needs to store their credentials on the target system. These 
credentials have full access to whatever resources the user has. So, 
it's a huge security risk, especially if we don't trust the target 
system.</li></ol><p>To mitigate (2), the user can log out of gcloud CLI after downloading. But, this is a manual step they might miss.</p><h2>Option 2: use gcloud CLI with a service account</h2>
<p>This
 is a variation of the above solution - we log in using a service 
account instead of the user account. This service account can have 
restricted access to only the resources needed.</p><figure>
  <pre><code class="language-shellsession">$ gcloud iam service-accounts create $SA_NAME \
    --description=&quot;Service Account for downloading artifacts&quot;
$ gsutil iam ch \
    serviceAccount:$SA_NAME@$PROJECT_ID.iam.gserviceaccount.com:roles/storage.objectViewer \
    gs://$BUCKET</code></pre>
  </figure>
<p>This partially mitigates problem (2) above. If 
the user forgets to log out of gcloud CLI, the damage will be restricted
 to the resources accessible by the service account.</p><h2>Option 3: Short-lived access token</h2>
<p>Gcloud CLI supports creating short-lived credentials for the end-user account or <a href="https://cloud.google.com/iam/docs/create-short-lived-credentials-direct" rel="noreferrer">any service account</a>.</p> <p>This credential can be used to download the artifact using wget with an authorization header - no need to install gcloud CLI.</p> <p>Here's
 a small script that asks for the auth token as input, parses various 
GCS bucket URL formats, and downloads the requested artifact directly 
using wget:</p><figure>
  <pre><code class="language-bash">#!/bin/bash
# Download artifact from GCS bucket

set -e

echo -e &quot;====&gt; Run \`gcloud auth print-access-token\` on a system where you&#039;ve setup gcloud to get access token\n&quot;
read -r -p &quot;Enter access token: &quot; StorageAccessToken
read -r -p &quot;Enter GCS artifact URL: &quot; ArtifactURL

if [[ &quot;${ArtifactURL:0:33}&quot; == &quot;https://console.cloud.google.com/&quot; ]]; then
    BucketAndFile=&quot;${ArtifactURL#*https://console.cloud.google.com/storage/browser/_details/}&quot;
elif [[ &quot;${ArtifactURL:0:33}&quot; == &quot;https://storage.cloud.google.com/&quot; ]]; then
    BucketAndFile=&quot;${ArtifactURL#*https://storage.cloud.google.com/}&quot;
elif [[ &quot;${ArtifactURL:0:5}&quot; == &quot;gs://&quot; ]]; then
    BucketAndFile=&quot;${ArtifactURL#*gs://}&quot;
else
    echo &quot;Invalid GCS artifact URL&quot;
    exit 1
fi

StorageBucket=&quot;${BucketAndFile%%/*}&quot;
StorageFile=&quot;${BucketAndFile#*/}&quot;
StorageFileEscaped=$(echo &quot;${StorageFile}&quot; | sed &#039;s/\//%2F/g&#039;)
OutputFileName=&quot;${StorageFile##*/}&quot;

echo -e &quot;\n====&gt; Downloading gs://${StorageBucket}/${StorageFile} to ${OutputFileName}\n&quot;

wget -O &quot;${OutputFileName}&quot; --header=&quot;Authorization: Bearer ${StorageAccessToken}&quot; \
    &quot;https://storage.googleapis.com/storage/v1/b/${StorageBucket}/o/${StorageFileEscaped}?alt=media&quot;</code></pre>
  </figure>
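<p>The URL-parsing part of the script can be checked in isolation. Here's a minimal sketch of the same prefix-stripping logic as a standalone function (the function name and the sample bucket/file are made up for illustration):</p><figure>
  <pre><code class="language-bash">#!/bin/bash
# Reduce any of the three supported GCS URL formats to &quot;bucket/path/to/file&quot;
gcs_bucket_and_file() {
    local url=&quot;$1&quot;
    case &quot;$url&quot; in
        https://console.cloud.google.com/storage/browser/_details/*)
            echo &quot;${url#https://console.cloud.google.com/storage/browser/_details/}&quot; ;;
        https://storage.cloud.google.com/*)
            echo &quot;${url#https://storage.cloud.google.com/}&quot; ;;
        gs://*)
            echo &quot;${url#gs://}&quot; ;;
        *)
            echo &quot;Invalid GCS artifact URL&quot; &gt;&amp;2; return 1 ;;
    esac
}

# All three formats reduce to the same pair:
gcs_bucket_and_file gs://my-bucket/builds/app.tar.gz
# prints: my-bucket/builds/app.tar.gz</code></pre>
  </figure>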
<h2>Option 4: Signed URLs</h2>
<p>Google Cloud Storage also supports <a href="https://cloud.google.com/storage/docs/access-control/signed-urls" rel="noreferrer">signed URLs</a>
 - which give time-limited access to a specific Cloud Storage resource. 
Anyone possessing the signed URL can use it while it's active without 
any further credentials. This fits our use case brilliantly.</p> <p>To do this, first we need to give ourselves the <code>iam.serviceAccountTokenCreator</code> role so that we can impersonate a service account.</p><figure>
  <pre><code class="language-shellsession">$ gcloud iam service-accounts add-iam-policy-binding \
	$SA_NAME@$PROJECT_ID.iam.gserviceaccount.com \
    --member=$MY_EMAIL \
    --role=roles/iam.serviceAccountTokenCreator</code></pre>
  </figure>
<p>Then, we can generate a signed URL:</p><figure>
  <pre><code class="language-shellsession">$ gcloud config set auth/impersonate_service_account \
    $SA_NAME@$PROJECT_ID.iam.gserviceaccount.com

$ gsutil signurl -u -r $REGION -d 10m gs://$BUCKET/$FILE

$ gcloud config unset auth/impersonate_service_account</code></pre>
  </figure>
<p>And we can use wget to download the artifact from this URL without any further authentication.</p>]]></content:encoded>
    <comments>https://srijan.ch/secure-gcs-download#comments</comments>
    <slash:comments>0</slash:comments>
  </item><item>
    <title>Slackbot using google cloud serverless functions</title>
    <description><![CDATA[Slack bot using Google Cloud Functions to post a roundup of recently created channels]]></description>
    <link>https://srijan.ch/slackbot-google-cloud-part-1</link>
    <guid isPermaLink="false">634164f0219ca50001581813</guid>
    <category><![CDATA[development]]></category>
    <category><![CDATA[cloud]]></category>
    <category><![CDATA[slack]]></category>
    <dc:creator>Srijan Choudhary</dc:creator>
    <pubDate>Fri, 04 Nov 2022 19:15:00 +0000</pubDate>
    <media:content url="https://srijan.ch/media/pages/blog/slackbot-google-cloud-part-1/a7a4e31f92-1699621096/screenshot_20221009_161507.png" medium="image" />
    <content:encoded><![CDATA[<p>At my org, we wanted a simple Slack bot that posts a roundup 
of new channels created recently in the workspace to a channel. While 
writing this is easy enough, I wanted to do it using <a href="https://cloud.google.com/functions" rel="noreferrer">Google Cloud Functions</a> with Python, trying to follow best practices as much as possible.</p> <p>Here's what the overall flow will look like:</p><figure data-ratio="auto">
    <img src="https://srijan.ch/media/pages/blog/slackbot-google-cloud-part-1/1fc9e55238-1699621096/slackbot01.5.excalidraw.png" alt="">
  
    <figcaption class="text-center">
    Google Cloud Functions Slack Bot  </figcaption>
  </figure>
<p>We want this roundup post triggered on some schedule (maybe daily), so the <a href="https://cloud.google.com/scheduler" rel="noreferrer">Cloud Scheduler</a> is required to send an event to a <a href="https://cloud.google.com/pubsub" rel="noreferrer">Google Pub/Sub</a>
topic that triggers our cloud function, which queries the Slack API to get channel details, filters the recently created ones, and posts the roundup back to a Slack channel. <a href="https://cloud.google.com/secret-manager" rel="noreferrer">Secret Manager</a> is used to securely store Slack's bot token and signing secret.</p> <p>Note that the credentials shown in any screenshots below are not valid.</p><h2>Create the slack app</h2>
<p>The
 first step will be to create the slack app. Go to https://api.slack.com
 and click on "Create an app". Choose "From scratch" in the first 
dialog; enter an app name and choose a workspace for your app in the 
second dialog. In the next screen, copy the "<strong>Signing Secret</strong>" from the "App Credentials" section and save it for later use.</p><figure data-ratio="auto">
  <ul>
        <li>
      <img alt="" src="https://srijan.ch/media/pages/blog/slackbot-google-cloud-part-1/a7a4e31f92-1699621096/screenshot_20221009_161507.png">    </li>
        <li>
      <img alt="" src="https://srijan.ch/media/pages/blog/slackbot-google-cloud-part-1/d075d74c25-1699621096/screenshot_20221009_161622-1.png">    </li>
        <li>
      <img alt="" src="https://srijan.ch/media/pages/blog/slackbot-google-cloud-part-1/e0500cfbdf-1699621096/screenshot_20221009_162905.png">    </li>
      </ul>
  </figure>
<p>Next,
 go to the "OAuth and Permissions" tab from the left sidebar, and scroll
 down to "Scopes" -&gt; "Bot Token Scopes". Here, add the scopes:</p><ul><li><a href="https://api.slack.com/scopes/channels:read" rel="noopener noreferrer"><code>channels:read</code></a>: required to query public channels and find their creation times</li><li><a href="https://api.slack.com/scopes/chat:write" rel="noopener noreferrer"><code>chat:write</code></a>: required to write to a channel (where the bot is invited)</li></ul><figure data-ratio="auto">
    <img src="https://srijan.ch/media/pages/blog/slackbot-google-cloud-part-1/0cf73d1181-1699621096/screenshot_20221009_162243.png" alt="">
  
  </figure>
<p>Next,
 scroll up on the same screen and click "Install to Workspace" to 
install to your workspace. Click "Allow" in the next screen to allow the
 installation. Next, copy the "<strong>Bot User OAuth Token</strong>" from the "OAuth Tokens for Your Workspace" section on the same page and save it for later use.</p><figure data-ratio="auto">
  <ul>
        <li>
      <img alt="" src="https://srijan.ch/media/pages/blog/slackbot-google-cloud-part-1/a79785c4f6-1699621096/screenshot_20221009_162420.png">    </li>
        <li>
      <img alt="" src="https://srijan.ch/media/pages/blog/slackbot-google-cloud-part-1/d075d74c25-1699621096/screenshot_20221009_161622-1.png">    </li>
        <li>
      <img alt="" src="https://srijan.ch/media/pages/blog/slackbot-google-cloud-part-1/f0e359ac2e-1699621096/screenshot_20221009_163221.png">    </li>
      </ul>
  </figure>
<p>💡Keep track of the <strong>Bot User OAuth Token</strong> and <strong>Signing Secret</strong> you copied above.</p><h2>Post to a Slack channel from a Google Cloud Function</h2>
<p>Next, we will try to use the credentials copied above to enable a Google Cloud Function to send a message to a Slack channel.</p><h3>Google Cloud Basic Setup</h3>
<p>We will use gcloud cli for the following sections, so <a href="https://cloud.google.com/sdk/docs/install" rel="noreferrer">install</a> and <a href="https://cloud.google.com/sdk/docs/initializing" rel="noreferrer">initialize</a> the Google Cloud CLI if not done yet. If you already have gcloud cli, run <code>gcloud components update</code> to update it to the latest version.</p> <p>Create
 a new project for this if required, or choose an existing project, set 
it as default, and export the project id as a shell environment variable for later use. Also export the region you want to use.</p><figure>
  <pre><code class="language-shell">export PROJECT_ID=slackbot-project
export REGION=us-central1

gcloud config set project ${PROJECT_ID}</code></pre>
  </figure>
<p>You will have to enable billing for this project to be able to use some of the functionality we require.</p> <p>You
 may also have to enable the Secret Manager, Cloud Functions, Cloud 
Build, Artifact Registry, and Logging APIs if this is the first time 
you're using Functions in this project. Note that some services like Secret Manager need billing to be set up before they can be enabled.</p><figure>
  <pre><code class="language-shell">gcloud services enable --project slackbot-project \
        secretmanager.googleapis.com \
        cloudfunctions.googleapis.com \
        cloudbuild.googleapis.com \
        artifactregistry.googleapis.com \
        logging.googleapis.com</code></pre>
  </figure>
<h3>Create a service account</h3>
<p>By default, Cloud Functions uses a <a href="https://cloud.google.com/functions/docs/securing/function-identity#runtime_service_account" rel="noreferrer">default service account</a> as its identity for function execution. These default service accounts have the <strong>Editor</strong>
 role, which allows them broad access to many Google Cloud services. Of 
course, this is not recommended for production, so we will create a new 
service account for this and <a href="https://cloud.google.com/iam/docs/understanding-service-accounts#granting_minimum" rel="noreferrer">grant it the minimum permissions</a> that it requires.</p><figure>
  <pre><code class="language-shell">SA_NAME=channelbot-sa
SA_EMAIL=${SA_NAME}@${PROJECT_ID}.iam.gserviceaccount.com

gcloud iam service-accounts create ${SA_NAME} \
    --description=&quot;Service Account for ChannelBot slackbot&quot; \
    --display-name=&quot;ChannelBot SlackBot SA&quot;</code></pre>
  </figure>
<h3>Store secrets and give permissions to service account</h3>
<p>First, we need to store the secrets in Secret Manager.</p><figure>
  <pre><code class="language-shell">printf &#039;%s&#039; &quot;$SLACK_BOT_TOKEN&quot; | gcloud secrets create \
    channelbot-slack-bot-token --data-file=- \
    --project=${PROJECT_ID} \
    --replication-policy=user-managed \
    --locations=${REGION}

printf &#039;%s&#039; &quot;$SLACK_SIGNING_SECRET&quot; | gcloud secrets create \
    channelbot-slack-signing-secret --data-file=- \
    --project=${PROJECT_ID} \
    --replication-policy=user-managed \
    --locations=${REGION}</code></pre>
  </figure>
<p>And give our service account the <code><a href="https://cloud.google.com/secret-manager/docs/access-control#secretmanager.secretAccessor" rel="noreferrer">roles/secretmanager.secretAccessor</a></code> role on these secrets.</p><figure>
  <pre><code class="language-shell">gcloud secrets add-iam-policy-binding \
    projects/${PROJECT_ID}/secrets/channelbot-slack-bot-token \
    --member serviceAccount:${SA_EMAIL} \
    --role roles/secretmanager.secretAccessor

gcloud secrets add-iam-policy-binding \
    projects/${PROJECT_ID}/secrets/channelbot-slack-signing-secret \
    --member serviceAccount:${SA_EMAIL} \
    --role roles/secretmanager.secretAccessor</code></pre>
  </figure>
<h3>Create and deploy the function</h3>
<p>Here's a simple HTTP function that sends a message to slack on any HTTP call:</p><figure>
  <pre><code class="language-python">import functions_framework
from slack_bolt import App

# process_before_response must be True when running on FaaS
app = App(process_before_response=True)

print(&#039;Function has started&#039;)

@functions_framework.http
def send_to_slack(request):
    print(&#039;send_to_slack triggered&#039;)
    channel = &#039;#general&#039;
    text = &#039;Hello from Google Cloud Functions!&#039;
    app.client.chat_postMessage(channel=channel, text=text)
    return &#039;Sent to slack!&#039;</code></pre>
    <figcaption class="text-center">src-v1/main.py</figcaption>
  </figure>
<figure>
  <pre><code class="language-text">functions-framework
slack_bolt</code></pre>
    <figcaption class="text-center">src-v1/requirements.txt</figcaption>
  </figure>
<p>Assuming <code>main.py</code> and <code>requirements.txt</code> are present in <code>src-v1</code> folder, deploy using:</p><figure>
  <pre><code class="language-shell">gcloud beta functions deploy channelbot-send-to-slack \
    --gen2 \
    --runtime python310 \
    --project=${PROJECT_ID} \
    --service-account=${SA_EMAIL} \
    --source ./src-v1 \
    --entry-point send_to_slack \
    --trigger-http \
    --allow-unauthenticated \
    --region ${REGION} \
    --memory=128MiB \
    --min-instances=0 \
    --max-instances=1 \
    --set-secrets &#039;SLACK_BOT_TOKEN=channelbot-slack-bot-token:latest,SLACK_SIGNING_SECRET=channelbot-slack-signing-secret:latest&#039; \
    --timeout 60s</code></pre>
  </figure>
<p>💡We're using <code>--allow-unauthenticated</code> here just to test it out. It will be removed in later sections.</p><h3>Test it out</h3>
<p>Once the deployment is complete, we can view the function logs using:</p><figure>
  <pre><code class="language-shell">gcloud beta functions logs read channelbot-send-to-slack \
	--project ${PROJECT_ID} --gen2</code></pre>
  </figure>
<p>If everything was successful above, one of the recent log statements should say: <code>Function has started</code>.</p> <p>Next, add the bot to the <code>#general</code> channel on your slack workspace using <code>/invite @ChannelBot</code>.</p> <p>Next, find the service endpoint using:</p><figure>
  <pre><code class="language-shell">gcloud functions describe channelbot-send-to-slack \
    --project ${PROJECT_ID} \
    --gen2 \
    --region ${REGION} \
    --format &quot;value(serviceConfig.uri)&quot;</code></pre>
  </figure>
<p>This will give a URL like <code>https://channelbot-send-to-slack-ga6Ofi9to0-uc.a.run.app</code>.</p> <p>To trigger the channel post, just do <code>curl ${SERVICE_URL}</code>. This should result in a test message from ChannelBot to the <code>#general</code> channel.</p><figure data-ratio="auto">
    <img src="https://srijan.ch/media/pages/blog/slackbot-google-cloud-part-1/27e2d0931e-1699621096/screenshot_20221018_235104.png" alt="">
  
    <figcaption class="text-center">
    ChannelBot message from Google Cloud Functions  </figcaption>
  </figure>
<h2>Trigger via Google Pub/Sub</h2>
<p>Now,
 instead of an unauthenticated HTTP trigger, we would like to trigger 
this via Google Pub/Sub. We would also like to pass the channel name and
 the message to post in the event.</p><h3>Google Pub/Sub basics</h3>
<p>Pub/Sub enables you to create systems of event producers and consumers, called <strong>publishers</strong> and <strong>subscribers</strong>. Publishers communicate with subscribers asynchronously by broadcasting events. Some core concepts:</p><ul><li><strong>Topic.</strong> A named resource to which messages are sent by publishers.</li><li><strong>Subscription.</strong> A named resource representing the stream of messages from a single, specific topic, to be delivered to the subscribing application.</li><li><strong>Message.</strong> The combination of data and (optional) attributes that a publisher sends to a topic and is eventually delivered to subscribers.</li><li><strong>Publisher.</strong> An application that creates and sends messages to one or more topics.</li></ul><p>In
 this section, we will create a topic, create a subscription for our 
cloud function to listen to messages to that topic, and produce messages
 manually to that topic using <code>gcloud</code> cli. The message will 
contain the channel name and message to post, and the cloud function 
will post that message to the specified slack channel.</p><h3>Create pub/sub topic</h3>
<p>First, we need to create a topic.</p><figure>
  <pre><code class="language-shell">export PUBSUB_TOPIC=channelbot-pubsub
gcloud pubsub topics create ${PUBSUB_TOPIC} \
    --project ${PROJECT_ID}</code></pre>
  </figure>
<h3>Grant permissions to the service account</h3>
<p>Next, we need to give the <code>roles/pubsub.editor</code> role to the service account we're using for the function execution so that it can create a subscription to this pub/sub topic.</p><figure>
  <pre><code class="language-shell">gcloud pubsub topics add-iam-policy-binding ${PUBSUB_TOPIC} \
    --project ${PROJECT_ID} \
    --member serviceAccount:${SA_EMAIL} \
    --role roles/pubsub.editor</code></pre>
  </figure>
<h3>Update the function code</h3>
<p>Here's the <code>main.py</code> we'll need to listen to pub/sub events, extract <code>channel</code> and <code>text</code>, and send them to slack:</p><figure>
  <pre><code class="language-python">import base64
import json
import functions_framework
from slack_bolt import App

# process_before_response must be True when running on FaaS
app = App(process_before_response=True)

print(&#039;Function has started&#039;)

# Triggered from a message on a Cloud Pub/Sub topic.
@functions_framework.cloud_event
def pubsub_handler(cloud_event):
    try:
        data = base64.b64decode(
            cloud_event.data[&quot;message&quot;][&quot;data&quot;]).decode()
        print(&quot;Received from pub/sub: %s&quot; % data)
        event_data = json.loads(data)
        channel = event_data[&quot;channel&quot;]
        text = event_data[&quot;text&quot;]
        app.client.chat_postMessage(channel=channel, text=text)
    except Exception as E:
        print(&quot;Error decoding message: %s&quot; % E)</code></pre>
    <figcaption class="text-center">src-v2/main.py</figcaption>
  </figure>
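<p>The <code>data</code> field inside the Pub/Sub message arrives base64-encoded, which is why the handler decodes it before calling <code>json.loads</code>. The round trip can be sanity-checked from a shell (the payload here is just a sample):</p><figure>
  <pre><code class="language-shell"># Encode a sample payload the way Pub/Sub delivers it, then decode it back
PAYLOAD=$(printf &#039;%s&#039; &#039;{&quot;channel&quot;: &quot;#general&quot;, &quot;text&quot;: &quot;hi&quot;}&#039; | base64)
printf &#039;%s\n&#039; &quot;$PAYLOAD&quot; | base64 -d
# prints: {&quot;channel&quot;: &quot;#general&quot;, &quot;text&quot;: &quot;hi&quot;}</code></pre>
  </figure>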
<p>Before deploying, we also need to enable the Eventarc API in this project.</p><figure>
  <pre><code class="language-shell">gcloud services enable --project ${PROJECT_ID} \
    eventarc.googleapis.com</code></pre>
  </figure>
<h3>Deploy and Test</h3>
<p>Now, there's a slightly modified version of the deploy command to deploy this:</p><figure>
  <pre><code class="language-shell">gcloud beta functions deploy channelbot-send-to-slack \
    --gen2 \
    --runtime python310 \
    --project ${PROJECT_ID} \
    --service-account ${SA_EMAIL} \
    --source ./src-v2 \
    --entry-point pubsub_handler \
    --trigger-topic ${PUBSUB_TOPIC} \
    --region ${REGION} \
    --memory 128MiB \
    --min-instances 0 \
    --max-instances 1 \
    --set-secrets &#039;SLACK_BOT_TOKEN=channelbot-slack-bot-token:latest,SLACK_SIGNING_SECRET=channelbot-slack-signing-secret:latest&#039; \
    --timeout 60s</code></pre>
  </figure>
<p>The main changes are:</p><ul><li>Changed entry-point to the new function <code>pubsub_handler</code></li><li>Replaced <code>--trigger-http</code> with <code>--trigger-topic</code></li><li>Removed <code>--allow-unauthenticated</code></li></ul><p>Before sending a pub/sub message, we also need to grant the <code>roles/run.invoker</code> role to our service account so that it can trigger our newly deployed function.</p><figure>
  <pre><code class="language-shell">gcloud run services add-iam-policy-binding channelbot-send-to-slack \
    --project ${PROJECT_ID} \
    --region ${REGION} \
    --member=serviceAccount:${SA_EMAIL} \
    --role=roles/run.invoker</code></pre>
  </figure>
<p>To test this out, we can send a pub/sub message using gcloud cli:</p><figure>
  <pre><code class="language-shell">gcloud pubsub topics publish ${PUBSUB_TOPIC} \
    --project ${PROJECT_ID} \
    --message &#039;{&quot;channel&quot;: &quot;#general&quot;, &quot;text&quot;: &quot;Hello from Cloud Pub/Sub!&quot;}&#039;</code></pre>
  </figure>
<figure data-ratio="auto">
    <img src="https://srijan.ch/media/pages/blog/slackbot-google-cloud-part-1/f184b7795b-1699621096/screenshot_20221104_232055.png" alt="">
  
    <figcaption class="text-center">
    ChannelBot message via pub/sub  </figcaption>
  </figure>
<h2>Post new channels roundup using cloud scheduler</h2>
<h3>Manually post recently created channels</h3>
<p>Now that we can trigger a message from pub/sub to Slack, we can add some logic to fetch the recently created channels from Slack and post them as a message on this trigger.</p> <p>Here's the modified <code>main.py</code> to do this:</p><figure>
  <pre><code class="language-python">import base64
import json
import time
import functions_framework
from slack_bolt import App

# process_before_response must be True when running on FaaS
app = App(process_before_response=True)

print(&#039;Function has started&#039;)

# Triggered from a message on a Cloud Pub/Sub topic.
@functions_framework.cloud_event
def pubsub_handler(cloud_event):
    try:
        data = base64.b64decode(
            cloud_event.data[&quot;message&quot;][&quot;data&quot;]).decode()
        print(&quot;Received from pub/sub: %s&quot; % data)
        event_data = json.loads(data)
        max_days = event_data[&quot;max_days&quot;] # Max age of channels
        channel = event_data[&quot;channel&quot;]
        recent_channels = get_recent_channels(app, max_days)
        if len(recent_channels) &gt; 0:
            blocks, text = format_channels(recent_channels, max_days)
            app.client.chat_postMessage(channel=channel, text=text,
                                        blocks=blocks)
        else:
            print(&quot;No recent channels&quot;)
    except Exception as E:
        print(&quot;Error decoding message: %s&quot; % E)


def get_recent_channels(app, max_days):
    max_age_s = max_days * 24 * 60 * 60
    result = app.client.conversations_list()
    all = result[&quot;channels&quot;]
    now = time.time()
    return [ c for c in all if (now - c[&quot;created&quot;] &lt;= max_age_s) ]

def format_channels(channels, max_days):
    text = (&quot;%s channels created in the last %s day(s):&quot; %
            (len(channels), max_days))
    blocks = [{
        &quot;type&quot;: &quot;header&quot;,
        &quot;text&quot;: {
            &quot;type&quot;: &quot;plain_text&quot;,
            &quot;text&quot;: text
        }
    }]
    summary = &quot;&quot;
    for c in channels:
        summary += &quot;\n*&lt;#%s&gt;*: %s&quot; % (c[&quot;id&quot;], c[&quot;purpose&quot;][&quot;value&quot;])
    blocks.append({
        &quot;type&quot;: &quot;section&quot;,
        &quot;text&quot;: {
            &quot;type&quot;: &quot;mrkdwn&quot;,
            &quot;text&quot;: summary
        }
    })
    return blocks, text</code></pre>
    <figcaption class="text-center">src-v3/main.py</figcaption>
  </figure>
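The age filter in <code>get_recent_channels</code> is easy to check in isolation. A sketch with fabricated channel records and a pinned clock (all values below are made up for illustration):

```python
import time

def recent_channels(channels, max_days, now=None):
    # Same filter as get_recent_channels in main.py: keep channels
    # whose Unix "created" timestamp is within max_days of now.
    max_age_s = max_days * 24 * 60 * 60
    now = now if now is not None else time.time()
    return [c for c in channels if now - c["created"] <= max_age_s]

now = 1_700_000_000  # fixed "current time" so the example is deterministic
channels = [
    {"id": "C1", "created": now - 3600},        # 1 hour old -> kept
    {"id": "C2", "created": now - 10 * 86400},  # 10 days old -> dropped
]
kept = recent_channels(channels, max_days=7, now=now)
print([c["id"] for c in kept])
```

Passing <code>now</code> explicitly keeps the filter deterministic for testing; the deployed code just uses <code>time.time()</code>.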
<p>After deploying this with the same command above (just change <code>--source ./src-v2</code> to <code>--source ./src-v3</code>), we can send a pub/sub event to trigger it:</p><figure>
  <pre><code class="language-shell">gcloud pubsub topics publish ${PUBSUB_TOPIC} \
    --project ${PROJECT_ID} \
    --message &#039;{&quot;channel&quot;: &quot;#general&quot;, &quot;max_days&quot;: 7}&#039;</code></pre>
  </figure>
<p>And it posts a message like this:</p><figure data-ratio="auto">
    <img src="https://srijan.ch/media/pages/blog/slackbot-google-cloud-part-1/7e9ecc0ef4-1699621096/screenshot_20221104_235500.png" alt="">
  
    <figcaption class="text-center">
    Recently created channels posted by ChannelBot  </figcaption>
  </figure>
<h3>Create schedule</h3>
<p>Next, we want to send this message periodically. For this, we will configure a cron job in Google Cloud Scheduler that publishes a Pub/Sub event with the required parameters on a schedule.</p> <p>Before we create a schedule, we have to enable the Cloud Scheduler API:</p><figure>
  <pre><code class="language-shell">gcloud services enable --project ${PROJECT_ID} \
    cloudscheduler.googleapis.com</code></pre>
  </figure>
<p>To schedule the Pub/Sub trigger at 16:00 UTC every day:</p><figure>
  <pre><code class="language-shell">gcloud scheduler jobs create pubsub channelbot-job \
    --project ${PROJECT_ID} \
    --location ${REGION} \
    --schedule &quot;0 16 * * *&quot; \
    --time-zone &quot;UTC&quot; \
    --topic ${PUBSUB_TOPIC} \
    --message-body &#039;{&quot;channel&quot;: &quot;#general&quot;, &quot;max_days&quot;: 1}&#039;</code></pre>
  </figure>
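The schedule string is standard five-field cron (minute, hour, day of month, month, day of week). As a rough illustration of how <code>0 16 * * *</code> is read, here is a toy matcher that supports only <code>*</code> and plain numbers (this is not Cloud Scheduler's actual parser):

```python
from datetime import datetime

def cron_matches(expr, dt):
    # Fields: minute hour day-of-month month day-of-week.
    # Supports only "*" and single numbers; no ranges or steps.
    fields = expr.split()
    values = [dt.minute, dt.hour, dt.day, dt.month, dt.isoweekday() % 7]
    return all(f == "*" or int(f) == v for f, v in zip(fields, values))

# "0 16 * * *" fires at 16:00 every day.
print(cron_matches("0 16 * * *", datetime(2022, 11, 4, 16, 0)))   # True
print(cron_matches("0 16 * * *", datetime(2022, 11, 4, 15, 30)))  # False
```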
<p>After this, a Pub/Sub event should be fired to the <code>channelbot-pubsub</code> topic every day, which should result in a slack message to <code>#general</code> with a list of channels created in the last day.</p><h2>Closing Thoughts</h2>
<p>Full code samples for this can be found in <a href="https://github.com/srijan/gcloud_slackbot" rel="noreferrer">this github repo</a>. I've also included a <code>Makefile</code> with targets split into sections matching the different steps in this post.</p> <p>I also plan to follow this up with a part 2, where we will use Slack's slash commands to let the end user of this bot set up the channel and posting frequency of the recent-channels list, and even configure multiple schedules. Please comment below if this is something you would be interested in.</p>]]></content:encoded>
    <comments>https://srijan.ch/slackbot-google-cloud-part-1#comments</comments>
    <slash:comments>0</slash:comments>
  </item><item>
    <title>Automating custom routes and DNS setup on Windows</title>
    <description><![CDATA[How I automated setting up custom routes and DNS for FortiClient SSL VPN on Windows 10]]></description>
    <link>https://srijan.ch/automating-custom-routes-dns-windows</link>
    <guid isPermaLink="false">6092a2ba2a944a000154e7ba</guid>
    <category><![CDATA[windows]]></category>
    <dc:creator>Srijan Choudhary</dc:creator>
    <pubDate>Wed, 05 May 2021 16:25:00 +0000</pubDate>
    <media:content url="https://srijan.ch/media/pages/blog/automating-custom-routes-dns-windows/f4072ebf2b-1699621096/photo-1593642632823-8f785ba67e45.jpeg" medium="image" />
    <content:encoded><![CDATA[<figure data-ratio="auto">
    <img src="https://srijan.ch/media/pages/blog/automating-custom-routes-dns-windows/f4072ebf2b-1699621096/photo-1593642632823-8f785ba67e45.jpeg" alt="Automating custom routes and DNS setup on Windows">
  
  </figure>
<p>One of the problems I've faced working from home for the past year is the rigidity of the VPN software used at my work. If we were using something like <a href="https://openvpn.net/" rel="noreferrer">OpenVPN</a>, I could modify the client config to set up any overrides I wanted to the network routing table or DNS, but we use <a href="https://www.fortinet.com/resources/cyberglossary/ssl-vpn" rel="noreferrer">FortiClient SSL VPN</a>, which does not offer that functionality. Also, I've been using Windows 10 on my work setup for some time now because <a href="https://docs.microsoft.com/en-us/windows/wsl/" rel="noreferrer">WSL</a> works very well for me.</p> <p>But first, why did I even need to modify the routing table or DNS at all?</p><ol><li>I use a slightly non-standard network setup at home, and one of my home subnets clashes with one of the routed work subnets (which I don't need). So, the easy solution for me is to change this routing table entry to what works for me.</li><li>I use <a href="https://diversion.ch/diversion/diversion.html" rel="noreferrer">diversion</a> on my home router for central ad-blocking, and wanted to leverage that even when connected to the work VPN.</li></ol><hr />
<p>Here is the <a href="https://docs.microsoft.com/en-us/powershell/" rel="noreferrer">PowerShell</a> script that does the changes I want:</p><figure>
  <pre><code class="language-powershell">Start-Transcript -Append -Path &quot;C:\Users\srijan\Apps\network-post-connect.log&quot;

if( (Get-NetConnectionProfile -InterfaceAlias Wi-Fi).Name -eq &quot;Home Wifi&quot; ) {
    echo &quot;Home Wifi is connected&quot;
    $FortinetAdapter = Get-NetAdapter -InterfaceDescription &quot;Fortinet SSL*&quot;
    if($FortinetAdapter.Status -eq &quot;Up&quot;) {
        echo &quot;Work VPN is connected&quot;
        $FortinetAdapter | Set-DnsClientServerAddress -ServerAddresses (&quot;192.168.2.1&quot;, &quot;8.8.8.8&quot;)
        echo &quot;[OK] DNS server set to 192.168.2.1,8.8.8.8&quot;
        Get-NetRoute -DestinationPrefix 192.168.2.0/24 -RouteMetric 0 | Set-NetRoute -RouteMetric 500
        echo &quot;[OK] 192.168.2.0/24 routed locally&quot;
    }
    else {
        echo &quot;Work VPN is not connected. Doing nothing.&quot;
    }
}
else {
    echo &quot;Home Wifi is not connected. Doing nothing.&quot;
}

Stop-Transcript</code></pre>
    <figcaption class="text-center">network-post-connect.ps1</figcaption>
  </figure>
<p>Explanation of what it does:</p><ol><li>Uses Start-Transcript and Stop-Transcript to log the output to a file.</li><li>Checks if the system is connected to SSID "Home Wifi".</li><li>If so, checks if the adapter whose description matches the pattern "Fortinet SSL*" is up.</li><li>If so, changes the DNS server addresses.</li><li>In the routing table, raises the metric of the <code>192.168.2.0/24</code> route that has metric <code>0</code> to <code>500</code>. This route is added by FortiClient, and I wanted to de-prioritize it.</li></ol><p>From the <a href="https://docs.microsoft.com/en-us/powershell/module/nettcpip/set-netroute?view=windowsserver2019-ps#parameters" rel="noreferrer">Set-NetRoute docs</a>:</p><blockquote>
  The computer selects the route with the lowest combined value.  </blockquote>
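The effect of the metric bump follows from that rule. A small sketch of the selection logic (the interface metrics below are made-up numbers, not read from my system):

```python
def pick_route(routes):
    # Windows picks the matching route with the lowest
    # RouteMetric + InterfaceMetric sum.
    return min(routes, key=lambda r: r["RouteMetric"] + r["InterfaceMetric"])

routes = [
    # FortiClient's 192.168.2.0/24 route after the script raises it to 500:
    {"iface": "Fortinet SSL VPN", "RouteMetric": 500, "InterfaceMetric": 25},
    # The local Wi-Fi route for the same prefix:
    {"iface": "Wi-Fi", "RouteMetric": 0, "InterfaceMetric": 35},
]
print(pick_route(routes)["iface"])  # Wi-Fi wins: 35 < 525
```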
<hr />
<p>Now, we just need to set up some automation to run this whenever the VPN is connected. For this, I used <a href="https://docs.microsoft.com/en-us/dynamics365/business-central/dev-itpro/developer/devenv-task-scheduler" rel="noreferrer">Windows Task Scheduler</a>. Whenever Windows activates any network profile, a <code>Microsoft-Windows-NetworkProfile/Operational</code> log entry is generated with Event Id <code>10000</code>, and Windows Task Scheduler can run a task whenever a given event is logged.</p> <p>Here are some screenshots of the task configuration:</p><figure data-ratio="auto">
  <ul>
        <li>
      <img alt="" src="https://srijan.ch/media/pages/blog/automating-custom-routes-dns-windows/cd90aa1b3c-1699621096/task-general-1.png">    </li>
        <li>
      <img alt="" src="https://srijan.ch/media/pages/blog/automating-custom-routes-dns-windows/cd494ad4f5-1699621096/task-trigger.png">    </li>
        <li>
      <img alt="" src="https://srijan.ch/media/pages/blog/automating-custom-routes-dns-windows/ce2aa5df17-1699621096/task-action.png">    </li>
        <li>
      <img alt="" src="https://srijan.ch/media/pages/blog/automating-custom-routes-dns-windows/00e4e2a59e-1699621096/task-settings.png">    </li>
      </ul>
  </figure>
<p>Here's the exported XML of the task:</p><figure>
  <pre><code class="language-xml">&lt;?xml version=&quot;1.0&quot; encoding=&quot;UTF-16&quot;?&gt;
&lt;Task version=&quot;1.2&quot; xmlns=&quot;http://schemas.microsoft.com/windows/2004/02/mit/task&quot;&gt;
  &lt;RegistrationInfo&gt;
    &lt;Date&gt;2021-01-19T23:24:57.7398228&lt;/Date&gt;
    &lt;Author&gt;srijan&lt;/Author&gt;
    &lt;Description&gt;Currently:
1. Set local DNS
2. Route 192.168.2.0/24 locally&lt;/Description&gt;
    &lt;URI&gt;\Network post connect automation&lt;/URI&gt;
  &lt;/RegistrationInfo&gt;
  &lt;Triggers&gt;
    &lt;EventTrigger&gt;
      &lt;Enabled&gt;true&lt;/Enabled&gt;
      &lt;Subscription&gt;&amp;lt;QueryList&amp;gt;&amp;lt;Query Id=&quot;0&quot; Path=&quot;Microsoft-Windows-NetworkProfile/Operational&quot;&amp;gt;&amp;lt;Select Path=&quot;Microsoft-Windows-NetworkProfile/Operational&quot;&amp;gt;*[System[Provider[@Name=&#039;Microsoft-Windows-NetworkProfile&#039;] and EventID=10000]]&amp;lt;/Select&amp;gt;&amp;lt;/Query&amp;gt;&amp;lt;/QueryList&amp;gt;&lt;/Subscription&gt;
      &lt;Delay&gt;PT10S&lt;/Delay&gt;
    &lt;/EventTrigger&gt;
  &lt;/Triggers&gt;
  &lt;Settings&gt;
    &lt;MultipleInstancesPolicy&gt;IgnoreNew&lt;/MultipleInstancesPolicy&gt;
    &lt;DisallowStartIfOnBatteries&gt;false&lt;/DisallowStartIfOnBatteries&gt;
    &lt;StopIfGoingOnBatteries&gt;true&lt;/StopIfGoingOnBatteries&gt;
    &lt;AllowHardTerminate&gt;true&lt;/AllowHardTerminate&gt;
    &lt;StartWhenAvailable&gt;false&lt;/StartWhenAvailable&gt;
    &lt;RunOnlyIfNetworkAvailable&gt;false&lt;/RunOnlyIfNetworkAvailable&gt;
    &lt;IdleSettings&gt;
      &lt;StopOnIdleEnd&gt;true&lt;/StopOnIdleEnd&gt;
      &lt;RestartOnIdle&gt;false&lt;/RestartOnIdle&gt;
    &lt;/IdleSettings&gt;
    &lt;AllowStartOnDemand&gt;true&lt;/AllowStartOnDemand&gt;
    &lt;Enabled&gt;true&lt;/Enabled&gt;
    &lt;Hidden&gt;false&lt;/Hidden&gt;
    &lt;RunOnlyIfIdle&gt;false&lt;/RunOnlyIfIdle&gt;
    &lt;WakeToRun&gt;false&lt;/WakeToRun&gt;
    &lt;ExecutionTimeLimit&gt;PT1H&lt;/ExecutionTimeLimit&gt;
    &lt;Priority&gt;7&lt;/Priority&gt;
  &lt;/Settings&gt;
  &lt;Actions Context=&quot;Author&quot;&gt;
    &lt;Exec&gt;
      &lt;Command&gt;powershell&lt;/Command&gt;
      &lt;Arguments&gt;-File C:\Users\srijan\Apps\network-post-connect.ps1 -WindowStyle Hidden&lt;/Arguments&gt;
    &lt;/Exec&gt;
  &lt;/Actions&gt;
&lt;/Task&gt;</code></pre>
  </figure>
<hr />
<p>I have gotten used to the ease of setting up things like this on Linux, but was pleasantly surprised that it's easy enough on Windows as well. Windows Task Scheduler also supports a lot of different conditions for tasks: for example, only starting a task if the computer has been idle for some time, or only if it is connected to AC power.</p> <p>Let me know in the comments if you think there is an easier way, or if you have any improvement suggestions for the above.</p>]]></content:encoded>
    <comments>https://srijan.ch/automating-custom-routes-dns-windows#comments</comments>
    <slash:comments>1</slash:comments>
  </item><item>
    <title>Erlang: Dialyzer HTML Reports using rebar3</title>
    <description><![CDATA[How I made a custom rebar3 plugin to generate HTML reports for dialyzer warnings]]></description>
    <link>https://srijan.ch/erlang-dialyzer-html-reports-rebar3</link>
    <guid isPermaLink="false">6072d47bb1237c000188be89</guid>
    <category><![CDATA[erlang]]></category>
    <category><![CDATA[development]]></category>
    <dc:creator>Srijan Choudhary</dc:creator>
    <pubDate>Sun, 25 Apr 2021 17:10:00 +0000</pubDate>
    <media:content url="https://srijan.ch/media/pages/blog/erlang-dialyzer-html-reports-rebar3/b5893af741-1699621096/dialyzer-html-report.png" medium="image" />
    <content:encoded><![CDATA[<h2>Introduction</h2>
<p><a href="https://erlang.org/doc/man/dialyzer.html" rel="noreferrer">Dialyzer</a> is a static analysis tool for <a href="https://www.erlang.org/" rel="noreferrer">Erlang</a>
 that identifies software discrepancies, such as definite type errors, 
code that has become dead or unreachable because of programming errors, 
and unnecessary tests, in single Erlang modules or entire (sets of) 
applications.</p> <p>Dialyzer is integrated with <a href="https://github.com/erlang/rebar3" rel="noreferrer">rebar3</a> (a build tool for Erlang), and its default output looks like this:</p><figure data-ratio="auto">
    <img src="https://srijan.ch/media/pages/blog/erlang-dialyzer-html-reports-rebar3/fcdab50184-1699621096/dialyzer-rebar3-default.png" alt="Rebar3 Dialyzer Default Output">
  
    <figcaption class="text-center">
    <code>rebar3 dialyzer</code> output  </figcaption>
  </figure>
<p>This is a good starting point, but it's not very useful in some cases:</p><ol><li>If you have lots of warnings, this output covers several screens, and it becomes difficult to parse through everything.</li><li>If you run this in some sort of continuous integration (like Jenkins), then the console output is not very friendly.</li></ol><p>One way to improve this is to generate an HTML report which can be published, emailed, or opened in the browser.</p> <p>So, I built a rebar3 plugin that generates a nicely formatted color HTML report from the dialyzer output. The plugin can be found <a href="https://hex.pm/packages/rebar3_dialyzer_html" rel="noreferrer">on hex.pm</a> or <a href="https://github.com/srijan/rebar3_dialyzer_html" rel="noreferrer">on github</a>.</p><h2>Usage</h2>
<p>Make sure you're using rebar3 version <code>3.15</code> or later.</p><ol><li>Add the plugin to your <code>rebar.config</code>:</li></ol><figure>
  <pre><code class="language-erlang">{plugins, [
    %% from hex
    {rebar3_dialyzer_html, &quot;0.2.0&quot;}
    
    %% or, latest from git
    {rebar3_dialyzer_html, {git, &quot;https://github.com/srijan/rebar3_dialyzer_html.git&quot;, {branch, &quot;main&quot;}}}
]}.</code></pre>
    <figcaption class="text-center">rebar.config snippet</figcaption>
  </figure>
<p>2. Select raw format for the dialyzer warnings file generated by rebar3 (this is a new flag available from rebar <code>3.15</code>):</p><figure>
  <pre><code class="language-erlang">{dialyzer, [
    {output_format, raw}
]}.</code></pre>
    <figcaption class="text-center">rebar.config snippet</figcaption>
  </figure>
<p>3. Run the <code>dialyzer_html</code> rebar3 command:</p><figure>
  <pre><code class="language-shellsession">$ rebar3 dialyzer_html          
===&gt; Generating Dialyzer HTML Report
===&gt; HTML Report written to _build/default/dialyzer_report.html</code></pre>
  </figure>
<p>Here's how the report looks:</p><figure data-ratio="auto">
    <img src="https://srijan.ch/media/pages/blog/erlang-dialyzer-html-reports-rebar3/b5893af741-1699621096/dialyzer-html-report.png" alt="Dialyzer HTML Report Sample">
  
    <figcaption class="text-center">
    Sample HTML report for dialyzer  </figcaption>
  </figure>
<h2>How I built the plugin</h2>
<h3>rebar3 dialyzer</h3>
<p>The rebar3 built-in dialyzer plugin does the following:</p><ol><li>Runs dialyzer with the options configured in <code>rebar.config</code></li><li>Converts the output to ANSI color format and writes it to the console (it has a custom function for this formatting)</li><li>Converts the output to basic format (using the built-in <code>dialyzer:format/2</code>) and writes it to a dialyzer_warnings file.</li></ol><p>I wanted to find the easiest way to get a nicely formatted HTML report, ideally without forking the rebar3 project itself.</p> <p>The first thing I needed was a way to save the raw (machine-parseable) dialyzer output to the warnings file instead of the default formatted output. For this, I <a href="https://github.com/erlang/rebar3/issues/2524" rel="noreferrer">submitted a new feature</a> to the rebar3 project, which introduces a new config option to enable this. So, this plugin needs rebar3 version <code>3.15</code> or later.</p><h3>Plugin vs Escript</h3>
<p>Next, to actually parse and output the HTML file, I would need to run some Erlang code. There are two options I considered:</p><ol><li><u>Escript called from Makefile/wrapper</u><br>This option works okay, but we cannot re-use any rebar3 internal function or State. I wanted to use rebar3's own custom function for formatting the dialyzer warnings, so decided to not go with this option.<br></li><li><u>Custom rebar3 plugin</u><br>Doing it this way makes it easy for anyone to use, and I can re-use things already implemented in rebar3 itself. So, I decided to use this option.<br></li></ol><h3>HTML output</h3>
<p>Now, in the custom rebar3 plugin, I needed to convert the ANSI color-coded output given by <code>rebar_dialyzer_format:format_warnings/2</code> into something suitable for HTML.</p> <p>I considered the following options:</p><ol><li>rebar3 uses the <a href="https://github.com/project-fifo/cf" rel="noreferrer">cf library</a> to convert tagged strings to ANSI color codes. I could use something like dependency injection to replace the <code>cf</code> module with my own module, so that the tagged strings are converted directly to HTML without even going through the intermediate ANSI color-coded format.<br><br>This method seemed very hacky, so I decided not to pursue it. But if rebar3 makes the dialyzer format interface configurable, I can reevaluate this approach.<br></li><li>Convert by writing an Erlang library for ANSI-code-to-HTML-tag conversion.<br>There is a library called <a href="https://github.com/stephlow/ansi_to_html" rel="noreferrer">ansi_to_html</a> in Elixir, but I didn't want to add a dependency that heavy.<br>Writing a new Erlang library for this could be a future optimization.<br></li><li>Convert using a JS library after page load. I found a JavaScript library called <a href="https://github.com/drudru/ansi_up" rel="noreferrer">ansi_up</a> which can convert ANSI codes to HTML color tags, or add CSS classes that can be styled as required.<br></li></ol><p>I opted for approach #3 because it was the easiest. I also grouped the warnings by app name so that all warnings for a single app are in one place, and the report includes the number of warnings per app.</p> <p>Also, if the JS library cannot be loaded (for example, due to no internet access or security headers), the report still shows the basic formatted output using <code>dialyzer:format/2</code>.</p><h2>Future Improvements</h2>
<ol><li>I want to remove the dependency on JavaScript, and write or use a pure Erlang library that can convert the ANSI codes to HTML.</li><li>Ideally, rebar3 itself could separate the dialyzer warning parsing and formatting into different functions, and make it possible to override the formatting function so that any plugin can pass its own formatting function into the dialyzer plugin.</li><li>The plugin could even run <code>git</code> commands in the shell to figure out whether any lines changed in the most recent commit involve a warning, and highlight them specially in the report. This can be useful for CI reports on pull requests.</li><li>Maybe make the format pluggable so the report can be saved as JSON, XML, or any custom format.</li></ol><hr />
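As a rough illustration of what a pure ANSI-to-HTML converter involves, here is a minimal SGR-to-HTML sketch in Python (the plugin itself would need this in Erlang, and a real converter must handle many more SGR parameters such as bold, 256-color codes, and nesting):

```python
import re

# Map of the basic SGR foreground color codes to CSS colors.
COLORS = {"31": "red", "32": "green", "33": "yellow", "34": "blue"}

def ansi_to_html(text):
    def repl(m):
        code = m.group(1)
        if code in COLORS:
            return '<span style="color:%s">' % COLORS[code]
        if code == "0":          # SGR reset closes the open span
            return "</span>"
        return ""                # drop unsupported codes
    # \x1b[<n>m is the single-parameter SGR escape sequence.
    return re.sub(r"\x1b\[(\d+)m", repl, text)

print(ansi_to_html("\x1b[31mWarning:\x1b[0m unused function"))
```

This naive version assumes resets and colors alternate cleanly; handling arbitrary interleaving is exactly the kind of work a proper library has to do.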
<p>Let me know in the comments below, or on <a href="https://twitter.com/srijan4" rel="noreferrer">twitter</a>/<a href="https://github.com/srijan/rebar3_dialyzer_html" rel="noreferrer">github </a>if you have any suggestions for this plugin.</p>]]></content:encoded>
    <comments>https://srijan.ch/erlang-dialyzer-html-reports-rebar3#comments</comments>
    <slash:comments>0</slash:comments>
  </item><item>
    <title>Running multiple emacs daemons</title>
    <description><![CDATA[Run multiple emacs daemons for different purposes and set different themes/config based on daemon name]]></description>
    <link>https://srijan.ch/running-multiple-emacs-daemons</link>
    <guid isPermaLink="false">60671113b1237c000188bd2e</guid>
    <category><![CDATA[emacs]]></category>
    <category><![CDATA[development]]></category>
    <dc:creator>Srijan Choudhary</dc:creator>
    <pubDate>Fri, 02 Apr 2021 14:00:00 +0000</pubDate>
<content:encoded><![CDATA[<p>I have been using <a href="https://www.gnu.org/software/emacs/" rel="noreferrer">Emacs</a> for several years, and these days I'm using it both for writing code and for working with my email (another post on that soon).</p> <p>As commonly suggested, I run Emacs in daemon mode to keep things fast and snappy, with an alias that starts the daemon if it's not running, and connects to it if it is:</p><figure>
  <pre><code class="language-shell">alias e=&#039;emacsclient -a &quot;&quot; -c&#039;</code></pre>
    <figcaption class="text-center">Config for single daemon</figcaption>
  </figure>
<p>But, this has some problems:</p><ol><li>The buffers for email and code projects get mixed together</li><li>Restarting the emacs server for code (for example) kills the open mail buffers as well</li><li>Emacs themes are global – they cannot be set per frame. For code, I prefer a dark theme (most of the time), but for email, a light theme works better for me (especially for HTML email).</li></ol><p>To solve this, I searched for a way to run multiple emacs daemons, selecting which one to connect to using shell aliases, and automatically setting the theme based on the daemon name. Here's my setup to achieve this:</p><h3>Custom run_emacs function in zshrc:</h3>
<figure>
  <pre><code class="language-shell">run_emacs() {
  # First argument (if any) selects the daemon name;
  # remaining arguments are passed through to emacsclient.
  if [ &quot;$1&quot; != &quot;&quot; ];
  then
    server_name=&quot;${1}&quot;
  else
    server_name=&quot;default&quot;
  fi

  if ! emacsclient -s ${server_name} &quot;${@:2}&quot;;
  then
    emacs --daemon=${server_name}
    echo &quot;&gt;&gt; Server should have started. Trying to connect...&quot;
    emacsclient -s ${server_name} &quot;${@:2}&quot;
  fi
}</code></pre>
  </figure>
<p>This function takes an optional argument – the name to be used for the daemon. If not provided, it uses <code>default</code> as the name. Then, it tries to connect to a running daemon with that name, and if one is not running, it starts the daemon and then connects to it. It also passes any additional arguments to <code>emacsclient</code>.</p><h3>Custom aliases in zshrc:</h3>
<figure>
  <pre><code class="language-shell"># Create a new frame in the default daemon
alias e=&#039;run_emacs default -n -c&#039;

# Create a new terminal (TTY) frame in the default daemon
alias en=&#039;run_emacs default -t&#039;

# Open a file to edit using sudo
es() {
    e &quot;/sudo:root@localhost:$@&quot;
}

# Open a new frame in the `mail` daemon, and start notmuch in the frame
alias em=&quot;run_emacs mail -n -c -e &#039;(notmuch-hello)&#039;&quot;</code></pre>
  </figure>
<p>The first three commands use the <code>default</code> daemon. The last one creates a new frame in the <code>mail</code> daemon and also uses <code>emacsclient</code>'s <code>-e</code> flag to start notmuch (the email package I use in Emacs).</p><h3>Emacs config:</h3>
<figure>
  <pre><code class="language-elisp">(cond
 ((string= &quot;mail&quot; (daemonp))
  (setq doom-theme &#039;modus-operandi)
 )
 (t
  (setq doom-theme &#039;modus-vivendi)
 )
)</code></pre>
  </figure>
<p>This checks the name of the daemon passed during startup, and sets the doom theme accordingly. The same pattern can be used to set any config based on the daemon name.</p> <p>Note that I'm using <a href="https://github.com/hlissner/doom-emacs" rel="noreferrer">doom emacs</a>, but the above method should work with or without any framework for Emacs. Tested with Emacs 27 and 28.</p>]]></content:encoded>
    <comments>https://srijan.ch/running-multiple-emacs-daemons#comments</comments>
    <slash:comments>2</slash:comments>
  </item><item>
    <title>Erlang: find cross-app calls using xref</title>
    <description><![CDATA[Using xref magic to query compiled beam files and find cross-application function calls in Erlang]]></description>
    <link>https://srijan.ch/erlang-find-cross-app-calls-using-xref</link>
    <guid isPermaLink="false">606006e8b1237c000188badf</guid>
    <category><![CDATA[erlang]]></category>
    <category><![CDATA[development]]></category>
    <dc:creator>Srijan Choudhary</dc:creator>
    <pubDate>Sun, 28 Mar 2021 09:05:00 +0000</pubDate>
    <media:content url="https://srijan.ch/media/pages/blog/erlang-find-cross-app-calls-using-xref/5390618c89-1699621096/omar-flores-moo6k3raiwe-unsplash.jpg" medium="image" />
    <content:encoded><![CDATA[<figure data-ratio="auto">
    <img src="https://srijan.ch/media/pages/blog/erlang-find-cross-app-calls-using-xref/5390618c89-1699621096/omar-flores-moo6k3raiwe-unsplash.jpg" alt="Erlang: find cross-app calls using xref">
  
  </figure>
<p>At work, we use the <a href="https://adoptingerlang.org/docs/development/umbrella_projects/" rel="noreferrer">multi-app project pattern</a> to organize our codebase. This lets us track everything in a single repository but still keep things isolated.</p> <p>For isolation, we wanted to restrict apps to calling only the public interfaces of other apps (similar to the <a href="https://en.wikipedia.org/wiki/Facade_pattern" rel="noreferrer">facade pattern</a>). However, since everything in Erlang lives in a global namespace, nothing prevents code in one app from calling the (exported) functions of another app.</p> <p>The next best solution: detect the above scenario and raise warnings during code review/CI.</p> <p><a href="https://erlang.org/doc/apps/tools/xref_chapter.html" rel="noreferrer">Xref</a> to the rescue:</p><blockquote>
  Xref is a cross reference tool that can be used for finding dependencies between functions, modules, applications and releases.  </blockquote>
<p>Xref includes some predefined analysis patterns that perform common tasks like searching for undefined functions, deprecated function calls, unused exported functions, etc.</p> <p>How it works: when the <a href="https://erlang.org/doc/man/xref.html#xref_server" rel="noreferrer">xref server</a> is started and some modules/applications/releases are added for analysis, it builds a <strong>Call Graph</strong>: a directed graph data structure containing the calls between functions, modules, applications or releases. It also creates an <strong>Inter Call Graph</strong>, which holds information about indirect calls (chains of calls). It exposes a very powerful <a href="https://erlang.org/doc/man/xref.html#query" rel="noreferrer">query language</a>, which can be used to extract any information we want from the above graph data structures.</p> <p>To demonstrate this, I created a sample multi-app repository: <a href="https://github.com/srijan/library_sample" rel="noreferrer">library_sample</a>. There are some cross-app function calls in this code that we want to detect.</p> <p>This repo is supposed to represent the functionality of a physical library. It has four apps: <code>library</code>, <code>library_api</code>, <code>library_catalog</code>, and <code>library_inventory</code>. <code>library_catalog</code> has metadata about the books in the library, <code>library_inventory</code> has information about the availability of books, return dates, etc., <code>library_api</code> has HTTP handlers which call the above, and <code>library</code> is the main app which brings it all together.</p> <p>Let’s say we want <code>library_api</code> to be able to call <code>library_catalog</code> and <code>library_inventory</code> functions, while catalog and inventory must not call each other directly.</p> <p>First, we clone the repo and run rebar3 shell:</p><figure>
  <pre><code class="language-shellsession">$ git clone https://github.com/srijan/library_sample
Cloning into &#039;library_sample&#039;...
remote: Enumerating objects: 29, done.
remote: Counting objects: 100% (29/29), done.
remote: Compressing objects: 100% (19/19), done.
remote: Total 29 (delta 3), reused 29 (delta 3), pack-reused 0
Unpacking objects: 100% (29/29), 910.62 KiB | 2.53 MiB/s, done.

$ cd library_sample

$ ./rebar3 shell
===&gt; Verifying dependencies...
===&gt; Analyzing applications...
===&gt; Compiling library_inventory
===&gt; Compiling library_catalog
===&gt; Compiling library
===&gt; Compiling library_api
Erlang/OTP 23 [erts-11.1.7] [source] [64-bit] [smp:8:8] [ds:8:8:10] [async-threads:1] [hipe]

Eshell V11.1.7  (abort with ^G)
1&gt;</code></pre>
  </figure>
<p>Then, we start xref and add our build directory for analysis:</p><figure>
  <pre><code class="language-erlang">1&gt; xref:start(s).
{ok,&lt;0.185.0&gt;}

2&gt; xref:add_directory(s, &quot;_build/default/lib&quot;, [{recurse, true}]).
{ok,[library_api,library_app,library_catalog,
     library_inventory,library_sample_app,library_sample_sup,
     library_sup]}</code></pre>
  </figure>
<p>Using <code>xref:q/2</code> for querying the constructed call graph:</p><figure>
  <pre><code class="language-erlang">3&gt; xref:q(s, &quot;E | library_inventory || library_catalog&quot;).
{ok,[]}

4&gt; xref:q(s, &quot;E | library_catalog || library_inventory&quot;).
{ok,[{{library_catalog,get_by_id,1},
      {library_inventory,get_available_copies,1}}]}</code></pre>
  </figure>
<p>This means that there are no direct calls from the <code>library_inventory</code> application to the <code>library_catalog</code> application. But, there is a direct call from <code>library_catalog:get_by_id/1</code> to <code>library_inventory:get_available_copies/1</code>.</p> <p>The query <code>E | library_catalog || library_inventory</code> can be read as:</p><ul><li><code>E</code> = All Call Graph Edges</li><li><code>|</code> = The subset of calls <strong>from</strong> any of the vertices. So <code>| library_catalog</code> creates a subset which contains calls from the <code>library_catalog</code> app.</li><li><code>||</code> = The subset of calls <strong>to</strong> any of the vertices. So, <code>|| library_inventory</code> further creates a subset of the previous subset which contains calls to the <code>library_inventory</code> app.</li></ul><p>To get both direct and indirect calls, <code>closure E</code> has to be used:</p><figure>
  <pre><code class="language-erlang">5&gt; xref:q(s, &quot;closure E | library_catalog || library_inventory&quot;).
{ok,[{{library_catalog,get_by_id,1},
      {library_inventory,get_all,0}},
     {{library_catalog,get_by_id,1},
      {library_inventory,get_available_copies,1}}]}</code></pre>
  </figure>
<p>This tells us that there is an indirect call from <code>library_catalog:get_by_id/1</code> to <code>library_inventory:get_all/0</code>.</p> <p>The query language is very powerful, and there are more interesting examples in the <a href="https://erlang.org/doc/apps/tools/xref_chapter.html#expressions" rel="noreferrer">xref user’s guide</a>.</p> <p>So far, though, we have only run the required queries manually in the Erlang shell. We want to be able to run them in continuous integration. Luckily, rebar3 comes with a way to <a href="https://rebar3.readme.io/docs/configuration#xref" rel="noreferrer">specify custom xref queries</a> that run as part of <code>./rebar3 xref</code>, raising an error if they don’t match the expected value defined.</p> <p>Here’s the xref section from my <code>rebar.config</code>:</p><figure>
  <pre><code class="language-erlang">{xref_queries, [
                {&quot;closure E | library_catalog || library_inventory&quot;, []},
                {&quot;closure E | library_inventory || library_catalog&quot;, []}
               ]}.</code></pre>
    <figcaption class="text-center">rebar.config</figcaption>
  </figure>
<p>This performs the two queries I want and matches them against the target value of <code>[]</code>. Sample output:</p><figure>
  <pre><code class="language-shellsession">$ ./rebar3 xref
===&gt; Verifying dependencies...
===&gt; Analyzing applications...
===&gt; Compiling library_inventory
===&gt; Compiling library_catalog
===&gt; Compiling library
===&gt; Compiling library_api
===&gt; Running cross reference analysis...
===&gt; Query closure E | library_catalog || library_inventory
 answer []
 did not match [{{library_catalog,get_by_id,1},{library_inventory,get_all,0}},
                {{library_catalog,get_by_id,1},
                 {library_inventory,get_available_copies,1}}]</code></pre>
  </figure>
<p>So, now this is ready for automation.</p>]]></content:encoded>
    <comments>https://srijan.ch/erlang-find-cross-app-calls-using-xref#comments</comments>
    <slash:comments>1</slash:comments>
  </item><item>
    <title>Advanced PostgreSQL monitoring using Telegraf, InfluxDB, Grafana</title>
    <description><![CDATA[My experience with advanced monitoring for PostgreSQL database using Telegraf, InfluxDB, and Grafana, using a custom postgresql plugin for Telegraf.]]></description>
    <link>https://srijan.ch/advanced-postgresql-monitoring-using-telegraf</link>
    <guid isPermaLink="false">603cefe38527ef00014f776d</guid>
    <category><![CDATA[devops]]></category>
    <category><![CDATA[postgresql]]></category>
    <category><![CDATA[monitoring]]></category>
    <category><![CDATA[telegraf]]></category>
    <category><![CDATA[influxdb]]></category>
    <category><![CDATA[ansible]]></category>
    <dc:creator>Srijan Choudhary</dc:creator>
    <pubDate>Thu, 11 Mar 2021 15:30:00 +0000</pubDate>
    <media:content url="https://srijan.ch/media/pages/blog/advanced-postgresql-monitoring-using-telegraf/d28e269c6f-1699621096/grafana-postgresql-monitoring.png" medium="image" />
    <content:encoded><![CDATA[<figure data-ratio="auto">
    <img src="https://srijan.ch/media/pages/blog/advanced-postgresql-monitoring-using-telegraf/54e29f97da-1699621096/photo-1564760055775-d63b17a55c44.jpeg" alt="Advanced PostgreSQL monitoring using Telegraf, InfluxDB, Grafana">
  
  </figure>
<h2>Introduction</h2>
<p>This post will go through my experience with setting up some advanced monitoring for a PostgreSQL database using Telegraf, InfluxDB, and Grafana (also known as the TIG stack), the problems I faced, and what I ended up doing.</p> <p>What do I mean by advanced? I liked <a href="https://www.datadoghq.com/blog/postgresql-monitoring/#key-metrics-for-postgresql-monitoring" rel="noreferrer">this Datadog article</a> about some key metrics for PostgreSQL monitoring. Also, this <a href="https://git.zabbix.com/projects/ZBX/repos/zabbix/browse/templates/db/postgresql" rel="noreferrer">PostgreSQL monitoring template for Zabbix</a> has some good pointers. I didn’t need everything mentioned in these links, but they acted as a good reference. I also prioritized monitoring for issues I’ve faced myself in the past.</p> <p>Some key things that I planned to monitor:</p><ul><li>Active (and idle) connections vs. max connections configured</li><li>Size of databases and tables</li><li><a href="https://www.datadoghq.com/blog/postgresql-monitoring/#read-query-throughput-and-performance" rel="noreferrer">Read query throughput and performance</a> (sequential vs. index scans, rows fetched vs. returned, temporary data written to disk)</li><li><a href="https://www.datadoghq.com/blog/postgresql-monitoring/#write-query-throughput-and-performance" rel="noreferrer">Write query throughput and performance</a> (rows inserted/updated/deleted, locks, deadlocks, dead rows)</li></ul><p>There
 are a lot of resources online about setting up the data collection 
pipeline from Telegraf to InfluxDB, and creating dashboards on Grafana. 
So, I’m not going into too much detail on this part. This is what the 
pipeline looks like:</p><figure data-ratio="auto">
    <img src="https://srijan.ch/media/pages/blog/advanced-postgresql-monitoring-using-telegraf/7c74d1bdcd-1699621096/pg_telegraf_influx_grafana.png" alt="PostgreSQL to Telegraf to InfluxDB to Grafana">
  
    <figcaption class="text-center">
    PostgreSQL to Telegraf to InfluxDB to Grafana. <a href="https://www.planttext.com/?text=TP9RRu8m5CVV-oawdp2PCfCzBTkY8d4cA0OmcqzD1nqsmPRqacc6ttr5A7Etyz2UzlpE_vnUnb9XeVI-05UKfONEY1O5t2bLoZlN5VXzc5ErqwzQ4f5ofWXJmvJltOYcM6HyHKb92jUx7QmBpDHc6RY250HBueu6DsOVUIO9KqR4iAoh19Djk4dGyo9vGe4_zrSpfm_0b6kMON5qkBo6lJ3kzU47WCRYerHaZ_o3SfJHpGL-Cq3IkXtsXJgKbLePPb7FS5tedB9U_oT53YJD3ENNCrmBdX8fkVYNvrerik7P-SrrJaGADBDTs3BmWco0DjBfMk84EhMBiwVbo32UbehlRRTjGYqNMRc6go2KAgCCmke22XeLsr9b45FT4k04WBbKmZ8eQBvJe7g0tyoiasD9O0Mg-tWR9_uIJUV82uCmUgp3q3vAUpTdq7z9_6Wr2T0V6UUaCBR7CRmfthG0ncOml-KJ" target="_blank" rel="noreferrer">View Source</a>  </figcaption>
  </figure>
<p>And here’s what my final Grafana dashboard looks like:</p><figure data-ratio="auto">
    <img src="https://srijan.ch/media/pages/blog/advanced-postgresql-monitoring-using-telegraf/d28e269c6f-1699621096/grafana-postgresql-monitoring.png" alt="Grafana dashboard sample for postgresql monitoring">
  
    <figcaption class="text-center">
    Grafana dashboard sample for PostgreSQL monitoring  </figcaption>
  </figure>
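<p>As a concrete example of the first item in the metrics list above (active and idle connections vs. the configured maximum), a query along these lines can drive such a panel. This is a sketch added for illustration, not the exact query behind my dashboard:</p>

```sql
-- Sketch: count backends by state and compare against max_connections.
-- Requires PostgreSQL 9.4+ for the FILTER clause.
SELECT count(*) FILTER (WHERE state = 'active') AS active,
       count(*) FILTER (WHERE state = 'idle')   AS idle,
       (SELECT setting::int
          FROM pg_settings
         WHERE name = 'max_connections')        AS max_connections
  FROM pg_stat_activity;
```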
<h2>Research on existing solutions</h2>
<p>I found several solutions and articles online about monitoring PostgreSQL using Telegraf:</p><h3>1. Telegraf PostgreSQL input plugin</h3>
<p>Telegraf has a <a href="https://github.com/influxdata/telegraf/tree/master/plugins/inputs/postgresql" rel="noreferrer">PostgreSQL input plugin</a> which provides some built-in metrics from the <code>pg_stat_database</code> and <code>pg_stat_bgwriter</code>
 views. But this plugin cannot be configured to run any custom SQL 
script to gather the data that we want. And the built-in metrics are a 
good starting point, but not enough. So, I rejected it.</p><h3>2. Telegraf postgresql_extensible input plugin</h3>
<p>Telegraf has another PostgreSQL input plugin called <a href="https://github.com/influxdata/telegraf/tree/master/plugins/inputs/postgresql_extensible" rel="noreferrer">postgresql_extensible</a>.
 At first glance, this looks promising: it can run any custom query, and
 multiple queries can be defined in its configuration file.</p> <p>However, there is an <a href="https://github.com/influxdata/telegraf/issues/5009" rel="noreferrer">open issue</a>
 due to which this plugin does not run the specified query against all 
databases, but only against the database name specified in the 
connection string.</p> <p>One way this can still work is to specify multiple input blocks in the Telegraf config file, one for each database.</p><figure>
  <pre><code class="language-toml">[[inputs.postgresql_extensible]]
  address = &quot;host=localhost user=postgres dbname=database1&quot;
  [[inputs.postgresql_extensible.query]]
    script=&quot;db_stats.sql&quot;

[[inputs.postgresql_extensible]]
  address = &quot;host=localhost user=postgres dbname=database2&quot;
  [[inputs.postgresql_extensible.query]]
    script=&quot;db_stats.sql&quot;</code></pre>
  </figure>
<p>But, <strong>configuring this does not scale</strong>, especially if the database names are dynamic or we don’t want to hardcode them in the config.</p> <p>But I really liked the configuration method of this plugin, and I think this will work very well for my use case once the <a href="https://github.com/influxdata/telegraf/issues/5009" rel="noreferrer">associated Telegraf issue</a> gets resolved.</p><h3>3. Using a monitoring package like pgwatch2</h3>
<p>Another method I found was to use a package like <a href="https://github.com/cybertec-postgresql/pgwatch2" rel="noreferrer">pgwatch2</a>. This is a self-contained solution for PostgreSQL monitoring and includes dashboards as well.</p> <p>Its main components are</p><ol><li><u>A metrics collector service</u>.
 This can either be run centrally and “pull” metrics from one or more 
PostgreSQL instances, or alongside each PostgreSQL instance (like a 
sidecar) and “push” metrics to a metrics storage backend.</li><li><u>Metrics storage backend</u>. pgwatch2 supports multiple metrics storage backends like bare PostgreSQL, TimescaleDB, InfluxDB, Prometheus, and Graphite.</li><li><u>Grafana dashboards</u></li><li><u>A configuration layer</u> and associated UI to configure all of the above.</li></ol><p>I
 really liked this tool as well, but felt like this might be too complex
 for my needs. For example, it monitors a lot more than what I want to 
monitor, and it has some complexity to handle multiple PostgreSQL 
versions and multiple deployment configurations.</p> <p>But I will definitely keep this in mind for a more “batteries included” approach to PostgreSQL monitoring for future projects.</p><h2>My solution: custom Telegraf plugin</h2>
<p>Telegraf supports writing an external custom plugin, and running it via the <a href="https://github.com/influxdata/telegraf/tree/master/plugins/inputs/execd" rel="noreferrer">execd plugin</a>. The <code>execd</code> plugin runs an external program as a long-running daemon.</p> <p>This
 approach enabled me to build the exact features I wanted, while also 
keeping things simple enough to someday revert to using the Telegraf 
built-in plugin for PostgreSQL.</p> <p>The custom plugin code can be found at <a href="https://github.com/srijan/telegraf-execd-pg-custom" rel="noreferrer">this Github repo</a>. Note that I’ve also included the <code>line_protocol.py</code> file from influx python sdk so that I would not have to install the whole sdk just for line protocol encoding.</p> <p>What this plugin (and included configuration) does:</p><ol><li>Runs as a daemon using Telegraf execd plugin.</li><li>When
 Telegraf asks for data (by sending a newline on STDIN), it runs the 
queries defined in the plugin’s config file (against the configured 
databases), converts the results into Influx line format, and sends it 
to Telegraf.</li><li>Queries can be defined to run either on a single database, or on all databases that the configured pg user has access to.</li></ol><p>This
 plugin solves the issue with Telegraf’s postgresql_extensible plugin 
for me—I don’t need to manually define the list of databases to be able 
to run queries against all of them.</p> <p>This is what the custom plugin configuration looks like:</p><figure>
  <pre><code class="language-toml">[postgresql_custom]
address=&quot;&quot;

[[postgresql_custom.query]]
sqlquery=&quot;select pg_database_size(current_database()) as size_b;&quot;
per_db=true
measurement=&quot;pg_db_size&quot;

[[postgresql_custom.query]]
script=&quot;queries/backends.sql&quot;
per_db=true
measurement=&quot;pg_backends&quot;

[[postgresql_custom.query]]
script=&quot;queries/db_stats.sql&quot;
per_db=true
measurement=&quot;pg_db_stats&quot;

[[postgresql_custom.query]]
script=&quot;queries/table_stats.sql&quot;
per_db=true
tagvalue=&quot;table_name,schema&quot;
measurement=&quot;pg_table_stats&quot;</code></pre>
  </figure>
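<p>Step 2 of the list above (the execd request/response loop) can be sketched in a few lines. This is a simplified illustration, not the plugin's actual code; <code>collect()</code> is a hypothetical stand-in for running the configured queries and encoding each result row as a line-protocol string:</p>

```python
import io

def serve(collect, stdin, stdout):
    # execd contract as described above: Telegraf writes a newline on STDIN
    # once per interval; we answer with line-protocol rows on STDOUT.
    for _ in stdin:
        for row in collect():
            stdout.write(row + "\n")
        stdout.flush()

# Demo with two simulated polling intervals; collect() stands in for
# querying the configured databases.
out = io.StringIO()
serve(lambda: ["pg_db_size,db=postgres size_b=8954307i"],
      io.StringIO("\n\n"), out)
print(out.getvalue(), end="")
```

In the real plugin, <code>stdin</code>/<code>stdout</code> are the process's actual standard streams and <code>collect()</code> runs each configured query (per database when <code>per_db=true</code>).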
<p>Any queries defined with <code>per_db=true</code> will be run against all databases. Queries can be specified either inline, or using a separate file.</p> <p>The <a href="https://github.com/srijan/telegraf-execd-pg-custom" rel="noreferrer">repository for this plugin</a>
 has the exact queries configured above. It also has the Grafana 
dashboard JSON which can be imported to get the same dashboard as above.</p><h2>Future optimizations</h2>
<ul><li>Monitoring related to replication is not added yet, but can be added easily</li><li>No need to use a superuser account in PostgreSQL 10+</li><li>This does not support running different queries depending on the version of the target PostgreSQL system.</li></ul><hr />
<p>Let me know in the comments below if you have any doubts or suggestions to make this better.</p>]]></content:encoded>
    <comments>https://srijan.ch/advanced-postgresql-monitoring-using-telegraf#comments</comments>
    <slash:comments>3</slash:comments>
  </item><item>
    <title>Running docker jobs inside Jenkins running on docker</title>
    <description><![CDATA[Run Jenkins inside docker, but also use docker containers to run jobs on that Jenkins]]></description>
    <link>https://srijan.ch/docker-jobs-inside-jenkins-on-docker</link>
    <guid isPermaLink="false">60362aece749840001df438e</guid>
    <category><![CDATA[devops]]></category>
    <category><![CDATA[jenkins]]></category>
    <category><![CDATA[docker]]></category>
    <dc:creator>Srijan Choudhary</dc:creator>
    <pubDate>Wed, 24 Feb 2021 10:30:00 +0000</pubDate>
    <media:content url="https://srijan.ch/media/pages/blog/docker-jobs-inside-jenkins-on-docker/ebd7e48a64-1699621096/photo-1595546440771-84f0b521a533.jpeg" medium="image" />
    <content:encoded><![CDATA[<figure data-ratio="auto">
    <img src="https://srijan.ch/media/pages/blog/docker-jobs-inside-jenkins-on-docker/ebd7e48a64-1699621096/photo-1595546440771-84f0b521a533.jpeg" alt="Running docker jobs inside Jenkins running on docker">
  
  </figure>
<p><a href="https://www.jenkins.io/" rel="noreferrer">Jenkins</a> is a free and open source automation server, which is used to automate software building, testing, deployment, etc.</p> <p>I
 wanted to have a quick and easy way to run Jenkins inside docker, but 
also use docker containers to run jobs on the dockerized Jenkins. Using 
docker for jobs makes it easy to encode job runtime dependencies in the 
source code repo itself.</p> <p>The official documentation on <a href="https://www.jenkins.io/doc/book/installing/docker/" rel="noreferrer">running Jenkins in docker</a> is pretty comprehensive. But I wanted a version using docker-compose (on Linux).</p> <p>So, I started with a basic compose file:</p><figure>
  <pre><code class="language-yaml">version: &#039;3.7&#039;
services:
  jenkins:
    image: jenkins/jenkins:alpine
    ports:
      - 8081:8080
    container_name: jenkins
    volumes:
      - ./home:/var/jenkins_home</code></pre>
    <figcaption class="text-center">docker-compose.yml</figcaption>
  </figure>
<p>When using this (<code>docker-compose up -d</code>), things came up properly, but Jenkins did not have access to the docker daemon running on the host. Also, the docker cli binary was not present inside the container.</p><p>The way to fix this was to mount the docker socket and cli binary inside the container so that they can be accessed. So, we come to the following compose file:</p><figure>
  <pre><code class="language-yaml">version: &#039;3.7&#039;
services:
  jenkins:
    image: jenkins/jenkins:alpine
    ports:
      - 8081:8080
    container_name: jenkins
    volumes:
      - ./home:/var/jenkins_home
      - /var/run/docker.sock:/var/run/docker.sock
      - /usr/bin/docker:/usr/local/bin/docker</code></pre>
    <figcaption class="text-center">docker-compose.yml</figcaption>
  </figure>
<p>But, when trying to run <code>docker ps</code> inside the container with the above compose file, I was still getting the error: <code>Got permission denied while trying to connect to the Docker daemon socket at unix:///var/run/docker.sock</code>. This is because the Jenkins container is running with the <code>jenkins</code> user, which does not have access to use that socket.</p><p>From my research, the commonly recommended ways to solve this problem were:</p><ul><li>Run the container as root user</li><li><code>chmod</code> the socket file to <code>777</code></li><li>Install <code>sudo</code> inside the container and give the <code>jenkins</code> user access to sudo without needing to enter password.</li></ul><p>A more secure way is to create the <code>docker</code> group inside the container, and add the <code>jenkins</code> user to that group. But, this requires us to build a custom image.</p> <p>Also, the group id of the <code>docker</code>
 group inside and outside the container has to be the same, so I had to
 add an extra check which deletes any existing group inside the 
container which uses the same group id, then creates the new <code>docker</code> group with the passed group id, and then adds the <code>jenkins</code> user to the <code>docker</code> group.</p> <p>So, the final <code>Dockerfile</code> is:</p><figure>
  <pre><code class="language-dockerfile">FROM jenkins/jenkins:alpine
ARG docker_group_id=999

USER root
RUN old_group=$(getent group $docker_group_id | cut -d: -f1) &amp;&amp; \
    ([ -z &quot;$old_group&quot; ] || delgroup &quot;$old_group&quot;) &amp;&amp; \
    addgroup -g $docker_group_id docker &amp;&amp; \
    addgroup jenkins docker

USER jenkins</code></pre>
    <figcaption class="text-center">Dockerfile</figcaption>
  </figure>
<p>And the final <code>docker-compose.yml</code> file is:</p><figure>
  <pre><code class="language-yaml">version: &#039;3.7&#039;
services:
  jenkins:
    build:
      context: .
      args:
        docker_group_id: 999
    ports:
      - 8081:8080
    container_name: jenkins
    volumes:
      - ./home:/var/jenkins_home
      - /var/run/docker.sock:/var/run/docker.sock
      - /usr/bin/docker:/usr/local/bin/docker</code></pre>
    <figcaption class="text-center">docker-compose.yml</figcaption>
  </figure>
<p>The <code>docker_group_id</code> argument can be edited in the compose file. Command to get the group id of docker:</p><figure>
  <pre><code class="language-shellsession">$ getent group docker | cut -d: -f3</code></pre>
  </figure>
<p>With the above, everything works:</p><figure>
  <pre><code class="language-shellsession">$ docker-compose up -d
Creating network &quot;jenkins_test_default&quot; with the default driver
Building jenkins
Step 1/6 : FROM jenkins/jenkins:alpine
alpine: Pulling from jenkins/jenkins
801bfaa63ef2: Pull complete
2b72e22c6786: Pull complete
8d16efe80b55: Pull complete
682cd8857a9a: Pull complete
29c6010e8988: Pull complete
fa466f5d199d: Pull complete
e047245de0ff: Pull complete
0cfb53380af7: Pull complete
c29612b1a095: Pull complete
cd7d4bd47719: Pull complete
21cd3d960a1f: Pull complete
f3962370d584: Pull complete
bd6f35a1ea17: Pull complete
bd0c271b250f: Pull complete
Digest: sha256:1c3d9a1ed55911f9b165dd122118bff5da57520effb180d36b5c19d2a0cfe645
Status: Downloaded newer image for jenkins/jenkins:alpine
 ---&gt; e14be04b79e8
Step 2/6 : ARG docker_group_id=999
 ---&gt; Running in f1922fa97177
Removing intermediate container f1922fa97177
 ---&gt; 79460069fb98
Step 3/6 : RUN echo &quot;Assuming docker group id: $docker_group_id&quot;
 ---&gt; Running in 11809f4ae767
Assuming docker group id: 999
Removing intermediate container 11809f4ae767
 ---&gt; e89b345f6c74
Step 4/6 : USER root
 ---&gt; Running in b2e311372bc9
Removing intermediate container b2e311372bc9
 ---&gt; 9d4d8c3ad5b2
Step 5/6 : RUN old_group=$(getent group $docker_group_id | cut -d: -f1) &amp;&amp;     ([ -z &quot;$old_group&quot; ] || delgroup &quot;$old_group&quot;) &amp;&amp;     addgroup -g $docker_group_id docker &amp;&amp;     addgroup jenkins docker
 ---&gt; Running in 357046a8ac49
Removing intermediate container 357046a8ac49
 ---&gt; 865b942324eb
Step 6/6 : USER jenkins
 ---&gt; Running in dbc2976f62c0
Removing intermediate container dbc2976f62c0
 ---&gt; c7e6fac0187c

Successfully built c7e6fac0187c
Successfully tagged jenkins_test_jenkins:latest
WARNING: Image for service jenkins was built because it did not already exist. To rebuild this image you must use `docker-compose build` or `docker-compose up --build`.
Creating jenkins ... done

$ docker-compose exec jenkins docker ps
CONTAINER ID   IMAGE                  COMMAND                  CREATED          STATUS          PORTS                               NAMES
6c05ee1315e4   jenkins_test_jenkins   &quot;/sbin/tini -- /usr/&hellip;&quot;   47 seconds ago   Up 47 seconds   50000/tcp, 0.0.0.0:8081-&gt;8080/tcp   jenkins</code></pre>
  </figure>
<h2>Next Steps</h2>
<p><a href="https://www.digitalocean.com/community/tutorials/how-to-automate-jenkins-setup-with-docker-and-jenkins-configuration-as-code" rel="noreferrer">Here is an excellent guide</a>
 on how to setup Jenkins configuration as code. This will make this 
setup even better because nothing will need to be configured inside 
Jenkins manually - it will all be driven by code / files.</p>]]></content:encoded>
    <comments>https://srijan.ch/docker-jobs-inside-jenkins-on-docker#comments</comments>
    <slash:comments>0</slash:comments>
  </item><item>
    <title>Telegraf: dynamically adding custom tags</title>
    <description><![CDATA[Adding a custom tag to data coming in from an input plugin for telegraf]]></description>
    <link>https://srijan.ch/telegraf-dynamic-tags</link>
    <guid isPermaLink="false">6030d3dab5e0920001f557d7</guid>
    <category><![CDATA[devops]]></category>
    <category><![CDATA[telegraf]]></category>
    <category><![CDATA[influxdb]]></category>
    <dc:creator>Srijan Choudhary</dc:creator>
    <pubDate>Wed, 14 Oct 2020 00:00:00 +0000</pubDate>
    <media:content url="https://srijan.ch/media/pages/blog/telegraf-dynamic-tags/4aa8784b8f-1699621096/telegraf-plugin-interactions.png" medium="image" />
    <content:encoded><![CDATA[<h3>Background</h3>
<p>For a recent project, I wanted to add a custom tag to data coming in from a built-in input plugin for <a href="https://www.influxdata.com/time-series-platform/telegraf/" rel="noreferrer">telegraf</a>.</p> <p>The input plugin was the <a href="https://github.com/influxdata/telegraf/tree/master/plugins/inputs/procstat" rel="noreferrer">procstat plugin</a>, and the custom data was information from <a href="https://clusterlabs.org/pacemaker/doc/" rel="noreferrer">pacemaker</a>
 (a clustering solution for linux). I wanted to add a tag indicating if 
the current host was the "active" host in my active/passive setup.</p> <p>For this, the best solution I came up with was to use a <a href="https://www.influxdata.com/blog/telegraf-1-15-starlark-nginx-go-redfish-new-relic-mongodb/" rel="noreferrer">recently released</a> <a href="https://github.com/influxdata/telegraf/tree/master/plugins/processors/execd" rel="noreferrer">execd processor</a> plugin for telegraf.</p><h3>How it works</h3>
<p>The execd processor plugin runs an external program as a separate process, pipes metrics into the process's STDIN, and reads processed metrics from its STDOUT.</p><figure data-ratio="auto">
    <img src="https://srijan.ch/media/pages/blog/telegraf-dynamic-tags/4aa8784b8f-1699621096/telegraf-plugin-interactions.png" alt="Telegraf plugins interaction diagram">
  
    <figcaption class="text-center">
    Telegraf plugins interaction. <a href="https://www.planttext.com/?text=TP9RRu8m5CVV-oawdp2PCfCzBTkY8d4cA0OmcqzD1nqsmPRqacc6ttr5A7Etyz2UzlpE_vnUnb9XeVI-05UKfONEY1O5t2bLoZlN5VXzc5ErqwzQ4f5ofWXJmvJltOYcM6HyHKb92jUx7QmBpDHc6RY250HBueu6DsOVUIO9KqR4iAoh19Djk4dGyo9vGe4_zrSpfm_0b6kMON5qkBo6lJ3kzU47WCRYerHaZ_o3SfJHpGL-Cq3IkXtsXJgKbLePPb7FS5tedB9U_oT53YJD3ENNCrmBdX8fkVYNvrerik7P-SrrJaGADBDTs3BmWco0DjBfMk84EhMBiwVbo32UbehlRRTjGYqNMRc6go2KAgCCmke22XeLsr9b45FT4k04WBbKmZ8eQBvJe7g0tyoiasD9O0Mg-tWR9_uIJUV82uCmUgp3q3vAUpTdq7z9_6Wr2T0V6UUaCBR7CRmfthG0ncOml-KJ" target="_blank" rel="noreferrer">View Source</a>  </figcaption>
  </figure>
<p>Telegraf's <a href="https://github.com/influxdata/telegraf/blob/master/docs/CONFIGURATION.md#metric-filtering" rel="noreferrer">filtering parameters</a> can be used to select or limit data from which input plugins will go to this processor.</p><h3>The external program</h3>
<p>The external program I wrote does the following:</p><ol><li>Get pacemaker status and cache it for 10 seconds</li><li>Read a line from stdin</li><li>Append the required information as a tag in the data</li><li>Write it to stdout</li></ol><p>The caching is just an optimization - it was more about reducing log pollution than about actual speed improvements.</p> <p>Also, I've done the InfluxDB line protocol parsing in my code directly (because my use case is simple), but this can be substituted with an actual library meant for handling line protocol.</p><figure>
  <pre><code class="language-python">#!/usr/bin/python

from __future__ import print_function
from sys import stderr
import fileinput
import subprocess
import time

cache_value = None
cache_time = 0
resource_name = &quot;VIP&quot;

def get_crm_status():
    global cache_value, cache_time, resource_name
    ctime = time.time()
    if ctime - cache_time &gt; 10:
        # print(&quot;Cache busted&quot;, file=stderr)
        try:
            crm_node = subprocess.check_output([&quot;sudo&quot;, &quot;/usr/sbin/crm_node&quot;, &quot;-n&quot;]).rstrip()
            crm_resource = subprocess.check_output([&quot;sudo&quot;, &quot;/usr/sbin/crm_resource&quot;, &quot;-r&quot;, resource_name, &quot;-W&quot;]).rstrip()
            active_node = crm_resource.split(&quot; &quot;)[-1]
            if active_node == crm_node:
                cache_value = &quot;active&quot;
            else:
                cache_value = &quot;inactive&quot;
        except (OSError, IOError) as e:
            print(&quot;Exception: %s&quot; % e, file=stderr)
            # Don&#039;t report active/inactive if crm commands are not found
            cache_value = None
        except Exception as e:
            print(&quot;Exception: %s&quot; % e, file=stderr)
            # Report as inactive in other cases by default
            cache_value = &quot;inactive&quot;
        cache_time = ctime
    return cache_value

def lineprotocol_add_tag(line, key, value):
    first_comma = line.find(&quot;,&quot;)
    first_space = line.find(&quot; &quot;)
    if first_comma &gt;= 0 and first_comma &lt;= first_space:
        split_str = &quot;,&quot;
    else:
        split_str = &quot; &quot;
    parts = line.split(split_str)
    first, rest = parts[0], parts[1:]
    first_new = first + &quot;,&quot; + key + &quot;=&quot; + value
    return split_str.join([first_new] + rest)

for line in fileinput.input():
    line = line.rstrip()
    crm_status = get_crm_status()
    if crm_status:
        try:
            new_line = lineprotocol_add_tag(line, &quot;crm_status&quot;, crm_status)
        except Exception as e:
            print(&quot;Exception: %s, Input: %s&quot; % (e, line), file=stderr)
            new_line = line
    else:
        new_line = line

    print(new_line)</code></pre>
    <figcaption class="text-center">pacemaker_status.py</figcaption>
  </figure>
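<p>To make the tag insertion concrete, here is the helper from the script above exercised standalone on two sample lines (a trimmed copy with the same logic, runnable under Python 3):</p>

```python
def lineprotocol_add_tag(line, key, value):
    # Append key=value to the tag set: split on "," if the measurement
    # already carries tags, otherwise on the first space before the fields.
    first_comma = line.find(",")
    first_space = line.find(" ")
    split_str = "," if 0 <= first_comma <= first_space else " "
    parts = line.split(split_str)
    parts[0] = parts[0] + "," + key + "=" + value
    return split_str.join(parts)

# A line that already has tags: the new tag joins the existing tag set.
print(lineprotocol_add_tag("system,host=h1 load1=0.5", "crm_status", "active"))
# -> system,crm_status=active,host=h1 load1=0.5

# A line without tags: the measurement gains its first tag.
print(lineprotocol_add_tag("system load1=0.5", "crm_status", "active"))
# -> system,crm_status=active load1=0.5
```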
<h3>Telegraf configuration</h3>
<p>Here's a sample telegraf configuration that routes data from the "system" plugin to the execd processor plugin, and finally outputs to InfluxDB.</p><figure>
  <pre><code class="language-toml">[agent]
  interval = &quot;30s&quot;

[[inputs.cpu]]

[[inputs.system]]

[[processors.execd]]
  command = [&quot;/usr/bin/python&quot;, &quot;/etc/telegraf/scripts/pacemaker_status.py&quot;]
  namepass = [&quot;system&quot;]

[[outputs.influxdb]]
  urls = [&quot;http://127.0.0.1:8086&quot;]
  database = &quot;telegraf&quot;</code></pre>
    <figcaption class="text-center">telegraf.conf</figcaption>
  </figure>
<h3>Other types of dynamic tags</h3>
<p>In this example, we wanted to get the value of the tag from an 
external program. If the tag can be calculated from the incoming data 
itself, then things are much simpler. There are <a href="https://github.com/influxdata/telegraf/tree/release-1.15/plugins/processors" rel="noreferrer">a lot of processor plugins</a>, and many things can be achieved using just those.</p>]]></content:encoded>
    <comments>https://srijan.ch/telegraf-dynamic-tags#comments</comments>
    <slash:comments>0</slash:comments>
  </item><item>
    <title>Install docker and docker-compose using Ansible</title>
    <description><![CDATA[Optimized way to install docker and docker-compose using Ansible]]></description>
    <link>https://srijan.ch/install-docker-and-docker-compose-using-ansible</link>
    <guid isPermaLink="false">6030d3dab5e0920001f557cd</guid>
    <category><![CDATA[devops]]></category>
    <category><![CDATA[docker]]></category>
    <category><![CDATA[ansible]]></category>
    <dc:creator>Srijan Choudhary</dc:creator>
    <pubDate>Thu, 11 Jun 2020 14:30:00 +0000</pubDate>
    <media:content url="https://srijan.ch/media/pages/blog/install-docker-and-docker-compose-using-ansible/b62b609bf9-1699621096/photo-1584444707186-b7831c11014f.jpg" medium="image" />
    <content:encoded><![CDATA[<figure data-ratio="auto">
    <img src="https://srijan.ch/media/pages/blog/install-docker-and-docker-compose-using-ansible/b62b609bf9-1699621096/photo-1584444707186-b7831c11014f.jpg" alt="">
  
  </figure>
<p>Updated for 2023: I've updated this post with the following changes:</p><p>1. Added a top-level sample playbook<br>2. Used ansible apt module's <a href="https://docs.ansible.com/ansible/latest/collections/ansible/builtin/apt_module.html#parameter-cache_valid_time" title="cache_valid_time" rel="noreferrer">cache_valid_time</a> parameter to prevent repeated apt-get updates<br>3. Install <code>docker-compose-plugin</code> using apt (provides docker compose v2)<br>4. Make installing docker compose v1 optional<br>5. Various fixes as suggested in comments<br>6. Tested against Debian 10, 11, 12 and Ubuntu 18.04 (bionic), 20.04 (focal), 22.04 (jammy) using Vagrant.</p><p>I've published a <a href="https://srijan.ch/testing-ansible-playbooks-using-vagrant" rel="noreferrer">new post on how I've done this testing</a>.</p><hr />
<p>I wanted a simple, but optimal (and fast) way to install 
docker and docker-compose using Ansible. I found a few ways online, but I
 was not satisfied.</p> <p>My requirements were:</p><ul><li>Support Debian and Ubuntu</li><li>Install docker and docker compose v2 using apt repositories</li><li>Prevent unnecessary <code>apt-get update</code> if it has been run recently (to make it fast)</li><li>Optionally install docker compose v1 by downloading from github releases<ul><li>But, don’t download if current version &gt;= the minimum version required</li></ul></li></ul><p>I feel trying to achieve these requirements gave me a very good idea of how powerful ansible can be.</p> <p>The final role and vars files can be seen in <a href="https://gist.github.com/srijan/2028af568459195cb9a3dae8d111e754">this gist</a>. But, I’ll go through each section below to explain what makes this better / faster.</p><h2>File structure</h2>
<figure>
  <pre><code class="language-treeview">playbook.yml
roles/
└── docker/
    ├── defaults/
    │   └── main.yml
    └── tasks/
        ├── main.yml
        └── docker_setup.yml</code></pre>
    <figcaption class="text-center">File structure</figcaption>
  </figure>
<h2>Playbook</h2>
<p>This is the top-level playbook. Any default vars mentioned below can be overridden here.</p><figure>
  <pre><code class="language-yaml">---
- hosts: all
  vars:
    - docker_compose_install_v1: true
    - docker_compose_version_v1: &quot;1.29.2&quot;
  tasks:
    - name: Docker setup
      block:
        - import_role: name=docker</code></pre>
    <figcaption class="text-center">playbook.yml</figcaption>
  </figure>
<h2>Variables</h2>
<p>First, we’ve defined some variables in <code>defaults/main.yml</code>. These will control which release channel of docker will be used and whether to install docker compose v1.</p><figure>
  <pre><code class="language-yaml">---
docker_apt_release_channel: stable
docker_apt_arch: amd64
docker_apt_repository: &quot;deb [arch={{ docker_apt_arch }}] https://download.docker.com/linux/{{ ansible_distribution | lower }} {{ ansible_distribution_release }} {{ docker_apt_release_channel }}&quot;
docker_apt_gpg_key: https://download.docker.com/linux/{{ ansible_distribution | lower }}/gpg
docker_compose_install_v1: false
docker_compose_version_v1: &quot;1.29.2&quot;</code></pre>
    <figcaption class="text-center">roles/docker/defaults/main.yml</figcaption>
  </figure>
<h2>Role main.yml</h2>
<p>The <code>tasks/main.yml</code> file imports tasks from <code>tasks/docker_setup.yml</code> and turns on <a href="https://docs.ansible.com/ansible/latest/playbook_guide/playbooks_privilege_escalation.html#using-become" rel="noreferrer">become</a> for the whole task.</p><figure>
  <pre><code class="language-yaml">---
- import_tasks: docker_setup.yml
  become: true</code></pre>
    <figcaption class="text-center">roles/docker/tasks/main.yml</figcaption>
  </figure>
<h2>Docker Setup</h2>
<p>This task is divided into the following sections:</p><h3>Install dependencies</h3>
<figure>
  <pre><code class="language-yaml">- name: Install packages using apt
  apt:
    name:
      - apt-transport-https
      - ca-certificates
      - curl
      - gnupg2
      - software-properties-common
    state: present
    cache_valid_time: 86400</code></pre>
  </figure>
<p>Here the <code>state: present</code> makes sure that these packages are only installed if not already installed. I've set <code>cache_valid_time</code> to 1 day so that <code>apt-get update</code> is not run if it has already run recently.</p><h3>Add docker repository</h3>
<figure>
  <pre><code class="language-yaml">- name: Add Docker GPG apt Key
  apt_key:
    url: &quot;{{ docker_apt_gpg_key }}&quot;
    state: present

- name: Add Docker Repository
  apt_repository:
    repo: &quot;{{ docker_apt_repository }}&quot;
    state: present
    update_cache: true</code></pre>
  </figure>
<p>Here, the <code>state: present</code> and <code>update_cache: true</code> make sure that the cache is only updated if this state was changed. So, <code>apt-get update</code> is not run if the docker repo is already present.</p><h3>Install and enable docker and docker compose v2</h3>
<figure>
  <pre><code class="language-yaml">- name: Install docker-ce
  apt:
    name: docker-ce
    state: present
    cache_valid_time: 86400

- name: Run and enable docker
  service:
    name: docker
    state: started
    enabled: true

- name: Install docker compose
  apt:
    name: docker-compose-plugin
    state: present
    cache_valid_time: 86400</code></pre>
  </figure>
<p>Again, due to <code>state: present</code> and <code>cache_valid_time: 86400</code>, there are no extra cache fetches if docker and docker-compose-plugin are already installed.</p><h2>Docker Compose V1 Setup</h2>
<p>WARNING: docker-compose v1 is end-of-life, please keep that in mind and only install/use it if absolutely required.</p><p>This task is wrapped in an <a href="https://docs.ansible.com/ansible/latest/playbook_guide/playbooks_blocks.html" rel="noreferrer">ansible block</a> that checks if <code>docker_compose_install_v1</code> is true.</p><figure>
  <pre><code class="language-yaml">- name: Install docker-compose v1
  when:
    - docker_compose_install_v1 is defined
    - docker_compose_install_v1
  block:</code></pre>
  </figure>
<p>Inside the block, there are two sections:</p><h3>Check if docker-compose is installed and it’s version</h3>
<figure>
  <pre><code class="language-yaml">- name: Check current docker-compose version
  command: docker-compose --version
  register: docker_compose_vsn
  changed_when: false
  failed_when: false
  check_mode: no

- set_fact:
    docker_compose_current_version: &quot;{{ docker_compose_vsn.stdout | regex_search(&#039;(\\d+(\\.\\d+)+)&#039;) }}&quot;
  when:
    - docker_compose_vsn.stdout is defined</code></pre>
  </figure>
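<p>For illustration, the same pattern can be exercised directly in Python (Ansible's <code>regex_search</code> filter is backed by Python's <code>re</code>; the comparison helper below is only a rough stand-in for Ansible's <code>version()</code> test, handling purely numeric versions):</p>

```python
import re

# Sample `docker-compose --version` output, as quoted in the post.
sample = "docker-compose version 1.26.0, build d4451659"

# The same pattern the set_fact task passes to regex_search.
version = re.search(r"(\d+(\.\d+)+)", sample).group(1)
print(version)  # 1.26.0

# Rough stand-in for Ansible's version(..., '<') test (numeric parts only).
def older(a, b):
    return tuple(map(int, a.split("."))) < tuple(map(int, b.split(".")))

print(older(version, "1.29.2"))  # True
```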
<p>The first block saves the output of <code>docker-compose --version</code> into a variable <code>docker_compose_vsn</code>. The <code>failed_when: false</code> ensures that this does not call a failure even if the command fails to execute. (See <a href="https://docs.ansible.com/ansible/latest/user_guide/playbooks_error_handling.html">error handling in ansible</a>).</p> <p>Sample output when docker-compose is installed: <code>docker-compose version 1.26.0, build d4451659</code></p> <p>The second block parses this output and extracts the version number using a regex (see <a href="https://docs.ansible.com/ansible/latest/user_guide/playbooks_filters.html">ansible filters</a>). There is a <code>when</code> condition which causes the second block to skip execution if the first block failed (See <a href="https://docs.ansible.com/ansible/latest/user_guide/playbooks_conditionals.html">playbook conditionals</a>).</p><h3>Install or upgrade docker-compose if required</h3>
<figure>
  <pre><code class="language-yaml">- name: Install or upgrade docker-compose
  get_url:
    url: &quot;https://github.com/docker/compose/releases/download/{{ docker_compose_version_v1 }}/docker-compose-Linux-x86_64&quot;
    dest: /usr/local/bin/docker-compose
    mode: &#039;a+x&#039;
    force: yes
  when: &gt;
    docker_compose_current_version is not defined
    or docker_compose_current_version == &quot;&quot;
    or docker_compose_current_version is version(docker_compose_version_v1, &#039;&lt;&#039;)</code></pre>
  </figure>
<p>This just downloads the required docker-compose binary and saves it to <code>/usr/local/bin/docker-compose</code>,
 but it has a conditional that this will only be done if either 
docker-compose is not already installed, or if the installed version is 
less than the required version. To do version comparison, it uses 
ansible’s built-in <a href="https://docs.ansible.com/ansible/latest/user_guide/playbooks_tests.html#version-comparison">version comparison function</a>.</p> <p>So,
 we used a few ansible features to achieve what we wanted. I’m sure 
there are a lot of other things we can do to make this even better and 
more fool-proof. Maybe a post for another day.</p>]]></content:encoded>
    <comments>https://srijan.ch/install-docker-and-docker-compose-using-ansible#comments</comments>
    <slash:comments>10</slash:comments>
  </item><item>
    <title>Riemann and Zabbix: Sending data from riemann to zabbix</title>
    <description><![CDATA[Tutorial for sending data from riemann to zabbix]]></description>
    <link>https://srijan.ch/sending-data-from-riemann-to-zabbix</link>
    <guid isPermaLink="false">6030d3dab5e0920001f557d3</guid>
    <category><![CDATA[devops]]></category>
    <category><![CDATA[monitoring]]></category>
    <dc:creator>Srijan Choudhary</dc:creator>
    <pubDate>Fri, 08 Jun 2018 18:55:00 +0000</pubDate>
    <content:encoded><![CDATA[<h3>Background</h3>
<p>At <a href="https://www.greyorange.com/" rel="noreferrer">my work</a>, we use <a href="http://riemann.io/" rel="noreferrer">Riemann</a> and <a href="https://www.zabbix.com/" rel="noreferrer">Zabbix</a> as part of our monitoring stack.</p><p>Riemann is a stream processing engine (written in Clojure) which can be used to monitor distributed systems. Although it can be used for defining alerts and sending notifications for those alerts, we currently use it like this:</p><ol><li>As a receiving point for metrics / data from a group of systems in an installation</li><li>Applying some filtering and aggregation at the installation level.</li><li>Sending the filtered / aggregated data to a central Zabbix system.</li></ol><p>The actual alerting mechanism is handled by Zabbix. Things like trigger definitions, sending notifications, handling acks and escalations, etc.</p><p>This might seem like Riemann is redundant (and there is definitely some overlap in functionality), but keeping Riemann in the data pipeline allows us to be more flexible operationally. This is especially useful when the metrics data we need is coming from application code, and we need to apply some transformations to the data but cannot update the code.</p><h3>The Problem</h3>
<p>The first problem we faced when trying to do this is: sending data from Riemann to Zabbix is not that straightforward.</p><p>Surprisingly, the <a href="https://www.zabbix.com/documentation/3.4/manual/api" rel="noreferrer">Zabbix API</a> is not actually meant for sending data points to Zabbix - only for managing its configuration and accessing historical data.</p><h3>Solutions</h3>
<p>The recommended way to send data to Zabbix is to use a command line application called <a href="https://www.zabbix.com/documentation/3.4/manpages/zabbix_sender" rel="noreferrer">zabbix_sender</a>.</p><p>Another way would be to write a custom zabbix client in Clojure which follows the <a href="https://www.zabbix.com/documentation/3.4/manual/appendix/items/activepassive" rel="noreferrer">Zabbix Agent protocol</a>, which uses JSON over TCP sockets.</p><p>The current solution we have taken for this is using <code>zabbix_sender</code> itself.</p><p>For this, we write filtered values to a predefined text file from Riemann in a format that <code>zabbix_sender</code> can understand.</p><figure>
  <pre><code class="language-clojure">;; Modified version of:
;; https://github.com/riemann/riemann/blob/68f126ff39819afc3296bb645243f888dab0943e/src/riemann/logging.clj
(defn zabbix-logger-init
  [log_key log_file]
  (let [logger (org.slf4j.LoggerFactory/getLogger log_key)]
    (.detachAndStopAllAppenders logger)
    (riemann.logging/configure-from-opts
     logger
     (org.slf4j.LoggerFactory/getILoggerFactory)
     {:file log_file})
    logger))

(defn zabbix-log-to-file
  &quot;Log to file using `logger`&quot;
  [logger string]
  (.info logger string))

(def zabbix-logger
  (zabbix-logger-init
   &quot;zabbix&quot; &quot;/var/log/riemann/to_zabbix.txt&quot;))

(defn zabbix-sender
  &quot;Sends events to zabbix via log file.
  Assumes that three keys are present in the incoming data:
    :zhost   -&gt; hostname for sending to zabbix
    :zkey    -&gt; item key for zabbix
    :zvalue  -&gt; value to send for the item key
  Requires zabbix_sender service running and tailing the log file&quot;
  [data]
  (io (zabbix-log-to-file
       zabbix-logger (str (:zhost data) &quot; &quot; (:zkey data) &quot; &quot; (:zvalue data)))))

(streams
  (where (tagged &quot;zabbix&quot;)
    (smap
     (fn [event]
       {:zhost  (:host event)
        :zkey   (:service event)
        :zvalue (:value event)})
     zabbix-sender)))</code></pre>
  </figure>
<p>The above code writes data into the file <code>/var/log/riemann/to_zabbix.txt</code> in the following format:</p><figure>
  <pre><code class="language-log">INFO [2018-06-09 05:02:03,600] defaultEventExecutorGroup-2-7 - zabbix - host123 api.req-rate 200</code></pre>
  </figure>
<p>Then, the following script can be run to send data from this file to Zabbix via <code>zabbix_sender</code>:</p><figure>
  <pre><code class="language-shellsession">$ tail -F /var/log/riemann/to_zabbix.txt | grep --line-buffered -oP &quot;(?&lt;=zabbix - ).*&quot; | zabbix_sender -z $ZABBIX_IP --real-time -i - -vv</code></pre>
  </figure>
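<p>The <code>grep -oP</code> lookbehind in this pipeline keeps only the <code>host key value</code> payload after the log preamble - exactly what <code>zabbix_sender</code> expects on stdin. A Python sketch of the same extraction, using the sample log line shown earlier (illustration only):</p>

```python
import re

# The sample line from /var/log/riemann/to_zabbix.txt shown above.
line = ("INFO [2018-06-09 05:02:03,600] defaultEventExecutorGroup-2-7"
        " - zabbix - host123 api.req-rate 200")

# Same fixed-width lookbehind that grep -oP applies.
payload = re.search(r"(?<=zabbix - ).*", line).group(0)
print(payload)  # host123 api.req-rate 200
```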
<h3>Further Thoughts</h3>
<ul><li>There should probably be a check on Riemann for whether data is correctly being delivered to Zabbix or not. If not, Riemann can send out alerts as well.</li><li>The current solution is a little fragile because it first writes the data to a file and depends on an external service running to ship the data to Zabbix. A better solution would be to integrate directly as a Zabbix agent.</li></ul>]]></content:encoded>
    <comments>https://srijan.ch/sending-data-from-riemann-to-zabbix#comments</comments>
    <slash:comments>0</slash:comments>
  </item><item>
    <title>My backup strategy to USB disk using duply</title>
    <description><![CDATA[Local system backup using duply]]></description>
    <link>https://srijan.ch/my-backup-strategy-part-1</link>
    <guid isPermaLink="false">6030d3dab5e0920001f557ce</guid>
    <category><![CDATA[devops]]></category>
    <category><![CDATA[linux]]></category>
    <dc:creator>Srijan Choudhary</dc:creator>
    <pubDate>Thu, 04 Aug 2016 17:55:00 +0000</pubDate>
    <content:encoded><![CDATA[<p>I don't have a lot of data to back up - just my home folder (on my Arch Linux laptop), which holds configuration for all the tools I'm using and my programming work.</p> <p>For photos or videos taken from my phone, I use Google Photos for 
backup - which works pretty well. Even if I delete the original files 
from my phone, the photos app still keeps them online.</p> <p>Coming back to my laptop, I'm currently using <a href="http://duplicity.nongnu.org/">duplicity</a> (with the <a href="http://duply.net/">duply</a>
 wrapper) to back up to multiple destinations. Why multiple? I 
wanted one local copy so that I can restore fast, and at least one at a 
remote location so that I can still restore if the local disk fails.</p> <p>For off-site, I'm using the fantastic <a href="http://www.rsync.net/">rsync.net</a> service. For local backups, I'm using two destinations: a USB HDD at my home, and a NFS server at my work. <strong>Depending on where I am, the backup will be done to the correct destination</strong>.</p> <p>This post will deal with the backups to my local USB disk.</p> <p>Here's what I've been able to achieve: the backups will run every 
hour as long as the USB disk is connected. If it is not connected, the 
backup script will not even be triggered. I did not want to see backup 
failures in my logs if the HDD is not connected.</p> <p>I've done this using a systemd timer and service. I've defined these units in <a href="https://wiki.archlinux.org/index.php/Systemd/User">the user-level part for systemd</a> so that root privileges are not required.</p><h3>Mounting the USB Disk</h3>
<p>To automatically mount the USB disk, I've added the following line to my <code>/etc/fstab</code>:</p><figure>
  <pre><code class="language-ini">UUID=27DFA4B43C8C0635 /mnt/Ext01 ntfs-3g nosuid,nodev,nofail,auto,x-gvfs-show,permissions 0 0</code></pre>
  </figure>
<h3>Duply config for running the backup</h3>
<p>Here's my <strong>duply</strong> config file (kept at <code>~/.duply/ext01/conf</code>) (mostly self-explanatory):</p><figure>
  <pre><code class="language-ini">TARGET=&#039;file:///mnt/Ext01/Backups/&#039;
SOURCE=&#039;/home/srijan&#039;
MAX_AGE=1Y
MAX_FULL_BACKUPS=15
MAX_FULLS_WITH_INCRS=2
MAX_FULLBKP_AGE=1M
DUPL_PARAMS=&quot;$DUPL_PARAMS --full-if-older-than $MAX_FULLBKP_AGE &quot;
VOLSIZE=4
DUPL_PARAMS=&quot;$DUPL_PARAMS --volsize $VOLSIZE &quot;
DUPL_PARAMS=&quot;$DUPL_PARAMS --exclude-other-filesystems &quot;</code></pre>
  </figure>
<p>This can be run manually using:</p><figure>
  <pre><code class="language-shellsession">$ duply ext01 backup</code></pre>
  </figure>
<p>Exclusions can be specified in the file <code>~/.duply/ext01/exclude</code> in a glob-like format.</p><h3>Systemd Service for running the backup</h3>
<p>Next, here's the <strong>service file</strong> (kept at <code>~/.config/systemd/user/duply_ext01.service</code>):</p><figure>
  <pre><code class="language-ini">[Unit]
Description=Run backup using duply: ext01 profile
Requires=mnt-Ext01.mount
After=mnt-Ext01.mount

[Service]
Type=oneshot
ExecStart=/usr/bin/duply ext01 backup</code></pre>
  </figure>
<p>The <code>Requires</code> option says that this unit has a dependency on the mounting of Ext01. The <code>After</code> option specifies the order in which these two should be started (run this service <em>after</em> mounting).</p> <p>After this step, the service can be run manually (via systemd) using:</p><figure>
  <pre><code class="language-shellsession">$ systemctl --user start duply_ext01.service</code></pre>
  </figure>
<h3>Systemd timer for triggering the backup service</h3>
<p>Next step is triggering it automatically every hour. Here's the <strong>timer file</strong> (kept at <code>~/.config/systemd/user/duply_ext01.timer</code>):</p><figure>
  <pre><code class="language-ini">[Unit]
Description=Run backup using duply ext01 profile every hour
BindsTo=mnt-Ext01.mount
After=mnt-Ext01.mount

[Timer]
OnCalendar=hourly
AccuracySec=10m
Persistent=true

[Install]
WantedBy=mnt-Ext01.mount</code></pre>
  </figure>
<p>Here, the <code>BindsTo</code> option defines a dependency similar to the <code>Requires</code>
 option above, but also declares that this unit is stopped when the 
mount point goes away due to any reason. This is because I don't want 
the trigger to fire if the HDD is not connected.</p> <p>The <code>Persistent=true</code> option ensures that when the timer 
is activated, the service unit is triggered immediately if it would have
 been triggered at least once during the time when the timer was 
inactive. This is because I want to catch up on missed runs of the 
service when the disk was disconnected.</p> <p>After creating this file, I ran the following to actually link this timer to mount / unmount events for the Ext01 disk:</p><figure>
  <pre><code class="language-shellsession">$ systemctl --user enable duply_ext01.timer</code></pre>
  </figure>
<p>That's it. Now, whenever I connect the USB disk to my laptop, the 
timer is started. This timer triggers the backup service to run every 
hour. Also, it takes care that if some run was missed when the disk was 
disconnected, then it would be triggered as soon as the disk is 
connected without waiting for the next hour mark. Pretty cool!</p><h4>NOTES:</h4>
<ul><li>Changing any systemd unit file requires a <code>systemctl --user daemon-reload</code> before systemd can recognize the changes.</li><li>The <a href="https://www.freedesktop.org/software/systemd/man/index.html">systemd documentation</a> was very helpful.</li></ul><h3>Coming Soon</h3>
<p>Although it would be similar, I'll also document how to do the above with NFS or SSHFS filesystems (instead of local disks). The major difference would be handling loss of internet connectivity, timeouts, etc.</p>]]></content:encoded>
    <comments>https://srijan.ch/my-backup-strategy-part-1#comments</comments>
    <slash:comments>0</slash:comments>
  </item><item>
    <title>Clean boot in erlang relx release</title>
    <description><![CDATA[Booting Erlang release in clean or safe mode]]></description>
    <link>https://srijan.ch/clean-boot-in-erlang-relx-release</link>
    <guid isPermaLink="false">6030d3dab5e0920001f557ca</guid>
    <category><![CDATA[erlang]]></category>
    <category><![CDATA[development]]></category>
    <dc:creator>Srijan Choudhary</dc:creator>
    <pubDate>Fri, 15 Apr 2016 04:50:00 +0000</pubDate>
    <content:encoded><![CDATA[<p>We use relx to release our erlang applications, and faced a problem:</p> <p>Our application was crashing at bootup, so we could not even open a remote shell from which to run any corrective functions.</p> <p>One way to solve this (which we've been using till now) is to also install erlang on the machine which has the release, and open an erlang shell with the correct library path set.</p> <p>But the release generated by relx provides another mechanism which does not need erlang installed.</p> <p>The solution: erlang boot scripts.</p> <p>Detailed information about boot scripts can be found at: <a href="http://erlang.org/doc/system_principles/system_principles.html#id59026">http://erlang.org/doc/system_principles/system_principles.html#id59026</a></p> <p>relx ships a <code>start_clean.boot</code> boot script with the release, which loads the code for, and starts, only the kernel and stdlib applications.</p> <p>Sample command:</p><figure>
  <pre><code class="language-shellsession">${RELEASE_DIR}/myapplication/bin/myapplication console_boot start_clean</code></pre>
  </figure>
]]></content:encoded>
    <comments>https://srijan.ch/clean-boot-in-erlang-relx-release#comments</comments>
    <slash:comments>0</slash:comments>
  </item><item>
    <title>PostgreSQL replication using Bucardo</title>
    <description><![CDATA[Keeping a live replica of selected PostgreSQL tables using Bucardo]]></description>
    <link>https://srijan.ch/postgresql-replication-using-bucardo</link>
    <guid isPermaLink="false">6030d3dab5e0920001f557cf</guid>
    <category><![CDATA[devops]]></category>
    <category><![CDATA[postgresql]]></category>
    <dc:creator>Srijan Choudhary</dc:creator>
    <pubDate>Tue, 15 Sep 2015 18:05:00 +0000</pubDate>
    <media:content url="https://srijan.ch/media/pages/blog/postgresql-replication-using-bucardo/71791f08a7-1699621096/photo-1551356277-dbb545a2d493.jpg" medium="image" />
    <content:encoded><![CDATA[<figure data-ratio="auto">
    <img src="https://srijan.ch/media/pages/blog/postgresql-replication-using-bucardo/71791f08a7-1699621096/photo-1551356277-dbb545a2d493.jpg" alt="PostgreSQL Replication using Bucardo">
  
  </figure>
<p>There are many different ways to use replication in PostgreSQL, whether for high<br />
availability (using a failover), or load balancing (for scaling), or just for<br />
keeping a backup. Among the various tools I found online, I thought Bucardo was<br />
the best for my use case - keeping a live backup of a few important tables.</p>
<p>I've assumed the following databases:</p>
<ul>
<li>Primary: Hostname = <code>host_a</code>, Database = <code>btest</code></li>
<li>Backup: Hostname = <code>host_b</code>, Database = <code>btest</code></li>
</ul>
<p>We will install bucardo on the primary host (it requires its own database<br />
to keep track of things).</p>
<ol>
<li>
<p>Install postgresql</p>
<pre><code class="language-shell-session"> sudo apt-get install postgresql-9.4</code></pre>
</li>
<li>
<p>Install dependencies on <code>host_a</code></p>
<pre><code class="language-shell-session"> sudo apt-get install libdbix-safe-perl libdbd-pg-perl libboolean-perl build-essential postgresql-plperl-9.4</code></pre>
</li>
<li>
<p>On <code>host_a</code>, Download and extract bucardo source</p>
<pre><code class="language-shell-session"> wget https://github.com/bucardo/bucardo/archive/5.4.0.tar.gz
 tar xvfz 5.4.0.tar.gz</code></pre>
</li>
<li>
<p>On <code>host_a</code>, Build and Install</p>
<pre><code class="language-shell-session"> perl Makefile.PL
 make
 sudo make install
 sudo mkdir /var/run/bucardo
 sudo mkdir /var/log/bucardo</code></pre>
</li>
<li>
<p>Create bucardo user on all hosts</p>
<pre><code class="language-sql"> CREATE USER bucardo SUPERUSER PASSWORD 'random_password';
 CREATE DATABASE bucardo;
 GRANT ALL ON DATABASE bucardo TO bucardo;</code></pre>
<p>Note: All commands from now on are to be run on <code>host_a</code> only.</p>
</li>
<li>
<p>On <code>host_a</code>, set a password for the <code>postgres</code> user:</p>
<pre><code class="language-sql"> ALTER USER postgres PASSWORD 'random_password';</code></pre>
</li>
<li>
<p>On <code>host_a</code>, add this to the installation user's <code>~/.pgpass</code> file:</p>
<pre><code class="language-ini"> host_a:5432:*:postgres:random_password
 host_a:5432:*:bucardo:random_password</code></pre>
<p>Also add entries for the other hosts for which users were created in step 5.</p>
<p>Note: It is also a good idea to chmod the <code>~/.pgpass</code> file to <code>0600</code>.</p>
</li>
<li>
<p>Run the bucardo install command:</p>
<pre><code class="language-shell-session"> bucardo -h host_a install</code></pre>
</li>
<li>
<p>Copy schema from A to B:</p>
<pre><code class="language-shell-session"> psql -h host_b -U bucardo template1 -c "drop database if exists btest;"
 psql -h host_b -U bucardo template1 -c "create database btest;"
 pg_dump -U bucardo --schema-only -h host_a btest | psql -U bucardo -h host_b btest</code></pre>
</li>
<li>
<p>Add databases to bucardo config</p>
<pre><code class="language-shell-session"> bucardo -h host_a -U bucardo add db main db=btest user=bucardo pass=host_a_pass host=host_a
 bucardo -h host_a -U bucardo add db bak1 db=btest user=bucardo pass=host_b_pass host=host_b</code></pre>
<p>This will save database details (host, port, user, password) to bucardo<br />
database.</p>
</li>
<li>
<p>Add tables to be synced</p>
<p>To add all tables:</p>
<pre><code class="language-shell-session"> bucardo -h host_a -U bucardo add all tables db=main relgroup=btest_relgroup</code></pre>
<p>To add one table:</p>
<pre><code class="language-shell-session"> bucardo -h host_a -U bucardo add table table_name db=main relgroup=btest_relgroup</code></pre>
<p>Note: Only tables which have a primary key can be added here. This is a<br />
limitation of bucardo.</p>
</li>
<li>
<p>Add db group</p>
<pre><code class="language-shell-session"> bucardo -h host_a -U bucardo add dbgroup btest_dbgroup main:source bak1:target</code></pre>
</li>
<li>
<p>Create sync</p>
<pre><code class="language-shell-session"> bucardo -h host_a -U bucardo add sync btest_sync dbgroup=btest_dbgroup relgroup=btest_relgroup conflict_strategy=bucardo_source onetimecopy=2 autokick=0</code></pre>
</li>
<li>
<p>Start the bucardo service</p>
<pre><code class="language-shell-session"> sudo bucardo -h host_a -U bucardo -P random_password start</code></pre>
<p>Note that this command requires passing the password because it uses sudo,<br />
and root user's <code>.pgpass</code> file does not have the credentials saved for bucardo<br />
user.</p>
</li>
<li>
<p>Run sync once</p>
<pre><code class="language-shell-session"> bucardo -h host_a -U bucardo kick btest_sync 0</code></pre>
</li>
<li>
<p>Set auto-kick on any changes</p>
<pre><code class="language-shell-session"> bucardo -h host_a -U bucardo update sync btest_sync autokick=1
 bucardo -h host_a -U bucardo reload config</code></pre>
</li>
</ol>
<p>That's it. Now, the tables specified in step 11 will be replicated from <code>host_a</code><br />
to <code>host_b</code>.</p>
<p>I also plan to write about other alternatives I've tried soon.</p>]]></content:encoded>
    <comments>https://srijan.ch/postgresql-replication-using-bucardo#comments</comments>
    <slash:comments>6</slash:comments>
  </item><item>
    <title>Slack bot for Phabricator Notifications</title>
    <description><![CDATA[Setting up a slack bot for phabricator]]></description>
    <link>https://srijan.ch/slack-bot-for-phabricator</link>
    <guid isPermaLink="false">6030d3dab5e0920001f557d4</guid>
    <category><![CDATA[development]]></category>
    <dc:creator>Srijan Choudhary</dc:creator>
    <pubDate>Tue, 04 Aug 2015 18:20:00 +0000</pubDate>
    <content:encoded><![CDATA[<p><strong>NOTICE:</strong> The solution mentioned in this post no longer works because Slack has closed down the IRC gateway. I recommend using the <a href="https://github.com/etcinit/phabulous" rel="noreferrer">phabulous</a> project for this now.</p><p><a href="http://phabricator.org/" rel="noreferrer">Phabricator</a> is a collection of open source web <a href="https://phacility.com/phabricator/" rel="noreferrer">applications useful for software development</a> built on a single platform. We have been using phabricator tools for about a month now, and it seems great. The best thing is: all different components (code review, task/bug tracking, project management, repo browsing) are well-integrated with one another, and work really well together.</p><p>Except one thing, of course, and that is its chat app (called Conpherence). This is what they say about it themselves:</p><blockquote>
  Like Slack, but nowhere as good.
Seriously, Slack is way better.  </blockquote>
<p>Well, we use <a href="https://slack.com/">Slack</a> ourselves in our organization, and I tried to find out a way to integrate phabricator with slack.</p> <p>My use case was something like this:</p><ol><li>There are project specific channels (rooms?) in our slack</li><li>Important updates related to a project should be auto-posted to this channel</li><li>Discussions in this channel regarding the project should be <strong>enhanced</strong> by auto-linking of task ids or code review ids mentioned, to their URLs.</li></ol><p>I found a few different ways:</p><h3>Phabricator bots on github</h3>
<p>There are a couple of projects on github which integrate phabricator with slack:</p><ul><li><a href="https://github.com/etcinit/phabricator-slack-feed">https://github.com/etcinit/phabricator-slack-feed</a></li><li><a href="https://github.com/psjay/ph-slack">https://github.com/psjay/ph-slack</a></li></ul><p>Both of these are good solutions for point 2 above, but don't 
(currently) solve point 3. A way to go forward would be to contribute 
new features to these projects.</p><h3>Phabricator's in-built chatbot</h3>
<p>Phabricator already has the concept of a <a href="https://secure.phabricator.com/book/phabdev/article/chatbot/">chatbot</a> which connects to IRC.</p> <p>This bot covers both points 2 and 3 from my requirement, and also has
 some extra features, like recording chat logs that can be browsed in the Phabricator web interface and referred to in comments on tasks.</p> <p>Slack has an <a href="https://slack.zendesk.com/hc/en-us/articles/201727913-Connecting-to-Slack-over-IRC-and-XMPP">IRC gateway</a> which can be used for this purpose.</p> <p>But the phabdev article on the chatbot has an ominous note:</p><blockquote>
  <p>NOTE: The chat bot is somewhat experimental and not very mature.</p>  </blockquote>
<p>Digging a little further, I found this task: <a href="https://secure.phabricator.com/T7829">T7829: PhabricatorBotFeedNotificationHandler is completely broken and unusable</a>, which has one piece of bad news in the comments:</p><blockquote>
  <p>@epriestley: Bot stuff is generally a very low priority and I don't 
expect to review or merge any of it for a long time (roughly, around the
 Bot/API iteration of Conpherence, which is months/years away).</p>  </blockquote>
<p>To make it work, <a href="https://secure.phabricator.com/p/staticshock/">@staticshock</a> posted some <a href="https://secure.phabricator.com/T7829#120246">fixes</a>.</p> <p>I made some changes of my own to make the bot filter the feed by 
project, so that one channel gets updates for only one or some of the 
projects.</p> <p>My final diff can be found here: <a href="https://secure.phabricator.com/P1839">https://secure.phabricator.com/P1839</a>.</p> <p>And, my sample bot config is shared below:</p><figure>
  <pre><code class="language-json">{
  &quot;server&quot; : &quot;organization.irc.slack.com&quot;,
  &quot;port&quot; : 6667,
  &quot;nick&quot; : &quot;phabot&quot;,
  &quot;pass&quot;: &quot;random-password&quot;,
  &quot;ssl&quot;: true,
  &quot;join&quot; : [
    &quot;#project-updates&quot;,
  ],
  &quot;handlers&quot; : [
    &quot;PhabricatorBotObjectNameHandler&quot;,
    &quot;PhabricatorBotLogHandler&quot;,
    &quot;PhabricatorBotFeedNotificationHandler&quot;
  ],

  &quot;conduit.uri&quot; : &quot;http://phab.example.com&quot;,
  &quot;conduit.user&quot; : &quot;phabot&quot;,
  &quot;conduit.token&quot; : &quot;api-token&quot;,

  &quot;macro.size&quot; : 48,
  &quot;macro.aspect&quot; : 0.66,

  &quot;notification.channels&quot; : [&quot;#project-updates&quot;],
  &quot;notification.types&quot;: [&quot;task&quot;],
  &quot;notification.projects&quot;: [&quot;PHID-PROJ-ut55kdadskptl4he5iw39&quot;],
  &quot;notification.verbosity&quot;: 0
}</code></pre>
  </figure>
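<p>The config identifies projects by PHID. Given the output of a Conduit <code>project.query</code> call (via <code>arc call-conduit</code> or the HTTP API), a small helper can translate human-readable project names into the PHIDs the config wants. This is a hypothetical Python convenience, not part of the bot, and the response shape is assumed from a typical <code>project.query</code> result:</p><figure>
  <pre><code class="language-python"># Map project names to PHIDs, given the "data" portion of a Conduit
# project.query response (the response shape is an assumption here).
def phids_for_names(conduit_data, wanted_names):
    by_name = {info["name"]: phid for phid, info in conduit_data.items()}
    return [by_name[n] for n in wanted_names if n in by_name]

sample = {
    "PHID-PROJ-ut55kdadskptl4he5iw39": {"name": "Backend"},
    "PHID-PROJ-xxxxxxxxxxxxxxxxxxxx": {"name": "Frontend"},
}
print(phids_for_names(sample, ["Backend"]))</code></pre>
  </figure>
<p>The resulting list can then be pasted into <code>notification.projects</code>.</p>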
<p>We have to pass a list of project PHIDs in <code>notification.projects</code>.</p><h3>The way forward</h3>
<p>So, the version shared above works fine for me, for now. It does not yet support connecting to multiple channels, per-channel configuration, detecting projects for things other than tasks, or entering project names instead of PHIDs in the config file. These are things I would like to add to my patch in the future.</p> <p>Another good solution would be to extend Phabricator's chatbot code in a generic way, so that it could support bots for different services like Slack, Telegram, HipChat, etc.</p>]]></content:encoded>
    <comments>https://srijan.ch/slack-bot-for-phabricator#comments</comments>
    <slash:comments>0</slash:comments>
  </item><item>
    <title>Django, uWSGI, Nginx on Freebsd</title>
    <description><![CDATA[Setting up Django on Freebsd using uWSGI and Nginx]]></description>
    <link>https://srijan.ch/django-uwsgi-nginx-on-freebsd</link>
    <guid isPermaLink="false">6030d3dab5e0920001f557cb</guid>
    <category><![CDATA[devops]]></category>
    <dc:creator>Srijan Choudhary</dc:creator>
    <pubDate>Thu, 05 Mar 2015 00:00:00 +0000</pubDate>
    <content:encoded><![CDATA[<p>Here are the steps I took for configuring Django on FreeBSD using uWSGI and Nginx.</p> <p>The data flow is like this:</p> <p>Web Request ---&gt; Nginx ---&gt; uWSGI ---&gt; Django</p> <p>I was undecided for a while on whether to choose uWSGI or gunicorn. There are <a href="http://cramer.io/2013/06/27/serving-python-web-applications/">some</a> <a href="http://mattseymour.net/blog/2014/07/uwsgi-or-gunicorn/">blog</a> <a href="http://blog.kgriffs.com/2012/12/18/uwsgi-vs-gunicorn-vs-node-benchmarks.html">posts</a> discussing the pros and cons of each. I chose uWSGI in the end.</p> <p>Also, to start uWSGI on FreeBSD, I found two methods: using <a href="http://amix.dk/blog/post/19689">supervisord</a>, or using a <a href="http://lists.freebsd.org/pipermail/freebsd-questions/2014-February/256073.html">custom FreeBSD init script</a> which could use uWSGI ini files. I am currently using supervisord.</p><h2>Install Packages Required</h2>
<figure>
  <pre><code class="language-shellsession">$ sudo pkg install python py27-virtualenv nginx uwsgi py27-supervisor</code></pre>
  </figure>
<p>Also install any database package(s) required.</p><h2>Setup your Django project</h2>
<p>Choose a folder for setting up your Django project sources. <code>/usr/local/www/myapp</code> is suggested. Clone the sources to this folder, then set up the Python virtual environment.</p><figure>
  <pre><code class="language-shellsession">$ virtualenv venv
$ source venv/bin/activate
$ pip install -r requirements.txt</code></pre>
  </figure>
<p>If required, also set up the database and run the migrations.</p><h2>Setup uWSGI using supervisord</h2>
<p>Set up the supervisord config file at <code>/usr/local/etc/supervisord.conf</code>.</p> <p>Sample supervisord.conf:</p><figure>
  <pre><code class="language-ini">[unix_http_server]
file=/var/run/supervisor/supervisor.sock   

[supervisord]
logfile=/var/log/supervisord.log ; (main log file;default $CWD/supervisord.log)
logfile_maxbytes=50MB       ; (max main logfile bytes b4 rotation;default 50MB)
logfile_backups=10          ; (num of main logfile rotation backups;default 10)
loglevel=info               ; (log level;default info; others: debug,warn,trace)
pidfile=/var/run/supervisor/supervisord.pid ; (supervisord pidfile;default supervisord.pid)
nodaemon=false              ; (start in foreground if true;default false)
minfds=1024                 ; (min. avail startup file descriptors;default 1024)
minprocs=200                ; (min. avail process descriptors;default 200)

[rpcinterface:supervisor]
supervisor.rpcinterface_factory = supervisor.rpcinterface:make_main_rpcinterface

[supervisorctl]
serverurl=unix:///var/run/supervisor/supervisor.sock
history_file=~/.sc_history  ; use readline history if available

[program:uwsgi_myapp]
directory=/usr/local/www/myapp/
command=/usr/local/bin/uwsgi -s /var/run/%(program_name)s%(process_num)d.sock
        --chmod-socket=666 --need-app --disable-logging --home=venv
        --wsgi-file wsgi.py --processes 1 --threads 10
stdout_logfile=&quot;syslog&quot;
stderr_logfile=&quot;syslog&quot;
startsecs=10
stopsignal=QUIT
stopasgroup=true
killasgroup=true
process_name=%(program_name)s%(process_num)d
numprocs=5</code></pre>
  </figure>
<p>supervisord.conf</p> <p>And start it:</p><figure>
  <pre><code class="language-shellsession">$ sudo sysrc supervisord_enable=YES
$ sudo service supervisord start
$ sudo supervisorctl tail -f uwsgi_myapp:uwsgi_myapp0</code></pre>
  </figure>
<h2>Setup Nginx</h2>
<p>Use the following line in <code>nginx.conf</code>'s http section to include all config files from the <code>conf.d</code> folder.</p><figure>
  <pre><code class="language-nginx">include /usr/local/etc/nginx/conf.d/*.conf;</code></pre>
  </figure>
<p>Create a <code>myapp.conf</code> in <code>conf.d</code>.</p> <p>Sample myapp.conf:</p><figure>
  <pre><code class="language-nginx">upstream myapp {
    least_conn;
    server unix:///var/run/uwsgi_myapp0.sock;
    server unix:///var/run/uwsgi_myapp1.sock;
    server unix:///var/run/uwsgi_myapp2.sock;
    server unix:///var/run/uwsgi_myapp3.sock;
    server unix:///var/run/uwsgi_myapp4.sock;
}

server {
    listen       80;
    server_name  myapp.example.com;
 
    location /static {
        alias /usr/local/www/myapp/static;
    }

    location / {
        uwsgi_pass  myapp;
        include uwsgi_params;
    }
}</code></pre>
  </figure>
<p>myapp.conf</p> <p>And start Nginx:</p><figure>
  <pre><code class="language-shellsession">$ sudo sysrc nginx_enable=YES
$ sudo service nginx start
$ sudo tail -f /var/log/nginx-error.log</code></pre>
  </figure>
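<p>If something misbehaves, it can also help to take Django out of the picture: point uWSGI's <code>--wsgi-file</code> at a minimal stand-in WSGI app and confirm the Nginx and uWSGI plumbing by itself. This is only a sketch, and <code>wsgi_stub.py</code> is a made-up file name:</p><figure>
  <pre><code class="language-python"># wsgi_stub.py -- minimal WSGI callable for testing the Nginx/uWSGI
# chain without involving Django at all (file name is hypothetical).
def application(environ, start_response):
    body = b"uwsgi ok\n"
    headers = [("Content-Type", "text/plain"),
               ("Content-Length", str(len(body)))]
    start_response("200 OK", headers)
    return [body]</code></pre>
  </figure>
<p>If a request through Nginx returns <code>uwsgi ok</code> with this stub but fails with the real app, the problem is in Django or its settings rather than in the server setup.</p>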
<p>Accessing <a href="http://myapp.example.com/">http://myapp.example.com/</a> should work correctly after this. If not, check the supervisord and Nginx logs opened above and correct the errors.</p>]]></content:encoded>
    <comments>https://srijan.ch/django-uwsgi-nginx-on-freebsd#comments</comments>
    <slash:comments>5</slash:comments>
  </item><item>
    <title>Read only root on Linux</title>
    <description><![CDATA[Setting up a read-only root filesystem on Linux]]></description>
    <link>https://srijan.ch/read-only-root-on-linux</link>
    <guid isPermaLink="false">6030d3dab5e0920001f557d2</guid>
    <category><![CDATA[devops]]></category>
    <category><![CDATA[linux]]></category>
    <dc:creator>Srijan Choudhary</dc:creator>
    <pubDate>Sat, 28 Feb 2015 00:00:00 +0000</pubDate>
    <content:encoded><![CDATA[<p>In many cases, a system must be run in such a way that it is tolerant of uncontrolled power losses, resets, etc. After such an event occurs, it should at least be able to boot up and connect to the network so that some action can be taken remotely.</p> <p>There are a few different ways in which this could be accomplished.</p><h3>Mounting the root filesystem with read-only flags</h3>
<p>Most parts of the Linux root filesystem can be mounted read-only without many problems, but there are some parts which don't play well. <a href="https://wiki.debian.org/ReadonlyRoot">This Debian wiki page</a> has some information about this approach. I thought this approach would not be very stable, so I did not try it out fully.</p><h3>Using aufs/overlayfs</h3>
<p>aufs is a union file system for linux systems, which enables us to 
mount separate filesystems as layers to form a single merged filesystem.
 Using aufs, we can mount the root file system as read-only, create a 
writable tmpfs ramdisk, and combine these so that the system thinks that
 the root filesystem is writable, but changes are not actually saved, 
and don't survive a reboot.</p> <p>I found this method to be most suitable and stable for my task, and 
have been using this for the last 6 months. This system mounts the real 
filesystem at mountpoint <code>/ro</code> with a read-only flag, creates a writable ramdisk at mountpoint <code>/rw</code>, and makes a union filesystem using these two at mountpoint <code>/</code>.</p> <p>The steps I followed for my implementation are detailed below. These are just a modified version of the steps in <a href="https://help.ubuntu.com/community/aufsRootFileSystemOnUsbFlash">this Ubuntu wiki page</a>. I am using Debian in my implementation.</p><ol><li><p>Install Debian using a live CD or your preferred method.</p></li><li><p>After first boot, upgrade and configure the system as needed.</p></li><li><p>Install <code>aufs-tools</code>.</p></li><li><p>Add aufs to initramfs and set up <a href="https://gist.github.com/srijan/383a8d7af6860de6f9de">this script</a> to run at init.</p></li></ol><figure>
  <pre><code class="language-shellsession"># echo aufs &gt;&gt; /etc/initramfs-tools/modules
# wget https://cdn.rawgit.com/srijan/383a8d7af6860de6f9de/raw/ -O /etc/initramfs-tools/scripts/init-bottom/__rootaufs
# chmod 0755 /etc/initramfs-tools/scripts/init-bottom/__rootaufs</code></pre>
  </figure>
<ol start="5"><li>Remake the initramfs.</li></ol><figure>
  <pre><code class="language-shellsession"># update-initramfs -u</code></pre>
  </figure>
<ol start="6"><li>Edit grub settings in <code>/etc/default/grub</code>, add <code>aufs=tmpfs</code> to <code>GRUB_CMDLINE_LINUX_DEFAULT</code>, and regenerate grub.</li></ol><figure>
  <pre><code class="language-shellsession"># update-grub</code></pre>
  </figure>
<ol start="7"><li>Reboot.</li></ol><h4>Making changes</h4>
<p>To change something trivial (like a file edit), just remount the <code>/ro</code> mountpoint as read-write, edit the file, and reboot.</p><figure>
  <pre><code class="language-shellsession"># mount -o remount,rw /ro</code></pre>
  </figure>
<p>To do something more complicated (like installing OS packages), press <code>e</code> in the grub menu during bootup, remove <code>aufs=tmpfs</code> from the kernel line, and boot using <code>F10</code>. The system will boot up normally, just once.</p> <p>Another method could be to use a configuration management tool 
(puppet, chef, ansible, etc.) to make the required changes whenever the 
system comes online. The changes would be lost on reboot, but it would 
become much easier to manage multiple such systems.</p> <p>Also, if some part of the system is required to be writable (like <code>/var/log</code>), that directory could be mounted separately as a read-write mountpoint.</p>]]></content:encoded>
    <comments>https://srijan.ch/read-only-root-on-linux#comments</comments>
    <slash:comments>1</slash:comments>
  </item><item>
    <title>Notes on atom feeds</title>
    <description><![CDATA[My notes on Atom feeds]]></description>
    <link>https://srijan.ch/notes-on-atom-feeds</link>
    <guid isPermaLink="false">6030d3dab5e0920001f557c9</guid>
    <category><![CDATA[development]]></category>
    <category><![CDATA[feeds]]></category>
    <dc:creator>Srijan Choudhary</dc:creator>
    <pubDate>Sun, 21 Sep 2014 00:00:00 +0000</pubDate>
    <content:encoded><![CDATA[<p>For implementing feeds for the <a href="http://posativ.org/isso/">Isso commenting server</a>, I was researching Atom feeds, and thought I would jot down some notes on the topic.</p><h4>RSS2 vs Atom</h4>
<p>Both are mostly accepted everywhere nowadays, and it <a href="http://wordpress.stackexchange.com/questions/2922/should-i-provide-rss-or-atom-feeds">seems like a good idea to provide both</a>. This particular post only talks about Atom feeds.</p><h4>Nested Entries</h4>
<p>Comments are threaded, <a href="http://blog.codinghorror.com/web-discussions-flat-by-design/">at least to one level deep</a>,
 but Atom does not allow nested entries. So, for the feed page for a 
post, we have two choices: listing all comments, or just top level 
comments. If we have a feed page for each top level comment, then that 
would be a flat list of all replies to the comment.</p><h4>Feed URI</h4>
<p>Every Atom entry must have a unique ID. <a href="http://web.archive.org/web/20110514113830/http://diveintomark.org/archives/2004/05/28/howto-atom-id">This page</a> has some interesting ways to generate the ID. I think the best way is to generate a <a href="http://en.wikipedia.org/wiki/Tag_URI">tag URI</a> at the time of comment creation, store it, and use it forever for that resource.</p><h4>Reduce load/bandwidth by using <code>If-None-Match</code></h4>
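<p>The idea is to hash the feed body into an ETag and return <code>304 Not Modified</code> whenever the client's <code>If-None-Match</code> value already matches it. A framework-agnostic Python sketch (function names made up):</p><figure>
  <pre><code class="language-python">import hashlib

# Strong ETag for a feed body, plus a tiny conditional-GET handler.
# Sketch only -- a real server would also handle weak validators, etc.
def etag_for(body):
    return '"' + hashlib.sha1(body).hexdigest() + '"'

def respond(body, if_none_match=None):
    etag = etag_for(body)
    if if_none_match == etag:
        return 304, etag, b""   # client cache is current: empty body
    return 200, etag, body      # full response, with the ETag header</code></pre>
  </figure>
<p>On the first request the server returns 200 plus the ETag; when the client sends it back in <code>If-None-Match</code> and the feed is unchanged, only a 304 goes over the wire.</p>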
<p>If we give out <a href="http://en.wikipedia.org/wiki/HTTP_ETag">ETags</a>
 with the feeds, then a client can do conditional requests, for which 
the server only sends a full response if something has changed.</p>]]></content:encoded>
    <comments>https://srijan.ch/notes-on-atom-feeds#comments</comments>
    <slash:comments>0</slash:comments>
  </item><item>
    <title>Trying Emacs</title>
    <description><![CDATA[Bare bones emacs configuration from when I first started using Emacs]]></description>
    <link>https://srijan.ch/trying-emacs</link>
    <guid isPermaLink="false">6030d3dab5e0920001f557d8</guid>
    <category><![CDATA[emacs]]></category>
    <category><![CDATA[development]]></category>
    <dc:creator>Srijan Choudhary</dc:creator>
    <pubDate>Fri, 16 Aug 2013 00:00:00 +0000</pubDate>
    <content:encoded><![CDATA[<p>I have been using <a href="http://www.vim.org/">Vim</a> as my text editor for the last few years, and have been very happy with it. But lately, some features of <a href="http://www.gnu.org/software/emacs/">Emacs</a> have got me interested (especially <a href="http://orgmode.org/">org-mode</a>),
 and I wanted to try it out. After all, I won't know the difference 
until I actually try it, and opinions on text editors vary widely on the
 internet.</p> <p>So, I decided to give it a try. First I went through the built-in 
Emacs Tutorial, and it seemed easy enough. I got used to the basic 
commands fairly quickly. I guess the real benefits will start to show a 
little later, when I try to optimize some ways of doing things.</p> <p>For now, I just wanted to do some basic configuration so that I could
 start using Emacs right away. So, I made the following changes (scroll to
 the bottom of this page for the full <code>init.el</code> file):</p><ul><li><p>Hide the menu, tool, and scroll bars</p></li><li><p>Add line numbers</p></li><li><p>Hide splash screen and banner</p></li><li><p>Setup <a href="http://marmalade-repo.org/">Marmalade</a><br />
Marmalade is a package archive for Emacs, which makes it easier to install non-official packages.</p></li><li><p>Maximize the Emacs window on startup<br />
My Emacs was not starting up maximized, and I did not want to maximize it manually every time I started it. I found <a href="http://www.emacswiki.org/emacs/FullScreen">this page</a> addressing this issue, and tried out one of the <a href="http://www.emacswiki.org/emacs/FullScreen#toc20">solutions for Linux</a>, and it worked great.</p></li></ul><p>For now, it all looks good, and I can start using it with only this small configuration.</p> <p>For example, for writing this post, I installed <a href="http://jblevins.org/projects/markdown-mode/">markdown-mode</a> using Marmalade, and I got syntax highlighting and stuff.</p> <p>I will keep using this, and adding to my setup as required, for a few
 weeks, and then evaluate whether I should switch completely.</p><h3>Complete ~/.emacs.d/init.el file:</h3>
<figure>
  <pre><code class="language-elisp">; init.el

; Remove GUI extras
(menu-bar-mode -1)
(tool-bar-mode -1)
(scroll-bar-mode -1)

; Add line numbers
(global-linum-mode 1)

; Hide splash screen and banner
(setq
 inhibit-startup-message t
 inhibit-startup-echo-area-message t)
(define-key global-map (kbd &quot;RET&quot;) &#039;newline-and-indent)

; Set up marmalade
(require &#039;package)
(add-to-list &#039;package-archives 
    &#039;(&quot;marmalade&quot; .
      &quot;http://marmalade-repo.org/packages/&quot;))
(package-initialize)

; Make window maximized
(shell-command &quot;wmctrl -r :ACTIVE: -btoggle,maximized_vert,maximized_horz&quot;)</code></pre>
  </figure>
]]></content:encoded>
    <comments>https://srijan.ch/trying-emacs#comments</comments>
    <slash:comments>0</slash:comments>
  </item><item>
    <title>Speeding up compilation times for Libreoffice / C++ projects</title>
    <description><![CDATA[Faster compile times for libreoffice (and other C/C++ projects)]]></description>
    <link>https://srijan.ch/speeding-up-compiles</link>
    <guid isPermaLink="false">6030d3dab5e0920001f557d6</guid>
    <category><![CDATA[development]]></category>
    <dc:creator>Srijan Choudhary</dc:creator>
    <pubDate>Wed, 14 Aug 2013 00:00:00 +0000</pubDate>
    <content:encoded><![CDATA[<p>I got interested in <a href="http://www.libreoffice.org/">LibreOffice</a> a few days ago, and wanted to contribute. I wanted to see how a large project is run, and the <a href="https://wiki.documentfoundation.org/Easy_Hacks">Easy Hacks</a> section looked easy enough to begin.</p> <p>But, there was one problem: LibreOffice is huge, and takes a long 
time to compile (especially for the first time). It took ~40 minutes to 
build on the best workstation I have access to (a 24-core Intel server).
 It would take more than a day to build on my laptop, and I wanted to be
 able to build and iterate on my laptop.</p> <p>The <a href="https://wiki.documentfoundation.org/Development/How_to_build">How to Build</a> wiki had a few pointers, and I decided to look into them.</p><h3><a href="http://ccache.samba.org/">CCache</a></h3>
<p>As noted on their website, ccache is a compiler cache: it speeds up recompilation by caching previous compilation results and reusing them when the same compilation is done again. This won't decrease the first compile time (in fact, it might increase it slightly), but subsequent compilations will be faster.</p> <p>To use ccache, I made an exports file (see below) which I source before doing any LibreOffice-related work. Programs like <a href="http://swapoff.org/ondir.html">ondir</a> can help automate this. I decided on a max cache size of 8GB, and set it with:</p><figure>
  <pre><code class="language-shellsession">$ ccache --max-size 8G</code></pre>
  </figure>
<h3><a href="https://github.com/icecc/icecream">Icecream</a></h3>
<p>Icecream enables distributing the compilation load to multiple machines, like <a href="https://code.google.com/p/distcc/">distcc</a>. I decided to go with icecream because support for it is built into LibreOffice's autogen.sh.</p> <p>Using icecream turned out to be as simple as installing and starting services on the build machines, doing <code>./autogen.sh --enable-icecream</code>, followed by <code>make</code>. For projects that don't have such icecream flags, it's enough to add icecream's bin directory to the beginning of <code>$PATH</code>, and everything works.</p> <p>Icecream can do a distributed build even if the machines in the cluster are of different types. <a href="https://github.com/icecc/icecream#using-icecream-in-heterogeneous-environments">This section of their readme</a> gives more information about that.</p> <p>Building LibreOffice on my laptop using icecream took about 50 minutes (for a clean build).</p><h3>My exports.sh file</h3>
<figure>
  <pre><code class="language-shell">export CCACHE_DIR=/mnt/archextra/libreoffice/ccache
export CCACHE_COMPRESS=1
export ICECC_VERSION=/mnt/archextra/libreoffice/i386.tar.gz</code></pre>
  </figure>
]]></content:encoded>
    <comments>https://srijan.ch/speeding-up-compiles#comments</comments>
    <slash:comments>0</slash:comments>
  </item><item>
    <title>Basic Implementation of A* in Erlang</title>
    <description><![CDATA[Implementing the path finding algorithm A* in Erlang]]></description>
    <link>https://srijan.ch/basic-implementation-of-a-in-erlang</link>
    <guid isPermaLink="false">6030d3dab5e0920001f557c8</guid>
    <category><![CDATA[erlang]]></category>
    <category><![CDATA[development]]></category>
    <dc:creator>Srijan Choudhary</dc:creator>
    <pubDate>Sat, 03 Aug 2013 00:00:00 +0000</pubDate>
    <content:encoded><![CDATA[<p>Recently I had to write some path-finding algorithms in Erlang. The first algorithm I chose was A*. But there is no easy way to implement A* in a distributed way, so this is the simplest implementation possible. I may rewrite it later if I find a better way.</p> <p>This code is mostly a modified version of <a href="http://stevegilham.blogspot.in/2008/10/first-refactoring-of-star-in-erlang.html">this one</a>.</p> <p>The code, <a href="https://gist.github.com/srijan/6142366#file-astar-erl">hosted on gist</a>, follows below, with some notes after it.</p><figure>
  <pre><code class="language-erlang">-module(astar).

-type cnode() :: {integer(), integer()}.

-define(MINX, 0).
-define(MINY, 0).
-define(MAXX, 10).
-define(MAXY, 10).

-export([
         astar/2,
         neighbour_nodes/2
        ]).

%% @doc Performs A* for finding a path from `Start&#039; node to `Goal&#039; node
-spec astar(cnode(), cnode()) -&gt; list(cnode()) | failure.
astar(Start, Goal) -&gt;
    ClosedSet = sets:new(),
    OpenSet   = sets:add_element(Start, sets:new()),

    Fscore    = dict:store(Start, h_score(Start, Goal), dict:new()),
    Gscore    = dict:store(Start, 0, dict:new()),

    CameFrom  = dict:store(Start, none, dict:new()),

    astar_step(Goal, ClosedSet, OpenSet, Fscore, Gscore, CameFrom).

%% @doc Performs a step of A*.
%% Takes the best element from `OpenSet&#039;, evaluates neighbours, updates scores, etc..
-spec astar_step(cnode(), set(), set(), dict(), dict(), dict()) -&gt; list(cnode()) | failure.
astar_step(Goal, ClosedSet, OpenSet, Fscore, Gscore, CameFrom) -&gt;
    case sets:size(OpenSet) of
        0 -&gt;
            failure;
        _ -&gt;
            BestStep = best_step(sets:to_list(OpenSet), Fscore, none, infinity),
            if
                Goal == BestStep -&gt;
                    lists:reverse(reconstruct_path(CameFrom, BestStep));
                true -&gt;
                    Parent     = dict:fetch(BestStep, CameFrom),
                    NextOpen   = sets:del_element(BestStep, OpenSet),
                    NextClosed = sets:add_element(BestStep, ClosedSet),
                    Neighbours = neighbour_nodes(BestStep, Parent),

                    {NewOpen, NewF, NewG, NewFrom} = scan(Goal, BestStep, Neighbours, NextOpen, NextClosed, Fscore, Gscore, CameFrom),
                    astar_step(Goal, NextClosed, NewOpen, NewF, NewG, NewFrom)
            end
    end.

%% @doc Returns the heuristic score from `Current&#039; node to `Goal&#039; node
-spec h_score(Current :: cnode(), Goal :: cnode()) -&gt; Hscore :: number().
h_score(Current, Goal) -&gt;
    dist_between(Current, Goal).

%% @doc Returns the distance from `Current&#039; node to `Goal&#039; node
-spec dist_between(cnode(), cnode()) -&gt; Distance :: number().
dist_between(Current, Goal) -&gt;
    {X1, Y1} = Current,
    {X2, Y2} = Goal,
    abs((X2-X1)) + abs((Y2-Y1)).

%% @doc Returns the best next step from `OpenSetAsList&#039;
%% TODO: May be optimized by making OpenSet an ordered set.
-spec best_step(OpenSetAsList :: list(cnode()), Fscore :: dict(), BestNodeTillNow :: cnode() | none, BestCostTillNow :: number() | infinity) -&gt; cnode().
best_step([H|Open], Score, none, infinity) -&gt;
    V = dict:fetch(H, Score),
    best_step(Open, Score, H, V);

best_step([], _Score, Best, _BestValue) -&gt;
    Best;

best_step([H|Open], Score, Best, BestValue) -&gt;
    Value = dict:fetch(H, Score),
    case Value &lt; BestValue of
        true -&gt;
            best_step(Open, Score, H, Value);
        false -&gt;
            best_step(Open, Score, Best, BestValue)
    end.

%% @doc Returns the neighbour nodes of `Node&#039;, and excluding its `Parent&#039;.
-spec neighbour_nodes(cnode(), cnode() | none) -&gt; list(cnode()).
neighbour_nodes(Node, Parent) -&gt;
    {X, Y} = Node,
    [
     {XX, YY} ||
     {XX, YY} &lt;- [{X-1, Y}, {X, Y-1}, {X+1, Y}, {X, Y+1}],
     {XX, YY} =/= Parent,
     XX &gt;= ?MINX,
     YY &gt;= ?MINY,
     XX =&lt; ?MAXX,
     YY =&lt; ?MAXY
    ].

%% @doc Scans the `Neighbours&#039; of `BestStep&#039;, and adds/updates the Scores and CameFrom dicts accordingly.
-spec scan(
        Goal :: cnode(),
        BestStep :: cnode(),
        Neighbours :: list(cnode()),
        NextOpen :: set(),
        NextClosed :: set(),
        Fscore :: dict(),
        Gscore :: dict(),
        CameFrom :: dict()
       ) -&gt;
    {NewOpen :: set(), NewF :: dict(), NewG :: dict(), NewFrom :: dict()}.
scan(_Goal, _X, [], Open, _Closed, F, G, From) -&gt;
    {Open, F, G, From};
scan(Goal, X, [Y|N], Open, Closed, F, G, From) -&gt;
    case sets:is_element(Y, Closed) of
        true -&gt;
            scan(Goal, X, N, Open, Closed, F, G, From);
        false -&gt;
            G0 = dict:fetch(X, G),
            TrialG = G0 + dist_between(X, Y),
            case sets:is_element(Y, Open) of
                true -&gt;
                    OldG = dict:fetch(Y, G),
                    case TrialG &lt; OldG of
                        true -&gt;
                            NewFrom = dict:store(Y, X, From),
                            NewG    = dict:store(Y, TrialG, G),
                            NewF    = dict:store(Y, TrialG + h_score(Y, Goal), F), % Estimated total distance from start to goal through y.
                            scan(Goal, X, N, Open, Closed, NewF, NewG, NewFrom);
                        false -&gt;
                            scan(Goal, X, N, Open, Closed, F, G, From)
                    end;
                false -&gt;
                    NewOpen = sets:add_element(Y, Open),
                    NewFrom = dict:store(Y, X, From),
                    NewG    = dict:store(Y, TrialG, G),
                    NewF    = dict:store(Y, TrialG + h_score(Y, Goal), F), % Estimated total distance from start to goal through y.
                    scan(Goal, X, N, NewOpen, Closed, NewF, NewG, NewFrom)
            end
    end.

%% @doc Reconstructs the calculated path using the `CameFrom&#039; dict
-spec reconstruct_path(dict(), cnode()) -&gt; list(cnode()).
reconstruct_path(CameFrom, Node) -&gt;
    case dict:fetch(Node, CameFrom) of
        none -&gt;
            [Node];
        Value -&gt;
            [Node | reconstruct_path(CameFrom, Value)]
    end.</code></pre>
  </figure>
<h3>Notes</h3>
<ul><li><p>Variables <code>MINX</code>, <code>MINY</code>, <code>MAXX</code> and <code>MAXY</code> can be modified to increase the size of the map. The function <code>neighbour_nodes/2</code> can be modified to add obstacles.</p></li><li><p>To test, enter in erlang shell:</p></li></ul><figure>
  <pre><code class="language-erlang">c(astar).
astar:astar({1, 1}, {10, 10}).</code></pre>
  </figure>
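<p>Since every move on this grid has unit cost, a plain BFS gives the optimal number of moves, which makes a handy cross-check for the A* output (a quick sketch; Python used here only for the check):</p><figure>
  <pre><code class="language-python">from collections import deque

# BFS over the same 0..10 grid with 4-connected moves; returns the
# number of moves on a shortest path from start to goal.
def bfs_moves(start, goal, lo=0, hi=10):
    seen = {start}
    queue = deque([(start, 0)])
    while queue:
        (x, y), d = queue.popleft()
        if (x, y) == goal:
            return d
        for nx, ny in ((x - 1, y), (x, y - 1), (x + 1, y), (x, y + 1)):
            inside = nx >= lo and ny >= lo and hi >= nx and hi >= ny
            if inside and (nx, ny) not in seen:
                seen.add((nx, ny))
                queue.append(((nx, ny), d + 1))
    return None

print(bfs_moves((1, 1), (10, 10)))  # 18 moves, i.e. a 19-node path</code></pre>
  </figure>
<p>The <code>astar:astar({1, 1}, {10, 10})</code> call above should therefore return a 19-node list (the start node plus 18 moves).</p>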
<ul><li><p>The <code>cnode()</code> structure represents some sort of coordinate. To use some other structure, the functions <code>neighbour_nodes/2</code>, <code>h_score/2</code>, and <code>dist_between/2</code> have to be modified for the new structure.</p></li><li><p>The current heuristic does not penalize turns, so the resulting path tends to follow a diagonal-looking staircase. To correct this, either diagonal movements can be allowed (by modifying the neighbours function), or turning could be penalized in the heuristic function (the current direction would have to be tracked).</p></li></ul>]]></content:encoded>
    <comments>https://srijan.ch/basic-implementation-of-a-in-erlang#comments</comments>
    <slash:comments>0</slash:comments>
  </item><item>
    <title>Erlang Profiling Tips</title>
    <description><![CDATA[Some erlang profiling tips / tools I've come across]]></description>
    <link>https://srijan.ch/erlang-profiling-tips</link>
    <guid isPermaLink="false">6030d3dab5e0920001f557cc</guid>
    <category><![CDATA[erlang]]></category>
    <category><![CDATA[development]]></category>
    <dc:creator>Srijan Choudhary</dc:creator>
    <pubDate>Wed, 20 Feb 2013 00:00:00 +0000</pubDate>
    <content:encoded><![CDATA[<p>I have been using Erlang recently for some of my work and private projects, so I have decided to write about a few things that were hard to discover.</p> <p>Profiling is an essential part of programming in Erlang. <a href="http://www.erlang.org/doc/efficiency_guide/profiling.html">Erlang's efficiency guide</a> says:</p><blockquote>
  Even experienced software developers often guess wrong about where the performance bottlenecks are in their programs.<br>Therefore, profile your program to see where the performance bottlenecks are and concentrate on optimizing them.  </blockquote>
<h2>Using profiling tools in releases (using rebar/reltool)</h2>
<p>So, after finishing a particularly complicated bit of code, I wanted 
to see how well it performed, and figure out any bottlenecks.</p> <p>But, I hit a roadblock. Following the <a href="http://www.erlang.org/doc/man/fprof.html">erlang manual for fprof</a>, I tried to start it, but it wouldn't start and was giving the error:</p><figure>
  <pre><code class="language-erlang">** exception error: undefined function fprof:start/0</code></pre>
  </figure>
<p>To make this work, I had to add <code>tools</code> to the list of apps in my <code>reltool.config</code> file. After adding this and regenerating, it all works.</p><h2>Better visualization of fprof output</h2>
<p>So, after I got the fprof output, I discovered it was a long file with a lot of data, and no easy way to make sense of it.</p> <p>I tried using <a href="http://www.erlang.org/doc/man/eprof.html">eprof</a> (which gives a condensed output), and it helped, but I was still searching for a better way.</p> <p>Then I stumbled upon <a href="http://stackoverflow.com/questions/14242607/eprof-erlang-profiling#comment19935708_14242607">a comment on stackoverflow</a>, which linked to <a href="https://github.com/isacssouza/erlgrind">erlgrind - a script to convert the fprof output to callgrind output</a>, which can be visualized using <a href="http://kcachegrind.sourceforge.net/">kcachegrind</a> or some such tool.</p><h3>Software Links</h3>
<ul><li><a href="http://www.erlang.org/doc/efficiency_guide/profiling.html">Erlang Profiling Guide</a></li><li><a href="https://github.com/isacssouza/erlgrind">Erlgrind</a></li><li><a href="http://kcachegrind.sourceforge.net/">Kcachegrind</a></li></ul>]]></content:encoded>
    <comments>https://srijan.ch/erlang-profiling-tips#comments</comments>
    <slash:comments>0</slash:comments>
  </item></channel>
</rss>
