<?xml version="1.0" encoding="utf-8"?><rss version="2.0"
    xmlns:content="http://purl.org/rss/1.0/modules/content/"
    xmlns:wfw="http://wellformedweb.org/CommentAPI/"
    xmlns:dc="http://purl.org/dc/elements/1.1/"
    xmlns:atom="http://www.w3.org/2005/Atom"
    xmlns:media="http://search.yahoo.com/mrss/"
    xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
    xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
>
<channel>
  <title>Srijan Choudhary, all posts tagged: development</title>
  <link>https://srijan.ch/feed/all/tag:development</link>
  <lastBuildDate>Fri, 04 Nov 2022 19:15:00 +0000</lastBuildDate>
  <image>
    <url>https://srijan.ch/assets/favicon/favicon-32x32.png</url>
    <title>Srijan Choudhary, all posts tagged: development</title>
    <link>https://srijan.ch/feed/all/tag:development</link>
  </image>
  <sy:updatePeriod>daily</sy:updatePeriod>
  <sy:updateFrequency>1</sy:updateFrequency>
  <generator>Kirby</generator>
  <atom:link href="https://srijan.ch/feed/all.xml/tag:development" rel="self" type="application/rss+xml" />
  <description>Srijan Choudhary&#039;s Articles and Notes Feed for tag: development</description>
  <item>
    <title>Slackbot using google cloud serverless functions</title>
    <description><![CDATA[Slack bot using Google Cloud Functions to post a roundup of recently created channels]]></description>
    <link>https://srijan.ch/slackbot-google-cloud-part-1</link>
    <guid isPermaLink="false">634164f0219ca50001581813</guid>
    <category><![CDATA[development]]></category>
    <category><![CDATA[cloud]]></category>
    <category><![CDATA[slack]]></category>
    <dc:creator>Srijan Choudhary</dc:creator>
    <pubDate>Fri, 04 Nov 2022 19:15:00 +0000</pubDate>
    <media:content url="https://srijan.ch/media/pages/blog/slackbot-google-cloud-part-1/a7a4e31f92-1699621096/screenshot_20221009_161507.png" medium="image" />
<content:encoded><![CDATA[<p>At my org, we wanted a simple Slack bot that posts a roundup of recently created workspace channels to a channel. While writing such a bot is easy enough, I wanted to build it using <a href="https://cloud.google.com/functions" rel="noreferrer">Google Cloud Functions</a> with Python, following best practices as much as possible.</p> <p>Here's what the overall flow looks like:</p><figure data-ratio="auto">
    <img src="https://srijan.ch/media/pages/blog/slackbot-google-cloud-part-1/1fc9e55238-1699621096/slackbot01.5.excalidraw.png" alt="">
  
    <figcaption class="text-center">
    Google Cloud Functions Slack Bot  </figcaption>
  </figure>
<p>We want this roundup post triggered on a schedule (say, daily), so <a href="https://cloud.google.com/scheduler" rel="noreferrer">Cloud Scheduler</a> sends an event to a <a href="https://cloud.google.com/pubsub" rel="noreferrer">Google Pub/Sub</a> topic that triggers our cloud function, which queries the Slack API for channel details, filters the recently created ones, and posts the result back to a Slack channel. <a href="https://cloud.google.com/secret-manager" rel="noreferrer">Secret Manager</a> securely stores the Slack bot token and signing secret.</p> <p>Note that the credentials shown in any screenshots below are not valid.</p><h2>Create the Slack app</h2>
<p>The first step is to create the Slack app. Go to https://api.slack.com and click "Create an app". Choose "From scratch" in the first dialog; enter an app name and choose a workspace for your app in the second dialog. On the next screen, copy the "<strong>Signing Secret</strong>" from the "App Credentials" section and save it for later use.</p><figure data-ratio="auto">
  <ul>
        <li>
      <img alt="" src="https://srijan.ch/media/pages/blog/slackbot-google-cloud-part-1/a7a4e31f92-1699621096/screenshot_20221009_161507.png">    </li>
        <li>
      <img alt="" src="https://srijan.ch/media/pages/blog/slackbot-google-cloud-part-1/d075d74c25-1699621096/screenshot_20221009_161622-1.png">    </li>
        <li>
      <img alt="" src="https://srijan.ch/media/pages/blog/slackbot-google-cloud-part-1/e0500cfbdf-1699621096/screenshot_20221009_162905.png">    </li>
      </ul>
  </figure>
<p>Next,
 go to the "OAuth and Permissions" tab from the left sidebar, and scroll
 down to "Scopes" -&gt; "Bot Token Scopes". Here, add the scopes:</p><ul><li><a href="https://api.slack.com/scopes/channels:read" rel="noopener noreferrer"><code>channels:read</code></a>: required to query public channels and find their creation times</li><li><a href="https://api.slack.com/scopes/chat:write" rel="noopener noreferrer"><code>chat:write</code></a>: required to write to a channel (where the bot is invited)</li></ul><figure data-ratio="auto">
    <img src="https://srijan.ch/media/pages/blog/slackbot-google-cloud-part-1/0cf73d1181-1699621096/screenshot_20221009_162243.png" alt="">
  
  </figure>
<p>Next, scroll up on the same screen and click "Install to Workspace", then click "Allow" on the next screen to complete the installation. Finally, copy the "<strong>Bot User OAuth Token</strong>" from the "OAuth Tokens for Your Workspace" section on the same page and save it for later use.</p><figure data-ratio="auto">
  <ul>
        <li>
      <img alt="" src="https://srijan.ch/media/pages/blog/slackbot-google-cloud-part-1/a79785c4f6-1699621096/screenshot_20221009_162420.png">    </li>
        <li>
      <img alt="" src="https://srijan.ch/media/pages/blog/slackbot-google-cloud-part-1/d075d74c25-1699621096/screenshot_20221009_161622-1.png">    </li>
        <li>
      <img alt="" src="https://srijan.ch/media/pages/blog/slackbot-google-cloud-part-1/f0e359ac2e-1699621096/screenshot_20221009_163221.png">    </li>
      </ul>
  </figure>
<p>💡Keep track of the <strong>Bot User OAuth Token</strong> and <strong>Signing Secret</strong> you copied above.</p><h2>Post to a Slack channel from a Google Cloud Function</h2>
<p>Next, we will try to use the credentials copied above to enable a Google Cloud Function to send a message to a Slack channel.</p><h3>Google Cloud Basic Setup</h3>
<p>We will use the gcloud CLI for the following sections, so <a href="https://cloud.google.com/sdk/docs/install" rel="noreferrer">install</a> and <a href="https://cloud.google.com/sdk/docs/initializing" rel="noreferrer">initialize</a> the Google Cloud CLI if you haven't already. If you already have the gcloud CLI, run <code>gcloud components update</code> to update it to the latest version.</p> <p>Create a new project for this if required, or choose an existing one. Set it as the default, and export the project ID as a shell environment variable for later use. Also export the region you want to use.</p><figure>
  <pre><code class="language-shell">export PROJECT_ID=slackbot-project
export REGION=us-central1

gcloud config set project ${PROJECT_ID}</code></pre>
  </figure>
<p>You will have to enable billing for this project to be able to use some of the functionality we require.</p> <p>You may also have to enable the Secret Manager, Cloud Functions, Cloud Build, Artifact Registry, and Logging APIs if this is the first time you're using Functions in this project. Note that some services, like Secret Manager, need billing to be set up before they can be enabled.</p><figure>
  <pre><code class="language-shell">gcloud services enable --project ${PROJECT_ID} \
        secretmanager.googleapis.com \
        cloudfunctions.googleapis.com \
        cloudbuild.googleapis.com \
        artifactregistry.googleapis.com \
        logging.googleapis.com</code></pre>
  </figure>
<h3>Create a service account</h3>
<p>By default, Cloud Functions uses a <a href="https://cloud.google.com/functions/docs/securing/function-identity#runtime_service_account" rel="noreferrer">default service account</a> as its identity for function execution. These default service accounts have the <strong>Editor</strong>
 role, which allows them broad access to many Google Cloud services. Of 
course, this is not recommended for production, so we will create a new 
service account for this and <a href="https://cloud.google.com/iam/docs/understanding-service-accounts#granting_minimum" rel="noreferrer">grant it the minimum permissions</a> that it requires.</p><figure>
  <pre><code class="language-shell">SA_NAME=channelbot-sa
SA_EMAIL=${SA_NAME}@${PROJECT_ID}.iam.gserviceaccount.com

gcloud iam service-accounts create ${SA_NAME} \
    --description=&quot;Service Account for ChannelBot slackbot&quot; \
    --display-name=&quot;ChannelBot SlackBot SA&quot;</code></pre>
  </figure>
<h3>Store secrets and give permissions to service account</h3>
<p>First, we need to store the secrets in Secret Manager.</p><figure>
  <pre><code class="language-shell">printf &#039;%s&#039; &quot;${SLACK_BOT_TOKEN}&quot; | gcloud secrets create \
    channelbot-slack-bot-token --data-file=- \
    --project=${PROJECT_ID} \
    --replication-policy=user-managed \
    --locations=${REGION}

printf &#039;%s&#039; &quot;${SLACK_SIGNING_SECRET}&quot; | gcloud secrets create \
    channelbot-slack-signing-secret --data-file=- \
    --project=${PROJECT_ID} \
    --replication-policy=user-managed \
    --locations=${REGION}</code></pre>
  </figure>
<p>And give our service account the <code><a href="https://cloud.google.com/secret-manager/docs/access-control#secretmanager.secretAccessor" rel="noreferrer">roles/secretmanager.secretAccessor</a></code> role on these secrets.</p><figure>
  <pre><code class="language-shell">gcloud secrets add-iam-policy-binding \
    projects/${PROJECT_ID}/secrets/channelbot-slack-bot-token \
    --member serviceAccount:${SA_EMAIL} \
    --role roles/secretmanager.secretAccessor

gcloud secrets add-iam-policy-binding \
    projects/${PROJECT_ID}/secrets/channelbot-slack-signing-secret \
    --member serviceAccount:${SA_EMAIL} \
    --role roles/secretmanager.secretAccessor</code></pre>
  </figure>
<h3>Create and deploy the function</h3>
<p>Here's a simple HTTP function that sends a message to Slack on any HTTP call:</p><figure>
  <pre><code class="language-python">import functions_framework
from slack_bolt import App

# process_before_response must be True when running on FaaS
app = App(process_before_response=True)

print(&#039;Function has started&#039;)

@functions_framework.http
def send_to_slack(request):
    print(&#039;send_to_slack triggered&#039;)
    channel = &#039;#general&#039;
    text = &#039;Hello from Google Cloud Functions!&#039;
    app.client.chat_postMessage(channel=channel, text=text)
    return &#039;Sent to slack!&#039;</code></pre>
    <figcaption class="text-center">src-v1/main.py</figcaption>
  </figure>
<figure>
  <pre><code class="language-text">functions-framework
slack_bolt</code></pre>
    <figcaption class="text-center">src-v1/requirements.txt</figcaption>
  </figure>
<p>Assuming <code>main.py</code> and <code>requirements.txt</code> are present in the <code>src-v1</code> folder, deploy using:</p><figure>
  <pre><code class="language-shell">gcloud beta functions deploy channelbot-send-to-slack \
    --gen2 \
    --runtime python310 \
    --project=${PROJECT_ID} \
    --service-account=${SA_EMAIL} \
    --source ./src-v1 \
    --entry-point send_to_slack \
    --trigger-http \
    --allow-unauthenticated \
    --region ${REGION} \
    --memory=128MiB \
    --min-instances=0 \
    --max-instances=1 \
    --set-secrets &#039;SLACK_BOT_TOKEN=channelbot-slack-bot-token:latest,SLACK_SIGNING_SECRET=channelbot-slack-signing-secret:latest&#039; \
    --timeout 60s</code></pre>
  </figure>
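<p>The <code>--set-secrets</code> flag exposes each secret version as an environment variable inside the function's runtime, and <code>slack_bolt</code>'s <code>App()</code> reads <code>SLACK_BOT_TOKEN</code> and <code>SLACK_SIGNING_SECRET</code> from the environment by default, which is why the function body never touches the credentials directly. As a sketch of the equivalent explicit lookup (the <code>load_slack_config</code> helper is illustrative, not part of the deployed code):</p>

```python
import os

def load_slack_config():
    """Read the Slack credentials that --set-secrets exposes as env vars.

    Failing fast here gives a clearer error than the first Slack API call
    failing later with an auth error.
    """
    config = {}
    for var in ("SLACK_BOT_TOKEN", "SLACK_SIGNING_SECRET"):
        value = os.environ.get(var)
        if not value:
            raise RuntimeError("missing required environment variable: " + var)
        config[var] = value
    return config
```

<p>With this, the app could be constructed explicitly as <code>App(token=..., signing_secret=..., process_before_response=True)</code> instead of relying on the implicit environment lookup.</p>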
<p>💡We're using <code>--allow-unauthenticated</code> here just to test it out. It will be removed in later sections.</p><h3>Test it out</h3>
<p>Once the deployment is complete, we can view the function logs using:</p><figure>
  <pre><code class="language-shell">gcloud beta functions logs read channelbot-send-to-slack \
	--project ${PROJECT_ID} --gen2</code></pre>
  </figure>
<p>If everything was successful above, one of the recent log statements should say: <code>Function has started</code>.</p> <p>Next, add the bot to the <code>#general</code> channel using <code>/invite @ChannelBot</code> in that channel in your Slack workspace.</p> <p>Next, find the service endpoint using:</p><figure>
  <pre><code class="language-shell">gcloud functions describe channelbot-send-to-slack \
    --project ${PROJECT_ID} \
    --gen2 \
    --region ${REGION} \
    --format &quot;value(serviceConfig.uri)&quot;</code></pre>
  </figure>
<p>This will give a URL like <code>https://channelbot-send-to-slack-ga6Ofi9to0-uc.a.run.app</code>.</p> <p>To trigger the channel post, just do <code>curl ${SERVICE_URL}</code>. This should result in a test message from ChannelBot to the <code>#general</code> channel.</p><figure data-ratio="auto">
    <img src="https://srijan.ch/media/pages/blog/slackbot-google-cloud-part-1/27e2d0931e-1699621096/screenshot_20221018_235104.png" alt="">
  
    <figcaption class="text-center">
    ChannelBot message from Google Cloud Functions  </figcaption>
  </figure>
<h2>Trigger via Google Pub/Sub</h2>
<p>Now,
 instead of an unauthenticated HTTP trigger, we would like to trigger 
this via Google Pub/Sub. We would also like to pass the channel name and
 the message to post in the event.</p><h3>Google Pub/Sub basics</h3>
<p>Pub/Sub enables you to create systems of event producers and consumers, called <strong>publishers</strong> and <strong>subscribers</strong>. Publishers communicate with subscribers asynchronously by broadcasting events. Some core concepts:</p><ul><li><strong>Topic.</strong> A named resource to which messages are sent by publishers.</li><li><strong>Subscription.</strong> A named resource representing the stream of messages from a single, specific topic, to be delivered to the subscribing application.</li><li><strong>Message.</strong> The combination of data and (optional) attributes that a publisher sends to a topic and that is eventually delivered to subscribers.</li><li><strong>Publisher.</strong> An application that creates and sends messages to one or more topics.</li></ul><p>In
 this section, we will create a topic, create a subscription for our 
cloud function to listen to messages to that topic, and produce messages
 manually to that topic using <code>gcloud</code> cli. The message will 
contain the channel name and message to post, and the cloud function 
will post that message to the specified slack channel.</p><h3>Create pub/sub topic</h3>
<p>First, we need to create a topic.</p><figure>
  <pre><code class="language-shell">export PUBSUB_TOPIC=channelbot-pubsub
gcloud pubsub topics create ${PUBSUB_TOPIC} \
    --project ${PROJECT_ID}</code></pre>
  </figure>
<h3>Grant permissions to the service account</h3>
<p>Next, we need to give the <code>roles/pubsub.editor</code> role to the service account we're using for the function execution so that it can create a subscription to this pub/sub topic.</p><figure>
  <pre><code class="language-shell">gcloud pubsub topics add-iam-policy-binding ${PUBSUB_TOPIC} \
    --project ${PROJECT_ID} \
    --member serviceAccount:${SA_EMAIL} \
    --role roles/pubsub.editor</code></pre>
  </figure>
<h3>Update the function code</h3>
<p>Here's the <code>main.py</code> we'll need to listen to Pub/Sub events, extract <code>channel</code> and <code>text</code>, and send the message to Slack:</p><figure>
  <pre><code class="language-python">import base64
import json
import functions_framework
from slack_bolt import App

# process_before_response must be True when running on FaaS
app = App(process_before_response=True)

print(&#039;Function has started&#039;)

# Triggered from a message on a Cloud Pub/Sub topic.
@functions_framework.cloud_event
def pubsub_handler(cloud_event):
    try:
        data = base64.b64decode(
            cloud_event.data[&quot;message&quot;][&quot;data&quot;]).decode()
        print(&quot;Received from pub/sub: %s&quot; % data)
        event_data = json.loads(data)
        channel = event_data[&quot;channel&quot;]
        text = event_data[&quot;text&quot;]
        app.client.chat_postMessage(channel=channel, text=text)
    except Exception as E:
        print(&quot;Error handling message: %s&quot; % E)</code></pre>
    <figcaption class="text-center">src-v2/main.py</figcaption>
  </figure>
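<p>Pub/Sub delivers the published message base64-encoded under <code>message.data</code> in the event body, which is why the handler base64-decodes before <code>json.loads</code>. That decode path can be checked locally without any cloud resources; the <code>fake_event</code> dict below only mimics the shape the handler reads, it is not a full CloudEvent:</p>

```python
import base64
import json

def decode_pubsub_payload(event_body: dict) -> dict:
    """Extract and parse the JSON payload from a Pub/Sub event body."""
    raw = base64.b64decode(event_body["message"]["data"]).decode()
    return json.loads(raw)

# Simulate what `gcloud pubsub topics publish --message '...'` produces.
payload = {"channel": "#general", "text": "Hello from Cloud Pub/Sub!"}
fake_event = {
    "message": {
        "data": base64.b64encode(json.dumps(payload).encode()).decode()
    }
}
decoded = decode_pubsub_payload(fake_event)
```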
<p>Before deploying, we also need to enable the Eventarc API in this project.</p><figure>
  <pre><code class="language-shell">gcloud services enable --project ${PROJECT_ID} \
    eventarc.googleapis.com</code></pre>
  </figure>
<h3>Deploy and Test</h3>
<p>Now, here's a slightly modified version of the deploy command to deploy this:</p><figure>
  <pre><code class="language-shell">gcloud beta functions deploy channelbot-send-to-slack \
    --gen2 \
    --runtime python310 \
    --project ${PROJECT_ID} \
    --service-account ${SA_EMAIL} \
    --source ./src-v2 \
    --entry-point pubsub_handler \
    --trigger-topic ${PUBSUB_TOPIC} \
    --region ${REGION} \
    --memory 128MiB \
    --min-instances 0 \
    --max-instances 1 \
    --set-secrets &#039;SLACK_BOT_TOKEN=channelbot-slack-bot-token:latest,SLACK_SIGNING_SECRET=channelbot-slack-signing-secret:latest&#039; \
    --timeout 60s</code></pre>
  </figure>
<p>The main changes are:</p><ul><li>Changed entry-point to the new function <code>pubsub_handler</code></li><li>Replaced <code>--trigger-http</code> with <code>--trigger-topic</code></li><li>Removed <code>--allow-unauthenticated</code></li></ul><p>Before sending a pub/sub message, we will also need to give the <code>roles/run.invoker</code> role to our service account to be able to trigger our newly deployed function.</p><figure>
  <pre><code class="language-shell">gcloud run services add-iam-policy-binding channelbot-send-to-slack \
    --project ${PROJECT_ID} \
    --region ${REGION} \
    --member=serviceAccount:${SA_EMAIL} \
    --role=roles/run.invoker</code></pre>
  </figure>
<p>To test this out, we can send a pub/sub message using gcloud cli:</p><figure>
  <pre><code class="language-shell">gcloud pubsub topics publish ${PUBSUB_TOPIC} \
    --project ${PROJECT_ID} \
    --message &#039;{&quot;channel&quot;: &quot;#general&quot;, &quot;text&quot;: &quot;Hello from Cloud Pub/Sub!&quot;}&#039;</code></pre>
  </figure>
<figure data-ratio="auto">
    <img src="https://srijan.ch/media/pages/blog/slackbot-google-cloud-part-1/f184b7795b-1699621096/screenshot_20221104_232055.png" alt="">
  
    <figcaption class="text-center">
    ChannelBot message via pub/sub  </figcaption>
  </figure>
<h2>Post new channels roundup using cloud scheduler</h2>
<h3>Manually post recently created channels</h3>
<p>Now that we can trigger a Slack message from Pub/Sub, we can add logic to fetch the recently created channels from Slack and post them as a message on this trigger.</p> <p>Here's the modified <code>main.py</code> to do this:</p><figure>
  <pre><code class="language-python">import base64
import json
import time
import functions_framework
from slack_bolt import App

# process_before_response must be True when running on FaaS
app = App(process_before_response=True)

print(&#039;Function has started&#039;)

# Triggered from a message on a Cloud Pub/Sub topic.
@functions_framework.cloud_event
def pubsub_handler(cloud_event):
    try:
        data = base64.b64decode(
            cloud_event.data[&quot;message&quot;][&quot;data&quot;]).decode()
        print(&quot;Received from pub/sub: %s&quot; % data)
        event_data = json.loads(data)
        max_days = event_data[&quot;max_days&quot;] # Max age of channels
        channel = event_data[&quot;channel&quot;]
        recent_channels = get_recent_channels(app, max_days)
        if len(recent_channels) &gt; 0:
            blocks, text = format_channels(recent_channels, max_days)
            app.client.chat_postMessage(channel=channel, text=text,
                                        blocks=blocks)
        else:
            print(&quot;No recent channels&quot;)
    except Exception as E:
        print(&quot;Error handling message: %s&quot; % E)


def get_recent_channels(app, max_days):
    max_age_s = max_days * 24 * 60 * 60
    result = app.client.conversations_list()
    all = result[&quot;channels&quot;]
    now = time.time()
    return [ c for c in all if (now - c[&quot;created&quot;] &lt;= max_age_s) ]

def format_channels(channels, max_days):
    text = (&quot;%s channels created in the last %s day(s):&quot; %
            (len(channels), max_days))
    blocks = [{
        &quot;type&quot;: &quot;header&quot;,
        &quot;text&quot;: {
            &quot;type&quot;: &quot;plain_text&quot;,
            &quot;text&quot;: text
        }
    }]
    summary = &quot;&quot;
    for c in channels:
        summary += &quot;\n*&lt;#%s&gt;*: %s&quot; % (c[&quot;id&quot;], c[&quot;purpose&quot;][&quot;value&quot;])
    blocks.append({
        &quot;type&quot;: &quot;section&quot;,
        &quot;text&quot;: {
            &quot;type&quot;: &quot;mrkdwn&quot;,
            &quot;text&quot;: summary
        }
    })
    return blocks, text</code></pre>
    <figcaption class="text-center">src-v3/main.py</figcaption>
  </figure>
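<p>Since the channel filtering is plain Python, it can be sanity-checked locally before deploying. The sketch below reimplements the filter from <code>get_recent_channels</code> so it takes the channel list as an argument instead of calling the Slack client; the stub channels are made up, but follow the <code>id</code>, <code>created</code>, and <code>purpose</code> fields used above:</p>

```python
import time

def filter_recent(channels, max_days, now=None):
    """Keep only channels created within the last max_days days."""
    now = time.time() if now is None else now
    max_age_s = max_days * 24 * 60 * 60
    return [c for c in channels if now - c["created"] <= max_age_s]

now = time.time()
stub_channels = [
    {"id": "C01", "created": now - 3600,            # one hour old
     "purpose": {"value": "New project channel"}},
    {"id": "C02", "created": now - 30 * 24 * 3600,  # thirty days old
     "purpose": {"value": "Old channel"}},
]
recent = filter_recent(stub_channels, max_days=7, now=now)
```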
<p>After deploying this with the same command above (just change <code>--source ./src-v2</code> to <code>--source ./src-v3</code>), we can send a pub/sub event to trigger it:</p><figure>
  <pre><code class="language-shell">gcloud pubsub topics publish ${PUBSUB_TOPIC} \
    --project ${PROJECT_ID} \
    --message &#039;{&quot;channel&quot;: &quot;#general&quot;, &quot;max_days&quot;: 7}&#039;</code></pre>
  </figure>
<p>And it posts a message like this:</p><figure data-ratio="auto">
    <img src="https://srijan.ch/media/pages/blog/slackbot-google-cloud-part-1/7e9ecc0ef4-1699621096/screenshot_20221104_235500.png" alt="">
  
    <figcaption class="text-center">
    Recently created channels posted by ChannelBot  </figcaption>
  </figure>
<h3>Create schedule</h3>
<p>Next, we want to schedule this message periodically. For this, we will configure a cron job in Google Cloud Scheduler to send a Pub/Sub event with the required parameters on a schedule.</p> <p>Before we create a schedule, we have to enable the Cloud Scheduler API:</p><figure>
  <pre><code class="language-shell">gcloud services enable --project ${PROJECT_ID} \
    cloudscheduler.googleapis.com</code></pre>
  </figure>
<p>To schedule the Pub/Sub trigger at 16:00 UTC every day:</p><figure>
  <pre><code class="language-shell">gcloud scheduler jobs create pubsub channelbot-job \
    --project ${PROJECT_ID} \
    --location ${REGION} \
    --schedule &quot;0 16 * * *&quot; \
    --time-zone &quot;UTC&quot; \
    --topic ${PUBSUB_TOPIC} \
    --message-body &#039;{&quot;channel&quot;: &quot;#general&quot;, &quot;max_days&quot;: 1}&#039;</code></pre>
  </figure>
<p>After this, a Pub/Sub event should be fired to the <code>channelbot-pubsub</code> topic every day, which should result in a slack message to <code>#general</code> with a list of channels created in the last day.</p><h2>Closing Thoughts</h2>
<p>Full code samples for this can be found in <a href="https://github.com/srijan/gcloud_slackbot" rel="noreferrer">this GitHub repo</a>. I've also included a <code>Makefile</code> with targets split into sections matching the steps in this post.</p> <p>I also plan to follow this up with a part 2, where we will use Slack's slash commands to let the end user of this bot set up the channel and posting frequency of the recent-channels list, and even configure multiple schedules. Please comment below if this is something you would be interested in.</p>]]></content:encoded>
    <comments>https://srijan.ch/slackbot-google-cloud-part-1#comments</comments>
    <slash:comments>0</slash:comments>
  </item><item>
    <title>Erlang: Dialyzer HTML Reports using rebar3</title>
    <description><![CDATA[How I made a custom rebar3 plugin to generate HTML reports for dialyzer warnings]]></description>
    <link>https://srijan.ch/erlang-dialyzer-html-reports-rebar3</link>
    <guid isPermaLink="false">6072d47bb1237c000188be89</guid>
    <category><![CDATA[erlang]]></category>
    <category><![CDATA[development]]></category>
    <dc:creator>Srijan Choudhary</dc:creator>
    <pubDate>Sun, 25 Apr 2021 17:10:00 +0000</pubDate>
    <media:content url="https://srijan.ch/media/pages/blog/erlang-dialyzer-html-reports-rebar3/b5893af741-1699621096/dialyzer-html-report.png" medium="image" />
    <content:encoded><![CDATA[<h2>Introduction</h2>
<p><a href="https://erlang.org/doc/man/dialyzer.html" rel="noreferrer">Dialyzer</a> is a static analysis tool for <a href="https://www.erlang.org/" rel="noreferrer">Erlang</a>
 that identifies software discrepancies, such as definite type errors, 
code that has become dead or unreachable because of programming errors, 
and unnecessary tests, in single Erlang modules or entire (sets of) 
applications.</p> <p>Dialyzer is integrated with <a href="https://github.com/erlang/rebar3" rel="noreferrer">rebar3</a> (a build tool for Erlang), and its default output looks like this:</p><figure data-ratio="auto">
    <img src="https://srijan.ch/media/pages/blog/erlang-dialyzer-html-reports-rebar3/fcdab50184-1699621096/dialyzer-rebar3-default.png" alt="Rebar3 Dialyzer Default Output">
  
    <figcaption class="text-center">
    <code>rebar3 dialyzer</code> output  </figcaption>
  </figure>
<p>This is a good starting point, but it's not very useful in some cases:</p><ol><li>If you have lots of warnings, this output covers several screens, and it becomes difficult to parse through everything.</li><li>If you run this in some sort of continuous integration (like Jenkins), then the console output is not very friendly.</li></ol><p>One way to improve this is to generate an HTML report which can be published/emailed/opened in the browser.</p> <p>So,
 I built a rebar3 plugin that generates a nicely formatted color HTML 
report from the dialyzer output. The plugin can be found <a href="https://hex.pm/packages/rebar3_dialyzer_html" rel="noreferrer">on hex.pm</a>, or <a href="https://github.com/srijan/rebar3_dialyzer_html" rel="noreferrer">on github</a>.</p><h2>Usage</h2>
<p>Make sure you're using rebar3 version <code>3.15</code> or later.</p><ol><li>Add the plugin to your <code>rebar.config</code>:</li></ol><figure>
  <pre><code class="language-erlang">{plugins, [
    %% from hex
    {rebar3_dialyzer_html, &quot;0.2.0&quot;}
    
    %% or, latest from git
    {rebar3_dialyzer_html, {git, &quot;https://github.com/srijan/rebar3_dialyzer_html.git&quot;, {branch, &quot;main&quot;}}}
]}.</code></pre>
    <figcaption class="text-center">rebar.config snippet</figcaption>
  </figure>
<p>2. Select the raw format for the dialyzer warnings file generated by rebar3 (this is a new option available from rebar3 <code>3.15</code>):</p><figure>
  <pre><code class="language-erlang">{dialyzer, [
    {output_format, raw}
]}.</code></pre>
    <figcaption class="text-center">rebar.config snippet</figcaption>
  </figure>
<p>3. Run the <code>dialyzer_html</code> rebar3 command:</p><figure>
  <pre><code class="language-shellsession">$ rebar3 dialyzer_html          
===&gt; Generating Dialyzer HTML Report
===&gt; HTML Report written to _build/default/dialyzer_report.html</code></pre>
  </figure>
<p>Here's how the report looks:</p><figure data-ratio="auto">
    <img src="https://srijan.ch/media/pages/blog/erlang-dialyzer-html-reports-rebar3/b5893af741-1699621096/dialyzer-html-report.png" alt="Dialyzer HTML Report Sample">
  
    <figcaption class="text-center">
    Sample HTML report for dialyzer  </figcaption>
  </figure>
<h2>How I built the plugin</h2>
<h3>rebar3 dialyzer</h3>
<p>The rebar3 built-in dialyzer plugin does the following:</p><ol><li>Runs dialyzer with the options configured in <code>rebar.config</code></li><li>Converts the output to ANSI color format and writes it to the console (it has a custom function for this formatting)</li><li>Converts the output to a basic format (using the built-in <code>dialyzer:format/2</code>) and writes it to a <code>dialyzer_warnings</code> file.</li></ol><p>I wanted to find the easiest way to get a nicely formatted HTML report, ideally without forking the rebar3 project itself.</p> <p>The first thing I needed was a way to save the raw (machine-parseable) dialyzer output to the warnings file instead of the default formatted output. For this, I <a href="https://github.com/erlang/rebar3/issues/2524" rel="noreferrer">submitted a new feature</a> to the rebar3 project, which introduced a new config option to enable this. So, this plugin needs rebar3 version <code>3.15</code> or later.</p><h3>Plugin vs Escript</h3>
<p>Next, to actually parse and output the HTML file, I would need to run some Erlang code. There are two options I considered:</p><ol><li><u>Escript called from Makefile/wrapper</u><br>This option works okay, but we cannot re-use any rebar3 internal function or State. I wanted to use rebar3's own custom function for formatting the dialyzer warnings, so decided to not go with this option.<br></li><li><u>Custom rebar3 plugin</u><br>Doing it this way makes it easy for anyone to use, and I can re-use things already implemented in rebar3 itself. So, I decided to use this option.<br></li></ol><h3>HTML output</h3>
<p>Now, in the custom rebar3 plugin, I needed to convert the ANSI color-coded output given by <code>rebar_dialyzer_format:format_warnings/2</code> into HTML.</p> <p>I considered the following options:</p><ol><li>rebar3 uses the <a href="https://github.com/project-fifo/cf" rel="noreferrer">cf library</a> to convert tagged strings to ANSI color codes. I could use something like dependency injection to replace the <code>cf</code> module with my own module, so that the tagged strings are converted directly to HTML without ever passing through the intermediate ANSI color-coded format.<br><br>This method seemed very hacky, so I decided not to pursue it. But if rebar3 ever makes the dialyzer format interface configurable, I can reevaluate this approach.<br></li><li>Write an Erlang library that converts ANSI codes to HTML tags.<br>There is a library called <a href="https://github.com/stephlow/ansi_to_html" rel="noreferrer">ansi_to_html</a> in Elixir, but I didn't want to pull in a dependency that large.<br>Writing a new Erlang library for this could be a future optimization.<br></li><li>Convert using a JS library after page load. I found a JavaScript library called <a href="https://github.com/drudru/ansi_up" rel="noreferrer">ansi_up</a> that can convert ANSI codes to HTML color tags, or add CSS classes that can be styled as required.<br></li></ol><p>I opted for approach #3 because it was the easiest. I also grouped the warnings by app name so that all warnings for a single app are in one place, and the report includes the number of warnings per app.</p> <p>Also, if the JS library cannot be loaded (for example, due to no internet access or restrictive security headers), the report still shows the basic formatted output using <code>dialyzer:format/2</code>.</p><h2>Future Improvements</h2>
<ol><li>I want to remove the dependency on JavaScript by writing or using a pure Erlang library that can convert the ANSI codes to HTML.</li><li>Ideally, rebar3 itself could separate the dialyzer warning parsing and formatting into different functions and make it possible to override the formatting function, so that any plugin could pass its own formatting function into the dialyzer plugin.</li><li>The plugin could even run <code>git</code> commands in the shell to figure out whether any lines changed in the most recent commit involve a warning, and highlight them in the report. This could be useful for CI reports on pull requests.</li><li>Maybe make the format pluggable so the report can be saved as JSON, XML, or any custom format.</li></ol><hr />
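<p>Improvement #1 amounts to mapping ANSI SGR escape sequences to HTML tags. A rough sketch of the idea, written in Python here for brevity (the actual library would be Erlang, and the color map is a minimal assumed subset of what dialyzer output uses):</p>

```python
import re
from html import escape

# Minimal SGR-code-to-CSS-color map; a real converter would also handle
# bold, background colors, bright variants, and multi-parameter sequences.
ANSI_COLORS = {"31": "red", "32": "green", "33": "orange"}
SGR_RE = re.compile(r"\x1b\[(\d+)m")

def ansi_to_html(text: str) -> str:
    out = []
    open_span = False
    pos = 0
    for m in SGR_RE.finditer(text):
        out.append(escape(text[pos:m.start()]))
        pos = m.end()
        if open_span:           # close the previous color span
            out.append("</span>")
            open_span = False
        color = ANSI_COLORS.get(m.group(1))
        if color:               # code 0 (reset) and unknown codes map to None
            out.append('<span style="color:%s">' % color)
            open_span = True
    out.append(escape(text[pos:]))
    if open_span:
        out.append("</span>")
    return "".join(out)
```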
<p>Let me know in the comments below, or on <a href="https://twitter.com/srijan4" rel="noreferrer">twitter</a>/<a href="https://github.com/srijan/rebar3_dialyzer_html" rel="noreferrer">github</a> if you have any suggestions for this plugin.</p>]]></content:encoded>
    <comments>https://srijan.ch/erlang-dialyzer-html-reports-rebar3#comments</comments>
    <slash:comments>0</slash:comments>
  </item><item>
    <title>Running multiple emacs daemons</title>
    <description><![CDATA[Run multiple emacs daemons for different purposes and set different themes/config based on daemon name]]></description>
    <link>https://srijan.ch/running-multiple-emacs-daemons</link>
    <guid isPermaLink="false">60671113b1237c000188bd2e</guid>
    <category><![CDATA[emacs]]></category>
    <category><![CDATA[development]]></category>
    <dc:creator>Srijan Choudhary</dc:creator>
    <pubDate>Fri, 02 Apr 2021 14:00:00 +0000</pubDate>
    <content:encoded><![CDATA[<p>I have been using <a href="https://www.gnu.org/software/emacs/" rel="noreferrer">Emacs</a> for several years, and these days I'm using it both for writing code and for working with my email (another post on that soon).</p> <p>As commonly suggested, I run Emacs in daemon mode to keep things fast and snappy, with an alias that auto-starts the daemon if it isn't running, and connects to it otherwise:</p><figure>
  <pre><code class="language-shell">alias e=&#039;emacsclient -a &quot;&quot; -c&#039;</code></pre>
    <figcaption class="text-center">Config for single daemon</figcaption>
  </figure>
<p>But, this has some problems:</p><ol><li>The buffers for email and code projects get mixed together</li><li>Restarting the Emacs server for code (for example) kills the open mail buffers as well</li><li>Emacs themes are global – they cannot be set per frame. For code, I prefer a dark theme (most of the time), but for email, a light theme works better for me (especially for HTML email).</li></ol><p>To
 solve this, I searched for a way to run multiple emacs daemons, 
selecting which one to connect to using shell aliases, and automatically
 setting the theme based on the daemon name. Here's my setup to achieve 
this:</p><h3>Custom run_emacs function in zshrc:</h3>
<figure>
  <pre><code class="language-shell">run_emacs() {
  # Daemon name is the first argument; defaults to &quot;default&quot;.
  # (Shorthand for the same: server_name=&quot;${1:-default}&quot;)
  if [ &quot;$1&quot; != &quot;&quot; ];
  then
    server_name=&quot;${1}&quot;
  else
    server_name=&quot;default&quot;
  fi

  # Try to connect first; if no daemon with this name is running,
  # start one and connect again. Remaining arguments (&quot;${@:2}&quot;)
  # go straight to emacsclient.
  if ! emacsclient -s ${server_name} &quot;${@:2}&quot;;
  then
    emacs --daemon=${server_name}
    echo &quot;&gt;&gt; Server should have started. Trying to connect...&quot;
    emacsclient -s ${server_name} &quot;${@:2}&quot;
  fi
}</code></pre>
  </figure>
<p>This function takes an optional argument – the name to be used for the daemon. If not provided, it uses <code>default</code> as the name. It then tries to connect to a running daemon with that name; if none is running, it starts the daemon and then connects to it. It also passes any additional arguments to <code>emacsclient</code>.</p><h3>Custom aliases in zshrc:</h3>
<figure>
  <pre><code class="language-shell"># Create a new frame in the default daemon
alias e=&#039;run_emacs default -n -c&#039;

# Create a new terminal (TTY) frame in the default daemon
alias en=&#039;run_emacs default -t&#039;

# Open a file to edit using sudo
es() {
    e &quot;/sudo:root@localhost:$@&quot;
}

# Open a new frame in the `mail` daemon, and start notmuch in the frame
alias em=&quot;run_emacs mail -n -c -e &#039;(notmuch-hello)&#039;&quot;</code></pre>
  </figure>
<p>The first three commands (two aliases and the <code>es</code> function) use the <code>default</code> daemon. The last one creates a new frame in the <code>mail</code> daemon and also uses <code>emacsclient</code>'s <code>-e</code> flag to start notmuch (the email package I use in Emacs).</p><h3>Emacs config:</h3>
<figure>
  <pre><code class="language-elisp">(cond
 ((string= &quot;mail&quot; (daemonp))
  (setq doom-theme &#039;modus-operandi)
 )
 (t
  (setq doom-theme &#039;modus-vivendi)
 )
)</code></pre>
  </figure>
<p>This checks the name of the daemon passed during 
startup, and sets the doom theme accordingly. The same pattern can be 
used to set any config based on the daemon name.</p> <p>Note that I'm using <a href="https://github.com/hlissner/doom-emacs" rel="noreferrer">doom emacs</a>, but the above method should work with or without any framework for Emacs. Tested with Emacs 27 and 28.</p>]]></content:encoded>
    <comments>https://srijan.ch/running-multiple-emacs-daemons#comments</comments>
    <slash:comments>2</slash:comments>
  </item><item>
    <title>Erlang: find cross-app calls using xref</title>
    <description><![CDATA[Using xref magic to query compiled beam files and find cross-application function calls in Erlang]]></description>
    <link>https://srijan.ch/erlang-find-cross-app-calls-using-xref</link>
    <guid isPermaLink="false">606006e8b1237c000188badf</guid>
    <category><![CDATA[erlang]]></category>
    <category><![CDATA[development]]></category>
    <dc:creator>Srijan Choudhary</dc:creator>
    <pubDate>Sun, 28 Mar 2021 09:05:00 +0000</pubDate>
    <media:content url="https://srijan.ch/media/pages/blog/erlang-find-cross-app-calls-using-xref/5390618c89-1699621096/omar-flores-moo6k3raiwe-unsplash.jpg" medium="image" />
    <content:encoded><![CDATA[<figure data-ratio="auto">
    <img src="https://srijan.ch/media/pages/blog/erlang-find-cross-app-calls-using-xref/5390618c89-1699621096/omar-flores-moo6k3raiwe-unsplash.jpg" alt="Erlang: find cross-app calls using xref">
  
  </figure>
<p>At work, we use the <a href="https://adoptingerlang.org/docs/development/umbrella_projects/" rel="noreferrer">multi-app project pattern</a> to organize our codebase. This lets us track everything in a single repository but still keep things isolated.</p> <p>For isolation, we wanted to restrict apps to only be able to call the public interfaces of other apps (similar to <a href="https://en.wikipedia.org/wiki/Facade_pattern" rel="noreferrer">facade pattern</a>).
 However, since everything in Erlang lives in a single global namespace, nothing prevents code in one app from calling the (exported) functions of another app.</p> <p>The next best solution: detect such calls and raise warnings during code review/CI.</p> <p><a href="https://erlang.org/doc/apps/tools/xref_chapter.html" rel="noreferrer">Xref</a> to the rescue:</p><blockquote>
  Xref is a cross reference tool that can be used for finding dependencies between functions, modules, applications and releases.  </blockquote>
<p>Xref
 includes some predefined analysis patterns that perform some common 
tasks like searching for undefined functions, deprecated function calls,
 unused exported functions, etc.</p> <p>How it works: when the <a href="https://erlang.org/doc/man/xref.html#xref_server" rel="noreferrer">xref server</a> is started and some modules/applications/releases are added for analysis, it builds a <strong>Call Graph</strong>: a directed graph data structure containing the calls between functions, modules, applications or releases. It also creates an <strong>Inter Call Graph</strong> which holds information about indirect calls (chains of calls). It exposes a very powerful <a href="https://erlang.org/doc/man/xref.html#query" rel="noreferrer">query language</a>, which can be used to extract any information we want from the above graph data structures.</p> <p>To demonstrate this, I created a sample multi-app repository: <a href="https://github.com/srijan/library_sample" rel="noreferrer">library_sample</a>. There are some cross-app function calls in this code that we want to detect.</p> <p>This repo is supposed to represent the functionality of a physical library. It has four apps: <code>library</code>, <code>library_api</code>, <code>library_catalog</code>, and <code>library_inventory</code>. <code>library_catalog</code> has metadata about the books in the library, <code>library_inventory</code> has information about the availability of books, return dates, etc., <code>library_api</code> has HTTP handlers which call the above, and <code>library</code> is the main app which brings it all together.</p> <p>Let’s say we want <code>library_api</code> to be able to call <code>library_catalog</code> and <code>library_inventory</code> functions, but catalog and inventory not to be able to call each other directly.</p> <p>First, we clone the repo and run rebar3 shell:</p><figure>
  <pre><code class="language-shellsession">$ git clone https://github.com/srijan/library_sample
Cloning into &#039;library_sample&#039;...
remote: Enumerating objects: 29, done.
remote: Counting objects: 100% (29/29), done.
remote: Compressing objects: 100% (19/19), done.
remote: Total 29 (delta 3), reused 29 (delta 3), pack-reused 0
Unpacking objects: 100% (29/29), 910.62 KiB | 2.53 MiB/s, done.

$ cd library_sample

$ ./rebar3 shell
===&gt; Verifying dependencies...
===&gt; Analyzing applications...
===&gt; Compiling library_inventory
===&gt; Compiling library_catalog
===&gt; Compiling library
===&gt; Compiling library_api
Erlang/OTP 23 [erts-11.1.7] [source] [64-bit] [smp:8:8] [ds:8:8:10] [async-threads:1] [hipe]

Eshell V11.1.7  (abort with ^G)
1&gt;</code></pre>
  </figure>
<p>Then, we start xref and add our build directory for analysis:</p><figure>
  <pre><code class="language-erlang">1&gt; xref:start(s).
{ok,&lt;0.185.0&gt;}

2&gt; xref:add_directory(s, &quot;_build/default/lib&quot;, [{recurse, true}]).
{ok,[library_api,library_app,library_catalog,
     library_inventory,library_sample_app,library_sample_sup,
     library_sup]}</code></pre>
  </figure>
<p>Next, we use <code>xref:q/2</code> to query the constructed call graph:</p><figure>
  <pre><code class="language-erlang">3&gt; xref:q(s, &quot;E | library_inventory || library_catalog&quot;).
{ok,[]}

4&gt; xref:q(s, &quot;E | library_catalog || library_inventory&quot;).
{ok,[{{library_catalog,get_by_id,1},
      {library_inventory,get_available_copies,1}}]}</code></pre>
  </figure>
<p>This means that there are no direct calls from the <code>library_inventory</code> application to the <code>library_catalog</code> application. But, there is a direct call from <code>library_catalog:get_by_id/1</code> to <code>library_inventory:get_available_copies/1</code>.</p> <p>The query <code>E | library_catalog || library_inventory</code> can be read as:</p><ul><li><code>E</code> = All Call Graph Edges</li><li><code>|</code> = The subset of calls <strong>from</strong> any of the vertices. So <code>| library_catalog</code> creates a subset which contains calls from the <code>library_catalog</code> app.</li><li><code>||</code> = The subset of calls <strong>to</strong> any of the vertices. So, <code>|| library_inventory</code> further creates a subset of the previous subset which contains calls to the <code>library_inventory</code> app.</li></ul><p>To get both direct and indirect calls, <code>closure E</code> has to be used:</p><figure>
  <pre><code class="language-erlang">5&gt; xref:q(s, &quot;closure E | library_catalog || library_inventory&quot;).
{ok,[{{library_catalog,get_by_id,1},
      {library_inventory,get_all,0}},
     {{library_catalog,get_by_id,1},
      {library_inventory,get_available_copies,1}}]}</code></pre>
  </figure>
<p>This tells us that there is an indirect call from <code>library_catalog:get_by_id/1</code> to <code>library_inventory:get_all/0</code>.</p> <p>The query language is very powerful, and there are more interesting examples in the <a href="https://erlang.org/doc/apps/tools/xref_chapter.html#expressions" rel="noreferrer">xref user’s guide</a>.</p> <p>But this only runs the required queries manually in the Erlang shell. We want to be able to run them in continuous integration. Luckily, rebar3 comes with a way to <a href="https://rebar3.readme.io/docs/configuration#xref" rel="noreferrer">specify custom xref queries</a> to run as part of <code>./rebar3 xref</code>, raising an error if the results don’t match the expected values.</p> <p>Here’s the xref section from my <code>rebar.config</code>:</p><figure>
  <pre><code class="language-erlang">{xref_queries, [
                {&quot;closure E | library_catalog || library_inventory&quot;, []},
                {&quot;closure E | library_inventory || library_catalog&quot;, []}
               ]}.</code></pre>
    <figcaption class="text-center">rebar.config</figcaption>
  </figure>
<p>This performs the two queries I want and matches them against the target value of <code>[]</code>. Sample output:</p><figure>
  <pre><code class="language-shellsession">$ ./rebar3 xref
===&gt; Verifying dependencies...
===&gt; Analyzing applications...
===&gt; Compiling library_inventory
===&gt; Compiling library_catalog
===&gt; Compiling library
===&gt; Compiling library_api
===&gt; Running cross reference analysis...
===&gt; Query closure E | library_catalog || library_inventory
 answer []
 did not match [{{library_catalog,get_by_id,1},{library_inventory,get_all,0}},
                {{library_catalog,get_by_id,1},
                 {library_inventory,get_available_copies,1}}]</code></pre>
  </figure>
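<p>Hooking this into a pipeline is then just a matter of exit-status propagation: when a query result doesn't match, <code>./rebar3 xref</code> errors out, which should fail the build. A minimal gating sketch (here <code>true</code> stands in for the rebar3 invocation so the logic is self-contained):</p>

```shell
# Fail a CI job when the xref check fails; in a real pipeline the
# command under test would be `./rebar3 xref`.
check() {
  if "$@"; then
    echo "xref check passed"
  else
    echo "xref check failed: cross-app calls detected" >&2
    return 1
  fi
}

check true   # stand-in for: check ./rebar3 xref
```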
<p>So, now this is ready for automation.</p>]]></content:encoded>
    <comments>https://srijan.ch/erlang-find-cross-app-calls-using-xref#comments</comments>
    <slash:comments>1</slash:comments>
  </item><item>
    <title>Clean boot in erlang relx release</title>
    <description><![CDATA[Booting Erlang release in clean or safe mode]]></description>
    <link>https://srijan.ch/clean-boot-in-erlang-relx-release</link>
    <guid isPermaLink="false">6030d3dab5e0920001f557ca</guid>
    <category><![CDATA[erlang]]></category>
    <category><![CDATA[development]]></category>
    <dc:creator>Srijan Choudhary</dc:creator>
    <pubDate>Fri, 15 Apr 2016 04:50:00 +0000</pubDate>
    <content:encoded><![CDATA[<p>We use relx to release our Erlang applications, and we faced a problem:</p> <p>Our application was crashing at boot, so we could not even open a remote shell from which to run any corrective functions.</p> <p>One way to solve this (which we had been using until now) is to also install Erlang on the machine that has the release, and open an Erlang shell with the correct library path set.</p> <p>But the release generated by relx provides another mechanism which does not need Erlang installed.</p> <p>The solution: Erlang boot scripts.</p> <p>Detailed information about boot scripts can be found at: <a href="http://erlang.org/doc/system_principles/system_principles.html#id59026">http://erlang.org/doc/system_principles/system_principles.html#id59026</a></p> <p>relx ships a <code>start_clean.boot</code> boot script with the release, which loads the code for, and starts, the <code>kernel</code> and <code>stdlib</code> applications.</p> <p>Sample command:</p><figure>
  <pre><code class="language-shellsession">${RELEASE_DIR}/myapplication/bin/myapplication console_boot start_clean</code></pre>
  </figure>
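<p>The boot scripts shipped with a release live in its <code>releases/&lt;version&gt;</code> directory. A self-contained illustration (the layout below is mocked up to mirror a typical relx release; against a real release you would point <code>find</code> at <code>${RELEASE_DIR}</code> instead):</p>

```shell
# Mock up a typical relx release layout, then list its boot scripts.
rel=$(mktemp -d)
mkdir -p "$rel/releases/0.1.0"
touch "$rel/releases/0.1.0/start.boot" \
      "$rel/releases/0.1.0/start_clean.boot"

# In a real release: find "${RELEASE_DIR}/myapplication/releases" -name '*.boot'
find "$rel/releases" -name '*.boot' | sort
rm -rf "$rel"
```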
]]></content:encoded>
    <comments>https://srijan.ch/clean-boot-in-erlang-relx-release#comments</comments>
    <slash:comments>0</slash:comments>
  </item><item>
    <title>Slack bot for Phabricator Notifications</title>
    <description><![CDATA[Setting up a slack bot for phabricator]]></description>
    <link>https://srijan.ch/slack-bot-for-phabricator</link>
    <guid isPermaLink="false">6030d3dab5e0920001f557d4</guid>
    <category><![CDATA[development]]></category>
    <dc:creator>Srijan Choudhary</dc:creator>
    <pubDate>Tue, 04 Aug 2015 18:20:00 +0000</pubDate>
    <content:encoded><![CDATA[<p><strong>NOTICE:</strong> The solution mentioned in this post no longer works because Slack has closed down the IRC gateway. I recommend using the <a href="https://github.com/etcinit/phabulous" rel="noreferrer">phabulous</a> project for this now.</p><p><a href="http://phabricator.org/" rel="noreferrer">Phabricator</a> is a collection of open source web <a href="https://phacility.com/phabricator/" rel="noreferrer">applications useful for software development</a> built on a single platform. We have been using Phabricator tools for about a month now, and it seems great. The best thing is: all the different components (code review, task/bug tracking, project management, repo browsing) are well-integrated with one another, and work really well together.</p><p>Except one thing, of course, and that is its chat app (called Conpherence). This is what they say about it themselves:</p><blockquote>
  Like Slack, but nowhere as good.
Seriously, Slack is way better.  </blockquote>
<p>Well, we use <a href="https://slack.com/">Slack</a> ourselves in our organization, and I tried to find out a way to integrate phabricator with slack.</p> <p>My use case was something like this:</p><ol><li>There are project specific channels (rooms?) in our slack</li><li>Important updates related to a project should be auto-posted to this channel</li><li>Discussions in this channel regarding the project should be <strong>enhanced</strong> by auto-linking of task ids or code review ids mentioned, to their URLs.</li></ol><p>I found a few different ways:</p><h3>Phabricator bots on github</h3>
<p>There are a couple of projects on github which integrate phabricator with slack:</p><ul><li><a href="https://github.com/etcinit/phabricator-slack-feed">https://github.com/etcinit/phabricator-slack-feed</a></li><li><a href="https://github.com/psjay/ph-slack">https://github.com/psjay/ph-slack</a></li></ul><p>Both of these are good solutions for point 2 above, but don't 
(currently) solve point 3. A way to go forward would be to contribute 
new features to these projects.</p><h3>Phabricator's in-built chatbot</h3>
<p>Phabricator already has the concept of a <a href="https://secure.phabricator.com/book/phabdev/article/chatbot/">chatbot</a> which connects to IRC.</p> <p>This bot covers both points 2 and 3 from my requirement, and also has
 some extra features, like recording chatlogs which can be browsed in 
the Phabricator web interface, which can in turn be referred to in 
comments for tasks, etc.</p> <p>Slack has an <a href="https://slack.zendesk.com/hc/en-us/articles/201727913-Connecting-to-Slack-over-IRC-and-XMPP">IRC gateway</a> which can be used for this purpose.</p> <p>But the phabdev article on the chatbot has an ominous note:</p><blockquote>
  <p>NOTE: The chat bot is somewhat experimental and not very mature.</p>  </blockquote>
<p>Digging a little further, I found this task: <a href="https://secure.phabricator.com/T7829">T7829: PhabricatorBotFeedNotificationHandler is completely broken and unusable</a>, which has one piece of bad news in the comments:</p><blockquote>
  <p>@epriestley: Bot stuff is generally a very low priority and I don't 
expect to review or merge any of it for a long time (roughly, around the
 Bot/API iteration of Conpherence, which is months/years away).</p>  </blockquote>
<p>To make it work, <a href="https://secure.phabricator.com/p/staticshock/">@staticshock</a> posted some <a href="https://secure.phabricator.com/T7829#120246">fixes</a>.</p> <p>I made some changes of my own to make the bot filter the feed by 
project, so that one channel gets updates for only one or some of the 
projects.</p> <p>My final diff can be found here: <a href="https://secure.phabricator.com/P1839">https://secure.phabricator.com/P1839</a>.</p> <p>And, my sample bot config is shared below:</p><figure>
  <pre><code class="language-json">{
  &quot;server&quot; : &quot;organization.irc.slack.com&quot;,
  &quot;port&quot; : 6667,
  &quot;nick&quot; : &quot;phabot&quot;,
  &quot;pass&quot;: &quot;random-password&quot;,
  &quot;ssl&quot;: true,
  &quot;join&quot; : [
    &quot;#project-updates&quot;
  ],
  &quot;handlers&quot; : [
    &quot;PhabricatorBotObjectNameHandler&quot;,
    &quot;PhabricatorBotLogHandler&quot;,
    &quot;PhabricatorBotFeedNotificationHandler&quot;
  ],

  &quot;conduit.uri&quot; : &quot;http://phab.example.com&quot;,
  &quot;conduit.user&quot; : &quot;phabot&quot;,
  &quot;conduit.token&quot; : &quot;api-token&quot;,

  &quot;macro.size&quot; : 48,
  &quot;macro.aspect&quot; : 0.66,

  &quot;notification.channels&quot; : [&quot;#project-updates&quot;],
  &quot;notification.types&quot;: [&quot;task&quot;],
  &quot;notification.projects&quot;: [&quot;PHID-PROJ-ut55kdadskptl4he5iw39&quot;],
  &quot;notification.verbosity&quot;: 0
}</code></pre>
  </figure>
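<p>Since the bot reads this file as strict JSON, it's worth validating it before deploying (a stray trailing comma in the <code>join</code> list, for instance, is enough to break parsing). A quick check, assuming <code>python3</code> is available on the machine:</p>

```shell
# Validate a bot config as strict JSON before deploying it.
# The config below is a trimmed stand-in for the real file.
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
{
  "server": "organization.irc.slack.com",
  "port": 6667,
  "nick": "phabot"
}
EOF

python3 -m json.tool "$cfg" > /dev/null && echo "config OK"
rm -f "$cfg"
```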
<p>We have to pass a list of project PHIDs in <code>notification.projects</code>.</p><h3>The way forward</h3>
<p>So, the version shared above works fine for me, for now. Currently, it does not support connecting to multiple channels, per-channel configuration, detecting projects for things other than tasks, or entering a project name instead of a PHID in the config file. These are some things I would want to add to my patch in the future.</p> <p>Another good solution to all this would be to extend the chatbot code in Phabricator in a generic way, so it can support bots for different services like Slack, Telegram, HipChat, etc.</p>]]></content:encoded>
    <comments>https://srijan.ch/slack-bot-for-phabricator#comments</comments>
    <slash:comments>0</slash:comments>
  </item><item>
    <title>Notes on atom feeds</title>
    <description><![CDATA[My notes on Atom feeds]]></description>
    <link>https://srijan.ch/notes-on-atom-feeds</link>
    <guid isPermaLink="false">6030d3dab5e0920001f557c9</guid>
    <category><![CDATA[development]]></category>
    <category><![CDATA[feeds]]></category>
    <dc:creator>Srijan Choudhary</dc:creator>
    <pubDate>Sun, 21 Sep 2014 00:00:00 +0000</pubDate>
    <content:encoded><![CDATA[<p>For implementing feeds for the <a href="http://posativ.org/isso/">Isso commenting server</a>, I was researching Atom feeds, and thought I would jot down some notes on the topic.</p><h4>RSS2 vs Atom</h4>
<p>Both are mostly accepted everywhere nowadays, and it <a href="http://wordpress.stackexchange.com/questions/2922/should-i-provide-rss-or-atom-feeds">seems like a good idea to provide both</a>. This particular post only talks about Atom feeds.</p><h4>Nested Entries</h4>
<p>Comments are threaded, <a href="http://blog.codinghorror.com/web-discussions-flat-by-design/">at least to one level deep</a>,
 but Atom does not allow nested entries. So, for the feed page for a 
post, we have two choices: listing all comments, or just top level 
comments. If we have a feed page for each top level comment, then that 
would be a flat list of all replies to the comment.</p><h4>Feed URI</h4>
<p>Every Atom entry must have a unique ID. <a href="http://web.archive.org/web/20110514113830/http://diveintomark.org/archives/2004/05/28/howto-atom-id">This page</a> has some interesting ways to generate the ID. I think the best way is to generate a <a href="http://en.wikipedia.org/wiki/Tag_URI">tag URI</a> at the time of comment creation, store it, and use it forever for that resource.</p><h4>Reduce load/bandwidth by using <code>If-None-Match</code></h4>
<p>If we give out <a href="http://en.wikipedia.org/wiki/HTTP_ETag">ETags</a>
 with the feeds, then a client can do conditional requests, for which 
the server only sends a full response if something has changed.</p>]]></content:encoded>
    <comments>https://srijan.ch/notes-on-atom-feeds#comments</comments>
    <slash:comments>0</slash:comments>
  </item><item>
    <title>Trying Emacs</title>
    <description><![CDATA[Bare bones emacs configuration from when I first started using Emacs]]></description>
    <link>https://srijan.ch/trying-emacs</link>
    <guid isPermaLink="false">6030d3dab5e0920001f557d8</guid>
    <category><![CDATA[emacs]]></category>
    <category><![CDATA[development]]></category>
    <dc:creator>Srijan Choudhary</dc:creator>
    <pubDate>Fri, 16 Aug 2013 00:00:00 +0000</pubDate>
    <content:encoded><![CDATA[<p>I have been using <a href="http://www.vim.org/">Vim</a> as my text editor for the last few years, and have been very happy with it. But lately, some features of <a href="http://www.gnu.org/software/emacs/">Emacs</a> have got me interested (especially <a href="http://orgmode.org/">org-mode</a>),
 and I wanted to try it out. After all, I won't know the difference 
until I actually try it, and opinions on text editors vary widely on the
 internet.</p> <p>So, I decided to give it a try. First I went through the built-in 
Emacs Tutorial, and it seemed easy enough. I got used to the basic 
commands fairly quickly. I guess the real benefits will start to show a 
little later, when I try to optimize some ways of doing things.</p> <p>For now, I just wanted to do some basic configuration so that I could
 start using emacs right now. So, I did the following changes (scroll to
 the bottom of this page for the full <code>init.el</code> file):</p><ul><li><p>Hide the menu, tool, and scroll bars</p></li><li><p>Add line numbers</p></li><li><p>Hide splash screen and banner</p></li><li><p>Setup <a href="http://marmalade-repo.org/">Marmalade</a><br />
Marmalade is a package archive for emacs, which makes it easier to install non-official packages.</p></li><li><p>Maximize emacs window on startup<br />
My emacs was not starting up maximized, and I did not want to maximize it manually every time I started it. I found <a href="http://www.emacswiki.org/emacs/FullScreen">this page</a> addressing this issue, and tried out one of the <a href="http://www.emacswiki.org/emacs/FullScreen#toc20">solutions for linux</a>, and it worked great.</p></li></ul><p>For now, it all looks good, and I can start using it with only this small configuration.</p> <p>For example, for writing this post, I installed <a href="http://jblevins.org/projects/markdown-mode/">markdown-mode</a> using marmalade, and I got syntax highlighting and stuff.</p> <p>I will keep using this, and adding to my setup as required, for a few
 weeks, and then evaluate whether I should switch completely.</p><h3>Complete ~/.emacs.d/init.el file:</h3>
<figure>
  <pre><code class="language-elisp">; init.el

; Remove GUI extras
(menu-bar-mode -1)
(tool-bar-mode -1)
(scroll-bar-mode -1)

; Add line numbers
(global-linum-mode 1)

; Hide splash screen and banner
(setq
 inhibit-startup-message t
 inhibit-startup-echo-area-message t)

; Indent automatically on RET
(define-key global-map (kbd &quot;RET&quot;) &#039;newline-and-indent)

; Set up marmalade
(require &#039;package)
(add-to-list &#039;package-archives 
    &#039;(&quot;marmalade&quot; .
      &quot;http://marmalade-repo.org/packages/&quot;))
(package-initialize)

; Make window maximized
(shell-command &quot;wmctrl -r :ACTIVE: -btoggle,maximized_vert,maximized_horz&quot;)</code></pre>
  </figure>
]]></content:encoded>
    <comments>https://srijan.ch/trying-emacs#comments</comments>
    <slash:comments>0</slash:comments>
  </item><item>
    <title>Speeding up compilation times for Libreoffice / C++ projects</title>
    <description><![CDATA[Faster compile times for libreoffice (and other C/C++ projects)]]></description>
    <link>https://srijan.ch/speeding-up-compiles</link>
    <guid isPermaLink="false">6030d3dab5e0920001f557d6</guid>
    <category><![CDATA[development]]></category>
    <dc:creator>Srijan Choudhary</dc:creator>
    <pubDate>Wed, 14 Aug 2013 00:00:00 +0000</pubDate>
    <content:encoded><![CDATA[<p>I got interested in <a href="http://www.libreoffice.org/">LibreOffice</a> a few days ago, and wanted to contribute. I wanted to see how a large project is run, and the <a href="https://wiki.documentfoundation.org/Easy_Hacks">Easy Hacks</a> section looked easy enough to begin with.</p> <p>But there was one problem: LibreOffice is huge, and takes a long time to compile (especially the first time). It took ~40 minutes to build on the best workstation I have access to (a 24-core Intel server). It would take more than a day to build on my laptop, and I wanted to be able to build and iterate on my laptop.</p> <p>The <a href="https://wiki.documentfoundation.org/Development/How_to_build">How to Build</a> wiki page had a few pointers, and I decided to look into them.</p><h3><a href="http://ccache.samba.org/">CCache</a></h3>
<p>As noted on their website, ccache is a compiler cache: it speeds up compilation by caching previous compilation results and reusing them on recompilation. This won't decrease the first compile time (in fact, it might increase it slightly), but subsequent compilations will be faster.</p> <p>To use ccache, I made an exports file (see below) which I source before doing any LibreOffice related work. Programs like <a href="http://swapoff.org/ondir.html">ondir</a> can help automate this. I decided on a max cache size of 8GB, and set it with:</p><figure>
  <pre><code class="language-shellsession">$ ccache --max-size 8G</code></pre>
  </figure>
<h3><a href="https://github.com/icecc/icecream">Icecream</a></h3>
<p>Icecream enables distributing the compilation load to multiple machines, like <a href="https://code.google.com/p/distcc/">distcc</a>. I decided to go with icecream because support for it is built into LibreOffice's autogen.sh.</p> <p>Using icecream turned out to be as simple as installing and starting services on the build machines, doing <code>./autogen.sh --enable-icecream</code>, followed by <code>make</code>. For projects that don't have such icecream flags, it's enough to add icecream's bin directory to the beginning of <code>$PATH</code>, and everything works.</p> <p>Icecream can do a distributed build even if the machines in the cluster are of different types. <a href="https://github.com/icecc/icecream#using-icecream-in-heterogeneous-environments">This section of their readme</a> gives more information about that.</p> <p>Building LibreOffice on my laptop using icecream took about 50 minutes (for a clean build).</p><h3>My exports.sh file</h3>
<figure>
  <pre><code class="language-shell">export CCACHE_DIR=/mnt/archextra/libreoffice/ccache
export CCACHE_COMPRESS=1
export ICECC_VERSION=/mnt/archextra/libreoffice/i386.tar.gz</code></pre>
  </figure>
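<p>The PATH-prepending trick works because the shell resolves whichever matching executable appears first on <code>$PATH</code>. A self-contained illustration with a stand-in compiler shim (the shim merely echoes its arguments; icecream's real wrappers forward the job to the distributed scheduler):</p>

```shell
# Demonstrate executable shadowing via PATH: a shim `cc` placed in a
# directory prepended to PATH is picked up instead of the system cc.
shimdir=$(mktemp -d)
cat > "$shimdir/cc" <<'EOF'
#!/bin/sh
echo "shim cc invoked with: $@"
EOF
chmod +x "$shimdir/cc"

# The shim wins the lookup; demo.c never needs to exist.
PATH="$shimdir:$PATH" cc -o demo demo.c
rm -rf "$shimdir"
```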
]]></content:encoded>
    <comments>https://srijan.ch/speeding-up-compiles#comments</comments>
    <slash:comments>0</slash:comments>
  </item><item>
    <title>Basic Implementation of A* in Erlang</title>
    <description><![CDATA[Implementing the path finding algorithm A* in Erlang]]></description>
    <link>https://srijan.ch/basic-implementation-of-a-in-erlang</link>
    <guid isPermaLink="false">6030d3dab5e0920001f557c8</guid>
    <category><![CDATA[erlang]]></category>
    <category><![CDATA[development]]></category>
    <dc:creator>Srijan Choudhary</dc:creator>
    <pubDate>Sat, 03 Aug 2013 00:00:00 +0000</pubDate>
    <content:encoded><![CDATA[<p>Recently, I had to write some path finding algorithms in Erlang. The first version I chose was A*. But there is no easy way to implement A* in a distributed way, so this is the simplest implementation possible. I may rewrite it later if I find a better way.</p> <p>This code is mostly a modified version of <a href="http://stevegilham.blogspot.in/2008/10/first-refactoring-of-star-in-erlang.html">this one</a>.</p> <p>The code <a href="https://gist.github.com/srijan/6142366#file-astar-erl">hosted on gist</a> follows below, followed by some notes.</p><figure>
  <pre><code class="language-erlang">-module(astar).

-type cnode() :: {integer(), integer()}.

-define(MINX, 0).
-define(MINY, 0).
-define(MAXX, 10).
-define(MAXY, 10).

-export([
         astar/2,
         neighbour_nodes/2
        ]).

%% @doc Performs A* for finding a path from `Start&#039; node to `Goal&#039; node
-spec astar(cnode(), cnode()) -&gt; list(cnode()) | failure.
astar(Start, Goal) -&gt;
    ClosedSet = sets:new(),
    OpenSet   = sets:add_element(Start, sets:new()),

    Fscore    = dict:store(Start, h_score(Start, Goal), dict:new()),
    Gscore    = dict:store(Start, 0, dict:new()),

    CameFrom  = dict:store(Start, none, dict:new()),

    astar_step(Goal, ClosedSet, OpenSet, Fscore, Gscore, CameFrom).

%% @doc Performs a step of A*.
%% Takes the best element from `OpenSet&#039;, evaluates neighbours, updates scores, etc..
-spec astar_step(cnode(), set(), set(), dict(), dict(), dict()) -&gt; list(cnode()) | failure.
astar_step(Goal, ClosedSet, OpenSet, Fscore, Gscore, CameFrom) -&gt;
    case sets:size(OpenSet) of
        0 -&gt;
            failure;
        _ -&gt;
            BestStep = best_step(sets:to_list(OpenSet), Fscore, none, infinity),
            if
                Goal == BestStep -&gt;
                    lists:reverse(reconstruct_path(CameFrom, BestStep));
                true -&gt;
                    Parent     = dict:fetch(BestStep, CameFrom),
                    NextOpen   = sets:del_element(BestStep, OpenSet),
                    NextClosed = sets:add_element(BestStep, ClosedSet),
                    Neighbours = neighbour_nodes(BestStep, Parent),

                    {NewOpen, NewF, NewG, NewFrom} = scan(Goal, BestStep, Neighbours, NextOpen, NextClosed, Fscore, Gscore, CameFrom),
                    astar_step(Goal, NextClosed, NewOpen, NewF, NewG, NewFrom)
            end
    end.

%% @doc Returns the heuristic score from `Current&#039; node to `Goal&#039; node
-spec h_score(Current :: cnode(), Goal :: cnode()) -&gt; Hscore :: number().
h_score(Current, Goal) -&gt;
    dist_between(Current, Goal).

%% @doc Returns the distance from `Current&#039; node to `Goal&#039; node
-spec dist_between(cnode(), cnode()) -&gt; Distance :: number().
dist_between(Current, Goal) -&gt;
    {X1, Y1} = Current,
    {X2, Y2} = Goal,
    abs((X2-X1)) + abs((Y2-Y1)).

%% @doc Returns the best next step from `OpenSetAsList&#039;
%% TODO: May be optimized by making OpenSet an ordered set.
-spec best_step(OpenSetAsList :: list(cnode()), Fscore :: dict(), BestNodeTillNow :: cnode() | none, BestCostTillNow :: number() | infinity) -&gt; cnode().
best_step([H|Open], Score, none, infinity) -&gt;
    V = dict:fetch(H, Score),
    best_step(Open, Score, H, V);

best_step([], _Score, Best, _BestValue) -&gt;
    Best;

best_step([H|Open], Score, Best, BestValue) -&gt;
    Value = dict:fetch(H, Score),
    case Value &lt; BestValue of
        true -&gt;
            best_step(Open, Score, H, Value);
        false -&gt;
            best_step(Open, Score, Best, BestValue)
    end.

%% @doc Returns the neighbour nodes of `Node&#039;, and excluding its `Parent&#039;.
-spec neighbour_nodes(cnode(), cnode() | none) -&gt; list(cnode()).
neighbour_nodes(Node, Parent) -&gt;
    {X, Y} = Node,
    [
     {XX, YY} ||
     {XX, YY} &lt;- [{X-1, Y}, {X, Y-1}, {X+1, Y}, {X, Y+1}],
     {XX, YY} =/= Parent,
     XX &gt;= ?MINX,
     YY &gt;= ?MINY,
     XX =&lt; ?MAXX,
     YY =&lt; ?MAXY
    ].

%% @doc Scans the `Neighbours&#039; of `BestStep&#039;, and adds/updates the Scores and CameFrom dicts accordingly.
-spec scan(
        Goal :: cnode(),
        BestStep :: cnode(),
        Neighbours :: list(cnode()),
        NextOpen :: set(),
        NextClosed :: set(),
        Fscore :: dict(),
        Gscore :: dict(),
        CameFrom :: dict()
       ) -&gt;
    {NewOpen :: set(), NewF :: dict(), NewG :: dict(), NewFrom :: dict()}.
scan(_Goal, _X, [], Open, _Closed, F, G, From) -&gt;
    {Open, F, G, From};
scan(Goal, X, [Y|N], Open, Closed, F, G, From) -&gt;
    case sets:is_element(Y, Closed) of
        true -&gt;
            scan(Goal, X, N, Open, Closed, F, G, From);
        false -&gt;
            G0 = dict:fetch(X, G),
            TrialG = G0 + dist_between(X, Y),
            case sets:is_element(Y, Open) of
                true -&gt;
                    OldG = dict:fetch(Y, G),
                    case TrialG &lt; OldG of
                        true -&gt;
                            NewFrom = dict:store(Y, X, From),
                            NewG    = dict:store(Y, TrialG, G),
                            NewF    = dict:store(Y, TrialG + h_score(Y, Goal), F), % Estimated total distance from start to goal through y.
                            scan(Goal, X, N, Open, Closed, NewF, NewG, NewFrom);
                        false -&gt;
                            scan(Goal, X, N, Open, Closed, F, G, From)
                    end;
                false -&gt;
                    NewOpen = sets:add_element(Y, Open),
                    NewFrom = dict:store(Y, X, From),
                    NewG    = dict:store(Y, TrialG, G),
                    NewF    = dict:store(Y, TrialG + h_score(Y, Goal), F), % Estimated total distance from start to goal through y.
                    scan(Goal, X, N, NewOpen, Closed, NewF, NewG, NewFrom)
            end
    end.

%% @doc Reconstructs the calculated path using the `CameFrom&#039; dict
-spec reconstruct_path(dict(), cnode()) -&gt; list(cnode()).
reconstruct_path(CameFrom, Node) -&gt;
    case dict:fetch(Node, CameFrom) of
        none -&gt;
            [Node];
        Value -&gt;
            [Node | reconstruct_path(CameFrom, Value)]
    end.</code></pre>
  </figure>
<h3>Notes</h3>
<ul><li><p>The macros <code>MINX</code>, <code>MINY</code>, <code>MAXX</code>, and <code>MAXY</code> can be modified to change the size of the map, and the function <code>neighbour_nodes/2</code> can be modified to add obstacles.</p></li><li><p>To test, enter the following in the Erlang shell:</p></li></ul><figure>
  <pre><code class="language-erlang">c(astar).
astar:astar({1, 1}, {10, 10}).</code></pre>
  </figure>
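<p>As a sketch of the obstacle idea from the first note above (the <code>obstacle/1</code> helper and the blocked coordinates are hypothetical, not part of the gist):</p><figure>
  <pre><code class="language-erlang">%% Hypothetical obstacle check; returns true for blocked coordinates.
obstacle({5, 5}) -&gt; true;
obstacle({5, 6}) -&gt; true;
obstacle(_)      -&gt; false.

%% neighbour_nodes/2 with an extra filter to skip blocked cells.
neighbour_nodes(Node, Parent) -&gt;
    {X, Y} = Node,
    [
     {XX, YY} ||
     {XX, YY} &lt;- [{X-1, Y}, {X, Y-1}, {X+1, Y}, {X, Y+1}],
     {XX, YY} =/= Parent,
     not obstacle({XX, YY}),
     XX &gt;= ?MINX,
     YY &gt;= ?MINY,
     XX =&lt; ?MAXX,
     YY =&lt; ?MAXY
    ].</code></pre>
  </figure>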
<ul><li><p>The <code>cnode()</code> structure represents a coordinate. To use some other structure, the functions <code>neighbour_nodes/2</code>, <code>h_score/2</code>, and <code>dist_between/2</code> have to be modified accordingly.</p></li><li><p>The current heuristic does not penalize turns, so the resulting path tends to follow a diagonal-looking shape. To correct this, either diagonal movements can be allowed (by modifying the neighbours function), or turning can be penalized in the heuristic function (which would require tracking the current direction).</p></li></ul>]]></content:encoded>
    <comments>https://srijan.ch/basic-implementation-of-a-in-erlang#comments</comments>
    <slash:comments>0</slash:comments>
  </item><item>
    <title>Erlang Profiling Tips</title>
    <description><![CDATA[Some erlang profiling tips / tools I've come across]]></description>
    <link>https://srijan.ch/erlang-profiling-tips</link>
    <guid isPermaLink="false">6030d3dab5e0920001f557cc</guid>
    <category><![CDATA[erlang]]></category>
    <category><![CDATA[development]]></category>
    <dc:creator>Srijan Choudhary</dc:creator>
    <pubDate>Wed, 20 Feb 2013 00:00:00 +0000</pubDate>
    <content:encoded><![CDATA[<p>I have been using Erlang recently for some of my work and personal projects, so I have decided to write about a few things that were hard to discover.</p> <p>Profiling is an essential part of programming in Erlang. <a href="http://www.erlang.org/doc/efficiency_guide/profiling.html">Erlang's efficiency guide</a> says:</p><blockquote>
  Even experienced software developers often guess wrong about where the performance bottlenecks are in their programs.<br>Therefore, profile your program to see where the performance bottlenecks are and concentrate on optimizing them.  </blockquote>
<h2>Using profiling tools in releases (using rebar/reltool)</h2>
<p>So, after finishing a particularly complicated bit of code, I wanted to see how well it performed and figure out any bottlenecks.</p> <p>But I hit a roadblock. Following the <a href="http://www.erlang.org/doc/man/fprof.html">Erlang manual for fprof</a>, I tried to start it, but it wouldn't start and gave the error:</p><figure>
  <pre><code class="language-erlang">** exception error: undefined function fprof:start/0</code></pre>
  </figure>
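<p>(The <code>fprof</code> module lives in the <code>tools</code> application, which release builds don't include by default. A minimal sketch of the kind of <code>rel</code> entry involved in <code>reltool.config</code> — the release name, version, and the other applications listed are illustrative, not from my actual project:)</p><figure>
  <pre><code class="language-erlang">{sys, [
       {rel, "myapp", "1.0.0",
        [kernel, stdlib, sasl,
         tools,   %% provides fprof, eprof, etc.
         myapp]}
      ]}.</code></pre>
  </figure>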
<p>To make this work, I had to add <code>tools</code> to the list of apps in my <code>reltool.config</code> file. After adding this and regenerating the release, it all worked.</p><h2>Better visualization of fprof output</h2>
<p>So, after I got the fprof output, I discovered it was a long file with a lot of data and no easy way to make sense of it.</p> <p>I tried using <a href="http://www.erlang.org/doc/man/eprof.html">eprof</a> (which gives a condensed output), and it helped, but I was still searching for a better way.</p> <p>Then I stumbled upon <a href="http://stackoverflow.com/questions/14242607/eprof-erlang-profiling#comment19935708_14242607">a comment on stackoverflow</a>, which linked to <a href="https://github.com/isacssouza/erlgrind">erlgrind - a script to convert the fprof output to callgrind output</a>, which can be visualized using <a href="http://kcachegrind.sourceforge.net/">KCachegrind</a> or a similar tool.</p><h3>Software Links</h3>
<ul><li><a href="http://www.erlang.org/doc/efficiency_guide/profiling.html">Erlang Profiling Guide</a></li><li><a href="https://github.com/isacssouza/erlgrind">Erlgrind</a></li><li><a href="http://kcachegrind.sourceforge.net/">KCachegrind</a></li></ul>]]></content:encoded>
    <comments>https://srijan.ch/erlang-profiling-tips#comments</comments>
    <slash:comments>0</slash:comments>
  </item></channel>
</rss>
