<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:media="http://search.yahoo.com/mrss/"><channel><title><![CDATA[unenglishable]]></title><description><![CDATA[technology | art | science]]></description><link>https://unenglishable.com/</link><generator>Ghost 0.7</generator><lastBuildDate>Tue, 11 Feb 2025 12:18:34 GMT</lastBuildDate><atom:link href="https://unenglishable.com/rss/" rel="self" type="application/rss+xml"/><ttl>60</ttl><item><title><![CDATA[Hosting Elixir Docs on GitHub Pages with GitHub Actions]]></title><description><![CDATA[<p>If you're creating a project in Elixir and have been looking for a good way to host your documentation for the world to see, GitHub Pages offers a great solution that's free and fairly simple to set up.  This article provides a step-by-step guide on how to integrate it into</p>]]></description><link>https://unenglishable.com/hosting-elixir-docs-on-github-pages-with-github-actions/</link><guid isPermaLink="false">8b6c5d23-f8bd-4ec1-899b-6ff19c723240</guid><dc:creator><![CDATA[unenglishable]]></dc:creator><pubDate>Tue, 22 Aug 2023 23:45:29 GMT</pubDate><content:encoded><![CDATA[<p>If you're creating a project in Elixir and have been looking for a good way to host your documentation for the world to see, GitHub Pages offers a great solution that's free and fairly simple to set up.  This article provides a step-by-step guide on how to integrate it into your GitHub Actions workflow.</p>

<h2 id="dependencies">Dependencies</h2>

<p>First off, we'll want to add <code>:ex_doc</code> to our dependencies. This is the library we'll be using to build our docs.</p>

<p>mix.exs  </p>

<pre><code class="language-elixir">  defp deps do
    [
      ...
      {:ex_doc, "~&gt; 0.30.5"},
      ...
    ]
  end
</code></pre>
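<p>While we're in <code>mix.exs</code>, ExDoc also reads an optional <code>docs</code> entry from <code>project/0</code>.  A minimal sketch - the project name, source URL, and extras below are placeholders for your own project:</p>

<pre><code class="language-elixir">  def project do
    [
      ...
      # Placeholder values - substitute your own project's details.
      name: "MyProject",
      source_url: "https://github.com/username/project",
      docs: [
        main: "readme",          # page the generated docs open on
        extras: ["README.md"]    # extra pages to include in the docs
      ]
    ]
  end
</code></pre>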

<h2 id="configuration">Configuration</h2>

<p>We'll need to generate a key pair in order to allow our repo to update its documentation branch.  To do this, we can run <code>ssh-keygen</code>.</p>

<p><strong>Note: Remember to change the <code>id_rsa</code> file name so that it does not overwrite an existing key pair.</strong>  In this example, I used <code>id_rsa_gh_pages</code> as the name.</p>

<pre><code class="language-bash">&gt; ssh-keygen

Generating public/private rsa key pair.  
Enter file in which to save the key:  id_rsa_gh_pages  
</code></pre>
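<p>Alternatively, the whole thing can be done non-interactively in one line.  The flags below are just one reasonable choice - <code>-N ""</code> sets an empty passphrase and <code>-f</code> names the output files so nothing at the default path gets overwritten:</p>

<pre><code class="language-bash"># Generate an RSA key pair with no passphrase, named id_rsa_gh_pages
ssh-keygen -t rsa -b 4096 -N "" -f id_rsa_gh_pages
</code></pre>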

<p>Next, let's create a <code>Deploy Key</code> on our repo.  Copy the output of the <code>public key</code>.</p>

<pre><code class="language-bash">&gt; cat id_rsa_gh_pages.pub
# (copy output)
</code></pre>

<p>Navigate to <code>Settings &gt; Deploy keys</code> on the repo and click <code>Add deploy key</code>.  Paste the public key in and give it a name (something like <code>DOCS_DEPLOY_PUBKEY</code> will do).</p>

<p>Then, we'll have to add the private key as a repository secret so that our workflow can access it.  Copy the output of the <code>private key</code>.  <strong>Do not share this key with anyone.</strong></p>

<pre><code class="language-bash">&gt; cat id_rsa_gh_pages
# (copy output)
</code></pre>

<p>Now, navigate to <code>Settings &gt; Secrets and variables &gt; Actions</code> and click <code>New repository secret</code>.  Paste the private key into the box and name it <code>DOCS_DEPLOY_KEY</code>.  We'll access this secret from inside our workflow.</p>

<p>Once we're finished with the key pair files, we can delete them.</p>

<pre><code class="language-bash">rm id_rsa_gh_pages.pub id_rsa_gh_pages  
</code></pre>

<h2 id="githubworkflow">GitHub Workflow</h2>

<p>This example assumes we're using <code>.tool-versions</code> to manage versions for Elixir and Erlang.  <code>erlef/setup-beam</code> reads the <code>.tool-versions</code> file to install the correct versions before running the rest of the job.</p>
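<p>For reference, a <code>.tool-versions</code> file is just a list of tool/version pairs.  The versions below are only examples - pin whichever versions your project actually uses:</p>

<pre><code class="language-bash">erlang 26.0.2
elixir 1.15.4-otp-26
</code></pre>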

<p>I've also set the workflow up to build docs on the <code>elixir-docs</code> branch as well as <code>main</code>.  That way, we can test changes to the documentation build without having to merge into <code>main</code> - just push the changes to the <code>elixir-docs</code> branch and submit a PR later.</p>

<p>Once the BEAM environment is set up, <code>lee-dohm/generate-elixir-docs</code> runs and generates our docs. <br>
 <code>peaceiris/actions-gh-pages</code> then publishes the documentation from <code>./doc</code> (ExDoc's default output directory) to the <code>gh-pages</code> branch of our project, using the <code>DOCS_DEPLOY_KEY</code> we provided as a GitHub Secret earlier.</p>

<p>.github/workflows/main.yml</p>

<pre><code class="language-yaml">  ...

  elixir_docs:
    if: ${{ github.ref == 'refs/heads/main' || github.ref == 'refs/heads/elixir-docs' }}
    name: Generate project documentation
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v3.5.3
      - name: Sets up an Erlang/OTP environment
        uses: erlef/setup-beam@v1
        with:
          version-file: .tool-versions
          version-type: strict
      - name: Build docs
        uses: lee-dohm/generate-elixir-docs@v1.0.1
      - name: Publish to Pages
        uses: peaceiris/actions-gh-pages@v3.9.3
        with:
          deploy_key: ${{ secrets.DOCS_DEPLOY_KEY }}
          publish_dir: ./doc
          publish_branch: gh-pages

  ...
</code></pre>

<p>Run the workflow and make sure it succeeds.  Once it's done, our documentation should be available on the <code>gh-pages</code> branch.</p>

<h2 id="githubpages">GitHub Pages</h2>

<p>Now that our workflow is publishing docs to the <code>gh-pages</code> branch, let's go ahead and configure our repo to host our Pages site...</p>

<p>Navigate to <code>Settings &gt; Pages</code> and select <code>Deploy from a branch</code> under <code>Source</code>.  Under <code>Branch</code>, make sure <code>gh-pages</code> is selected and the directory is <code>/ (root)</code>.  Hit <code>Save</code> and it should publish the page - by default, the URL will be <code>https://[username].github.io/[project]</code>.</p>

<h2 id="makeitaccessible">Make it accessible</h2>

<p>Now comes the fun part.  We can include a link or badge for our documentation in our <code>README.md</code> - or use the project's <code>About</code> section to display the link.</p>

<p><strong>About section</strong></p>

<p>Navigate to the project's home page and find the <code>About</code> section (on the right side of the page).  Click the gear icon and under the <code>Website</code> section, tick the box next to <code>Use your GitHub Pages website</code>.</p>

<h2 id="wrappingitup">Wrapping it up</h2>

<p>In this tutorial, we set up our GitHub repo to host Elixir docs for our project.  There are tons of extra settings we can use to fine-tune our deployed docs - including setting a custom domain for our documentation.</p>

<p>Hopefully, this guide had enough information to get you started. Have a look around at GitHub's documentation for your specific application, and don't be afraid to leave a comment :)</p>]]></content:encoded></item><item><title><![CDATA[[Dev Infrastructure] From Project to Product:  Revision Control, Heroku 12-Factor, and Production Deployment]]></title><description><![CDATA[<p>I’ve done DevOps work on several projects at Slickage, but the biggest one I’ve worked on is Epochtalk “Next generation forum software” (<a href="https://github.com/epochtalk">https://github.com/epochtalk</a>, <a href="https://github.com/epochtalk/epochtalk">https://github.com/epochtalk/epochtalk</a>, epochtalk.org).  It’s been through a lot of changes in architecture and backing technologies.  When I</p>]]></description><link>https://unenglishable.com/devops/</link><guid isPermaLink="false">bfd54488-5993-40bb-a8e2-841e398f88eb</guid><category><![CDATA[devops]]></category><category><![CDATA[developer infrastructure]]></category><dc:creator><![CDATA[unenglishable]]></dc:creator><pubDate>Wed, 23 Mar 2022 01:22:59 GMT</pubDate><content:encoded><![CDATA[<p>I’ve done DevOps work on several projects at Slickage, but the biggest one I’ve worked on is Epochtalk “Next generation forum software” (<a href="https://github.com/epochtalk">https://github.com/epochtalk</a>, <a href="https://github.com/epochtalk/epochtalk">https://github.com/epochtalk/epochtalk</a>, epochtalk.org).  It’s been through a lot of changes in architecture and backing technologies.  When I started working with it, we were using NodeJS and Angular framework, migrating data from SMF (forum software) MySQL into PostgreSQL.  It was running as a Node server on bare metal and was only available in development mode.</p>

<p>When I joined the team, we didn't have a formal revision-control workflow, and thus had a lot of trouble collaborating.  It was so bad at one point that one of my teammates took all of my commits in a pull request and squashed them down without asking me first - in the process erasing loads of detail about why I did things a certain way and how I managed to fix bugs or overcome obstacles.</p>

<p>I believe the commit history is an extremely powerful tool in a small development team’s arsenal for tracing and fixing bugs.  I use it on a daily basis to compensate for our propensity to occasionally make short-sighted decisions; after all, developers are only human, and we are far from perfect.  I like to get things right the first time as much as possible, but overall, it’s much better to make a mistake and go back and fix it than to have fear of mistakes be a limiting factor on progress.</p>

<p>Because the “squasher” teammate was the project lead, I went along with his direction for a while.  After he left the company, I took charge and switched the team over to a pull-request-and-merge collaboration style.  My intent was to preserve the commit history and the information contained in the branches between commits.  I explained how branching would encourage us to work on features as units and lead to a more readable history than squash-and-rebase.  I personally even use sub-branches, since I work best by breaking down tasks into the smallest units possible, and I merge them back into my own branch as I complete a feature.  I’ve heard that Gerrit has the ability to squash and rebase while maintaining commit history, but I haven’t tried it out yet.  It might be worth looking into if the scope of work relates to how developers collaborate.</p>

<p>I’m always on the lookout for best practices and I love sharing what I find with my team.  At the beginning of my DevOps experience, I discovered <a href="https://12factor.net/">Heroku’s 12-Factor</a> best practices and have been sticking to them ever since; making sure that all of the projects I work on are easily and dependably deployable.  Any time I'm setting up ops for a new project, I re-visit <a href="https://12factor.net/">the manifesto</a>.</p>

<p>Epochtalk was the first project I converted to comply with 12-Factor.  From consolidating configuration files and routines, to containerizing all the pieces, exposing configurations through the environment, locking down specific dependency versions, implementing automated versioning for internal dependencies, learning about linked containers and debugging their connections, automating builds in CI for testing and deployment, persisting data across restarts, unifying testing staging and production environments to ensure consistency, exposing administration functionality through cli, the list goes on and on.</p>

<p>In the end, it was not such an easy task, but each project since then has gone a lot smoother; especially now that I’ve established some good standards.  Each conversion step has its own challenges, and Epochtalk's case required restructuring the project as a whole, touching nearly every file.  However, as the Factors were put to work, the project started feeling a lot more polished for deployment - and promoted better development practices as well!</p>

<p>Once all of the pieces were using containers and container networking instead of bare metal deploys, the next step was to create a SAAS deployment strategy.  I tried using Kubernetes, but it was still in its early stages of development (they were not directly associated with Docker yet) and documentation was a bit lacking at the time.  I contacted Kubernetes’ dev team and joined their Slack, but it was a big headache and in the end I decided that I should build with Ansible instead.</p>

<p>Through some good old shell scripting, and a lot of trial and error, I was able to write playbooks to provision AWS resources (EC2, S3, Route53, RDS, CloudWatch), configure new virtual machines, and run our Dockerized project - all using customer information to create deployment namespaces and DNS entries.  It was a bit of a clunky implementation, but it got the job done.  On the bright side, I learned a lot about what it takes to get a production deployment up and running. <br>
 Additionally, having done all of this through scripting, I feel like I have a deeper understanding of how it all works.  I am, however, very much looking forward to using more standardized tools like Kubernetes and Terraform.</p>]]></content:encoded></item><item><title><![CDATA[[Dev Infrastructure] Testing, QA, and Triage]]></title><description><![CDATA[<h2 id="testing">Testing</h2>

<p>Automation is at the core of good testing.  If tests aren’t automatic, they are not going to get run.  The way I’ve set up all of the projects I’ve had a hand in at our company is to add testing to the Continuous Integration pipeline.  Whether</p>]]></description><link>https://unenglishable.com/testing-qa-and-triage/</link><guid isPermaLink="false">75b515b9-9d63-4828-af94-b9af1dea6be8</guid><dc:creator><![CDATA[unenglishable]]></dc:creator><pubDate>Thu, 17 Mar 2022 19:39:00 GMT</pubDate><content:encoded><![CDATA[<h2 id="testing">Testing</h2>

<p>Automation is at the core of good testing.  If tests aren’t automatic, they are not going to get run.  The way I’ve set up all of the projects I’ve had a hand in at our company is to add testing to the Continuous Integration pipeline.  Whether it’s as simple as making sure everything compiles and the bare server code runs properly, or API tests that ensure expected input and output are achieved, to simulated browser tests with tools like Selenium, all of these get run every single time someone submits code to a branch.  Branches are blocked from being merged until all tests have passed and the code has undergone peer review.</p>

<p>One thing that I wish I had done earlier, but haven’t been able to get around to for Epochtalk, is documentation-based API tests.  Since we’ve got documentation generators based on code comments, it would be really nice to have a hard, almost mathematical, link between what the API docs say a call will do and what it actually does.  The overall idea there is to minimize the chances for human error to cause a discrepancy between the two.  Any place where a human error can be made is a potential for a bug, and that in turn is a potential for hours spent fixing instead of implementing.</p>

<h2 id="qa">QA</h2>

<p>I haven’t worked on a team that uses a formal QA process for a while, but from what I remember from my previous job, the QA team does the hard work of making sure everything works as it’s supposed to.  It was a lot of repetitive tasks and generating “bad input”, which is part of the bug spotting process.  If we can give the QA team a way to generate input, whether it’s algorithmically or through templating, it reduces wear-and-tear on them and gives them a way to consistently do a good job and find new bugs quicker.  The overarching idea here is that tests are hard to write as an afterthought, but an entire suite of them is easy to obtain if they’re written in tandem with development.  When I was assigned to write tests for an existing untested monolithic project at my previous job, it was such a daunting task - not an experience I’d recommend!</p>

<p>Cooperation between QA and developers is difficult to achieve.  There can be a lot of friction due to the fact that QA might think developers are incompetent because of the errant dumb mistakes we can make, and similarly, developers might think QA is incompetent because they might lack the technical expertise required to explain an issue well.  One solution I can think of to aid in the cycle between QA and developers is to allow QA to create behavior driven tests.  If they find a bug and create a test for it that assumes it’s fixed, they can then push that branch as a “fix” branch.  That branch containing the test can only be merged once the tests pass and the bug is fixed.  That doesn’t quite solve the mutually implied incompetence issue, but it does give both of them common ground to work off of.</p>

<h2 id="triage">Triage</h2>

<p>How does triage work?  It’s at the intersection of testing and QA, and it’s a problem that every dev team has.  What do we fix first?  How do we tell what’s the most important and what can be worked on later?  What about all of these less important tasks that have been lying around for months?  As a developer on an open source project using open source technologies, I’ve seen so many different solutions.  One team in particular caught my eye; they immediately closed all submitted issues (strange to me, as it reduces visibility, but understandable) and had a voting system for closed issues that would tally up how many people had the issue.  They would use the voting information to decide on task importance, and whether to work on them or not.</p>

<p>I’ve used so many triage/tasking tools in my career including Jira (plus Atlassian suite), Kanban, Clickup, GitHub Issues and a plethora of others I can’t even recall at this point.  My favorite for our small team so far has been Kanban.  I liked the style of tasks with columns and no hard deadlines.  It gave me a sense of satisfaction to be able to drag a task over to the next column and watch it go from in-queue to in progress to review and finally completed.   I personally believe that deadlines stunt creative growth and stifle innovation by restricting perspective and reducing an individual’s propensity to be intrinsically motivated.</p>

<p>In terms of intrinsic motivation, I want to add a little piece of opinion; food for thought perhaps…  I’ll tie it back in to triage at the end:</p>

<p>Supervisors and managers tend to get in that position by being micromanagers.  Being able to demand set deadlines for a feature or bug fix is of utmost importance to them, even when those tasks are not critically important.  They see micromanaging as being productive and helping, but it can have quite the opposite effect.  My ideal strategy, aligning with best practices of the new era of creatives, involves nurturing people's intrinsic motivation to maximize their potential for creative solutions.</p>

<p>This means relaxing supervision and looking at the results rather than the progress; a bit like the analogy of watching a pot of water boil.  If you hire diligent, creative people who enjoy what they do, the job will get done; and if you want solid results, take care of them and they’ll take care of you.  Psychologists and business analysts have all but proved it time and time again, the new workforce is about creativity and not labor.  You can make a laborer work faster by “whipping” them for slow progress or rewarding them for good results, but you can’t make someone think harder or be more artistic by doing the same thing.  If you’ve ever tried to force yourself to think, you'll know that it doesn’t work that way.</p>

<p>To tie it back in, I’ll talk about my experiences with task delegation and deadlines, which are particularly vulnerable to micromanagement.  It’s my opinion that, if a feature or bug is not important or blocking, it can wait a while.  I’m not claiming that this is universally correct, but I think it is a key factor in reaching a team’s full potential.  With task picking, developers can choose what to do.  When we’re allowed to work on what we feel is important, we can put more dedication into it.  Speaking from a small-team perspective, when tasks and deadlines are dictated by someone who lacks a view of the overall picture, that can greatly reduce motivation to get things done.  I’ve been forcefully moved from critically important tasks to ones that should have taken a back seat, and the effect is jarring.</p>

<p>The pinnacle of triage to me, to be philosophical for a bit, is the ability to let go.  Some tasks are simply not worth doing; they will take much more time and effort than they will return in terms of benefit.  Talking to friends in the medical field, in a dire situation the goal is to save as many people as possible, but also to prioritize those who can be saved.  As much as it hurts to say “no” to a task, sometimes it’s the right call.  Of course, medical situations and software development differ in urgency, so it follows that the perspective shifts accordingly.</p>

<p>Communication and understanding are key; to make the right decisions, we need the big picture as well as the small details.  The overall point I’m trying to make is that there is no one solution to triage.  It’s a constant effort of balance and rebalance, and the team has to work together on it.  Those who understand this and are able to communicate well with all parties involved should be the ones making the call on what to do next.</p>]]></content:encoded></item><item><title><![CDATA[[DevOps] Automatic Renewal of SSL Certificates with Certbot, Nginx, and Docker compose]]></title><description><![CDATA[<p>Let's Encrypt's <a href="https://hub.docker.com/r/certbot/certbot/">Certbot Auto</a> is a great way to obtain free SSL certification, but renewal can be quite a pain, especially if you're trying to maintain several servers, and are renewing manually.  Since certificates expire so often, your mailbox may become inundated with emails about expirations coming up soon!</p>

<p>Of</p>]]></description><link>https://unenglishable.com/automatic-ssl-certificates-certbot-nginx-docker-compose/</link><guid isPermaLink="false">ccc281db-cd64-4fa8-bd4b-87cfb2ed36e3</guid><dc:creator><![CDATA[unenglishable]]></dc:creator><pubDate>Fri, 26 Apr 2019 21:23:59 GMT</pubDate><content:encoded><![CDATA[<p>Let's Encrypt's <a href="https://hub.docker.com/r/certbot/certbot/">Certbot Auto</a> is a great way to obtain free SSL certification, but renewal can be quite a pain, especially if you're trying to maintain several servers, and are renewing manually.  Since certificates expire so often, your mailbox may become inundated with emails about expirations coming up soon!</p>

<p>Of course, you can create a cron job to schedule automatic renewal of certificates, but what if you also want to run Certbot's Docker container and use a web server like Nginx in Docker as well?</p>

<p>I've come up with a scheme that will incorporate all of these features, and I've packaged them into a format that allows anyone on my team to deploy Certbot for *any web service!</p>

<p>(*Updates will be made to handle services that don't fit well into the scheme.)</p>

<h2 id="thepremise">The Premise</h2>

<p>The previous examples I've seen that use Certbot and Docker are a bit kludgy to say the least.  The most useful one I've read was by <code>@pentacent</code> on Medium: <a href="https://medium.com/@pentacent/nginx-and-lets-encrypt-with-docker-in-less-than-5-minutes-b4b8a60d3a71">Nginx and Let’s Encrypt with Docker in Less Than 5 Minutes</a></p>

<p>I had an idea to improve on this scheme.  One of the major pain points with automating Certbot's <code>HTTP-01</code> challenge in tandem with Nginx is that Nginx needs to host the challenge file, but the certificates specified in its config must exist in order for Nginx to even start. <code>@pentacent</code>'s solution was to create fake certificates, start Nginx, delete the fake certs, then get real ones with Certbot.</p>

<p>Basically, this strategy ended up hinging on executing a script that does all of this for you.  That's all well and good - presuming that everything works - but I wondered if I could omit the script step altogether.  An interesting challenge.</p>

<p>My solution was to create two Dockerfiles and run one initially with Nginx set up to only host the <code>/.well-known/acme-challenge</code> path.  Once that was done, I could theoretically add anything else to the Nginx config after the fact, as long as the containers all mount the same directory that holds the certificates.</p>

<h2 id="implementingthesolution">Implementing the solution</h2>

<p>As mentioned, I created two separate Dockerfiles to solve this issue.  The two are nearly identical, except that the first runs <code>certbot certonly</code> and the other runs <code>certbot renew</code>.  Additionally, for convenience, I created two sets of <code>nginx/conf.d</code> directories to mount.  They can technically use the same directories, but I also checked the Nginx configurations for different projects into their respective branches.  By keeping a <code>/init-data/nginx/conf.d</code> separately, I can leave just the webroot configuration (the <code>acme-challenge</code> location) there, while having any additional configurations checked in to the repo under <code>/data/nginx/conf.d</code>.</p>
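<p>For reference, the webroot-only configuration in <code>/init-data/nginx/conf.d</code> might look something like this - the <code>server_name</code> and webroot path below are placeholders, and should match your domain and the webroot path you give Certbot:</p>

<pre><code class="language-nginx">server {
    listen 80;
    server_name example.com;

    # Serve ACME HTTP-01 challenge files so Certbot can verify the domain.
    location /.well-known/acme-challenge/ {
        root /var/www/certbot;
    }
}
</code></pre>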

<p>Certbot will request certificates and store them in a mounted directory, which is read by the Nginx machine.  Once the entire system is up and running, you can just call <code>docker-compose up certbot-renew</code> again at any time to update the certs.</p>

<p>Instead of changing the entrypoint script for the Certbot container, I added a crontab generator that starts up the stopped <code>certbot-renew</code> container, which runs again and checks whether any certificates need to be renewed.  This ensures that the container only runs when it needs to - a good way to improve efficiency.  Once the renew script completes, the next step in the crontab script is to restart the Nginx service (to reload with new certs).</p>
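<p>The generated crontab entry might look something like this - the schedule and project path are placeholders:</p>

<pre><code class="language-bash"># Every Monday at 03:00: run the renew container, then restart Nginx
# so it reloads any refreshed certificates.
0 3 * * 1 cd /srv/myproject &amp;&amp; docker-compose up certbot-renew &amp;&amp; docker-compose restart nginx
</code></pre>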

<p>To make use of all of this, the end user (whoever is setting up a new web server) pulls my base project, writes an Nginx config in the <code>/data/nginx/conf.d</code> directory, and modifies the <code>docker-compose.yml</code> file to include their project, making sure to add a <code>depends_on</code> line to the Nginx docker-compose spec so their container is linked properly.  This way, we can refer to the new service by name in places like the Nginx configuration.</p>
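<p>As a sketch, the additions to <code>docker-compose.yml</code> might look like this - the service and image names are placeholders, and the mounted certificate directory must match the one Certbot writes to:</p>

<pre><code class="language-yaml">services:
  myapp:
    image: myorg/myapp:latest    # placeholder for your project's image
    volumes:
      - ./data/certbot/conf:/etc/letsencrypt:ro  # shared certificates

  nginx:
    depends_on:
      - myapp  # lets Nginx configs refer to the service as "myapp"
</code></pre>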

<p>Once that's done, they start the services with <code>docker-compose up -d</code> and execute the crontab generator.</p>

<h2 id="caveatsofinterest">Caveats of interest</h2>

<h3 id="mountingetcnginxconfd">Mounting /etc/nginx/conf.d</h3>

<p>Why mount only the <code>/etc/nginx/conf.d</code> directory for Nginx and not the entire <code>/etc/nginx</code> directory?  The reason is actually not very apparent; it deals with how Docker handles mounted directories.  If something is already at the mount point, it doesn't get deleted - it just gets hidden under the new mount (i.e., it exists, but is inaccessible to the filesystem).  This is good because you can recover what was there before, but it's also not quite intuitive...</p>

<p>If you expected Nginx to work when mounting <code>/etc/nginx</code> and adding your own files, you'll find that there are a bunch of files missing!  These files were previously in <code>/etc/nginx</code> but were hidden when you told Docker to mount over that directory.  Luckily, the default configuration in Nginx's Docker image includes every file in <code>/etc/nginx/conf.d</code>!  Therefore, you can safely mount <code>conf.d</code> and add all of your custom configurations there :)</p>

<h3 id="restartingnginx">Restarting Nginx</h3>

<p>My implementation restarts the entire Nginx container using a <code>docker-compose</code> command.  Why not just restart the service inside the container?  This one's pretty simple!</p>

<p>When the Nginx container starts, its entrypoint is the Nginx server command itself.  If that command finishes (in this case, it is terminated in order to do the restart), then Docker will consider the task "complete" and terminate the container.  Because of this a service restart will end up in container termination!  Better to just use <code>docker-compose</code> to restart the container in the first place ;]</p>]]></content:encoded></item><item><title><![CDATA[[Dev Infrastructure] Simple continuous deployment with Drone.io and Docker Compose]]></title><description><![CDATA[<p>This past week, I had some time to help my coworkers with a bit of <strong>Continuous Integration</strong> and <strong>Continuous Deployment</strong> for their project.  I'm not directly working on the same project, but I do have a handful of automated ops experience, and was interested in using it to boost the</p>]]></description><link>https://unenglishable.com/simple-continuous-deployment-with-drone-io-and-docker-compose/</link><guid isPermaLink="false">ac512347-e740-4e1d-91a5-7ae01f7c87e9</guid><dc:creator><![CDATA[unenglishable]]></dc:creator><pubDate>Thu, 21 Mar 2019 02:53:31 GMT</pubDate><media:content url="https://unenglishable.com/content/images/2019/03/image-1.png" medium="image"/><content:encoded><![CDATA[<img src="https://unenglishable.com/content/images/2019/03/image-1.png" alt="[Dev Infrastructure] Simple continuous deployment with Drone.io and Docker Compose"><p>This past week, I had some time to help my coworkers with a bit of <strong>Continuous Integration</strong> and <strong>Continuous Deployment</strong> for their project.  
I'm not directly working on the same project, but I do have a handful of automated ops experience, and was interested in using it to boost the efficiency of their workflow - side note:  I like AirBnb's internal theme of "Helping others is top priority", since it allows employees with specialized skills to contribute to other projects and make them better!</p>

<p>Anyhow, their existing deployment scheme already included <strong>Docker Compose</strong>, automated builds on a deployed <a href="https://drone.io/">Drone.io</a> instance via GitHub webhook, and image hosting on <strong>AWS ECR</strong> - but it could use a little TLC in the form of automation.  The main idea was to replace the steps where one would have to log in to either the staging server or the production server and manually pull, tag, and deploy Docker containers.  In this post, I go into detail about what the issues were and how I solved them.</p>

<h2 id="issue1ecrimagetags">Issue #1:  ECR Image tags</h2>

<p>I started work by fixing up the image tagging on the pushes to AWS ECR.  Basically, it wasn't working because of <em>indentation issues</em>.  Be sure to <strong>pay attention to indentation when working with YAML!!</strong>  Amazingly, something this simple was enough to break the tagging functionality.</p>

<p><img src="https://imgur.com/jNclTpR.jpg" alt="[Dev Infrastructure] Simple continuous deployment with Drone.io and Docker Compose"></p>

<p>Easy peasy, <a href="https://youtu.be/_FwVYeeY1Ew">sneezy, deezy... Mc... Deluxe</a>.</p>

<h2 id="issue2betastagingserverautomaticdeployment">Issue #2:  Beta (staging) server automatic deployment</h2>

<p>My next task was to make staging server deployments as automated as possible - because who wants to log in to a server with <code>ssh</code> and pull Docker containers??  In all seriousness, this is actually a pain in the ass and impedes progress, specifically because developers would need to wait for the containers to finish building before they can do so.  If you make it difficult to test code, <a href="https://youtu.be/waEC-8GFTP4?t=28">nobody will do it</a>.</p>

<p>Due to the splendid availability of continuous integration tooling already integrated into the pipeline, I wanted to identify some ways to use what was already there.  Drone has a bunch-o-modules and doohickeys that help with automation.  At first, I thought of setting up an API endpoint on the staging server to listen for an incoming webhook from successful Drone builds...  That idea evolved into thinking "Hey, why don't I use Docker's built in API" to "Hey, why don't I just use Kubernetes"...  -- <strong>STOP!</strong>  Too much complication for such a simple task!</p>

<p>Why do too much when you can do just enough?</p>

<p>Luckily, somebody wrote a Drone plugin that can execute commands via <code>ssh</code> on a remote server:  <a href="http://plugins.drone.io/appleboy/drone-ssh/">appleboy/drone-ssh</a>!  HECK'N YEAH!!  With this, I could have the Drone server do my bidding.  I configured the <code>.drone.yml</code> and beckoned the <code>drone-ssh</code> beast forth:</p>

<pre><code>- name: publish-beta
  image: appleboy/drone-ssh
  settings:
    host: [REDACTED]
    port: 22
    username:
      from_secret: ssh_username
    key:
      from_secret: ssh_key
    script:
      - /bin/bash ./update.sh
  when:
    branch:
      - beta
    event:
      - push
</code></pre>

<p>Again, pay close attention to your indentation.  All of the configurations for <code>appleboy/drone-ssh</code> are nested under <code>settings:</code> and the configurations I pull from Drone secrets are specified with <code>from_secret:</code>.</p>

<p>I lucked out even more:  Drone pipeline steps are executed sequentially!  This means that by the time the <code>ssh</code> script kicks off, the image has already been pushed to ECR.  No need to add complicated waiting or polling code to handle parallel steps!</p>

<p>And about <code>update.sh</code>...  I wrote up a quick script to pull the latest Docker image from ECR and replace the current running container (old image) with the new image.  I found that doing <code>docker-compose up</code> automagically does this in one step - no need to spin everything down first.</p>

<pre><code># prune stopped containers and dangling images before pulling
docker-clean

docker-compose pull web
docker-compose up -d web
docker-compose exec -T web bundle exec rake db:migrate

# prune the image we just replaced
docker-clean
</code></pre>

<p>In this case, I modified the beta server's <code>docker-compose.yml</code> file to point to the <code>:beta</code> image tag.  This specification is important - otherwise, it will pull from <code>:latest</code>.  The <code>docker-clean</code> command is just a lil somethin-somethin I put together that removes containers that aren't being used anymore, and cleans up intermediates and old versions of images.  We use tiny virtual machines with about 8 GB of disk space, so it's imperative to clean up those old images, which can exceed 1 GB each!</p>

<p><strong>Caveats galore!</strong>  This script worked locally, but initially didn't work when run by <code>drone-ssh</code>.  Since we're using ECR, docker-compose needs access to our AWS credentials.  Unfortunately, a non-interactive shell doesn't load the <code>~/.bash_profile</code> that would set up the path to <code>aws-cli</code> executables.  Read about the complicated laws that govern login/non-login and interactive/non-interactive bash startup <a href="https://www.gnu.org/software/bash/manual/html_node/Bash-Startup-Files.html">here</a>.  NO PROBLEM!  For now, I just copied the <code>$PATH</code> modification into <code>~/.bashrc</code>.  There's probably a more correct way to fix this issue - maybe running with <code>--login</code> would cause the <code>~/.bash_profile</code> to load?</p>
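<p>You can see the startup-file behaviour for yourself with a tiny sketch (scratch <code>HOME</code> and variable name made up for the demo):  a plain <code>bash -c</code> skips <code>~/.bash_profile</code>, while <code>bash --login -c</code> loads it - which suggests the <code>--login</code> idea would indeed work:</p>

```shell
# Use a scratch HOME so we don't touch the real profile (demo only)
export HOME=$(mktemp -d)
echo 'export FROM_PROFILE=yes' > "$HOME/.bash_profile"

bash -c 'echo "non-login: ${FROM_PROFILE:-unset}"'   # profile is skipped
bash -lc 'echo "login:     ${FROM_PROFILE:-unset}"'  # profile is loaded
```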

<p>Additionally, within the Docker container itself, using <code>docker-compose exec</code> to run the <code>rake</code> migrations causes the command to get rekt because, infamously, <code>the input device is not a TTY</code>!  Nearly the same issue I just noted, but <a href="https://youtu.be/dQw4w9WgXcQ">Inception</a> style!  Obviously, fix it by running with <code>-T</code> to disable pseudo-tty allocation.  By the way, what ever happened to good 'ole non-virtual TTY logins?  Why does everything gotta be virtual now??</p>

<p>Annnd that's about it for the beta/staging deployment server.  Took a while to reach a nice simple solution, but things are running smoothly and nobody has to lift a finger now to get the latest code deployed.</p>

<p>The overall pipeline is now:</p>

<ul>
<li><p>Push code to <code>beta</code> branch on GitHub</p></li>
<li><p>Build and tag image on Drone server</p></li>
<li><p>Drone server pushes image to ECR</p></li>
<li><p>Drone server then kicks off the <code>update.sh</code> script over <code>ssh</code></p></li>
<li><p><code>update.sh</code> (on the beta server)</p>

<ul><li><p>Cleans up old containers/images</p></li>
<li><p>Pulls the newest <code>:beta</code> image</p></li>
<li><p>Replaces the running container with the new image</p></li>
<li><p>Runs <code>rake</code> migrations</p></li>
<li><p>Cleans up the old image</p></li></ul></li>
</ul>

<p>I also added a step after that in the Drone config that pings one of our Slack channels to notify everyone that there's a new version of the beta site up (or notify us that something broke).  Nifty!</p>

<h2 id="issue3productionserverautomateddeployment">Issue #3:  Production server automated deployment</h2>

<p>This one is still in the works...</p>]]></content:encoded></item><item><title><![CDATA[[Software] Elixir Distributed System Resources (notes)]]></title><description><![CDATA[<p>My notes on Elixir's resources for distributed systems...  These notes are not exhaustive, and are not necessarily accurate - they represent my understanding and opinions of said resources.</p>

<p>These notes will probably be most helpful for anyone getting started with implementing distributed systems using Elixir.  If you have questions or</p>]]></description><link>https://unenglishable.com/elixir-distributed-system-resources-notes/</link><guid isPermaLink="false">16647ca8-443a-4aed-bf37-517dce389a7d</guid><dc:creator><![CDATA[unenglishable]]></dc:creator><pubDate>Tue, 06 Nov 2018 01:39:31 GMT</pubDate><content:encoded><![CDATA[<p>My notes on Elixir's resources for distributed systems...  These notes are not exhaustive, and are not necessarily accurate - they represent my understanding and opinions of said resources.</p>

<p>These notes will probably be most helpful for anyone getting started with implementing distributed systems using Elixir.  If you have questions, opinions, or fixes I should include, please leave a comment below!</p>

<h2 id="supervision">Supervision</h2>

<ul>
<li><p><a href="https://hexdocs.pm/elixir/Task.html">Hex docs:  Task</a></p>

<ul><li><p>Good for running some code that doesn't need supervision</p></li>
<li><p>Tasks generally don't interact with other processes, and we don't care about the return value</p></li></ul></li>
<li><p><a href="https://hexdocs.pm/elixir/Task.Supervisor.html">Hex docs:  Task.Supervisor</a></p>

<ul><li><p>Dynamically supervise <code>Task</code>s</p></li>
<li><p>Good for keeping a handle on running a <code>Task</code> without orphaning it (&lt;- my opinion)</p></li></ul></li>
<li><p><a href="https://hexdocs.pm/elixir/Supervisor.html">Hex docs:  Supervisor</a></p>

<ul><li><p>Good for supervision of <strong>known processes</strong> (created at server start)</p></li>
<li><p>Children are defined in the code and started when <code>Supervisor.start_link</code> is called</p></li>
<li><p>Not conventional for supervising processes created at runtime (deprecated in favor of <code>DynamicSupervisor</code>)</p></li></ul></li>
<li><p><a href="https://hexdocs.pm/elixir/DynamicSupervisor.html">Hex docs:  DynamicSupervisor</a></p>

<ul><li><p>Good for supervising processes started at runtime</p></li>
<li><p>Starts with no children, and spawns new children with <code>DynamicSupervisor.start_child</code></p></li>
<li><p>Note that children should have unique identifiers (<code>:name</code>) - see <a href="https://unenglishable.com/elixir-distributed-system-resources-notes/#process_registration">Process Registration</a> section</p></li></ul></li>
</ul>

<h2 id="processregistration">Process Registration</h2>

<ul>
<li><p><a href="https://hexdocs.pm/elixir/GenServer.html#module-name-registration">Name Registration (via GenServer)</a></p>

<ul><li><p>Used by <code>Supervisor</code>, <code>DynamicSupervisor</code>, <code>GenServer</code></p></li>
<li><p><code>__MODULE__</code> uses this module's own name</p>

<ul><li>Can only have one instance registered when using <code>__MODULE__</code></li></ul></li>
<li><p><code>:(atom)</code> registers via <code>Process.register</code> with the atom as the name</p>

<ul><li>Again, you can only have one instance registered since the name is not unique</li></ul></li>
<li><p><code>{:global, term}</code> registers via "the functions in the :global module", where <code>term</code> can be basically anything</p>

<ul><li><p>Register any number of processes!</p></li>
<li><p>Find processes by name using <code>GenServer.whereis</code></p></li>
<li><p>Must define the names such that they don't collide</p></li></ul></li>
<li><p><code>{:via, module, term}</code> use a custom module to define a registry.  Explained well in <a href="https://www.brianstorti.com/process-registry-in-elixir/">Brian Storti's Blog on Process Registry</a></p></li></ul></li>
<li><p><a href="https://hexdocs.pm/elixir/Registry.html">Hex docs:  Registry</a></p>

<ul><li>Registry module</li></ul></li>
<li><p><a href="https://www.brianstorti.com/process-registry-in-elixir/">Custom Process Registry (blog; Brian Storti)</a></p>

<ul><li>Custom local implementation using a <code>Map</code> in the registry's <code>state</code></li></ul></li>
<li><p><a href="https://github.com/esl/gproc">gproc</a></p>

<ul><li>Noted in Brian Storti blog; uses "central server and an ordered-set ets table"</li></ul></li>
<li><p><a href="https://elixirforum.com/t/agent-vs-registry/14211/4">Elixir forum:  Agent vs Registry</a></p></li>
</ul>

<h2 id="distributedsupervisionregistration">Distributed Supervision/Registration</h2>

<p><a href="https://medium.com/@derek.kraan2/introducing-horde-a-distributed-supervisor-in-elixir-4be3259cc142">Horde (medium@derek.kraan2)</a></p>]]></content:encoded></item><item><title><![CDATA[[Software] Elixir:  GenServer vs Task.Supervisor]]></title><description><![CDATA[<p>Have you ever wondered if you should be using a <code>GenServer</code> or a <code>Task.Supervisor</code>?  You're not alone!  Here, I'll share my experience with making this choice - hopefully it helps you decide ;)</p>

<h2 id="thepremise">The Premise</h2>

<p>I recently ran into a situation where I used a <code>GenServer</code> to prototype some functionality</p>]]></description><link>https://unenglishable.com/elixir-genserver-vs-task-supervisor/</link><guid isPermaLink="false">0b2fed30-630d-46d2-81f0-40718c01b776</guid><dc:creator><![CDATA[unenglishable]]></dc:creator><pubDate>Wed, 24 Oct 2018 00:02:33 GMT</pubDate><content:encoded><![CDATA[<p>Have you ever wondered if you should be using a <code>GenServer</code> or a <code>Task.Supervisor</code>?  You're not alone!  Here, I'll share my experience with making this choice - hopefully it helps you decide ;)</p>

<h2 id="thepremise">The Premise</h2>

<p>I recently ran into a situation where I used a <code>GenServer</code> to prototype some functionality before implementing it in a larger project.  It worked excellently and implementing the server (which handles and stores the state of TCP connections, and accepts GenServer <code>call</code>s - <a href="https://unenglishable.com/elixir-genserver-vs-task-supervisor/#callcastnotes">note on call/cast</a>) went without a hitch.  Great!</p>

<p>However, when I went to integrate this feature back into the larger project, there was a bit of an issue.  Instead of implementing a GenServer, the original code uses a listen loop to spawn TCP socket-handling functions and hands them off to a <code>Task.Supervisor</code>.</p>

<p>At this point, my approach is to just implement <code>call</code>s in the original code and modify the internals to match the desired behaviour.  However, my coworker was bothered by this, as there's a real difference between a <code>Task.Supervisor</code> and a <code>GenServer</code>.  His idea was that we should implement the manager from scratch using GenServer, but is that the best way to do it?  Maybe.  I'm not really sure yet.  I'll be updating this document to reflect what I learn.  You better hope OP delivers!</p>

<p><img src="https://imgur.com/41o8H2j.jpg" alt="op deliveries"></p>

<h3 id="anamecallcastnotesacallcastnotes"><a name="callcastnotes"></a>Call/Cast notes</h3>

<p>As far as I have seen, the general consensus is that you should start by using <code>call</code> and only use <code>cast</code> after careful algorithmic analysis, and if you're sure it's the right choice.  Check out <a href="https://medium.com/@adammokan/elixir-genserver-call-vs-cast-ba89fafd8847">this great Medium article</a> by @adammokan for a better explanation.</p>

<p>(Tags:  GenServer, Gen Server, Task.Supervisor, Task Supervisor, Elixir, Erlang)</p>]]></content:encoded></item><item><title><![CDATA[[Software] Just Software Developer Things...]]></title><description><![CDATA[<p>Have you ever typed <code>ls</code> into your browser's search bar?</p>

<p>We've all been there!  What's your favorite <em>just developer thing?</em></p>

<p>If I think of more, I'll try to remember to write them here :D</p>]]></description><link>https://unenglishable.com/just-software-developer-things/</link><guid isPermaLink="false">f92f08dc-6ad5-467c-b980-673ba60e3b03</guid><dc:creator><![CDATA[unenglishable]]></dc:creator><pubDate>Wed, 26 Sep 2018 23:37:54 GMT</pubDate><content:encoded><![CDATA[<p>Have you ever typed <code>ls</code> into your browser's search bar?</p>

<p>We've all been there!  What's your favorite <em>just developer thing?</em></p>

<p>If I think of more, I'll try to remember to write them here :D</p>]]></content:encoded></item><item><title><![CDATA[[Cafe/Travel] Onion:  Get your Hipster on in Seoul ;)]]></title><description><![CDATA[<h2 id="whatisonion">What is Onion?</h2>

<p><img src="https://i.imgur.com/5jDdSzF.jpg" alt="Imgur"></p>

<p>Onion is a paradise built from ruin.  A combination <strong>bakery</strong> and <strong>cafe</strong> that has taken what appears to be a rundown house or apartment building, and freshened it up just a bit with enough decor to give it a comfortable feeling.</p>

<p><img src="https://i.imgur.com/IU57Nwq.jpg" alt="Imgur"></p>

<p>The architectural designer has ingeniously converted</p>]]></description><link>https://unenglishable.com/onion-a-hip-bakery-and-cafe-in-seoul/</link><guid isPermaLink="false">3cae479c-fa4e-46e6-94fc-b7bbe0505916</guid><dc:creator><![CDATA[unenglishable]]></dc:creator><pubDate>Tue, 07 Nov 2017 14:38:30 GMT</pubDate><content:encoded><![CDATA[<h2 id="whatisonion">What is Onion?</h2>

<p><img src="https://i.imgur.com/5jDdSzF.jpg" alt="Imgur"></p>

<p>Onion is a paradise built from ruin.  A combination <strong>bakery</strong> and <strong>cafe</strong> that has taken what appears to be a rundown house or apartment building, and freshened it up just a bit with enough decor to give it a comfortable feeling.</p>

<p><img src="https://i.imgur.com/IU57Nwq.jpg" alt="Imgur"></p>

<p>The architectural designer has ingeniously converted the space into a brightly lit coffee shop and bakery by removing the doors and shutters from the walls and windows; letting in as much natural light as possible.</p>

<h3 id="aneasilyacquiredtaste">An (easily) acquired taste</h3>

<p><img src="https://i.imgur.com/ZaXbybo.jpg" alt="Imgur"></p>

<p>The shop appears to be a bit unkempt at first sight, but as you spend your time lounging here, you start to realize how amazing it really is.  The owners have kept the original parts of the building, choosing to blend the furniture and decor in with the existing crumbling bits of brick, cement block, and reinforcement.  Dirty, broken walls and windows become the perfect backdrop for photos that will cause your Instagram followers to practically ooze jealousy.</p>

<p><img src="https://i.imgur.com/34Mh2RK.jpg" alt="Imgur"></p>

<p>The building’s various stairways lead to the second floor, where you can find an open air upper deck with a fairly decent view of the area.  It’s by no means the tallest building in the area, so you won’t be able to see too far.  Right now, the sky is clear and bright, the birds are chirping, the sunlight is warm, and the breeze is cool.  You couldn’t ask for much more in a spot to have a pastry and a spot of espresso…  except that all the electrical outlets are indoors.</p>

<p><img src="https://i.imgur.com/npgGrYm.jpg" alt="Imgur"></p>

<p>Heading inside, the space is a bit of a labyrinth - but excitingly so.  Upon turning each corner, you’re greeted by views of other patrons; sometimes through glass windows, sometimes through just a hole in the wall.  Wherever you decide to sit inside, there are outlets in the center of just about every table.  However, most of the seats are pretty uncomfortable if you’re looking to do several hours of work on a laptop.  My recommendation?  If you’re planning to be here for a while, set yourself up <em>next to</em> the seat you think would be the most comfortable to lounge at, because that spot is most likely taken.  When the current occupants get up and go, swoop in for the prize and feast on the succulent view of envious faces as they watch you scoop it up.</p>

<p><img src="https://i.imgur.com/YAOFFz6.jpg" alt="Imgur"></p>

<p>The foyer out back neatly tucks away a long community table; accompanied by a few small trees and crumbling bits of brick that must have been part of the old building there.  The interior of the connecting room, separated by a giant pane glass window, contains a similarly shaped table, giving it the appearance of extending out from the room into the foyer outside, and providing an ultra spacious feel.</p>

<p><img src="https://i.imgur.com/zEbBXyp.jpg" alt="Imgur"></p>

<p>I can’t stress enough how well the original materials fit into the theme of the shop itself.  It’s as if you are a part of the bakery, frozen in time to allow you to sit, dine, and get some work done.  In my honest opinion, it’s <strong>#rusticAF</strong>.</p>

<h3 id="andthefood">And the food?...</h3>

<p><img src="https://i.imgur.com/FaEX0jM.jpg" alt="Imgur"></p>

<p>The pastries from the bakery are amazing, and the coffee is, well, coffee.  Enough said.  I don’t drink <em>a latte</em> and I usually end up getting an espresso everywhere I go, so I wouldn’t trust myself with reviewing the frothier drinks.  I decided on the avocado toasted bread and some kind of powdered sugar mountain that I saw everyone else walking around with.  They were both excellent, and both very messy.  To be considerate, I ate outside before heading in to work.  The birds were in a veritable frenzy due to the continuous eruption of crumbs as I struggled to cut the delicious confections with my plastic fork and knife.  Anyhow, I’m sure you’ll enjoy whatever you end up choosing to get here.</p>

<h3 id="cozyattwilight">Cozy, at twilight</h3>

<p><img src="https://i.imgur.com/jcQubIc.jpg" alt="Imgur"></p>

<p>That’s about it for now.  Come here, get some cafe food goodness, and get some work done.  I promise it will be worthwhile.</p>]]></content:encoded></item><item><title><![CDATA[[Software/Git] Git mastery:  Your local repo is also a remote repo!]]></title><description><![CDATA[<h2 id="saywhat">Say what??</h2>

<p>Here is an interesting tidbit about your local Git repo.  You can push to it and pull/fetch from it.  Anything you can do with a remote repo, you can do with your local repo.</p>

<h3 id="howdoidothis">How do I do this?</h3>

<p>Just specify <code>.</code> where you would normally specify</p>]]></description><link>https://unenglishable.com/git-your-local-repo-is-also-a-remote-repo/</link><guid isPermaLink="false">517794b7-95f3-42e9-880c-f90d9d877f91</guid><dc:creator><![CDATA[unenglishable]]></dc:creator><pubDate>Sat, 07 Oct 2017 03:40:32 GMT</pubDate><content:encoded><![CDATA[<h2 id="saywhat">Say what??</h2>

<p>Here is an interesting tidbit about your local Git repo.  You can push to it and pull/fetch from it.  Anything you can do with a remote repo, you can do with your local repo.</p>

<h3 id="howdoidothis">How do I do this?</h3>

<p>Just specify <code>.</code> where you would normally specify a remote repo.</p>

<p><code>git push origin [local src]:[dest]</code> would push your local branch at <code>local src</code> to the remote <code>origin</code> branch <code>dest</code>.</p>

<p><code>git push . [local src]:[local dest]</code> would "push" your local branch at <code>local src</code> to the local branch <code>local dest</code>.</p>
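<p>Here's a quick sketch you can run in a throwaway repo to convince yourself this works (the branch name <code>wip</code> is made up for the demo) - it fast-forwards <code>wip</code> without ever checking it out:</p>

```shell
set -e
cd "$(mktemp -d)"
git init -q
git config user.email demo@example.com
git config user.name demo
main=$(git symbolic-ref --short HEAD)     # default branch (master or main)

echo one > file.txt && git add file.txt && git commit -qm first
git branch wip                            # wip points at the first commit
echo two >> file.txt && git add file.txt && git commit -qm second

git push -q . "$main":wip                 # fast-forward wip; HEAD never moves
git log --oneline wip                     # wip now includes the second commit
```

<p>As with a real remote, a non-fast-forward update is refused unless you force it with a leading <code>+</code> in the refspec.</p>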

<p>Of course, you can replace <code>local src</code> with a commit hash as well...</p>

<h3 id="whydoesthismatter">Why does this matter?</h3>

<p>I'm glad you asked.  This is actually an incredible resource, for the simple fact that you can <strong>push, pull, and fetch branches that you are not currently on</strong>.  In essence, this allows you to manipulate other branches <strong>without checking them out</strong>.</p>

<p>If you are like me, and need to update tons of branches while you have uncommitted code (Why though?  <em>Because reasons!</em>), being able to do so without changing your current <code>HEAD</code> is a lifesaver.  Just imagine you are pushing code to update remote branches, and again, replace the remote name with <code>.</code> and you're good to go!</p>

<h3 id="whointheirrightmindwouldeven">Who in their right mind would even...?</h3>

<p>Actually, if you use Git a lot and become interested in maintaining a good record of the <strong>history</strong> of code you have written (the <em>lines</em> in your git tree), you'll eventually want to be able to manipulate branches with precision.</p>

<p>If you're juggling changes that aren't checked in or committed, you <em>could</em> use a shit ton of <code>git stash</code> commands - which I have done on many occasions.  But that gets pretty messy and you have to mentally keep track of what's going on and where all of your code is.  If you can master this paradigm of "local repo as a remote repo", you won't have to dick around with the stash when you want to manipulate other branches.  Save <code>stash</code>ing for testing code in other branches ;)</p>]]></content:encoded></item><item><title><![CDATA[[Dev Infrastructure] Docker-compose logging in CircleCi]]></title><description><![CDATA[<p>I'm running <code>docker-compose</code> services as daemons in my CircleCI configuration.  It looks a bit like this:</p>

<pre><code># circle.yml
version: 2  
jobs:  
  build:
    ...
    steps:
      ...
      - run:
          name: Start docker resources
          command: |
            set -e
            docker-compose -f circleci-docker-compose.yml up -d (service)
            ...
      - run:
          name: run test
          ...
</code></pre>

<p>Here's the problem though:  there was</p>]]></description><link>https://unenglishable.com/docker-compose-logging-in-circleci/</link><guid isPermaLink="false">356912a3-6922-446f-ac59-d5c7c1230169</guid><dc:creator><![CDATA[unenglishable]]></dc:creator><pubDate>Wed, 04 Oct 2017 03:02:20 GMT</pubDate><content:encoded><![CDATA[<p>I'm running <code>docker-compose</code> services as daemons in my CircleCI configuration.  It looks a bit like this:</p>

<pre><code># circle.yml
version: 2  
jobs:  
  build:
    ...
    steps:
      ...
      - run:
          name: Start docker resources
          command: |
            set -e
            docker-compose -f circleci-docker-compose.yml up -d (service)
            ...
      - run:
          name: run test
          ...
</code></pre>

<p>Here's the problem though:  there was a problem happening in one of the services, but there was no way to debug it as written, because you can't see the console output of docker containers run with the <code>-d</code> option.</p>

<p>At first, I tried creating a log artifact by piping the <code>docker-compose logs</code> output to a file, then using <code>- store_artifacts</code> to save those files as CircleCI artifacts.  I could see the logs, but it turned out to be a huge pain in the ass to actually view the log files...</p>

<p>To view them, you have to click on the artifacts section in the build summary window and then download the file.  This is pretty obscure, and it is also not apparent to someone viewing the output, trying to understand why a test failed.</p>

<p>I fixed this by writing the output of the container log to the console!  A much better solution, but I found out about it in a really roundabout way.  It turns out that <code>- run</code> steps have an optional field called <code>when</code>, which defaults to <code>on_success</code>, ensuring that the step is only run if the previous steps succeed.  Setting the value to <code>on_fail</code> or <code>always</code> tells Circle to run the step even if previous steps failed.  You can use this to output the Docker container logs if the build/test fails:</p>

<pre><code># circle.yml
version: 2  
jobs:  
  build:
    ...
    steps:
      ...
      - run:
          name: Start docker resources
          command: |
            set -e
            docker-compose -f circleci-docker-compose.yml up -d (service)
            ...
      - run:
          name: run test
          ...
      - run:
          name: "Failure: output container logs to console"
          command: |
            docker-compose -f circleci-docker-compose.yml logs (service)
          when: on_fail
</code></pre>

<p>Voilà.  You can now view the Docker logs in the regular view, making it much simpler to debug.</p>]]></content:encoded></item><item><title><![CDATA[[Language] Ambinyms]]></title><description><![CDATA[<p>Today’s language nerdification post concerns the English word <strong>overlook</strong>.</p>

<p>Overlook falls into a peculiar category of words which already have a name (several actually), but which I will now call <strong>ambinyms</strong>.  These words can have opposite meanings depending on their usage.</p>

<p>Overlook word can mean either <em>to fail to</em></p>]]></description><link>https://unenglishable.com/ambinyms/</link><guid isPermaLink="false">a9cfa559-d312-41c7-a083-bcdf5f7a70fd</guid><dc:creator><![CDATA[unenglishable]]></dc:creator><pubDate>Thu, 21 Sep 2017 02:18:57 GMT</pubDate><content:encoded><![CDATA[<p>Today’s language nerdification post concerns the English word <strong>overlook</strong>.</p>

<p>Overlook falls into a peculiar category of words which already have a name (several actually), but which I will now call <strong>ambinyms</strong>.  These words can have opposite meanings depending on their usage.</p>

<p>Overlook can mean either <em>to fail to notice something</em> or <em>to have a view of something (from above)</em>.  This stems mainly from the word <strong>over</strong> being used both for something that is stationed above and for something that is passing by above.</p>

<p>For a few more examples and a bit of etymological insight, check out <a href="https://blogs.transparent.com/english/what-is-a-janus-word/">this other blog</a>.  In fact, that blog has also listed <strong>oversee</strong> as an ambinym - a nearly identical word, which still does have different meanings.</p>

<p>Quite peculiar, indeed.</p>]]></content:encoded></item><item><title><![CDATA[[Photography] Portrait Photography Portfoliø]]></title><description><![CDATA[<h3 id="digitalphotographysamples">Digital Photography Samples</h3>

<p><img src="https://i.imgur.com/tr5M5Jn.jpg" alt="a sample">
<img src="https://i.imgur.com/XkQihL8.jpg" alt="">
<img src="https://i.imgur.com/GlZd2YF.jpg" alt=""></p>

<p><img src="https://i.imgur.com/tz5OLVl.jpg" alt="">
<img src="https://i.imgur.com/3s2PK8q.jpg" alt=""></p>

<p><img src="https://i.imgur.com/r2EgJKe.jpg" alt="">
<img src="https://scontent-yyz1-1.cdninstagram.com/t51.2885-15/e35/13298027_1744955565750918_66498746_n.jpg" alt="sample"></p>

<p><img src="https://scontent-yyz1-1.cdninstagram.com/t51.2885-15/e35/13413415_180674999002424_1232002006_n.jpg" alt="sample"></p>

<p><img src="https://scontent-yyz1-1.cdninstagram.com/t51.2885-15/e35/14717549_1609464459358998_7099271887326281728_n.jpg" alt="sample"></p>

<p><img src="https://i.imgur.com/WvJmQEi.jpg" alt=""></p>

<p><img src="https://i.imgur.com/VviNddC.jpg" alt=""></p>

<h3 id="editing">Editing</h3>

<p>Raw image <br>
<img src="https://i.imgur.com/aLoB3vA.jpg" alt="example"></p>

<p>Optimized for Instagram (Increased brightness and color) <br>
<img src="https://i.imgur.com/GlZd2YF.jpg" alt=""></p>

<h3 id="35mmfilm">35mm Film</h3>

<p><img src="https://i.imgur.com/N2u6LRz.jpg" alt=""></p>

<p><img src="https://i.imgur.com/Pkdbawb.jpg" alt=""></p>]]></description><link>https://unenglishable.com/portfolio/</link><guid isPermaLink="false">07f22030-31ec-4554-8d04-52a156c9bb64</guid><dc:creator><![CDATA[unenglishable]]></dc:creator><pubDate>Tue, 12 Sep 2017 04:58:16 GMT</pubDate><content:encoded><![CDATA[<h3 id="digitalphotographysamples">Digital Photography Samples</h3>

<p><img src="https://i.imgur.com/tr5M5Jn.jpg" alt="a sample">
<img src="https://i.imgur.com/XkQihL8.jpg" alt="">
<img src="https://i.imgur.com/GlZd2YF.jpg" alt=""></p>

<p><img src="https://i.imgur.com/tz5OLVl.jpg" alt="">
<img src="https://i.imgur.com/3s2PK8q.jpg" alt=""></p>

<p><img src="https://i.imgur.com/r2EgJKe.jpg" alt="">
<img src="https://scontent-yyz1-1.cdninstagram.com/t51.2885-15/e35/13298027_1744955565750918_66498746_n.jpg" alt="sample"></p>

<p><img src="https://scontent-yyz1-1.cdninstagram.com/t51.2885-15/e35/13413415_180674999002424_1232002006_n.jpg" alt="sample"></p>

<p><img src="https://scontent-yyz1-1.cdninstagram.com/t51.2885-15/e35/14717549_1609464459358998_7099271887326281728_n.jpg" alt="sample"></p>

<p><img src="https://i.imgur.com/WvJmQEi.jpg" alt=""></p>

<p><img src="https://i.imgur.com/VviNddC.jpg" alt=""></p>

<h3 id="editing">Editing</h3>

<p>Raw image <br>
<img src="https://i.imgur.com/aLoB3vA.jpg" alt="example"></p>

<p>Optimized for Instagram (Increased brightness and color) <br>
<img src="https://i.imgur.com/GlZd2YF.jpg" alt=""></p>

<h3 id="35mmfilm">35mm Film</h3>

<p><img src="https://i.imgur.com/N2u6LRz.jpg" alt=""></p>

<p><img src="https://i.imgur.com/Pkdbawb.jpg" alt=""></p>]]></content:encoded></item><item><title><![CDATA[[DevOps] Running Ghost Blog on www and non www Host URLs]]></title><description><![CDATA[<p>I ran into a slight problem when running Ghost on my server.  The issue occurred when I tried to log into the admin interface (<code>/ghost</code>).  It complained about my URL, giving me the following error:</p>

<p><code>Access Denied from url: unenglishable.com. Please use the url configured in config.js</code></p>

<p>I</p>]]></description><link>https://unenglishable.com/running-ghost-both-on-www-and-non-www-host-urls/</link><guid isPermaLink="false">2e0f84f6-45a7-4c02-a393-8bee8c55b930</guid><dc:creator><![CDATA[unenglishable]]></dc:creator><pubDate>Tue, 28 Feb 2017 07:08:56 GMT</pubDate><content:encoded><![CDATA[<p>I ran into a slight problem when running Ghost on my server.  The issue occurred when I tried to log into the admin interface (<code>/ghost</code>).  It complained about my URL, giving me the following error:</p>

<p><code>Access Denied from url: unenglishable.com. Please use the url configured in config.js</code></p>

<p>I popped open the Chrome developer console and found that I was getting redirected from <code>www.unenglishable.com</code> to <code>unenglishable.com</code>.  My config.js url was set to <code>https://www.unenglishable.com</code>, but my certificate is only valid for <code>unenglishable.com</code>, so Chrome was redirecting to the domain the certificate was valid for.  (I recently switched to <a href="https://letsencrypt.org/">Let's Encrypt</a> for SSL, using <a href="https://github.com/certbot/certbot">Certbot</a>.)</p>

<p>I fixed this issue by changing my nginx config to point both unencrypted <code>http://www.unenglishable.com</code> and <code>http://unenglishable.com</code> to SSL-encrypted <code>https://unenglishable.com</code> via reverse proxy.  I then updated my config.js to have <code>url: 'https://unenglishable.com'</code>.  With the redirect in place, I can now visit either the www or non-www URL and everything works just fine.</p>
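<p>For reference, the shape of that nginx setup looks something like this (the certificate paths and the Ghost port are assumptions based on Certbot and Ghost defaults - adjust to your install):</p>

```nginx
# Redirect both http hosts to the canonical https host
server {
    listen 80;
    server_name unenglishable.com www.unenglishable.com;
    return 301 https://unenglishable.com$request_uri;
}

# Serve the canonical host over SSL and reverse-proxy to Ghost
server {
    listen 443 ssl;
    server_name unenglishable.com;
    ssl_certificate     /etc/letsencrypt/live/unenglishable.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/unenglishable.com/privkey.pem;

    location / {
        proxy_pass http://127.0.0.1:2368;           # Ghost's default port
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
```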

<p>Hope this helps!</p>

<p>Resolves: <br>
<code>Access Denied from url: [url]. Please use the url configured in config.js</code></p>]]></content:encoded></item><item><title><![CDATA[[Software/Git] Git:  Delete unused branches and remote trackers]]></title><description><![CDATA[<p>If you're like me, and like to keep your git tree clean and free of unruly unused branches, you always click "Delete this branch" when merging a pull request on GitHub.</p>

<p>But what about your local machine!!?!?!?  How do we keep the homegrounds nice and clean?</p>

<p>To remove the tracking</p>]]></description><link>https://unenglishable.com/git-delete-unused-branches-and-remote-trackers/</link><guid isPermaLink="false">1da1fb29-a4c2-49b0-968c-80fbcf24d9b8</guid><dc:creator><![CDATA[unenglishable]]></dc:creator><pubDate>Sat, 17 Dec 2016 01:42:33 GMT</pubDate><content:encoded><![CDATA[<p>If you're like me, and like to keep your git tree clean and free of unruly unused branches, you always click "Delete this branch" when merging a pull request on GitHub.</p>

<p>But what about your local machine!!?!?!?  How do we keep the homegrounds nice and clean?</p>

<p>To remove the tracking branches (origin/whatever) after you've merged and deleted them, use: <br>
<code>git fetch --prune</code></p>

<p>This fetch will get the latest changes from your remote repository while "pruning" your tree of remote-tracking branches (origin/whatever) whose counterparts have been deleted on the remote.</p>
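
<p>If you want this pruning to happen on every fetch, git has a built-in setting for it:</p>

<pre><code class="language-bash"># Prune stale remote-tracking branches automatically on every git fetch
git config --global fetch.prune true
</code></pre>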

<p>As for the merged local branches, use: <br>
<code>git branch --merged | grep -v master | xargs git branch -d</code></p>

<p>This command deletes local branches that have already been merged into your current branch.  The <code>grep -v</code> filters out <code>master</code> so that you don't delete your local master branch.</p>
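
<p>If you'd rather preview what will be deleted first, run the pipeline without the final <code>xargs</code> step:</p>

<pre><code class="language-bash"># Dry run: list local branches merged into the current branch, minus master
git branch --merged | grep -v master

# Delete them once the list looks right
git branch --merged | grep -v master | xargs git branch -d
</code></pre>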

<p>I just deleted about 50 excess branches with those two commands.  So, needless to say, these are quite useful for me.</p>

<p><strong><em>As a caution:</em></strong></p>

<p>Only use those if your team is okay with deleting unused branches.  My team isn't okay with that, but for personal projects, I like to keep my branch count to a minimum.</p>

<p>Cheers!</p>]]></content:encoded></item></channel></rss>