<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Containers</title><link>https://jwheel.org/tags/containers/</link><description>Homepage of Justin Wheeler, an Open Source contributor and Free Software advocate from Georgia, USA.</description><generator>Hugo -- gohugo.io</generator><language>en-us</language><managingEditor>Justin Wheeler</managingEditor><lastBuildDate>Thu, 19 Mar 2020 00:00:00 +0000</lastBuildDate><atom:link href="https://jwheel.org/rss/tags/containers/index.xml" rel="self" type="application/rss+xml"/><item><title>TeleIRC v2.0.0: March 2020 progress update</title><link>https://jwheel.org/blog/2020/03/teleirc-v2-0-0-march-2020-progress-update/</link><pubDate>Thu, 19 Mar 2020 00:00:00 +0000</pubDate><guid>https://jwheel.org/blog/2020/03/teleirc-v2-0-0-march-2020-progress-update/</guid><description><![CDATA[<p>Since September 2019, the <a href="https://ritlug.com/">RITlug</a> TeleIRC team has been hard at work on the <a href="https://github.com/RITlug/teleirc/milestone/8">v2.0.0 release</a> of TeleIRC. This blog post is a short update on what is coming in TeleIRC v2.0.0, our progress so far, and when to expect the next major release.</p>

<h2 id="whats-coming-in-teleirc-v200">What&rsquo;s coming in TeleIRC v2.0.0?&nbsp;<a class="hanchor" href="#whats-coming-in-teleirc-v200" aria-label="Anchor link for: What&rsquo;s coming in TeleIRC v2.0.0?">🔗</a></h2>
<p>TeleIRC v2.0.0 is a complete rewrite of TeleIRC. The team is migrating the code base <a href="https://github.com/RITlug/teleirc/issues/163">from NodeJS to Go</a>. In September 2019, the team began scoping the requirements and how to approach this large task. TeleIRC v2.0.0 does not add new features, but aims to have feature parity with the v1.x.x version of TeleIRC.</p>
<p>You might be asking, why bother with a total rewrite? What does this actually accomplish for the project? To answer this question, some historical context is needed!</p>

<h3 id="teleirc-v100-was-an-experiment">TeleIRC v1.0.0 was an experiment.&nbsp;<a class="hanchor" href="#teleirc-v100-was-an-experiment" aria-label="Anchor link for: TeleIRC v1.0.0 was an experiment.">🔗</a></h3>
<p><a href="https://github.com/RITlug/teleirc/releases/tag/v1.0.0">TeleIRC v1.0.0</a> was originally created and released in September 2016 by RIT alum <a href="https://github.com/repkam09">Mark Repka</a>. Mark created TeleIRC as a cool project for the RIT Linux Users Group (RITlug) when he was a student and vice president of RITlug. The project was written in hackathon spirit: to prove that something that was not yet common wasn&rsquo;t that hard to do.</p>
<p>Fast forward to today: TeleIRC turned out to be pretty popular! So have chat bridges (Matterbridge, Matrix/Riot, etc.) as a whole. The <a href="https://docs.fedoraproject.org/en-US/project/">Fedora Project</a> is one of our largest users, with a dedicated <a href="https://docs.fedoraproject.org/en-US/teleirc-sig/">Special Interest Group</a> to manage the bots. The <a href="https://www.libreoffice.org/about-us/who-are-we/">LibreOffice community</a> is another one of our biggest users. Several international communities also adopted TeleIRC to make their chat rooms more accessible to a new generation of open source fans. Some example users are Linux and BSD user groups and hackerspaces in Argentina, Albania, and across Asia. You can see the <a href="https://docs.teleirc.com/en/latest/about/who-uses-teleirc/">full list of TeleIRC users</a> for yourself.</p>
<p>TeleIRC has grown in a way we never thought it would. Which is awesome! But the project was not originally designed to grow or scale the way it has. Additionally, because the project is based at a university, contributors come and go as students graduate and move on to industry. We also have to think about how to maintain TeleIRC beyond the typical student life-cycle common in the academic world.</p>

<h3 id="lets-approach-teleirc-v200-as-engineers">Let&rsquo;s approach TeleIRC v2.0.0 as engineers.&nbsp;<a class="hanchor" href="#lets-approach-teleirc-v200-as-engineers" aria-label="Anchor link for: Let&rsquo;s approach TeleIRC v2.0.0 as engineers.">🔗</a></h3>
<p>A full rewrite allows us to fully leverage our knowledge as software engineers. In 2020, we know TeleIRC has a large user community and is an important part of how many open source communities communicate. We also know that breaking code into smaller, more modular pieces makes it easier to maintain and bring in new contributors. The rewrite also lets us apply the lessons the team has learned over the years, in a way that incremental feature releases do not allow.</p>
<p>A few areas are in clear focus for the TeleIRC v2.0.0 rewrite:</p>
<ol>
<li>Write clean, simple code that is easy to understand</li>
<li>Test the code so it is easy to tell when things are working and when they aren&rsquo;t</li>
<li>Think about how to bring in new contributors to continue the project in the future</li>
</ol>
<p>But maybe you are also asking, why the jump to Go?</p>

<h3 id="a-go-rewrite-distinguishes-our-project">A Go rewrite distinguishes our project.&nbsp;<a class="hanchor" href="#a-go-rewrite-distinguishes-our-project" aria-label="Anchor link for: A Go rewrite distinguishes our project.">🔗</a></h3>
<p>When Mark and I launched the project in 2016, we didn&rsquo;t look around to see if anything else like RITlug&rsquo;s TeleIRC already existed. Turns out, there was <a href="https://github.com/FruitieX/teleirc">another NodeJS project</a> with the same name. Skip forward a few years, and there are also projects like <a href="https://github.com/42wim/matterbridge">Matterbridge</a>, <a href="https://github.com/sfan5/pytgbridge">pytgbridge</a>, and <a href="https://github.com/xypiie/teleirc">other implementations</a>. So, with all this commotion out there these days, why bother with our version of yet another chat bridge?</p>
<p>First, one design principle sets our project apart from others like it: do one thing and do it well. Matterbridge is an excellent tool, and we even use it in conjunction with TeleIRC at our university. However, it is a complex tool with many features and options. For some people, this is a non-issue. But the TeleIRC team likes to think there is beauty in simplicity. Instead of offering a tool with the most features and configuration options, we aspire to do a single thing and to do it really well: connect Telegram groups and IRC channels together.</p>
<p>Second, although the FruitieX/teleirc project is archived today, it was once the biggest alternative to our project, also written in NodeJS. When we decided to launch TeleIRC v2.0.0 development, it had a larger community and user base than ours. So instead of offering a &ldquo;similar but different&rdquo; NodeJS project, we would be the first Telegram-IRC bridge written in Go. (Yes, Matterbridge is also written in Go, but see the above paragraph.)</p>
<p>Third… many of the existing maintainers of TeleIRC simply wanted an excuse to learn Go. It is an opportunity to expand our knowledge, experience, and skills, especially since we are students preparing to enter the industry.</p>

<h3 id="go-has-a-better-story-for-kubernetes--openshift">Go has a better story for Kubernetes / OpenShift.&nbsp;<a class="hanchor" href="#go-has-a-better-story-for-kubernetes--openshift" aria-label="Anchor link for: Go has a better story for Kubernetes / OpenShift.">🔗</a></h3>
<p>Finally, we are carefully considering the needs of one of our biggest downstream users: the <strong>Fedora Project</strong>. Several TeleIRC developers also support Fedora&rsquo;s TeleIRC SIG. Recently, the Fedora Infrastructure team launched an OpenShift instance for the Fedora community, called <a href="https://fedoraproject.org/wiki/Infrastructure/Communishift">Communishift</a>. All existing infrastructure in Fedora is gradually moving from virtual machines or OpenStack to OpenShift. To support this migration, we want to make a Go-based TeleIRC as easy to deploy in OpenShift as possible.</p>
<p>And fortunately, Go has a great story in the container orchestration world. Kubernetes and OpenShift are also Go-based projects. Go is the dominant language of this ecosystem. Its excellent performance in the niche of networking makes it a great choice for what TeleIRC does.</p>
<p>Now that you know more about the &ldquo;why is this happening,&rdquo; let&rsquo;s talk about where things are and what you can expect!</p>

<h2 id="teleirc-v200-progress-so-far">TeleIRC v2.0.0: Progress so far&nbsp;<a class="hanchor" href="#teleirc-v200-progress-so-far" aria-label="Anchor link for: TeleIRC v2.0.0: Progress so far">🔗</a></h2>
<p><strong>TeleIRC v2.0.0 is approximately 76% complete</strong>. All progress is tracked in the <a href="https://github.com/RITlug/teleirc/milestone/8">v2.0.0 milestone</a> on GitHub. <a href="https://github.com/RITlug/teleirc/milestone/8?closed=1">46 issues and pull requests were closed</a> since we began in September 2019. At publishing time, about 16 more issues and pull requests are left before we cut the v2.0.0 release.</p>
<p>Earlier in 2019, the maintainer team consisted of <a href="https://github.com/justwheel">Justin Wheeler</a>, <a href="https://github.com/Tjzabel">Tim Zabel</a>, <a href="https://github.com/xforever1313">Seth Hendrick</a>, <a href="https://github.com/thenaterhood">Nate Levesque</a>, <a href="https://github.com/nic-hartley">Nic Hartley</a>, and <a href="https://github.com/robbyoconnor">Robby O&rsquo;Connor</a>. Now joining the committer group, we are happy to welcome <strong><a href="https://github.com/Zedjones">Nicholas Jones</a>, <a href="https://github.com/10eMyrT">Kevin Assogba</a>, and <a href="https://github.com/kennedy">Kennedy Kong</a></strong> to the team. The current core group of maintainers for v2.0.0 are Justin, Tim, Nicholas, Kevin, and Kennedy.</p>

<h2 id="when-to-expect-teleirc-v200">When to expect TeleIRC v2.0.0&nbsp;<a class="hanchor" href="#when-to-expect-teleirc-v200" aria-label="Anchor link for: When to expect TeleIRC v2.0.0">🔗</a></h2>
<p>TeleIRC v2.0.0 is targeted for a release date of <strong>Friday, May 15th, 2020</strong>. At this point, we expect to have full feature parity with the v1.x.x version. We will recommend that all existing users upgrade to the latest release at that point.</p>
<p>In the meantime, the team is getting ready to <a href="https://github.com/RITlug/teleirc/issues/265">cut a v2.0.0-pre1 release</a>, our first &ldquo;pre-release&rdquo; of the Go port. We expect this release to be available on our <em><a href="https://github.com/RITlug/teleirc/releases">Releases</a></em> page by Saturday, March 28th. Along with the v2.0.0-pre1 release, there are a few other details to note:</p>
<ol>
<li><a href="https://github.com/RITlug/teleirc/milestone/9?closed=1">TeleIRC v1.5.0</a>, the final version of the NodeJS version, will be released.</li>
<li>No future contributions will be accepted to the NodeJS version.</li>
<li>The <code>master</code> branch in git will reflect the latest Go version of TeleIRC.</li>
</ol>
<p>Once the v2.0.0-pre1 release is available, we want your help taking it for a test drive! If TeleIRC is critical infrastructure for you, we do not recommend upgrading yet, as the pre-release does not have full feature parity. But your early feedback can help improve the next release while we are in active development.</p>

<h2 id="get-involved-with-teleirc">Get involved with TeleIRC!&nbsp;<a class="hanchor" href="#get-involved-with-teleirc" aria-label="Anchor link for: Get involved with TeleIRC!">🔗</a></h2>
<p>You can be a part of the upcoming TeleIRC v2.0.0 release. We&rsquo;d love your help! There is no formal commitment to contributing, although we ask for participation through a single sprint cycle.</p>
<p>Read our <a href="https://docs.teleirc.com/en/latest/dev/contributing/"><em>Contributing guidelines</em></a> on how to get started with TeleIRC. <a href="https://rit.bluejeans.com/564315135">Virtual developer meetings</a> take place every Saturday at 15:00 US EDT, so anyone can join and participate.</p>
<p>Come say hello in our developer chat rooms, either on <a href="https://webchat.freenode.net/#ritlug-teleirc">IRC</a> or in <a href="https://t.me/teleirc">Telegram</a>!</p>
<hr>
<p><em><a href="https://unsplash.com/photos/guiQYiRxkZY">Background photo</a> by <a href="https://unsplash.com/@epicantus">Daria Nepriakhina</a> on <a href="https://unsplash.com/">Unsplash</a>.</em></p>]]></description></item><item><title>HPC workloads in containers: Comparison of container run-times</title><link>https://jwheel.org/blog/2019/08/hpc-workloads-containers/</link><pubDate>Tue, 20 Aug 2019 00:00:00 +0000</pubDate><guid>https://jwheel.org/blog/2019/08/hpc-workloads-containers/</guid><description><![CDATA[<p>Recently, I worked on an interesting project to evaluate different container run-times for high-performance computing (HPC) clusters. HPC clusters are what we once knew as <a href="https://en.wikipedia.org/wiki/Supercomputer">supercomputers</a>. Today, instead of giant mainframes, they are hundreds, thousands, or tens of thousands of <a href="https://en.wikipedia.org/wiki/Massively_parallel">massively parallel</a> systems. Since performance is critical, virtualization with tools like virtual machines or Docker containers was not realistic. The overhead was too much compared to bare metal.</p>
<p>However, the times are a-changing! <a href="https://jwfblog.wpenginepowered.com/tag/containers/">Containers</a> are entering as real players in the HPC space. Previously, containers were brushed off as incompatible with most HPC workflows. Now, several open source projects are emerging with unique approaches to enabling containers for HPC workloads. This blog post evaluates four container run-times in an HPC context, as they stand in July 2019:</p>
<ul>
<li>Charliecloud</li>
<li>Shifter</li>
<li>Singularity</li>
<li>Podman</li>
</ul>

<h2 id="research-requirements">Research requirements&nbsp;<a class="hanchor" href="#research-requirements" aria-label="Anchor link for: Research requirements">🔗</a></h2>
<p>My research focused on a specific set of requirements. To receive a favorable review, a container run-time needed to meet three basic requirements:</p>
<ul>
<li>Support CentOS/RHEL 7.5+</li>
<li>Compatibility with <a href="https://en.wikipedia.org/wiki/Univa_Grid_Engine">Univa GridEngine</a></li>
<li>Support for very large numbers of users</li>
</ul>
<p>Obviously there are security concerns with the third requirement. This is one reason containers have not made a strong showing in the HPC world yet. With the Docker security model, root access is a requirement to build and run containers. In a production HPC environment where users do not trust other users, this is a hard blocker.</p>
<p>Other HPC environments may differ. If you are an HPC administrator and also considering containers in your environment, consider my requirements. My research was exclusively framed through these three requirements.</p>

<h2 id="charliecloud">Charliecloud&nbsp;<a class="hanchor" href="#charliecloud" aria-label="Anchor link for: Charliecloud">🔗</a></h2>
<p><a href="https://github.com/hpc/charliecloud">Charliecloud</a> is an open source project based on a user-defined software stack (UDSS). Like most container implementations, it uses Linux user namespaces to run unprivileged containers. It is designed to be as minimal and lightweight as possible, to the point of not adding features that could conflict with any specific use cases. This can be a positive or a negative, depending on how complex your environment is.</p>
<p>However, I abandoned my research on Charliecloud early on after reading this <a href="https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0177459">PLOS research paper</a>:</p>
<blockquote>
<p>The software makes use of kernel namespaces that are not deemed stable by multiple prominent distributions of Linux (e.g. <strong>no versions of Red Hat Enterprise Linux or compatibles support it</strong>), and may not be included in these distributions for the foreseeable future.</p>
<p>The software is emphasized for its simplicity and being less than 500 lines of code, and this is an indication of having a lack of user-driven features. The containers are not truly portable because they must be extracted from Docker and configured by an external C executable before running, and even after this step, all file ownership and permissions are dependent on the user running the workflow.</p>
<p><a href="https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0177459">Singularity: Scientific containers for mobility of compute</a>, May 2017 (Gregory M. Kurtzer, Vanessa Sochat, Michael W. Bauer)</p>
</blockquote>
<p>However, it is worth noting this paper was written in support of Singularity. It was also written by the Singularity project lead and others from the Singularity open source community. If you are conducting your own independent research, consider looking closer at Charliecloud, since at the time of writing it is still actively developed. The research paper was written in May 2017.</p>
<p><em>Edit</em>: This situation already changed and Charliecloud is probably worth a deeper look:</p>
<blockquote class="twitter-tweet" data-dnt="true"><p lang="en" dir="ltr">This is a fantastic write up, one thing to mention is that abandoning the CharlieCloud research based solely on lack of support of the user kernel namespace is no longer a blocker. For example, PodMan now uses the same technology and it was released in RHEL8.</p>&mdash; Apptainer (formerly Singularity) (@SingularityApp) <a href="https://twitter.com/SingularityApp/status/1163846727700344834?ref_src=twsrc%5Etfw">August 20, 2019</a></blockquote>
<script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>



<h2 id="shifter">Shifter&nbsp;<a class="hanchor" href="#shifter" aria-label="Anchor link for: Shifter">🔗</a></h2>
<p><a href="https://github.com/NERSC/shifter">Shifter</a> is another container run-time implementation focused on HPC users. At time of writing, it is almost exclusively backed by the <a href="https://www.nersc.gov/">National Energy Research Scientific Computing Center</a> and <a href="https://www.cray.com/">Cray</a>. Most documented use cases use <a href="https://slurm.schedmd.com/">Slurm</a> for cluster management / job scheduling. Instead of a Docker/OCI format, it uses its own Shifter-specific format, but this is reverse-compatible with Docker container images. It requires hosting a registry service and a <strong>Shifter Image Gateway</strong>.</p>
<p>The Shifter Image Gateway is a REST interface implemented with <a href="https://palletsprojects.com/p/flask/">Python Flask</a>. It pulls images from the registry service and converts them to the Shifter image format. MPI integration is supported but its implementation is MPICH-centric.</p>
<p>The downside to Shifter is its lack of community. Few organizations other than NERSC and Cray appear to support Shifter. <a href="https://github.com/NERSC/shifter/tree/master/doc">Documentation exists</a>, but at writing time (July 2019), the last significant contribution was April 2018. Some bugs and feature requests are triaged, but there is not much of a maintainer presence in these issues. Most follow-up discussion on new issues comes from a handful of outside contributors without commit access.</p>
<p>Additionally, there are several signs of stagnant development, such as <a href="https://github.com/NERSC/shifter/pull/172">NERSC/shifter#172</a> to add better MPI integration. That PR has stalled since it was first opened in April 2017. Furthermore, there is a bus factor problem: most contributions and pull requests come from the same two developers, indicating low engagement from the wider HPC community. Code is <a href="https://travis-ci.org/NERSC/shifter">regularly tested</a>, but integration tests <a href="https://travis-ci.org/NERSC/shifter/jobs/541868408#L880-L969">only exist for Slurm</a>. For more details, check out the <a href="https://github.com/NERSC/shifter/pulse">GitHub project pulse</a>.</p>
<p>A detail worth noting: Shifter was one of the first real container run-times for HPC. A former Shifter collaborator branched off from Shifter to start Singularity (and eventually, a for-profit company to support it, Sylabs). This history leaves room for personal bias when evaluating Shifter and Singularity, especially if you are not a newcomer to the HPC community.</p>

<h2 id="singularity">Singularity&nbsp;<a class="hanchor" href="#singularity" aria-label="Anchor link for: Singularity">🔗</a></h2>
<p><a href="https://sylabs.io/singularity/">Singularity</a> is the third and last HPC-specific player in the container run-time world. The vendor is <a href="https://sylabs.io/about-us/mission">Sylabs Inc</a>. There are a few different factors that make Singularity interesting, and in my opinion, the most promising HPC container implementation.</p>

<h3 id="general-overview">General overview&nbsp;<a class="hanchor" href="#general-overview" aria-label="Anchor link for: General overview">🔗</a></h3>
<p>Singularity v3.x.x is written almost entirely in Golang. It supports two image formats: Docker/OCI and Singularity&rsquo;s native Single Image Format (SIF). As of September 2018, there are an estimated 25,000+ systems running Singularity, including users like <a href="https://www.tacc.utexas.edu/">TACC</a>, <a href="https://www.sdsc.edu/">San Diego Supercomputer Center</a>, and <a href="https://www.ornl.gov/">Oak Ridge National Laboratory</a>. Additionally, Univa <a href="http://www.univa.com/about/news/press_2018/07312018.php">announced a partnership</a> with Sylabs in July 2018 to bring Singularity workflows to Univa GridEngine.</p>
<p>Sylabs offers Singularity (free and open source) and SingularityPRO (paid and proprietary). The commercial version comes with a support contract and long-term support for some releases (among other things).</p>
<p>Admin/root access is not required to run Singularity containers, and no additional configuration is needed to get this behavior out of the box. Containers run under the Linux user ID that launches them (see <em><a href="https://sylabs.io/guides/2.6/user-guide/introduction.html#security-and-privilege-escalation">Security and privilege escalation</a></em>).</p>
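<p>As a quick illustration (a hedged sketch, not taken from the Singularity docs: the image tag, output filename, and username are placeholders), pulling a Docker Hub image and running a command inside it shows the container process running as the same unprivileged user who launched it:</p>
<pre tabindex="0"><code>$ whoami
jdoe
$ singularity pull alpine.sif docker://alpine:3.10
$ singularity exec alpine.sif whoami
jdoe
</code></pre>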
<p>At a quick glance, Sylabs developers appear to be <a href="https://github.com/sylabs">actively engaged</a> in the Kubernetes development community, particularly around Red Hat technology. They also seem to keep their promises: in early 2018, blog posts show ambitious feature promises for the then-upcoming v3.0.0 release at the end of the year. Near the end of 2018, the release was delivered on-time with most/all of the promised functionality.</p>

<h3 id="image-formats">Image formats&nbsp;<a class="hanchor" href="#image-formats" aria-label="Anchor link for: Image formats">🔗</a></h3>
<p>The Singularity Image Format (SIF) is a single-image format (i.e. no layers involved). This was a design decision specifically for HPC workloads. SIFs are treated like a binary executable by a Linux user. Additionally, it is possible to create SIFs using the <a href="https://sylabs.io/guides/3.3/user-guide/definition_files.html#sections">Definition File</a> spec.</p>
<p>However, Singularity is also compatible with Docker/OCI images and OCI is given <a href="https://github.com/sylabs/singularity/labels/OCI">active development focus</a> by upstream Singularity. Docker/OCI images are converted on-the-fly to a SIF. Docker/OCI images can be used locally or pulled from a remote registry like Docker Hub or <a href="https://www.openshift.com/products/quay">Quay</a>. To the user, if using a Docker/OCI image, the conversion is seamless and does not require additional configuration to use.</p>
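<p>For a feel of both formats, here is a hedged sketch: a minimal definition file (the package names and script contents are arbitrary examples), built into a SIF, followed by running a command against a Docker Hub image directly so the on-the-fly conversion happens transparently.</p>
<pre tabindex="0"><code># hello.def -- illustrative only
Bootstrap: docker
From: alpine:3.10

%post
    apk add --no-cache python3

%runscript
    exec python3 --version
</code></pre>
<pre tabindex="0"><code>$ sudo singularity build hello.sif hello.def
$ singularity exec docker://alpine:3.10 cat /etc/os-release
</code></pre>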
<p>See <a href="https://web.archive.org/web/20190726223349/https://archive.sylabs.io/2018/03/sif-containing-your-containers/">this Sylabs blog post</a> for a deeper dive on how SIFs were designed.</p>

<h3 id="flexible-configuration">Flexible configuration&nbsp;<a class="hanchor" href="#flexible-configuration" aria-label="Anchor link for: Flexible configuration">🔗</a></h3>
<p>Singularity (uniquely?) offers advanced configuration options for HPC administrators. Some highlights are detailed here, with an illustrative configuration excerpt after the list:</p>
<ul>
<li><strong>Controlling bind mounts</strong>:
<ul>
<li><code>mount dev = minimal</code>: Only binds <code>null</code>, <code>zero</code>, <code>random</code>, <code>urandom</code>, and <code>shm</code> into container</li>
<li><code>mount home = {yes,no}</code>, <code>mount tmp = {yes,no}</code>: Choose to enable or disable these bind mounts globally</li>
<li><code>bind path = &quot;&quot;</code>: Bind specific paths into containers by default</li>
<li><code>user bind control = {yes,no}</code>: Allow users to include their own bind mount paths or limit it to an admin-approved set of paths (above)</li>
</ul>
</li>
<li><strong>Controlling containers</strong>:
<ul>
<li><code>limit container paths =</code>: Possible to limit SIFs provided at a specific path and nowhere else</li>
</ul>
</li>
</ul>
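<p>Assembled into the main configuration file, the options above might look something like this excerpt (the paths and values are illustrative examples, not recommendations):</p>
<pre tabindex="0"><code># /etc/singularity/singularity.conf (illustrative excerpt)
mount dev = minimal
mount home = no
mount tmp = yes
bind path = /scratch
bind path = /opt/site-software
user bind control = no
limit container paths = /approved-sifs
</code></pre>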

<h3 id="hpc-community-engagement">HPC community engagement&nbsp;<a class="hanchor" href="#hpc-community-engagement" aria-label="Anchor link for: HPC community engagement">🔗</a></h3>
<p>These notes only apply to Singularity free, not the proprietary SingularityPRO product.</p>
<p>The signals from their open source community engagement are positive and strong. They appear authentic and genuine to an <a href="https://sylabs.io/resources/community">open source commitment</a> (i.e. not <a href="https://blogs.gnome.org/bolsh/2010/07/19/rotten-to-the-open-core/">open-core business model</a>). This is demonstrated in a few ways:</p>
<p>First, they have <a href="https://sylabs.io/guides/3.3/user-guide/">thorough user documentation</a>, intended for end-users in HPC environments using Singularity. They also have less thorough but still useful <a href="https://sylabs.io/guides/3.2/admin-guide/">admin documentation</a>.</p>
<p>Second, all issues are triaged quickly and get feedback from core developers or outside contributors at a consistent pace. Pull requests don&rsquo;t stagnate either: the oldest PR is less than six months old.</p>
<p>Third, code is regularly tested (<a href="https://travis-ci.org/sylabs/singularity">1</a>, <a href="https://circleci.com/gh/sylabs/singularity/tree/master">2</a>). The code generally follows <a href="https://goreportcard.com/report/github.com/sylabs/singularity">best practices</a> (i.e. it is not atrocious to work with).</p>
<p>Fourth, there are also a handful of active contributors (both developers and in the community support channels) who come from outside of Sylabs, which indicates more engagement by a wider audience of people.</p>
<p>For more statistics, check out the <a href="https://github.com/sylabs/singularity/pulse">GitHub project pulse</a>.</p>

<h2 id="podman">Podman&nbsp;<a class="hanchor" href="#podman" aria-label="Anchor link for: Podman">🔗</a></h2>
<p><em>tl;dr</em>: Podman is an underdog that shows promise, but likely needs another year or two to mature for most HPC use cases.</p>
<p><a href="https://podman.io/">Podman</a> is a container run-time developed by Red Hat. Its primary goal is to be a drop-in replacement for Docker. While it is not explicitly designed with HPC use cases in mind, it intends to be a lightweight &ldquo;wrapper&rdquo; to run containers without the overhead of the full Docker daemon. Furthermore, the Podman development team is recently looking into better support for HPC use cases.</p>
<p>Podman currently falls short for HPC use cases for a few reasons:</p>
<ol>
<li><a href="https://github.com/containers/libpod/issues/3478">Missing support for parallel filesystems</a> (e.g. <a href="https://en.wikipedia.org/wiki/IBM_Spectrum_Scale">IBM Spectrum Scale</a>)</li>
<li>Rootless Podman was designed to <a href="https://github.com/containers/libpod/blob/master/rootless.md">use kernel user namespaces</a> which is <a href="https://github.com/containers/libpod/issues/3561">not compatible with most parallel filesystems</a> (might change in a year or two)</li>
<li><a href="https://github.com/containers/libpod/issues/3587">Not yet possible to set system site policy defaults</a></li>
<li><a href="https://github.com/containers/libpod/issues/3589">Pulling Docker/OCI images requires multiple subuids/subgids</a> (might change in a year or two)</li>
</ol>
<p>Where Podman does shine is providing a way to run <strong><em>and</em></strong> build containers without root access or <code>setuid</code>.</p>
<p>The challenges Podman faces to run OCI containers in an HPC environment are the same ones Singularity faces to build SIF images without root in that environment: <strong>mapping UIDs to subuids/subgids on the compute nodes</strong>. More interestingly, <strong><a href="https://buildah.io/">Buildah</a></strong> offers a promising way to enable users to build container images as Docker/OCI images all without root. It is plausible to use Buildah as the container image delivery mechanism and swap out the container run-time implementation (Podman vs. Singularity) depending on specific needs and requirements.</p>
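<p>To make the subuid/subgid requirement concrete, here is a hedged sketch of rootless usage on a single node (the ID ranges, image, tag, and username are examples; the commands reflect the Podman and Buildah CLIs as of mid-2019):</p>
<pre tabindex="0"><code>$ grep jdoe /etc/subuid /etc/subgid
/etc/subuid:jdoe:100000:65536
/etc/subgid:jdoe:100000:65536
$ podman run --rm docker.io/library/alpine:3.10 id -u
0
$ buildah bud -t example/app:latest .
</code></pre>
<p>Without those subordinate ID ranges provisioned on every compute node, pulling multi-user images and running rootless containers breaks down, which is exactly the mapping problem described above.</p>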

<h2 id="what-do-you-think">What do you think?&nbsp;<a class="hanchor" href="#what-do-you-think" aria-label="Anchor link for: What do you think?">🔗</a></h2>
<p>I hope other folks out there in the HPC world find this preliminary research useful. Do you agree or disagree with any parts of this write-up? Is something out-of-date? Drop a comment down below.</p>]]></description></item><item><title>How to automatically scale Kubernetes with Horizontal Pod Autoscaling</title><link>https://jwheel.org/blog/2018/03/kubernetes-horizontal-pod-autoscaling/</link><pubDate>Tue, 06 Mar 2018 00:00:00 +0000</pubDate><guid>https://jwheel.org/blog/2018/03/kubernetes-horizontal-pod-autoscaling/</guid><description><![CDATA[<p>Scale is a critical part of how we develop applications in today&rsquo;s world of infrastructure. Now, containers and container orchestration like Docker and <a href="https://jwfblog.wpenginepowered.com/2017/07/introduction-kubernetes-fedora/">Kubernetes</a> make it easier to think about scale. One of the &ldquo;magical&rdquo; things about Kubernetes is that when you have a sudden increase in load, your infrastructure scales up and grows to accommodate it. How does this work? With <strong>Horizontal Pod Autoscaling</strong>, Kubernetes adds more pods when you have more load and drops them once things return to normal.</p>
<p>This article covers Horizontal Pod Autoscaling, what it is, and how to try it out with the <a href="https://kubernetes.io/docs/tutorials/stateless-application/guestbook/">Kubernetes guestbook</a> example. By the end of this article, you will…</p>
<ul>
<li>Understand what Horizontal Pod Autoscaling (HPA) is</li>
<li>Be able to create an HPA in Kubernetes</li>
<li>Create an HPA for the Guestbook and watch it work with <a href="https://github.com/JoeDog/siege">Siege</a></li>
</ul>

<h2 id="what-is-horizontal-pod-autoscaling">What is Horizontal Pod Autoscaling?&nbsp;<a class="hanchor" href="#what-is-horizontal-pod-autoscaling" aria-label="Anchor link for: What is Horizontal Pod Autoscaling?">🔗</a></h2>
<p><a href="https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/">Horizontal Pod Autoscaling</a> (HPA) is a Kubernetes API resource to dynamically grow an environment. To help simplify things, consider it in three pieces:</p>
<ul>
<li><strong>Horizontal</strong>: Think of <em>horizontal</em> growth, i.e. adding more nodes to your available pool (unlike <em>vertical</em>, which would be adding more memory / CPU to your existing nodes)</li>
<li><strong>Pod</strong>: Your deployable units in Kubernetes</li>
<li><strong>Autoscaling</strong>: Automatically scaling out when needed</li>
</ul>
<p>
<figure>
  <img src="/blog/2017/08/k8s-hpa.png" alt="Diagram to explain how Horizontal Pod Autoscaler (HPA) works" loading="lazy">
  <figcaption>Diagram to explain how a Horizontal Pod Autoscaler (HPA) works. From Kubernetes documentation (<a href="https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/" class="bare">https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/</a>).</figcaption>
</figure>
</p>
<p>To help visualize it, imagine you have a <a href="http://flask.pocoo.org/">Python Flask</a> web server that reads and writes data to a <a href="https://redis.io/">Redis</a> back-end. Your web server is the front-end for all of your incoming traffic. You run it with three pods in Kubernetes, with 512MB of RAM and 50m of CPU. Now, suddenly, BuzzFeed writes an article about your app, Kanye West name drops the app in a TV interview, and the president of the United States retweets a link to your site.</p>
<p>Oops.</p>
<p>Now you have a serious problem on your hands, where your tiny application is overloaded. Three pods aren&rsquo;t cutting it anymore. You get woken up at 3:00am to hastily adjust the number of replicas and rapidly scale your infrastructure. While you&rsquo;re wondering <em>how this happened</em>, you also wonder… isn&rsquo;t there an easier way? Could I have avoided this panicked, pre-dawn scaling crisis? Yes, there is! At least, somewhat.</p>

<h4 id="building-to-scale">Building to scale&nbsp;<a class="hanchor" href="#building-to-scale" aria-label="Anchor link for: Building to scale">🔗</a></h4>
<p>By creating and managing your deployments with HPAs, your application grows horizontally to handle the load. As the CPU utilization rises, HPAs trigger the addition of more pods to scale automatically. For example, you could have created a Horizontal Pod Autoscaler that begins scaling when cumulative CPU utilization reaches 60%. You could also tell it to scale to a maximum of 500 pods, but never fewer than three. So then, when the Apocalypse of Viral Sharing happened to your web application, it could have grown dynamically.</p>
<p>If you want to dive deeper in the technical implementation of HPAs, you can read more in the <a href="https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/">Kubernetes documentation</a>.</p>

<h2 id="create-a-horizontal-pod-autoscaler">Create a Horizontal Pod Autoscaler&nbsp;<a class="hanchor" href="#create-a-horizontal-pod-autoscaler" aria-label="Anchor link for: Create a Horizontal Pod Autoscaler">🔗</a></h2>
<p>Now that you understand how a Horizontal Pod Autoscaler (HPA) is helpful, how do you create one? Like any other resource in Kubernetes, define HPAs in a YAML definition file. Here&rsquo;s a template for getting started.</p>
<pre tabindex="0"><code>---
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: my-app-hpa
  namespace: my-app-space
  labels:
    app: my-app
    tier: frontend
spec:
  scaleTargetRef:
    apiVersion: apps/v1beta1
    kind: Deployment
    name: my-app-deployment
  minReplicas: 2
  maxReplicas: 20
  targetCPUUtilizationPercentage: 60
</code></pre><p>This is the minimal spec you need to deploy an HPA. It&rsquo;s not that different from other Kubernetes resources you may have seen.</p>

<h4 id="explaining-the-configuration">Explaining the configuration&nbsp;<a class="hanchor" href="#explaining-the-configuration" aria-label="Anchor link for: Explaining the configuration">🔗</a></h4>
<p>Let&rsquo;s look at what some of the specific lines mean.</p>
<ul>
<li><code>spec.scaleTargetRef.name</code>: Name of resource to scale (e.g. name of a deployment)</li>
<li><code>spec.minReplicas</code>: Minimum number of replicas running when CPU use is minimal</li>
<li><code>spec.maxReplicas</code>: Maximum number of replicas running when CPU use peaks</li>
<li><code>spec.targetCPUUtilizationPercentage</code>: Percentage threshold when HPA begins scaling out pods</li>
</ul>
<p>When starting out for the first time, tweak these values based on the amount of traffic you expect to receive or what your budget is. Load testing your application is one way to see the HPAs do their job.</p>

<h2 id="obliterating-the-guestbook">Obliterating the Guestbook&nbsp;<a class="hanchor" href="#obliterating-the-guestbook" aria-label="Anchor link for: Obliterating the Guestbook">🔗</a></h2>
<p>But this guide wouldn&rsquo;t be complete without a live demo to try. You can create one with an existing application and put it to the test. This section assumes you have a running <a href="https://kubernetes.io/docs/tutorials/stateless-application/guestbook/">Guestbook application</a> in your Kubernetes environment. As a quick refresh, the Guestbook is a three-part application:</p>
<ul>
<li>PHP web application for writing messages into a virtual guestbook</li>
<li>Primary Redis node for writing new messages from web page</li>
<li>Replica Redis nodes for reading the data into web page</li>
</ul>
<p>We&rsquo;ll add an HPA as a fourth part to scale the PHP web application for new traffic.</p>

<h4 id="create-the-hpa-for-guestbook">Create the HPA for Guestbook&nbsp;<a class="hanchor" href="#create-the-hpa-for-guestbook" aria-label="Anchor link for: Create the HPA for Guestbook">🔗</a></h4>
<p>Now, create a new HPA spec file for the guestbook.</p>
<pre tabindex="0"><code>---
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: guestbook-frontend
  namespace: guestbook
  labels:
    app: guestbook
    env: production
    tier: frontend
spec:
  scaleTargetRef:
    apiVersion: apps/v1beta1
    kind: Deployment
    name: guestbook-frontend
  minReplicas: 2
  maxReplicas: 10
  targetCPUUtilizationPercentage: 75
</code></pre><p>Put this into a file and create the HPA with <code>kubectl</code>.</p>
<pre tabindex="0"><code>$ kubectl apply --record -f guestbook-frontend-hpa.yaml
</code></pre><p>Now, the Horizontal Pod Autoscaler is operational and monitoring the CPU utilization of your deployment.</p>
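<p>As an aside, the same autoscaler can also be created imperatively with <code>kubectl autoscale</code>, which is handy for quick experiments (the declarative YAML above remains the better option for anything you keep in version control):</p>
<pre tabindex="0"><code>$ kubectl autoscale deployment guestbook-frontend --namespace guestbook --min=2 --max=10 --cpu-percent=75
$ kubectl get hpa --namespace guestbook
</code></pre>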

<h4 id="load-test-with-siege">Load test with Siege&nbsp;<a class="hanchor" href="#load-test-with-siege" aria-label="Anchor link for: Load test with Siege">🔗</a></h4>
<p>To force the HPA into action, we&rsquo;ll use <a href="https://github.com/JoeDog/siege">Siege</a>, an HTTP load testing and benchmark utility. Siege is a multi-threaded load testing tool with a few extra capabilities that make it a good option for putting some force onto a simple web app.</p>
<p>First, put various permutations of the URL in a plaintext file. With this file, Siege can run in &ldquo;Internet mode,&rdquo; randomly selecting a URL from the list for each request. This could look like the following…</p>
<pre tabindex="0"><code>http://my-guestbook.example.com/
http://my-guestbook.example.com/index.html
http://my-guestbook.example.com/guestbook.php
http://my-guestbook.example.com/guestbook.php?cmd=get&amp;key=messages
</code></pre><p>Once this is done, you can fire up Siege to begin load testing. In this case, to get fast results, we&rsquo;ll use 255 concurrent users for five minutes, using Internet and benchmark modes.</p>
<pre tabindex="0"><code>$ siege --verbose --benchmark --internet --concurrent 255 --time 10M --file siege-urls.txt
</code></pre><p>You should see Siege begin to rapidly send requests to your Guestbook application. Now that the action is in progress, you can observe your CPU utilization begin to climb. Watch it change by using <code>watch</code>.</p>
<pre tabindex="0"><code>$ watch -d -n 2 -b -c kubectl get hpa -l app=guestbook
</code></pre><p>During the five minute load test, you should notice CPU usage rise and then new replicas appear. Depending on what your original requests and limits are for the deployment, you will see different results. If nothing seems to happen while testing, try setting the deployment&rsquo;s requests / limits to lower values, as in the excerpt below.</p>
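<p>Remember that the HPA percentage is measured against the CPU <em>request</em> declared by the pods, so the deployment needs one. Here is a hedged excerpt of what the frontend container spec might look like (the container name and image follow the upstream guestbook example; the resource values are arbitrary, chosen low so the autoscaler triggers quickly):</p>
<pre tabindex="0"><code>containers:
  - name: php-redis
    image: gcr.io/google-samples/gb-frontend:v4
    resources:
      requests:
        cpu: 50m
        memory: 100Mi
      limits:
        cpu: 100m
        memory: 128Mi
</code></pre>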

<h2 id="learn-more-about-horizontal-pod-autoscaler">Learn more about Horizontal Pod Autoscaler&nbsp;<a class="hanchor" href="#learn-more-about-horizontal-pod-autoscaler" aria-label="Anchor link for: Learn more about Horizontal Pod Autoscaler">🔗</a></h2>
<p>Horizontal Pod Autoscalers are a stable resource in Kubernetes and are available for you to begin playing around with now. To learn more, read the <a href="https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/">documentation</a> or see another example in the <a href="https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/">official walkthrough</a>.</p>]]></description></item><item><title>ListenBrainz community gardening and user statistics</title><link>https://jwheel.org/blog/2017/11/listenbrainz-community-user-statistics/</link><pubDate>Mon, 13 Nov 2017 00:00:00 +0000</pubDate><guid>https://jwheel.org/blog/2017/11/listenbrainz-community-user-statistics/</guid><description><![CDATA[<p><em>This post is part of a series of posts where I contribute to the ListenBrainz project for my independent study at the Rochester Institute of Technology in the fall 2017 semester. For more posts, find them in <a href="https://jwfblog.wpenginepowered.com/tag/rit-2171/">this tag</a>.</em></p>
<hr>
<p>My progress with ListenBrainz slowed, but I am resuming the pace of contributing and advancing on my independent study timeline. This past week, I finished my assigned tasks around contributor-related documentation, like a Code of Conduct, contributor guidelines, and a pull request template. I began research on user statistics and found some already written. I wrote one of my own, but I need to learn more about Google BigQuery to advance further.</p>

<h2 id="paving-the-contributor-pathway">Paving the contributor pathway&nbsp;<a class="hanchor" href="#paving-the-contributor-pathway" aria-label="Anchor link for: Paving the contributor pathway">🔗</a></h2>
<p>
<figure>
  <img src="/blog/2017/11/Screenshot-from-2017-11-13-02-05-12.png" alt="Making it easier for people to contribute user statistics to ListenBrainz" loading="lazy">
  <figcaption>Making it easier for people to contribute to ListenBrainz with helpful contributing guidelines</figcaption>
</figure>
</p>
<p>Earlier, I identified weaknesses for the ListenBrainz contributor pathway and found ways we could improve the pathway. This started with the development environment documentation. Now, I helped draft first revisions of our <a href="https://github.com/metabrainz/listenbrainz-server/pull/287">contributor guidelines</a>, <a href="https://github.com/metabrainz/listenbrainz-server/pull/286">Code of Conduct reference</a>, and <a href="https://github.com/metabrainz/listenbrainz-server/pull/288">pull request templates</a>. Together, these three documents have two goals.</p>
<ol>
<li><strong>Make it easier</strong> to contribute to ListenBrainz</li>
<li>Have a better experience and <strong>have fun</strong> contributing!</li>
</ol>
<p>Adding these documents addresses both goals. Additionally, the <a href="https://github.com/metabrainz/listenbrainz-server/community">GitHub community profile</a> highlights these deliverables as ways to meet them. After getting feedback and seeing what others think, we will make more revisions later (with some trial runs).</p>

<h2 id="back-to-selinux-context-flags">Back to SELinux context flags&nbsp;<a class="hanchor" href="#back-to-selinux-context-flags" aria-label="Anchor link for: Back to SELinux context flags">🔗</a></h2>
<p>Recently, I set my desktop back up and installed Docker for the first time on this machine; however, the development environment still failed to start. When I ran the script, it would eventually error out because of a permission denial. The web server image for ListenBrainz was failing.</p>
<p>After debugging, I noticed that I missed the SELinux volume tags for the ListenBrainz web server images in my original pull request, <a href="https://github.com/metabrainz/listenbrainz-server/pull/257">#257</a>. When I created the pull request, I might have had cached data that let my laptop run the development environment without a problem. In any case, it was an easy fix and I knew what the issue was when it happened. Therefore, I submitted a new fix in <a href="https://github.com/metabrainz/listenbrainz-server/pull/290">#290</a>.</p>

<h2 id="writing-new-user-statistics">Writing new user statistics&nbsp;<a class="hanchor" href="#writing-new-user-statistics" aria-label="Anchor link for: Writing new user statistics">🔗</a></h2>
<p>The most interesting part of my independent study is working with the music data to build and generate statistics. I finally began exploring the <a href="https://github.com/metabrainz/listenbrainz-server/tree/master/listenbrainz/stats">existing statistics</a> in ListenBrainz. The statistic queries use BigQuery standard SQL. BigQuery rapidly scans and scales data queries for better performance (I have a lot to learn about BigQuery).</p>

<h4 id="two-types-of-statistics">Two types of statistics&nbsp;<a class="hanchor" href="#two-types-of-statistics" aria-label="Anchor link for: Two types of statistics">🔗</a></h4>
<p>Additionally, ListenBrainz generates <strong>two types</strong> of statistics:</p>
<ol>
<li>Site-wide statistics</li>
<li>User statistics</li>
</ol>
<p>Site-wide statistics are metrics non-specific to a single user. There is only <a href="https://github.com/metabrainz/listenbrainz-server/blob/master/listenbrainz/stats/sitewide.py">one site-wide query</a> now. It counts how many artists were ever submitted to this ListenBrainz instance and returns an integer. There&rsquo;s room for expansion in site-wide statistics.</p>
<p>On the other hand, user statistics are metrics specific to a single user. There&rsquo;s a <a href="https://github.com/metabrainz/listenbrainz-server/blob/master/listenbrainz/stats/user.py">fair number already</a>, like the top artists and songs in a time period and the number of artists you&rsquo;ve listened to. These are a little more complete and offer more room for expansion, like doing cool front-end work with something like <a href="https://d3js.org/">D3.js</a>.</p>

<h4 id="writing-user-statistics">Writing user statistics&nbsp;<a class="hanchor" href="#writing-user-statistics" aria-label="Anchor link for: Writing user statistics">🔗</a></h4>
<p>Of course, I had to try writing my own. One helpful query I thought of was getting a count of the songs you listened to over a time period (e.g. &ldquo;you listened to 500 songs this week!&rdquo;). I haven&rsquo;t tested it yet, but I have this in a local branch and hope to test it with real data soon.</p>
<pre tabindex="0"><code>def get_play_count(musicbrainz_id, time_interval=None): 
 
 filter_clause = &#34;&#34; 
 if time_interval: 
     filter_clause = &#34;AND listened_at &gt;=
     TIMESTAMP_SUB(CURRENT_TIME(), 
     INTERVAL {})&#34;.format(time_interval) 
 
 query = &#34;&#34;&#34;SELECT COUNT(release_msid) as listen_count 
            FROM {dataset_id}.{table_id} 
            WHERE user_name = @musicbrainz_id 
            {time_filter_clause} 
            LIMIT {limit} 
         &#34;&#34;&#34;.format( 
                 dataset_id=config.BIGQUERY_DATASET_ID, 
                 table_id=config.BIGQUERY_TABLE_ID, 
                 time_filter_clause=filter_clause, 
                 limit=config.STATS_ENTITY_LIMIT, 
            ) 
 
 parameters = [ 
     { 
         &#39;type&#39;: &#39;STRING&#39;, 
         &#39;name&#39;: &#39;musicbrainz_id&#39;, 
         &#39;value&#39;: musicbrainz_id 
     } 
 ] 
 
 return stats.run_query(query, parameters)
</code></pre>
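<p>A quick sketch of how this helper might be called (hypothetical: the username is a placeholder, the interval string assumes BigQuery&rsquo;s <code>INTERVAL</code> syntax such as <code>7 DAY</code>, and the shape of the return value depends on what <code>stats.run_query</code> gives back):</p>
<pre tabindex="0"><code># hypothetical usage -- listens submitted by one user over the last week
weekly_plays = get_play_count(&#39;some_user&#39;, time_interval=&#39;7 DAY&#39;)
</code></pre>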
<h2 id="researching-google-bigquery">Researching Google BigQuery&nbsp;<a class="hanchor" href="#researching-google-bigquery" aria-label="Anchor link for: Researching Google BigQuery">🔗</a></h2>
<p>My next steps for the independent study are researching <a href="https://cloud.google.com/bigquery/docs/">Google BigQuery</a>. After going through the existing statistics and understanding how ListenBrainz generates them, an understanding of Google BigQuery is essential to writing effective queries. When I become more comfortable with the tooling and how it works, I want to map out a plan of statistics to generate and measure.</p>
<p>Until then, the hacking continues! As always, keep the FOSS flag high…</p>]]></description></item><item><title>Exploring Google Code-In, ListenBrainz easyfix bugs, D3.js</title><link>https://jwheel.org/blog/2017/10/google-code-in-listenbrainz-d3-js/</link><pubDate>Sat, 21 Oct 2017 00:00:00 +0000</pubDate><guid>https://jwheel.org/blog/2017/10/google-code-in-listenbrainz-d3-js/</guid><description><![CDATA[<p><em>This post is part of a series of posts where I contribute to the ListenBrainz project for my independent study at the Rochester Institute of Technology in the fall 2017 semester. For more posts, find them in <a href="https://jwfblog.wpenginepowered.com/tag/rit-2171/">this tag</a>.</em></p>
<hr>
<p>Last week moved quickly for me in ListenBrainz. I submitted multiple pull requests and participated in the weekly developer&rsquo;s meeting on Monday. I was also invited to take part as a mentor for ListenBrainz for the upcoming round of Google Code-In! In addition to my changes and new role as a mentor, I&rsquo;m researching libraries like D3.js to help build visualizations for music data.  Suddenly, everything started moving fast!</p>

<h2 id="last-week-recap">Last week: Recap&nbsp;<a class="hanchor" href="#last-week-recap" aria-label="Anchor link for: Last week: Recap">🔗</a></h2>
<p>The ListenBrainz team accepted my <a href="https://github.com/metabrainz/listenbrainz-server/pull/257">development environment improvements</a> and <a href="https://github.com/metabrainz/listenbrainz-server/pull/259">documentation</a>. This gave me an opportunity to better explore project documentation tools. I experimented with <a href="http://www.sphinx-doc.org/en/stable/">Sphinx</a> and <a href="https://readthedocs.org/">Read the Docs</a>. Sphinx introduced me to <a href="http://docutils.sourceforge.net/rst.html">reStructuredText</a> for documentation formats. I&rsquo;ve avoided it in favor of Markdown for a long time, but I see where reStructuredText is stronger for advanced documentation.</p>
<p>Since ListenBrainz is a new project, I plan to contribute documentation for any of my work and improve documentation for pre-existing work. One of the goals for this independent study is to make ListenBrainz a viable candidate for a future data analysis course. To make it easy to use and understand, ListenBrainz needs excellent documentation. Since one of my strengths is technical writing, I plan to contribute more documentation this semester.</p>
<p>You can see some of the <a href="https://listenbrainz.readthedocs.io/en/master/">new documentation</a> already!</p>

<h2 id="google-code-in-mentor">Google Code-In mentor&nbsp;<a class="hanchor" href="#google-code-in-mentor" aria-label="Anchor link for: Google Code-In mentor">🔗</a></h2>
<p>The MetaBrainz community manager, <a href="https://musicbrainz.org/user/Freso">Freso Olesen</a>, approached me to mentor for Google Code-In. <a href="https://codein.withgoogle.com/">Google Code-In</a> is an opportunity for teenagers to meaningfully contribute to open source projects. Google describes Google Code-In as…</p>
<blockquote>
<p>Pre-university students ages 13 to 17 are invited to take part in Google Code-in: Our global, online contest introducing teenagers to the world of open source development. With a wide variety of bite-sized tasks, it’s easy for beginners to jump in and get started no matter what skills they have.</p>
<p>Mentors from our participating organizations lend a helping hand as participants learn what it’s like to work on an open source project. Participants get to work on real software and win prizes from t-shirts to a trip to Google HQ!</p>
</blockquote>
<p>MetaBrainz is a participating organization of Google Code-In this cycle. Because of my work with ListenBrainz, I will contribute a few hours a week to help mentor participating students with ListenBrainz. Beginner problems should be easy to help with since I&rsquo;m still beginning too, and as I spend more time with ListenBrainz, I can help with harder problems.</p>
<p>I&rsquo;m excited to give back to one of my favorite open source projects in this way! I&rsquo;m grateful to have this chance to help out during Google Code-In.</p>

<h2 id="choosing-easyfix-bugs">Choosing easyfix bugs&nbsp;<a class="hanchor" href="#choosing-easyfix-bugs" aria-label="Anchor link for: Choosing easyfix bugs">🔗</a></h2>
<p>After I figured out the development environment issues, I went through <a href="https://tickets.metabrainz.org/projects/LB/issues/">open tickets</a> filed against ListenBrainz to find some to work on. I made a preliminary pass through all open tickets and left comments asking for more information where needed. The tickets I highlighted to look into next were:</p>
<ul>
<li><a href="https://tickets.metabrainz.org/browse/LB-85"><strong>LB-85</strong></a>: Username in the profile URL should be case insensitive</li>
<li><a href="https://tickets.metabrainz.org/browse/LB-124"><strong>LB-124</strong></a>: Install messybrainz as a a python library from requirements</li>
<li><a href="https://tickets.metabrainz.org/browse/LB-176"><strong>LB-176</strong></a>: Add stats module and begin calculating some user stats from BigQuery</li>
<li><strong><a href="https://tickets.metabrainz.org/browse/LB-206">LB-206</a></strong>: &ldquo;playing_now&rdquo; submissions not showing on profile</li>
<li><a href="https://tickets.metabrainz.org/browse/LB-212"><strong>LB-212</strong></a>: Show the MetaBrainz logo on the listenbrainz footer.</li>
</ul>
<p>Of these five, LB-124 and LB-212 are already closed. While drafting this article, I completed LB-124 in <a href="https://github.com/metabrainz/listenbrainz-server/pull/266">PR #266</a>. This was part of a test to get the documentation building again because of odd import errors. Later, a new student also learning the project for the first time asked to work on LB-212. Since it was a good first task to explore the project code, I passed the ticket to him.</p>
<p>I want to do one more &ldquo;easyfix&rdquo; bug before going into the main part of my independent study timeline. I don&rsquo;t yet feel comfortable with the code and one more bug solved will help. After this, I plan to pursue the heavier lifting of the independent study to explore data operations and queries to make.</p>

<h2 id="researching-d3js">Researching D3.js&nbsp;<a class="hanchor" href="#researching-d3js" aria-label="Anchor link for: Researching D3.js">🔗</a></h2>
<p>Prof. Roberts introduced <a href="https://d3js.org/">D3.js</a> as a library to build interactive, dynamic charts and visual representations of data. I haven&rsquo;t yet looked into much front-end work, but this was a cool project that I wanted to highlight in my weekly report. This feels like it could be a powerful match for ListenBrainz, especially since the data has high detail.</p>

<h2 id="upcoming-activity">Upcoming activity&nbsp;<a class="hanchor" href="#upcoming-activity" aria-label="Anchor link for: Upcoming activity">🔗</a></h2>
<p>This next week, I won&rsquo;t have as much time to contribute to ListenBrainz. On October 21, I&rsquo;m traveling to Raleigh, NC for <a href="https://allthingsopen.org/">All Things Open</a>. On October 24, I <a href="https://allthingsopen.org/speakers/justin-w-flory/">present my talk</a>, &ldquo;<em>What open source and J.K. Rowling have in common</em>&rdquo;. Since I&rsquo;ll be out of Rochester and missing other classwork, I expect less time on my ListenBrainz work.</p>
<p>This next week will be slower than the last two weeks. Hopefully I&rsquo;ll learn something at the conference too to bring back for ListenBrainz.</p>
<p>Until then… keep the FOSS flag high.</p>]]></description></item><item><title>How to set up a ListenBrainz development environment</title><link>https://jwheel.org/blog/2017/10/listenbrainz-development-environment/</link><pubDate>Wed, 04 Oct 2017 00:00:00 +0000</pubDate><guid>https://jwheel.org/blog/2017/10/listenbrainz-development-environment/</guid><description><![CDATA[<p><em>This post is part of a series of posts where I contribute to the ListenBrainz project for my independent study at the Rochester Institute of Technology in the fall 2017 semester. For more posts, find them in <a href="https://jwfblog.wpenginepowered.com/tag/rit-2171/">this tag</a>.</em></p>
<hr>
<p>One of the first rites of passage when working on a new project is creating your development environment. It always seems simple, but sometimes there are bumps along the way. The first activity I did to begin contributing to ListenBrainz was create my development environment. I wasn&rsquo;t successful with the documentation in the README, so I had to play around and work with the project before I was even running it.</p>
<p>The first part of this post details how to set up your own development environment. Then, the second half talks about the solution I came up with and my first contribution back to the project.</p>

<h2 id="install-dependencies-docker">Install dependencies: Docker&nbsp;<a class="hanchor" href="#install-dependencies-docker" aria-label="Anchor link for: Install dependencies: Docker">🔗</a></h2>
<p>This tutorial assumes you are using a Linux distribution. If you&rsquo;re using a different operating system, install the necessary dependencies or packages with your preferred method.</p>
<p>ListenBrainz ships in Docker containers, which helps you create your development environment and later deploy the application. Therefore, to work on the project, you need to install Docker and use containers to build it. Containers save you from installing every service and dependency directly on your own workstation! Since I&rsquo;m using Fedora, I run this command.</p>
<pre tabindex="0"><code>sudo dnf install docker docker-compose
</code></pre>
<h2 id="register-a-musicbrainz-application">Register a MusicBrainz application&nbsp;<a class="hanchor" href="#register-a-musicbrainz-application" aria-label="Anchor link for: Register a MusicBrainz application">🔗</a></h2>
<p>Next, you need to register your application and get an OAuth token from MusicBrainz. The OAuth token lets you sign into your development environment with your MusicBrainz account. Then, you can import your plays from another service.</p>
<p>To register, visit the <a href="https://musicbrainz.org/account/applications">MusicBrainz applications page</a>. There, look for the option to <a href="https://musicbrainz.org/account/applications/register">register your application</a>. Fill out the form with these three options.</p>
<ul>
<li><strong>Name</strong>: (any name you want and will recognize, I used <code>listenbrainz-server-devel</code>)</li>
<li><strong>Type</strong>: <code>Web Application</code></li>
<li><strong>Callback URL</strong>: <code>http://localhost/login/musicbrainz/post</code></li>
</ul>
<p>After entering this information, you&rsquo;ll have an OAuth client ID and OAuth client secret. You&rsquo;ll use these to configure ListenBrainz.</p>

<h4 id="update-configpy">Update config.py&nbsp;<a class="hanchor" href="#update-configpy" aria-label="Anchor link for: Update config.py">🔗</a></h4>
<p>With your new client ID and secret, update the ListenBrainz configuration file. If this is your first time configuring ListenBrainz, copy the sample to a live configuration.</p>
<pre tabindex="0"><code>cp listenbrainz/config.py.sample listenbrainz/config.py
</code></pre><p>Next, open the file with your favorite text editor and look for this section.</p>
<pre tabindex="0"><code># MusicBrainz OAuth
MUSICBRAINZ_CLIENT_ID = &#34;CLIENT_ID&#34;
MUSICBRAINZ_CLIENT_SECRET = &#34;CLIENT_SECRET&#34;
</code></pre><p>Update the strings with your client ID and secret. After doing this, your ListenBrainz development environment can authenticate you with your MusicBrainz login.</p>

<h2 id="initialize-listenbrainz-databases">Initialize ListenBrainz databases&nbsp;<a class="hanchor" href="#initialize-listenbrainz-databases" aria-label="Anchor link for: Initialize ListenBrainz databases">🔗</a></h2>
<p>Your development environment needs some databases present to work. Before proceeding, run these three commands to initialize the databases.</p>
<pre tabindex="0"><code>docker-compose -f docker/docker-compose.yml -p listenbrainz run --rm web python3 manage.py init_db --create-db
docker-compose -f docker/docker-compose.yml -p listenbrainz run --rm web python3 manage.py init_msb_db --create-db
docker-compose -f docker/docker-compose.yml -p listenbrainz run --rm web python3 manage.py init_influx
</code></pre><p>Your development environment is now ready. Now, let&rsquo;s actually see ListenBrainz load locally!</p>

<h2 id="run-the-magic-script">Run the magic script&nbsp;<a class="hanchor" href="#run-the-magic-script" aria-label="Anchor link for: Run the magic script">🔗</a></h2>
<p>Once you have done this, run the <code>develop.sh</code> script in the root of the repository. Using <code>docker-compose</code>, the script creates multiple Docker containers for the different services and parts of the ListenBrainz server. Running this script will start Redis, PostgreSQL, InfluxDB, and web server containers, to name a few. But this also makes it easy to stop them all later.</p>
<pre tabindex="0"><code>./develop.sh
</code></pre><p>You will see the containers build and eventually run. Leave the script running to keep your development environment up. Later, you can shut it down by pressing <code>Ctrl+C</code>. Once everything is running, visit your new site from your browser!</p>
<p><a href="http://localhost/">http://localhost/</a></p>
<p>Now, you are all set to begin making changes and testing them in your development environment!</p>
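<p>If you want to double-check that all of the services actually came up, one option (assuming the same project name and compose file used above) is to list the running containers from a second terminal:</p>
<pre tabindex="0"><code>docker-compose -f docker/docker-compose.yml -p listenbrainz ps
</code></pre>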

<h2 id="making-my-first-pull-request">Making my first pull request&nbsp;<a class="hanchor" href="#making-my-first-pull-request" aria-label="Anchor link for: Making my first pull request">🔗</a></h2>
<p>As mentioned earlier, my first attempt at a development environment was unsuccessful. My system kept denying permission to the processes in the containers. After looking at the system audit logs and running a temporary <code>setenforce 0</code>, I tried the script one more time. Everything suddenly worked! So the issue was SELinux after all.</p>
<p>While getting my environment set up, I identified a few issues with the configuration offered by the project developers. I eventually made <a href="https://github.com/metabrainz/listenbrainz-server/pull/257">PR #257</a> against <code>listenbrainz-server</code> with my improvements.</p>

<h4 id="labeling-selinux-volume-mounts">Labeling SELinux volume mounts&nbsp;<a class="hanchor" href="#labeling-selinux-volume-mounts" aria-label="Anchor link for: Labeling SELinux volume mounts">🔗</a></h4>
<p>To diagnose the issue, I started with a quick search and found a <a href="https://stackoverflow.com/questions/24288616/permission-denied-on-accessing-host-directory-in-docker">StackOverflow question</a> describing the same problem: Docker containers being denied permission to access files inside the container. The answers explained it was an SELinux error because the SELinux context for the containers was not set. However, temporarily changing the context on a directory didn&rsquo;t seem too effective and doesn&rsquo;t persist across reboots.</p>
<p>Continuing the search, I found an issue filed against <code>docker-compose</code> about the <code>:z</code> and <code>:Z</code> flags for volume mounts. These flags set SELinux context for containers, with the best explanation I found coming from <a href="https://stackoverflow.com/a/35222815/2497452">this StackOverflow answer</a>.</p>
<blockquote>
<p>Two suffixes :z or :Z can be added to the volume mount. These suffixes tell Docker to relabel file objects on the shared volumes. The &lsquo;z&rsquo; option tells Docker that the volume content will be shared between containers. Docker will label the content with a shared content label. Shared volumes labels allow all containers to read/write content. The &lsquo;Z&rsquo; option tells Docker to label the content with a private unshared label.</p>
</blockquote>
<p>Therefore, I added the <code>:z</code> flag to all the volume mounts in the <code>docker-compose.yml</code> file. I submitted a fix upstream for this in <a href="https://github.com/metabrainz/listenbrainz-server/pull/257">listenbrainz-server#257</a>!</p>
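<p>For illustration, this is roughly what a relabeled volume mount looks like in a Compose file. The service name and paths here are placeholders, not the exact entries from the ListenBrainz <code>docker-compose.yml</code>:</p>
<pre tabindex="0"><code>web:
  volumes:
    # &#34;:z&#34; asks Docker to apply a shared SELinux label to the mounted content
    - ./listenbrainz:/code/listenbrainz:z
</code></pre>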

<h4 id="correct-the-startup-port">Correct the startup port&nbsp;<a class="hanchor" href="#correct-the-startup-port" aria-label="Anchor link for: Correct the startup port">🔗</a></h4>
<p>The README says the server will start on port 8000, but the <code>docker-compose.yml</code> file actually starts the server on port 80. I included a fix for this in <a href="https://github.com/metabrainz/listenbrainz-server/pull/257">my pull request</a> as well.</p>
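<p>The port a service is reachable on comes from the <code>ports</code> mapping in the Compose file. As a purely hypothetical sketch (not the project&rsquo;s actual configuration), a mapping like the following publishes the container&rsquo;s port 80 on port 80 of the host, which is why the site appears at <code>http://localhost/</code> rather than on port 8000:</p>
<pre tabindex="0"><code>web:
  ports:
    # host port : container port
    - &#34;80:80&#34;
</code></pre>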

<h2 id="git-push">git push!&nbsp;<a class="hanchor" href="#git-push" aria-label="Anchor link for: git push!">🔗</a></h2>
<p>This post makes a debugging experience that actually took hours look like it happened in minutes. But after getting over this hurdle, it was awesome to finally see ListenBrainz running locally on my workstation. It was an even better feeling when I could take my improvements and send them back in a pull request to ListenBrainz. Hopefully this will make it easier for others to create their own development environments and start hacking!</p>]]></description></item><item><title>Sign at the line: Deploying an app to CoreOS Tectonic</title><link>https://jwheel.org/blog/2017/08/deploying-app-tectonic/</link><pubDate>Fri, 04 Aug 2017 00:00:00 +0000</pubDate><guid>https://jwheel.org/blog/2017/08/deploying-app-tectonic/</guid><description><![CDATA[<p><em>This is a short series to introduce Kubernetes, what it does, and how to experiment with it on Fedora. This is a beginner-oriented series to help introduce some higher level concepts and give examples of using it on Fedora. In the first post, we covered <a href="https://fedoramagazine.org/introduction-kubernetes-fedora/">key concepts in Kubernetes</a>. The second post showed how to build a <a href="https://fedoramagazine.org/minikube-kubernetes/">single-node Kubernetes deployment</a> on your own computer. The last post and this post build on top of the Fedora Magazine series. The third post introduced how to <a href="https://jwfblog.wpenginepowered.com/2017/07/tectonic-amazon-web-services-aws/">deploy CoreOS Tectonic</a> to Amazon Web Services (AWS). This fourth post teaches how to deploy a simple web application to your Tectonic installation.</em></p>
<hr>
<p>Welcome back to the <strong>Kubernetes and Fedora</strong> series. Each week, we build on the previous articles in the series to help introduce you to using Kubernetes. This article picks up where we left off last time, when you installed Tectonic on Amazon Web Services (AWS). By the end of this article, you will…</p>
<ul>
<li>Start up <a href="https://redis.io/">Redis</a> master and slave pods</li>
<li>Start a front-end pod that interacts with the Redis pods</li>
<li>Deploy a simple web app for all of your friends to leave you messages</li>
</ul>
<p>Compared to previous articles, this article will be a little more hands-on. Also like before, this is based off an excellent tutorial in the <a href="https://github.com/kubernetes/kubernetes/tree/master/examples/guestbook">upstream Kubernetes documentation</a>. Let&rsquo;s get started!</p>

<h2 id="pre-requisites">Pre-requisites&nbsp;<a class="hanchor" href="#pre-requisites" aria-label="Anchor link for: Pre-requisites">🔗</a></h2>
<p>This tutorial assumes you followed the <a href="https://fedoramagazine.org/minikube-kubernetes/">Minikube how-to</a> earlier in this series and that you already <a href="https://fedoramagazine.org/tectonic-amazon-web-services-aws/">have a Tectonic installation</a> running (doesn&rsquo;t have to be on AWS). In case you&rsquo;re jumping in now, make sure you have the Kubernetes client tools installed on your Fedora system, like <code>kubectl</code>. If not, you can install them now.</p>
<pre tabindex="0"><code>$ sudo dnf install kubernetes-client
</code></pre>
<h2 id="configure-kubectl-for-tectonic">Configure <code>kubectl</code> for Tectonic&nbsp;<a class="hanchor" href="#configure-kubectl-for-tectonic" aria-label="Anchor link for: Configure kubectl for Tectonic">🔗</a></h2>
<p>To use <code>kubectl</code> with your Tectonic installation, you need to have a valid configuration in <code>~/.kube/config</code> for your cluster. This is how <code>kubectl</code> knows where and how to talk to Tectonic. To get these values, first log into the Tectonic Console you installed.</p>
<ol>
<li>Click <em>username</em> (usually <em>admin</em>) &gt; <em>My Account</em> on the bottom left.</li>
<li>Click <em>Download Configuration</em>.</li>
<li>When the <em>Set Up kubectl</em> window opens, click <em>Verify Identity</em>.</li>
<li>Enter your username and password, and click <em>Login</em>.</li>
<li>From the <em>Login Successful</em> screen, copy the provided code.</li>
<li>Switch back to Tectonic and enter the code in the field.</li>
</ol>
<p>Now you will be able to download <code>kubectl-config</code> from Tectonic. There are two ways to proceed from here.</p>

<h4 id="add-a-new-configuration">Add a new configuration&nbsp;<a class="hanchor" href="#add-a-new-configuration" aria-label="Anchor link for: Add a new configuration">🔗</a></h4>
<p>If this is your first time using <code>kubectl</code>, your configuration is likely empty. If it&rsquo;s empty or you don&rsquo;t care about overwriting an old configuration, you can run the following commands to add the configuration.</p>
<pre tabindex="0"><code>$ mkdir ~/.kube/
$ mv ~/Downloads/kubectl-config ~/.kube/config
$ chmod 600 ~/.kube/config
</code></pre>
<h4 id="append-to-an-existing-configuration">Append to an existing configuration&nbsp;<a class="hanchor" href="#append-to-an-existing-configuration" aria-label="Anchor link for: Append to an existing configuration">🔗</a></h4>
<p>If you already have a configuration, like from Minikube, you might not want to wipe it all out. In this case, you can merge the files manually together. You&rsquo;ll need to copy the <code>clusters</code>, <code>users</code>, and <code>contexts</code> from the Tectonic configuration into your existing one. The benefit of doing this is that you&rsquo;ll be able to change contexts to switch from one cluster to another.</p>
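<p>As a rough sketch of what a merged file can look like (the names, addresses, and credentials below are placeholders), each top-level section simply gains a second entry:</p>
<pre tabindex="0"><code>apiVersion: v1
kind: Config
current-context: tectonic
clusters:
- name: minikube
  cluster:
    server: https://192.168.99.100:8443
- name: tectonic
  cluster:
    server: https://my-tectonic-cluster.example.com:443
contexts:
- name: minikube
  context:
    cluster: minikube
    user: minikube
- name: tectonic
  context:
    cluster: tectonic
    user: tectonic-admin
users:
- name: minikube
  user: {}        # credentials omitted in this sketch
- name: tectonic
  user: {}        # credentials omitted in this sketch
</code></pre><p>With a file like this in place, <code>kubectl config use-context</code> is all it takes to hop between clusters.</p>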

<h4 id="test-your-configuration">Test your configuration&nbsp;<a class="hanchor" href="#test-your-configuration" aria-label="Anchor link for: Test your configuration">🔗</a></h4>
<p>Once you&rsquo;ve finished your configuration, test that it works.</p>
<pre tabindex="0"><code>$ kubectl config use-context tectonic       # if you have multiple contexts in config
$ kubectl get nodes
NAME                                        STATUS    AGE
ip-10-0-0-59.us-east-2.compute.internal     Ready     1d
ip-10-0-23-239.us-east-2.compute.internal   Ready     1d
ip-10-0-44-211.us-east-2.compute.internal   Ready     1d
ip-10-0-61-218.us-east-2.compute.internal   Ready     1d
ip-10-0-67-239.us-east-2.compute.internal   Ready     1d
ip-10-0-95-51.us-east-2.compute.internal    Ready     1d
</code></pre><p>Huzzah! Now we&rsquo;re ready to get to work.</p>

<h2 id="getting-the-deployment-and-service-files">Getting the deployment and service files&nbsp;<a class="hanchor" href="#getting-the-deployment-and-service-files" aria-label="Anchor link for: Getting the deployment and service files">🔗</a></h2>
<p>All of the example files come from the official Kubernetes GitHub repo. You can find them in the <a href="https://github.com/kubernetes/kubernetes/tree/master/examples/guestbook">Guestbook example</a>. To get started, create a new directory and download all of the files.</p>
<pre tabindex="0"><code>$ wget https://raw.githubusercontent.com/kubernetes/kubernetes/master/examples/guestbook/redis-{master,slave}-{deployment,service}.yaml \
       https://raw.githubusercontent.com/kubernetes/kubernetes/master/examples/guestbook/frontend-{deployment,service}.yaml
</code></pre><p>We&rsquo;ll explain what all of these do in the next steps. Each step starts with the command to run, followed by a short explanation of what&rsquo;s actually happening.</p>

<h2 id="start-the-redis-master">Start the Redis master&nbsp;<a class="hanchor" href="#start-the-redis-master" aria-label="Anchor link for: Start the Redis master">🔗</a></h2>
<pre tabindex="0"><code>$ kubectl create -f redis-master-service.yaml
service &#34;redis-master&#34; created
$ kubectl create -f redis-master-deployment.yaml
deployment &#34;redis-master&#34; created
</code></pre>
<h4 id="define-the-deployment">Define the deployment&nbsp;<a class="hanchor" href="#define-the-deployment" aria-label="Anchor link for: Define the deployment">🔗</a></h4>
<p>The <code>redis-master-deployment.yaml</code> file downloaded earlier defines the deployment and its characteristics. In this case, we have one pod that runs the Redis master in a container. Since we&rsquo;re using a deployment, that means if our pod goes down, Kubernetes will <strong>spin up a new pod</strong> to replace it. Worth noting in this example: if the pod <em>did</em> go down, there would be a potential for data loss until the new one replaces the old one (since the Redis master is not highly available; there is only a single instance).</p>

<h4 id="define-the-service">Define the service&nbsp;<a class="hanchor" href="#define-the-service" aria-label="Anchor link for: Define the service">🔗</a></h4>
<p>Our service in this example is a <strong>named load balancer</strong> that <strong>proxies traffic</strong> across one or many containers. Even though we only have one Redis master pod, we still want to use a service. The service gives clients a stable, deterministic route to the master, even though the pod behind it has a dynamic (or elastic) IP address.</p>
<p>Labeling the pods is important in this case, as Kubernetes will use the pods&rsquo; labels to determine which pods receive the traffic sent to the service, and load balance it accordingly.</p>

<h4 id="create-the-service">Create the service&nbsp;<a class="hanchor" href="#create-the-service" aria-label="Anchor link for: Create the service">🔗</a></h4>
<p>The next important step is to create the service. Note that we&rsquo;re doing this <em>before</em> we create the deployment. It&rsquo;s best practice to create the service first. This allows the scheduler to later spread the service across the deployments you create to support your application.</p>
<p>After creating the service, you can check its status by running this command. You should see similar output.</p>
<pre tabindex="0"><code>$ kubectl get services
NAME              CLUSTER-IP       EXTERNAL-IP       PORT(S)       AGE
redis-master      10.0.76.248      &lt;none&gt;            6379/TCP      1s
</code></pre><p>Now your Redis master service is up and running! The next step will be to create the Redis master deployment.</p>
<p>If you look at the service configuration file, you&rsquo;ll notice two defined values, <code>port</code> and <code>targetPort</code>. Once everything is up and running, these determine how traffic from the slaves is routed to the master (there is a small sketch after the list below).</p>
<ol>
<li>Redis slave connects to <code>port</code> on Redis master service</li>
<li>Traffic is forwarded from the service&rsquo;s <code>port</code> to the <code>targetPort</code> on the pod the service routes to</li>
</ol>
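<p>To make that concrete, here is a minimal sketch of a service manifest along the lines of the guestbook example. The field values are illustrative rather than copied from the real <code>redis-master-service.yaml</code>:</p>
<pre tabindex="0"><code>apiVersion: v1
kind: Service
metadata:
  name: redis-master
  labels:
    app: redis
    role: master
spec:
  selector:            # traffic only goes to pods carrying these labels
    app: redis
    role: master
  ports:
  - port: 6379         # the port clients (the slaves) connect to on the service
    targetPort: 6379   # the port the container in the selected pod listens on
</code></pre>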

<h4 id="create-the-deployment">Create the deployment&nbsp;<a class="hanchor" href="#create-the-deployment" aria-label="Anchor link for: Create the deployment">🔗</a></h4>
<p>Next, we created the Redis master pod in the cluster. To see the deployment and the pod it created, run the following commands.</p>
<pre tabindex="0"><code>$ kubectl get deployments
NAME           DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
redis-master   1         1         1            1           27s
</code></pre><pre tabindex="0"><code>$ kubectl get pods
NAME                            READY     STATUS    RESTARTS   AGE
redis-master-2353460263-1ecey   1/1       Running   0          1m
...
</code></pre><p>You should see all of the pods in your cluster so far. For now, that&rsquo;s just the Redis master. Let&rsquo;s give it some friends!</p>

<h2 id="start-the-redis-slaves">Start the Redis slaves&nbsp;<a class="hanchor" href="#start-the-redis-slaves" aria-label="Anchor link for: Start the Redis slaves">🔗</a></h2>
<pre tabindex="0"><code>$ kubectl create -f redis-slave-service.yaml
service &#34;redis-slave&#34; created
$ kubectl create -f redis-slave-deployment.yaml
deployment &#34;redis-slave&#34; created
</code></pre>
<h4 id="defining-the-deployment">Defining the deployment&nbsp;<a class="hanchor" href="#defining-the-deployment" aria-label="Anchor link for: Defining the deployment">🔗</a></h4>
<p>In the configuration file, we define two replicas, unlike the master. This tells Kubernetes to keep two of these pods running at all times. If one of your pods goes down, Kubernetes automatically creates a new one to support the application. If you want, you can try killing the Docker process for one of your pods to see it happen in real time.</p>
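<p>The replica count is a single line in the deployment spec. Here is a simplified sketch of what the slave deployment might look like; the real <code>redis-slave-deployment.yaml</code> has more fields, and the image name below is only illustrative:</p>
<pre tabindex="0"><code>apiVersion: apps/v1beta1     # the exact API version depends on your Kubernetes release
kind: Deployment
metadata:
  name: redis-slave
spec:
  replicas: 2                # Kubernetes keeps two slave pods running
  template:
    metadata:
      labels:
        app: redis
        role: slave
    spec:
      containers:
      - name: slave
        image: gcr.io/google_samples/gb-redisslave:v1
        ports:
        - containerPort: 6379
</code></pre>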

<h2 id="start-the-guestbook-front-end">Start the guestbook front-end&nbsp;<a class="hanchor" href="#start-the-guestbook-front-end" aria-label="Anchor link for: Start the guestbook front-end">🔗</a></h2>
<pre tabindex="0"><code>$ kubectl create -f frontend-service.yaml
service &#34;frontend&#34; created
$ kubectl create -f frontend-deployment.yaml
deployment &#34;frontend&#34; created
</code></pre><p>The front-end is a PHP application with an AJAX interface and an Angular-based UI. When you use the form on the front-end, the application talks to the Redis master or a slave, depending on whether it&rsquo;s reading from or writing to Redis. Again, we&rsquo;re deploying the front-end with multiple replicas. In this case, there will be three pods to support the front-end.</p>

<h2 id="say-hello">Say hello!&nbsp;<a class="hanchor" href="#say-hello" aria-label="Anchor link for: Say hello!">🔗</a></h2>
<p>Once you&rsquo;ve finished deploying everything, your web app should now be accessible! To get the full domain from AWS, run this command to figure out where to look.</p>
<pre tabindex="0"><code>$ kubectl get deploy/frontend svc/frontend -o wide
NAME           CLUSTER-IP   EXTERNAL-IP                                                             PORT(S)        AGE       SELECTOR
svc/frontend   10.3.0.175   aaebd8247ef2311e6a045021d1620193-54019671.us-east-2.elb.amazonaws.com   80:31020/TCP   1m        k8s-app=guestbook,tier=frontend

NAME              DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
deploy/frontend   3         3         3            3           1m
</code></pre><p>Congratulations, we&rsquo;re all finished!</p>

<h2 id="cleaning-up">Cleaning up&nbsp;<a class="hanchor" href="#cleaning-up" aria-label="Anchor link for: Cleaning up">🔗</a></h2>
<p>Once you&rsquo;re finished or when you want to stop running the guestbook, it&rsquo;s easy to get rid of the deployments and services we created. Using labels, all the deployments and services can be deleted with one command.</p>
<pre tabindex="0"><code>$ kubectl delete deployments,services -l &#34;app in (redis, guestbook)&#34;
</code></pre><p>And now your guestbook application is offline. (It was nice while it lasted!)</p>

<h2 id="learn-more-about-kubernetes-and-tectonic">Learn more about Kubernetes and Tectonic&nbsp;<a class="hanchor" href="#learn-more-about-kubernetes-and-tectonic" aria-label="Anchor link for: Learn more about Kubernetes and Tectonic">🔗</a></h2>
<p>If you want to explore more about Kubernetes, you can read some of the earlier articles in this series. You can also read the original tutorial published by Kubernetes <a href="https://github.com/kubernetes/kubernetes/tree/master/examples/guestbook">on GitHub</a>. Additionally, the upstream documentation for <a href="https://kubernetes.io/docs/home/">Kubernetes</a> and <a href="https://coreos.com/tectonic/docs/latest/">Tectonic</a> is thorough and can help answer more advanced questions.</p>
<p>Questions, Tectonic stories, or tips for beginners? Add your comments below.</p>]]></description></item><item><title>Deploy CoreOS Tectonic to Amazon Web Services (AWS)</title><link>https://jwheel.org/blog/2017/07/tectonic-amazon-web-services-aws/</link><pubDate>Fri, 28 Jul 2017 00:00:00 +0000</pubDate><guid>https://jwheel.org/blog/2017/07/tectonic-amazon-web-services-aws/</guid><description><![CDATA[<p><em>This is a short series to introduce Kubernetes, what it does, and how to experiment with it on Fedora. This is a beginner-oriented series to help introduce some higher level concepts and give examples of using it on Fedora. In the first post, we covered <a href="https://fedoramagazine.org/introduction-kubernetes-fedora/">key concepts in Kubernetes</a>. The second post showed how to build a <a href="https://fedoramagazine.org/minikube-kubernetes/">single-node Kubernetes deployment</a> on your own computer. This post builds on top of the Fedora Magazine series by showing how to deploy CoreOS Tectonic to Amazon Web Services (AWS).</em></p>
<hr>
<p>Welcome back to the <strong>Kubernetes and Fedora</strong> series. Each week, we build on the previous articles in the series to help introduce you to using Kubernetes. This article takes off from running Kubernetes on your own hardware and moves us one step closer to the cloud. By the end of this article, you will…</p>
<ul>
<li>Understand what CoreOS Tectonic is</li>
<li>Set up Amazon Web Services (AWS) for Tectonic</li>
<li>Deploy Tectonic to AWS</li>
</ul>
<p>This article is also based off of the excellent tutorial provided in the <a href="https://coreos.com/tectonic/docs/latest/tutorials/creating-aws.html">CoreOS documentation</a>. Let&rsquo;s get started!</p>

<h2 id="what-is-tectonic">What is Tectonic?&nbsp;<a class="hanchor" href="#what-is-tectonic" aria-label="Anchor link for: What is Tectonic?">🔗</a></h2>
<p>In the <a href="https://fedoramagazine.org/introduction-kubernetes-fedora/">first article</a>, some of the key concepts of Kubernetes and why it&rsquo;s useful were explained. Kubernetes automates the deployment and setup of your infrastructure across the three layers (users, masters, nodes). If you&rsquo;re working on your own at a small scale, Kubernetes itself can be plenty to meet your needs. However, there is still a decent amount of human involvement in managing the different pieces of Kubernetes. If you&rsquo;re working with multiple people in a team and across different environments, vanilla Kubernetes can be a lot to manage. For an enterprise environment, there are still some unmet needs. This is where Tectonic steps in.</p>
<p>Tectonic is a commercial product offered by <a href="https://coreos.com/">CoreOS</a>, the providers of <a href="https://coreos.com/os/docs/latest">Container Linux</a> and the original developers of <code>etcd</code>, now one of the core components of Kubernetes. Tectonic takes all of the open source components and pre-packages them. The self-proclaimed goal of doing this is to let anyone build a Google-style infrastructure into a cloud or on-premise environment. The outcome for the user is that it&rsquo;s easy to install a Kubernetes infrastructure across many different environments. In addition to simplifying the installation of the various components of a Kubernetes stack, Tectonic also provides a management console, a container registry for building and sharing containers, additional tools for deployment, and a few other nice features.</p>
<p>If we think about Kubernetes as a cake like we did before with three layers, Tectonic is like the box you set it in. Now, you can take your cake anywhere, move it around, and stack it with other cakes-in-a-box. All of your cakes are in their own boxes and you don&rsquo;t have to worry about them accidentally being damaged. If you&rsquo;re still a little confused, this diagram might help make more sense of it.</p>
<p>
<figure>
  <img src="https://cdn.fedoramagazine.org/wp-content/uploads/2017/06/platform-features.png" alt="Understanding where CoreOS Tectonic fits into the Kubernetes puzzle" loading="lazy">
  <figcaption>Understanding where Tectonic fits into the Kubernetes puzzle. From coreos.com/tectonic (<a href="https://coreos.com/tectonic/" class="bare">https://coreos.com/tectonic/</a>)</figcaption>
</figure>
</p>
<p>Fortunately, Tectonic has a free license that lets you use it on up to ten nodes. In this example, we&rsquo;ll register, get a free license, and deploy it into AWS.</p>
<p>(<em>Note</em>: If you want to revert anything we do in this example, there&rsquo;s an easy way to dismantle it across AWS and bring your bill to $0.00.)</p>

<h2 id="pre-requisites">Pre-requisites&nbsp;<a class="hanchor" href="#pre-requisites" aria-label="Anchor link for: Pre-requisites">🔗</a></h2>
<p>In order to successfully run this guide, there are a few things you&rsquo;ll need first.</p>
<ul>
<li><strong>Amazon Web Services (AWS) account</strong> (<em>free</em>)
<ul>
<li>Register <a href="https://aws.amazon.com">here</a></li>
</ul>
</li>
<li><strong>CoreOS Tectonic account and license</strong> (<em>free</em>)
<ul>
<li>Register <a href="https://account.coreos.com/">here</a></li>
</ul>
</li>
<li><strong>A root-level domain or sub-domain</strong> (<em>e.g. example.com or k8s.example.com</em>)
<ul>
<li>If you look around, you can probably find one for less than US$1 a year if you need it</li>
</ul>
</li>
<li><strong>Curiosity</strong>!</li>
</ul>

<h2 id="setting-up-dns-with-route-53">Setting up DNS with Route 53&nbsp;<a class="hanchor" href="#setting-up-dns-with-route-53" aria-label="Anchor link for: Setting up DNS with Route 53">🔗</a></h2>
<p>The first thing we&rsquo;ll do is set up our domain with Route 53 in AWS. Route 53 can do a lot of things, like DNS management, traffic management, availability monitoring, domain registration, and more. However, we&rsquo;re only going to be using it for DNS management. Tectonic will use this to automatically provision DNS records for internal and external use.</p>

<h4 id="add-your-domain">Add your domain&nbsp;<a class="hanchor" href="#add-your-domain" aria-label="Anchor link for: Add your domain">🔗</a></h4>
<p>
<figure>
  <img src="https://cdn.fedoramagazine.org/wp-content/uploads/2017/06/tectonic-add-domain-route-53-283x300.png" alt="Adding a new domain to AWS Route 53 for Tectonic" loading="lazy">
  <figcaption>Adding a new domain to AWS Route 53 for Tectonic</figcaption>
</figure>
</p>
<p>To add your domain to Route 53, follow these steps from AWS.</p>
<ol>
<li>From <em>Services</em>, select <em>Networking &amp; Content Delivery</em> &gt; <em>Route 53</em>.</li>
<li>Select <em>Hosted zones</em> from the left pane and click <em>Create Hosted Zone</em>.</li>
<li>Enter your domain or sub-domain, add a comment if you want, and choose a Public Zone for the type.</li>
</ol>
<p>Once you&rsquo;ve done this, you can go ahead and click &ldquo;<em>Create</em>&rdquo;.</p>

<h4 id="change-the-nameservers">Change the nameservers&nbsp;<a class="hanchor" href="#change-the-nameservers" aria-label="Anchor link for: Change the nameservers">🔗</a></h4>
<p>After adding the hosted zone to Route 53, you&rsquo;ll need to change the nameservers for your domain via the domain registrar (whoever you bought the domain from). Usually this setting is easy to find, but it varies among registrars. If you&rsquo;re having a hard time figuring out how to do this, try searching for a how-to or contacting your registrar&rsquo;s support.</p>
<p>After you add the hosted zone, you should see the nameservers for it in Route 53. There will be four nameservers provided. Copy and paste them from Route 53 to your domain registrar. Also note that if you&rsquo;re using a subdomain, the instructions might be a little different. You can read how to do this in the <a href="https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/creating-migrating.html">Route 53 documentation</a>.</p>
<p>The nameservers could take minutes or hours to update, depending on how lucky you are. If you&rsquo;re impatient and want to check, open up a terminal and run this command. If you see the AWS nameservers in the output, then your domain has propagated and is now usable by Route 53.</p>
<pre tabindex="0"><code>dig -t ns &lt;example.com&gt;
</code></pre>
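<p>The part to look at is the ANSWER section. Once propagation finishes, it lists the four AWS nameservers and looks something like this (the hostnames below are only illustrative):</p>
<pre tabindex="0"><code>;; ANSWER SECTION:
example.com.        172800  IN  NS  ns-1234.awsdns-01.org.
example.com.        172800  IN  NS  ns-567.awsdns-02.net.
example.com.        172800  IN  NS  ns-89.awsdns-03.com.
example.com.        172800  IN  NS  ns-1011.awsdns-04.co.uk.
</code></pre>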
<h2 id="configuring-ec2-with-ssh-key-pair">Configuring EC2 with SSH key pair&nbsp;<a class="hanchor" href="#configuring-ec2-with-ssh-key-pair" aria-label="Anchor link for: Configuring EC2 with SSH key pair">🔗</a></h2>
<p>This guide assumes you already have an SSH key pair created on your system. If you don&rsquo;t have one generated, you can read how to generate one <a href="https://www.cyberciti.biz/faq/how-to-set-up-ssh-keys-on-linux-unix/">here</a>.</p>
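<p>If you do need to create one, a single command is enough; the email comment is optional, and accepting the default file location is fine for this guide:</p>
<pre tabindex="0"><code>ssh-keygen -t rsa -b 4096 -C &#34;you@example.com&#34;
</code></pre>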
<p>The next step for us is to add an SSH key pair to EC2, one of the compute engine products offered by AWS. We&rsquo;ll import an existing key on your system into EC2.</p>
<ol>
<li>From AWS, go to <em>Services</em> &gt; <em>Compute</em> &gt; <em>EC2</em>.</li>
<li>Confirm that you are in the <strong>correct EC2 region</strong> by checking the location next to your name in the menu bar.</li>
<li>Under <em>Network &amp; Security</em>, click <em>Key Pairs</em>.</li>
<li>Click <em>Import Key Pair</em>.</li>
<li>Either upload your public key file (<code>~/.ssh/id_rsa.pub</code>) or paste it into the text field. Don&rsquo;t forget to give it a name.</li>
</ol>
<p>And that&rsquo;s all you need to do!</p>

<h2 id="assigning-aws-user-privileges">Assigning AWS user privileges&nbsp;<a class="hanchor" href="#assigning-aws-user-privileges" aria-label="Anchor link for: Assigning AWS user privileges">🔗</a></h2>
<p>Tectonic does the magic of setting up AWS for you, so you don&rsquo;t have to manually create the services from the web interface. In order to do this, you need to add a user account that Tectonic can use to do all of the provisioning it needs. To do this, you&rsquo;ll need to create a new access key ID and secret access key pair from AWS.</p>
<ol>
<li>Select <em>Services</em> &gt; <em>Security, Identity &amp; Compliance</em> &gt; <em>IAM</em>.</li>
<li>From the left hand pane, click <em>Users</em>, then click <em>Add user</em>.</li>
<li>Set the user details:
<ol>
<li><em>User name</em> can be anything you like (I used <code>tectonic-mydomain.com</code>)</li>
<li><em>Access type</em> only needs to be <em>Programmatic access</em></li>
</ol>
</li>
<li>For permissions, click <em>Add user to group</em> and create a new group for your user.</li>
<li>When creating a new group, attach only the policies needed by Tectonic to operate correctly:
<ol>
<li><code>AmazonEC2FullAccess</code></li>
<li><code>IAMFullAccess</code></li>
<li><code>AmazonS3FullAccess</code></li>
<li><code>AmazonVPCFullAccess</code></li>
<li><code>AmazonRoute53FullAccess</code></li>
</ol>
</li>
<li>Finish creating the user. You&rsquo;ll then see the <em>Access key ID</em> and <em>Secret access key</em>. Hold onto these; you&rsquo;ll need them later, and you won&rsquo;t get to see the secret key again! (One way to keep track of them is sketched just after these steps.)</li>
</ol>
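<p>One convenient way to hold onto the credentials on your workstation is to export them as the standard AWS environment variables, which Terraform and other AWS tooling read. This is optional, and the graphical installer will still ask for them in its form; the values below are placeholders:</p>
<pre tabindex="0"><code>export AWS_ACCESS_KEY_ID=&#34;AKIAXXXXXXXXXXXXXXXX&#34;        # placeholder
export AWS_SECRET_ACCESS_KEY=&#34;your-secret-access-key&#34;  # placeholder
</code></pre>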
<p>Now we&rsquo;re ready to install Tectonic! Let&rsquo;s grab your credentials next.</p>

<h2 id="download-tectonic-credentials">Download Tectonic credentials&nbsp;<a class="hanchor" href="#download-tectonic-credentials" aria-label="Anchor link for: Download Tectonic credentials">🔗</a></h2>
<p>Jump back over to the <a href="https://account.coreos.com/">CoreOS accounts page</a>. When you&rsquo;re logged in, you&rsquo;ll see the <em>Account Assets</em> area. Download the CoreOS license file and pull secret. Later on in the installer, you&rsquo;ll need to insert these to finish the installation.</p>

<h2 id="running-the-installer">Running the installer&nbsp;<a class="hanchor" href="#running-the-installer" aria-label="Anchor link for: Running the installer">🔗</a></h2>
<p>Now things get interesting! We finally get to install and deploy Tectonic into AWS. The installer takes the form of a graphical installer in your web browser. To use the installer, you need to download the binary and run it. If you&rsquo;re curious, you can find the installer source code <a href="https://github.com/coreos/tectonic-installer">on GitHub</a>.</p>

<h4 id="download-and-run-installer">Download and run installer&nbsp;<a class="hanchor" href="#download-and-run-installer" aria-label="Anchor link for: Download and run installer">🔗</a></h4>
<p>First, open up a new terminal window and navigate to a directory you want to download the installer to. Even though you likely won&rsquo;t need to run the installer again, you will want to hang on to this directory if you ever want to easily dismantle everything in AWS later.</p>
<pre tabindex="0"><code>curl -O https://releases.tectonic.com/tectonic-1.6.4-tectonic.1.tar.gz
</code></pre><p>Next, extract the tarball and navigate into the directory.</p>
<pre tabindex="0"><code>tar -xzvf tectonic-1.6.4-tectonic.1.tar.gz
cd tectonic/tectonic-installer
</code></pre><p>Now execute the installer binary. After running this, a new browser window will open that features the graphical installer.</p>
<pre tabindex="0"><code>./linux/installer
</code></pre><p>
<figure>
  <img src="https://cdn.fedoramagazine.org/wp-content/uploads/2017/06/tectonic-installer-aws.png" alt="Now we&rsquo;re ready to deploy Tectonic into AWS!" loading="lazy">
  <figcaption>Now we’re ready to deploy Tectonic into AWS!</figcaption>
</figure>
</p>

<h4 id="running-the-installer-1">Running the installer&nbsp;<a class="hanchor" href="#running-the-installer-1" aria-label="Anchor link for: Running the installer">🔗</a></h4>
<p>The installer is thorough and assumes safe defaults for most of the steps. Be sure to have your AWS access key ID and secret access key on hand. You should be able to run through the installer without issue. If you&rsquo;re confused about what any of the values mean or want to make custom changes, you can read more in the <a href="https://coreos.com/tectonic/docs/latest/tutorials/installing-tectonic.html">upstream documentation</a>.</p>
<p>Once you&rsquo;re finished, congrats! You&rsquo;ve successfully installed Tectonic!</p>

<h2 id="check-out-your-tectonic-install">Check out your Tectonic install&nbsp;<a class="hanchor" href="#check-out-your-tectonic-install" aria-label="Anchor link for: Check out your Tectonic install">🔗</a></h2>
<p>Once you finish the installation successfully, your Tectonic installation will be accessible within AWS. You can navigate to the domain you specified during the install to find it. Unless you added a certificate authority (CA) and certificates during the install, your browser will probably complain about invalid SSL certificates, but you can safely ignore the warning. It might also take a few minutes before the URL is accessible, so if you were looking for a coffee or tea break, now would be a good time!</p>
<p>Once you&rsquo;re logged in, you should see something like this.</p>
<p>
<figure>
  <img src="https://cdn.fedoramagazine.org/wp-content/uploads/2017/06/tectonic-status-page.png" alt="Looking at a freshly installed Tectonic status page on AWS" loading="lazy">
  <figcaption>Looking at a freshly installed Tectonic status page on AWS</figcaption>
</figure>
</p>
<p>
<figure>
  <img src="https://cdn.fedoramagazine.org/wp-content/uploads/2017/06/prometheus-monitoring.png" alt="A more advanced use case of what Tectonic can do with monitoring" loading="lazy">
  <figcaption>A more advanced use case of what Tectonic can do with monitoring</figcaption>
</figure>
</p>

<h2 id="blow-it-all-away">Blow it all away!&nbsp;<a class="hanchor" href="#blow-it-all-away" aria-label="Anchor link for: Blow it all away!">🔗</a></h2>
<p>If you&rsquo;re like me, you might be frustrated by guides that tell you how to install things but not how to take it all apart. Fortunately, this guide not only tells you how to do that, but the Tectonic installer also makes it super easy to do. If you&rsquo;re sure that you&rsquo;re done with Tectonic and don&rsquo;t want any leftovers to remain in AWS, this is the best way to do it, instead of deleting everything manually from the AWS Console.</p>
<p>Every installation has a time-stamped folder in the <code>tectonic</code> directory we used earlier. Navigate into the specific folder for the cluster you installed; it&rsquo;s important to be inside this directory before running the next command.</p>
<pre tabindex="0"><code>cd tectonic/tectonic-installer/linux/clusters/&lt;CLUSTERNAME&gt;
</code></pre><p><code>&lt;CLUSTERNAME&gt;</code> will be the time-stamped directory. Once you&rsquo;re in the folder, run this command to trigger the uninstaller. After running this, you&rsquo;ll see the installer slowly dismantle everything and delete any leftovers in AWS.</p>
<pre tabindex="0"><code>../../terraform destroy
</code></pre><p>Once it finishes, you should see an output message confirming how many AWS resources were destroyed. And now you&rsquo;re back to where you started.</p>

<h2 id="learn-more-about-tectonic">Learn more about Tectonic&nbsp;<a class="hanchor" href="#learn-more-about-tectonic" aria-label="Anchor link for: Learn more about Tectonic">🔗</a></h2>
<p>If you thought this was exciting and want to learn more, there is no shortage of resources for you to read. You can learn more about Tectonic from the <a href="https://coreos.com/tectonic/">CoreOS website</a> or the <a href="https://tectonic.com/blog/announcing-tectonic/">original release announcement</a>. You can also dig into the installer&rsquo;s source code <a href="https://github.com/coreos/tectonic-installer">on GitHub</a>. If you&rsquo;re still trying to wrap your head around Tectonic, there&rsquo;s a good write-up <a href="https://virtualizationreview.com/articles/2017/04/04/coreos-tectonic-to-shake-up-kubernetes.aspx">on virtualizationreview.com</a>.</p>
<p>Next week, we&rsquo;ll install a simple guestbook application to our Tectonic installation to see how it all works and what you can do with it. Stay tuned!</p>
<p>Questions, Tectonic stories, or tips for beginners? Add your comments below.</p>]]></description></item><item><title>Clustered computing on Fedora with Minikube</title><link>https://jwheel.org/blog/2017/07/minikube-kubernetes/</link><pubDate>Fri, 07 Jul 2017 00:00:00 +0000</pubDate><guid>https://jwheel.org/blog/2017/07/minikube-kubernetes/</guid><description><![CDATA[<p><em><strong>This article was originally published <a href="https://fedoramagazine.org/minikube-kubernetes/">on the Fedora Magazine</a>.</strong></em></p>
<hr>
<p><em>This is a short series to introduce Kubernetes, what it does, and how to experiment with it on Fedora. This is a beginner-oriented series to help introduce some higher level concepts and give examples of using it on Fedora. In the first post, we covered <a href="https://fedoramagazine.org/introduction-kubernetes-fedora/">key concepts in Kubernetes</a>. This second post shows you how to build a single-node Kubernetes deployment on your own computer.</em></p>
<hr>
<p>Once you have a better understanding of what the key concepts and terminology in Kubernetes are, getting started is easier. Like many programming tutorials, this tutorial shows you how to build a &ldquo;Hello World&rdquo; application and deploy it locally on your computer using Kubernetes. This is a simple tutorial because there aren&rsquo;t multiple nodes to work with. Instead, the only device we&rsquo;re using is a single node (a.k.a. your computer). By the end, you&rsquo;ll see how to deploy a Node.js application into a Kubernetes pod and manage it with a deployment on Fedora.</p>
<p>This tutorial isn&rsquo;t made from scratch. You can find the <a href="https://kubernetes.io/docs/tutorials/stateless-application/hello-minikube/">original tutorial</a> in the official Kubernetes documentation. This article adds some changes that will let you do the same thing on your own Fedora computer.</p>

<h2 id="introducing-minikube">Introducing Minikube&nbsp;<a class="hanchor" href="#introducing-minikube" aria-label="Anchor link for: Introducing Minikube">🔗</a></h2>
<p><a href="https://kubernetes.io/docs/getting-started-guides/minikube/">Minikube</a> is an official tool developed by the Kubernetes team to help make testing it out easier. It lets you run a single-node Kubernetes cluster through a virtual machine on your own hardware. Beyond using it to play around with or experiment for the first time, it&rsquo;s also useful as a testing tool if you&rsquo;re working with Kubernetes daily. It does support many of the features you&rsquo;d want in a production Kubernetes environment, like DNS, NodePorts, and container run-times.</p>

<h2 id="installation">Installation&nbsp;<a class="hanchor" href="#installation" aria-label="Anchor link for: Installation">🔗</a></h2>
<p>This tutorial requires virtual machine and container software. There are many options you can use. Minikube supports <code>virtualbox</code>, <code>vmwarefusion</code>, <code>kvm</code>, and <code>xhyve</code> drivers for virtualization. However, this guide will use KVM since it&rsquo;s already packaged and available in Fedora. We&rsquo;ll also use Node.js for building the application and Docker for putting it in a container.</p>

<h4 id="pre-requirements">Pre-requirements&nbsp;<a class="hanchor" href="#pre-requirements" aria-label="Anchor link for: Pre-requirements">🔗</a></h4>
<p>You can install the prerequisites with this command.</p>
<pre tabindex="0"><code>$ sudo dnf install kubernetes libvirt-daemon-kvm kvm nodejs docker
</code></pre><p>After installing these packages, you&rsquo;ll need to add your user to the right group to let you use KVM. The following commands will add your user to the group and then update your current session for the group change to take effect.</p>
<pre tabindex="0"><code>$ sudo usermod -a -G libvirt $(whoami)
$ newgrp libvirt
</code></pre>
<h4 id="docker-kvm-drivers">Docker KVM drivers&nbsp;<a class="hanchor" href="#docker-kvm-drivers" aria-label="Anchor link for: Docker KVM drivers">🔗</a></h4>
<p>If using KVM, you will also need to install the KVM drivers to work with Docker. You need to add <a href="https://github.com/docker/machine/releases">Docker Machine</a> and the <a href="https://github.com/dhiltgen/docker-machine-kvm/releases/">Docker Machine KVM Driver</a> to your local path. You can check their pages on GitHub for the latest versions, or you can run the following commands for specific versions. These were tested on a Fedora 25 installation.</p>

<h5 id="docker-machine">Docker Machine&nbsp;<a class="hanchor" href="#docker-machine" aria-label="Anchor link for: Docker Machine">🔗</a></h5>
<pre tabindex="0"><code>$ curl -L https://github.com/docker/machine/releases/download/v0.12.0/docker-machine-`uname -s`-`uname -m` &gt;/tmp/docker-machine
$ chmod +x /tmp/docker-machine
$ sudo cp /tmp/docker-machine /usr/local/bin/docker-machine
</code></pre>
<h5 id="docker-machine-kvm-driver">Docker Machine KVM Driver&nbsp;<a class="hanchor" href="#docker-machine-kvm-driver" aria-label="Anchor link for: Docker Machine KVM Driver">🔗</a></h5>
<p>This installs the CentOS 7 driver, but it also works with Fedora.</p>
<pre tabindex="0"><code>$ curl -L https://github.com/dhiltgen/docker-machine-kvm/releases/download/v0.10.0/docker-machine-driver-kvm-centos7 &gt;/tmp/docker-machine-driver-kvm
$ chmod +x /tmp/docker-machine-driver-kvm
$ sudo cp /tmp/docker-machine-driver-kvm /usr/local/bin/docker-machine-driver-kvm
</code></pre>
<h4 id="installing-minikube">Installing Minikube&nbsp;<a class="hanchor" href="#installing-minikube" aria-label="Anchor link for: Installing Minikube">🔗</a></h4>
<p>The final step for installation is getting Minikube itself. Currently, there is no package available in Fedora, and the official documentation recommends grabbing the binary and moving it to your local path. To download the binary, make it executable, and move it to your path, run the following.</p>
<pre tabindex="0"><code>$ curl -Lo minikube https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64
$ chmod +x minikube
$ sudo mv minikube /usr/local/bin/
</code></pre><p>Now you&rsquo;re ready to build your cluster.</p>

<h2 id="create-the-minikube-cluster">Create the Minikube cluster&nbsp;<a class="hanchor" href="#create-the-minikube-cluster" aria-label="Anchor link for: Create the Minikube cluster">🔗</a></h2>
<p>Now that you have everything installed and in the right place, you can create your Minikube cluster and get started. To start Minikube, run this command.</p>
<pre tabindex="0"><code>$ minikube start --vm-driver=kvm
</code></pre><p>Next, you&rsquo;ll need to set the context. Context is how <code>kubectl</code> (the command-line interface for Kubernetes) knows what it&rsquo;s dealing with. To set the context for Minikube, run this command.</p>
<pre tabindex="0"><code>$ kubectl config use-context minikube
</code></pre><p>As a check, make sure that <code>kubectl</code> can communicate with your cluster by running this command.</p>
<pre tabindex="0"><code>$ kubectl cluster-info
Kubernetes master is running at https://192.168.99.100:8443
To further debug and diagnose cluster problems, use &#39;kubectl cluster-info dump&#39;.
</code></pre>
<h2 id="build-your-application">Build your application&nbsp;<a class="hanchor" href="#build-your-application" aria-label="Anchor link for: Build your application">🔗</a></h2>
<p>Now that Kubernetes is ready, we need to have an application to deploy in it. This article uses the same Node.js application as the official tutorial in the Kubernetes documentation. Create a folder called <code>hellonode</code> and create a new file called <code>server.js</code> with your favorite text editor.</p>
<pre tabindex="0"><code>var http = require(&#39;http&#39;);

var handleRequest = function(request, response) {
 console.log(&#39;Received request for URL: &#39; + request.url);
 response.writeHead(200);
 response.end(&#39;Hello world!&#39;);
};
var www = http.createServer(handleRequest);
www.listen(8080);
</code></pre><p>Now try running your application.</p>
<pre tabindex="0"><code>$ node server.js
</code></pre><p>While it&rsquo;s running, you should be able to access it on <a href="http://localhost:8080/">localhost:8080</a>. Once you verify it&rsquo;s working, hit <code>Ctrl+C</code> to kill the process.</p>
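<p>If you prefer to check from the terminal, a quick <code>curl</code> works too. Assuming the server from above is still running, you should see its greeting:</p>
<pre tabindex="0"><code>$ curl http://localhost:8080/
Hello world!
</code></pre>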

<h2 id="create-docker-container">Create Docker container&nbsp;<a class="hanchor" href="#create-docker-container" aria-label="Anchor link for: Create Docker container">🔗</a></h2>
<p>Now you have an application to deploy! The next step is to get it packaged into a Docker container (that you&rsquo;ll pass to Kubernetes later). You&rsquo;ll need to create a <code>Dockerfile</code> in the same folder as your <code>server.js</code> file. This guide uses an existing Node.js Docker image. It exposes your application on port 8080, copies <code>server.js</code> to the image, and runs it as a server. Your <code>Dockerfile</code> should look like this.</p>
<pre tabindex="0"><code>FROM node:6.9.2
EXPOSE 8080
COPY server.js .
CMD node server.js
</code></pre><p>If you&rsquo;re familiar with Docker, you&rsquo;re likely used to pushing your image to a registry. In this case, since we&rsquo;re deploying it to Minikube, you can build it using the same Docker host as the Minikube virtual machine. For this to happen, you&rsquo;ll need to use the Minikube Docker daemon.</p>
<pre tabindex="0"><code>$ eval $(minikube docker-env)
</code></pre><p>Now you can build your Docker image with the Minikube Docker daemon.</p>
<pre tabindex="0"><code>$ docker build -t hello-node:v1 .
</code></pre><p>Huzzah! Now you have an image Minikube can run.</p>

<h2 id="create-minikube-deployment">Create Minikube deployment&nbsp;<a class="hanchor" href="#create-minikube-deployment" aria-label="Anchor link for: Create Minikube deployment">🔗</a></h2>
<p>If you remember from the <a href="https://fedoramagazine.org/introduction-kubernetes-fedora/">first part</a> of this series, deployments watch your application&rsquo;s health and reschedule it if it dies. Deployments are the supported way of creating and scaling pods. <code>kubectl run</code> creates a deployment to manage a pod. We&rsquo;ll create one that uses the <code>hello-node</code> Docker image we just built.</p>
<pre tabindex="0"><code>$ kubectl run hello-node --image=hello-node:v1 --port=8080
</code></pre><p>Next, check that the deployment was created successfully.</p>
<pre tabindex="0"><code>$ kubectl get deployments
NAME         DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
hello-node   1         1         1            1           30s
</code></pre><p>Creating the deployment also creates the pod where the application is running. You can view the pod with this command.</p>
<pre tabindex="0"><code>$ kubectl get pods
NAME                          READY     STATUS    RESTARTS   AGE
hello-node-1644695913-k2314   1/1       Running   0          3
</code></pre><p>Finally, let&rsquo;s look at what the configuration looks like. If you&rsquo;re familiar with Ansible, the configuration files for Kubernetes also use easy-to-read YAML. You can see the full configuration with this command.</p>
<pre tabindex="0"><code>$ kubectl config view
</code></pre><p><code>kubectl</code> does many things. To read more about what you can do with it, you can read the <a href="https://kubernetes.io/docs/user-guide/kubectl-overview/">documentation</a>.</p>

<h2 id="create-service">Create service&nbsp;<a class="hanchor" href="#create-service" aria-label="Anchor link for: Create service">🔗</a></h2>
<p>Right now, the pod is only accessible inside of the Kubernetes cluster by its internal IP address. To see it in a web browser, you&rsquo;ll need to expose it as a service. To expose it as a service, run this command.</p>
<pre tabindex="0"><code>$ kubectl expose deployment hello-node --type=LoadBalancer
</code></pre><p>The type was specified as a <code>LoadBalancer</code> because Kubernetes will expose the IP outside of the cluster. If you were running a load balancer in a cloud environment, this is how you&rsquo;d provision an external IP address. However, in this case, it exposes your application as a service in Minikube. And now, finally, you get to see your application. Running this command will open a new browser window with your application.</p>
<pre tabindex="0"><code>$ minikube service hello-node
</code></pre><p>
<figure>
  <img src="https://cdn.fedoramagazine.org/wp-content/uploads/2017/06/minikube-hello-world-browser-e1497995645454.png" alt="Minikube: Exposing Hello Minikube application in browser" loading="lazy">
</figure>
</p>
<p>Congratulations, you deployed your first containerized application via Kubernetes! But now, what if you need to update our small Hello World application?</p>

<h2 id="how-do-we-push-changes">How do we push changes?&nbsp;<a class="hanchor" href="#how-do-we-push-changes" aria-label="Anchor link for: How do we push changes?">🔗</a></h2>
<p>The time has come when you&rsquo;re ready to make an update and push it. Edit your <code>server.js</code> file and change &ldquo;Hello world!&rdquo; to &ldquo;Hello again, world!&rdquo;</p>
<pre tabindex="0"><code>response.end(&#39;Hello again, world!&#39;);
</code></pre><p>And we&rsquo;ll build another Docker image. Note the version bump.</p>
<pre tabindex="0"><code>$ docker build -t hello-node:v2 .
</code></pre><p>Next, you need to give Kubernetes the new image to deploy.</p>
<pre tabindex="0"><code>$ kubectl set image deployment/hello-node hello-node=hello-node:v2
</code></pre><p>And now, your update is pushed! Like before, run this command to have it open in a new browser window.</p>
<pre tabindex="0"><code>$ minikube service hello-node
</code></pre><p>If your application doesn&rsquo;t come up any differently, double-check that you updated the right image. To troubleshoot, get a shell into your pod by running the following command. You can get the pod name from the command run earlier (<code>kubectl get pods</code>). Once you&rsquo;re in the shell, check if the <code>server.js</code> file shows your changes.</p>
<pre tabindex="0"><code>$ kubectl exec -it &lt;pod-name&gt; bash
</code></pre>
<h2 id="cleaning-up">Cleaning up&nbsp;<a class="hanchor" href="#cleaning-up" aria-label="Anchor link for: Cleaning up">🔗</a></h2>
<p>Now that we&rsquo;re done, we can clean up the environment. To clear up the resources in your cluster, run these two commands.</p>
<pre tabindex="0"><code>$ kubectl delete service hello-node
$ kubectl delete deployment hello-node
</code></pre><p>If you&rsquo;re done playing with Minikube, you can also stop it.</p>
<pre tabindex="0"><code>$ minikube stop
</code></pre><p>If you&rsquo;re done using Minikube for a while, you can unset the Minikube Docker daemon that we set earlier in this guide.</p>
<pre tabindex="0"><code>$ eval $(minikube docker-env -u)
</code></pre>
<h2 id="learn-more-about-kubernetes">Learn more about Kubernetes&nbsp;<a class="hanchor" href="#learn-more-about-kubernetes" aria-label="Anchor link for: Learn more about Kubernetes">🔗</a></h2>
<p>You can find the <a href="https://kubernetes.io/docs/tutorials/stateless-application/hello-minikube/">original tutorial</a> in the Kubernetes documentation. If you want to read more, there&rsquo;s plenty of great information online. The <a href="https://kubernetes.io/docs/home/">documentation</a> provided by Kubernetes is thorough and comprehensive.</p>
<p>Questions, Minikube stories, or tips for beginners? Add your comments below.</p>]]></description></item><item><title>Introduction to Kubernetes with Fedora</title><link>https://jwheel.org/blog/2017/07/introduction-kubernetes-fedora/</link><pubDate>Mon, 03 Jul 2017 00:00:00 +0000</pubDate><guid>https://jwheel.org/blog/2017/07/introduction-kubernetes-fedora/</guid><description><![CDATA[<p><em><strong>This article was originally published <a href="https://fedoramagazine.org/introduction-kubernetes-fedora/">on the Fedora Magazine</a>.</strong></em></p>
<hr>
<p><em>This article is part of a short series that introduces Kubernetes. This beginner-oriented series covers some higher level concepts and gives examples of using Kubernetes on Fedora.</em></p>
<hr>
<p>The information technology world changes daily, and the demands of building scalable infrastructure grow more important. Containers aren&rsquo;t anything new these days, and they have various uses and implementations. But what about building scalable, containerized applications? By themselves, Docker and other tools don&rsquo;t quite cut it when it comes to building the infrastructure to support containers. How do you deploy, scale, and manage containerized applications in your infrastructure? This is where tools such as Kubernetes come in. <a href="https://kubernetes.io/">Kubernetes</a> is an open source system that automates deployment, scaling, and management of containerized applications. Kubernetes was originally developed by Google before being donated to the <a href="https://en.wikipedia.org/wiki/Linux_Foundation#Cloud_Native_Computing_Foundation">Cloud Native Computing Foundation</a>, a project of the <a href="https://www.linuxfoundation.org/">Linux Foundation</a>. This article gives a quick primer on what Kubernetes is and what some of the buzzwords really mean.</p>

<h2 id="what-is-kubernetes">What is Kubernetes?&nbsp;<a class="hanchor" href="#what-is-kubernetes" aria-label="Anchor link for: What is Kubernetes?">🔗</a></h2>
<p>Kubernetes simplifies and automates the process of deploying containerized applications at scale. Just like Ansible <a href="https://fedoramagazine.org/using-ansible-provision-vagrant-boxes/">orchestrates software</a>, Kubernetes orchestrates the infrastructure that supports the software. There are various &ldquo;layers of the cake&rdquo; that make Kubernetes a strong solution for building resilient infrastructure. It also helps you build systems that can grow at scale. If your application faces increasing demands, such as higher traffic, Kubernetes helps grow your environment to meet them. This is one reason why Kubernetes is helpful for building long-term solutions to complex problems (even if they aren&rsquo;t complex… yet).</p>
<p>
<figure>
  <img src="https://cdn.fedoramagazine.org/wp-content/uploads/2017/06/kubernetes-high-level-design.jpg" alt="Kubernetes: The high level design" loading="lazy">
  <figcaption>Kubernetes: The high level design. Daniel Smith, Robert Bailey, Kit Merker (<a href="https://www.slideshare.net/RohitJnagal/kubernetes-intro-public-kubernetes-meetup-4212015" class="bare">https://www.slideshare.net/RohitJnagal/kubernetes-intro-public-kubernetes-meetup-4212015</a>).</figcaption>
</figure>
</p>
<p>At a high level overview, imagine three different layers.</p>
<ul>
<li><strong>Users</strong>: People who deploy or create containerized applications to run in your infrastructure</li>
<li><strong>Master(s)</strong>: Manages and schedules your software across various other machines, for example in a clustered computing environment</li>
<li><strong>Nodes</strong>: Various machines that run the application; each node runs an agent called the <em>kubelet</em></li>
</ul>
<p>These three layers are orchestrated and automated by Kubernetes. One of the key pieces of the master (not included in the visual) is <strong>etcd</strong>. etcd is a lightweight, distributed key/value store that holds configuration data. The kubelet on each node can access this data in etcd through an HTTP/JSON API interface. How the master and nodes communicate, including the role of etcd, is explained <a href="https://kubernetes.io/docs/concepts/architecture/master-node-communication/">in the official documentation</a>.</p>
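<p>To make the &ldquo;HTTP/JSON API&rdquo; part concrete, here is a toy illustration against a standalone etcd (using its v2 keys API on the default client port 2379). It only demonstrates the key/value model &ndash; in a real cluster, the Kubernetes API server manages etcd for you, and you wouldn&rsquo;t write to it by hand.</p>
<pre tabindex="0"><code>$ curl -X PUT http://127.0.0.1:2379/v2/keys/demo/message -d value="hello"
$ curl http://127.0.0.1:2379/v2/keys/demo/message
# Returns JSON along the lines of:
# {"action":"get","node":{"key":"/demo/message","value":"hello", ...}}
</code></pre>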
<p>Another important detail not shown in the diagram is that you might have many masters. In a high-availability (HA) set-up, you can keep your infrastructure resilient by having multiple masters in case one happens to go down.</p>

<h2 id="terminology">Terminology&nbsp;<a class="hanchor" href="#terminology" aria-label="Anchor link for: Terminology">🔗</a></h2>
<p>It&rsquo;s important to understand the concepts of Kubernetes before you start to play around with it. There are many core concepts in Kubernetes, such as services, volumes, secrets, daemon sets, and jobs. However, this article explains four that are helpful for the next exercise of building a mini Kubernetes cluster. These four concepts are <em>pods</em>, <em>labels</em>, <em>replica sets</em>, and <em>deployments</em>.</p>

<h4 id="pods"><a href="https://kubernetes.io/docs/concepts/workloads/pods/pod/">Pods</a>&nbsp;<a class="hanchor" href="#pods" aria-label="Anchor link for: Pods">🔗</a></h4>
<p>If you imagine Kubernetes as a Lego® castle, pods are the smallest block you can pick out. By themselves, they are the smallest unit you can deploy. The containers of an application fit into a pod. The pod can be one container, but it can also be as many as needed. Containers in a pod are unique since they share the same Linux namespaces and aren&rsquo;t isolated from each other. In a world before containers, this would be similar to running an application&rsquo;s processes on the same host machine.</p>
<p>Because the containers in a pod share the same namespaces, they all:</p>
<ul>
<li>Share an IP address</li>
<li>Share port space</li>
<li>Find each other over <em>localhost</em></li>
<li>Communicate over the IPC namespace</li>
<li>Have access to shared volumes</li>
</ul>
<p>But what&rsquo;s the point of having pods? The main purpose of pods is to have groups of &ldquo;helping&rdquo; containers in the same namespace (co-located) and integrated together (co-managed) with the main application container. Some examples might be logging or monitoring tools that check the health of your application, or backup tools that act when certain data changes.</p>
<p>In the big picture, containers in a single pod are always scheduled together too. However, Kubernetes doesn&rsquo;t automatically reschedule them to a new node if the node dies (more on this later).</p>
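<p>To see some of this for yourself on a running cluster, <code>kubectl</code> can describe any pod. The pod name below is a placeholder for whatever <code>kubectl get pods</code> reports in your environment.</p>
<pre tabindex="0"><code>$ kubectl get pods
$ kubectl describe pod &lt;pod-name&gt;
# The output lists every container in the pod, the single IP they share,
# the node the whole pod was scheduled onto, its labels, and any volumes.
</code></pre>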

<h4 id="labels"><a href="https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/">Labels</a>&nbsp;<a class="hanchor" href="#labels" aria-label="Anchor link for: Labels">🔗</a></h4>
<p>Labels are a simple but important concept in Kubernetes. Labels are key/value pairs attached to <em>objects</em> in Kubernetes, like pods. They let you specify unique attributes of objects that actually mean something to humans. You can attach them when you create an object, and modify or add them later. Labels help you organize and select different sets of objects to interact with when performing actions inside of Kubernetes. For example, you can identify:</p>
<ul>
<li><strong>Software releases</strong>: Alpha, beta, stable</li>
<li><strong>Environments</strong>: Development, production</li>
<li><strong>Tiers</strong>: Front-end, back-end</li>
</ul>
<p>Labels are as flexible as you need them to be, and this list isn&rsquo;t comprehensive. Be creative when thinking of how to apply them.</p>
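<p>As a quick, hypothetical sketch of labels in action: you can attach labels to an existing object and then select only the objects that match them. The pod name and label values below are made up for illustration.</p>
<pre tabindex="0"><code>$ kubectl label pod my-frontend-pod environment=production tier=front-end
$ kubectl get pods -l environment=production,tier=front-end --show-labels
</code></pre>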

<h4 id="replica-sets"><a href="https://kubernetes.io/docs/concepts/workloads/controllers/replicaset/">Replica sets</a>&nbsp;<a class="hanchor" href="#replica-sets" aria-label="Anchor link for: Replica sets">🔗</a></h4>
<p>Replica sets are where some of the magic begins to happen with automatic scheduling and rescheduling. Replica sets ensure that a specified number of pod instances (called <em>replicas</em>) are running at any moment. If your web application needs to constantly have four pods in the front-end and two in the back-end, replica sets are your insurance that those numbers are always maintained. This also makes Kubernetes great for scaling. If you need to scale up or down, change the number of replicas.</p>
<p>When reading about replica sets, you might also see <em>replication controllers</em>. They are somewhat interchangeable, but replication controllers are older, semi-deprecated, and less powerful than replica sets. The main difference is that replica sets work with more advanced set-based selectors &ndash; which goes back to labels. Ideally, you won&rsquo;t have to worry about this much today.</p>
<p>Even though replica sets are where the scheduling magic happens to help make your infrastructure resilient, you won&rsquo;t actually interact with them much. Replica sets are managed by deployments, so it&rsquo;s unusual to directly create or manipulate replica sets. And guess what&rsquo;s next?</p>
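<p>Since replica sets are usually managed for you, a typical interaction happens through the deployment instead (the deployment name here is hypothetical). Scaling the deployment adjusts the underlying replica set on your behalf.</p>
<pre tabindex="0"><code>$ kubectl scale deployment/my-webapp --replicas=4
$ kubectl get replicasets   # the replica set now wants 4 pods running
</code></pre>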

<h4 id="deployments"><a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/">Deployments</a>&nbsp;<a class="hanchor" href="#deployments" aria-label="Anchor link for: Deployments">🔗</a></h4>
<p>Deployments are another important concept inside of Kubernetes. Deployments are a declarative way to deploy and manage software. If you&rsquo;re familiar with Ansible, you can compare deployments to the playbooks of Ansible. If you&rsquo;re building your infrastructure out, you want to make sure it is easily reproducible without much manual work. Deployments are the way to do this.</p>
<p>Deployments offer functionality such as revision history, so it&rsquo;s always easy to roll back changes if something doesn&rsquo;t work out. They also manage any updates you push out to your application, and if something isn&rsquo;t working, Kubernetes stops rolling out your update and reverts to the last working state. Deployments follow the mathematical property of <a href="https://en.wikipedia.org/wiki/Idempotence">idempotence</a>: you can apply the same spec as many times as you like and end up with the same result.</p>
<p>Deployments also get into imperative and declarative ways to build infrastructure, but this explanation is a quick, fly-by overview. You can read more <a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/">detailed information</a> in the official documentation.</p>
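<p>The revision history and rollback behavior described above is exposed through <code>kubectl rollout</code>. A short sketch, again with a hypothetical deployment name:</p>
<pre tabindex="0"><code>$ kubectl rollout history deployment/my-webapp        # list past revisions
$ kubectl rollout undo deployment/my-webapp           # go back one revision
$ kubectl rollout undo deployment/my-webapp --to-revision=2
</code></pre>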

<h2 id="installing-on-fedora">Installing on Fedora&nbsp;<a class="hanchor" href="#installing-on-fedora" aria-label="Anchor link for: Installing on Fedora">🔗</a></h2>
<p>If you want to start playing with Kubernetes, install it and some useful tools from the Fedora repositories.</p>
<pre tabindex="0"><code>sudo dnf install kubernetes
</code></pre><p>This command provides the bare minimum needed to get started. You can also install other cool tools like <em>cockpit-kubernetes</em> (integration with <a href="http://cockpit-project.org/">Cockpit</a>) and <em>kubernetes-ansible</em> (provisioning Kubernetes with <a href="https://www.ansible.com/">Ansible</a> playbooks and roles).</p>
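<p>If you want those extras too, they install the same way (assuming the packages are available in your Fedora release&rsquo;s repositories).</p>
<pre tabindex="0"><code>sudo dnf install cockpit-kubernetes kubernetes-ansible
</code></pre>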

<h2 id="learn-more-about-kubernetes">Learn more about Kubernetes&nbsp;<a class="hanchor" href="#learn-more-about-kubernetes" aria-label="Anchor link for: Learn more about Kubernetes">🔗</a></h2>
<p>If you want to read more about Kubernetes or want to explore the concepts more, there&rsquo;s plenty of great information online. The <a href="https://kubernetes.io/docs/home/">documentation</a> provided by Kubernetes is fantastic, but there are also other helpful guides from <a href="https://www.digitalocean.com/community/tutorials/an-introduction-to-kubernetes">DigitalOcean</a> and <a href="https://blog.giantswarm.io/understanding-basic-kubernetes-concepts-i-introduction-to-pods-labels-replicas/">Giant Swarm</a>. The next article in the series will explore building a mini Kubernetes cluster on your own computer to see how it really works.</p>
<p>Questions, Kubernetes stories, or tips for beginners? Add your comments below.</p>]]></description></item></channel></rss>