<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Google</title><link>https://jwheel.org/tags/google/</link><description>Homepage of Justin Wheeler, an Open Source contributor and Free Software advocate from Georgia, USA.</description><generator>Hugo -- gohugo.io</generator><language>en-us</language><managingEditor>Justin Wheeler</managingEditor><lastBuildDate>Mon, 18 Dec 2017 00:00:00 +0000</lastBuildDate><atom:link href="https://jwheel.org/rss/tags/google/index.xml" rel="self" type="application/rss+xml"/><item><title>Statistics proposal and self-hosting ListenBrainz</title><link>https://jwheel.org/blog/2017/12/statistics-hosting-listenbrainz/</link><pubDate>Mon, 18 Dec 2017 00:00:00 +0000</pubDate><guid>https://jwheel.org/blog/2017/12/statistics-hosting-listenbrainz/</guid><description><![CDATA[<p><em>This post is part of a series of posts where I contribute to the ListenBrainz project for my independent study at the Rochester Institute of Technology in the fall 2017 semester. For more posts, find them in <a href="/tags/rit-2171/">this tag</a>.</em></p>
<hr>
<p>This week is the last week of the fall 2017 semester at RIT. This semester, I spent time with the MetaBrainz community working on ListenBrainz for an independent study. This post explains what I worked on in the last month and reflects on my <a href="/blog/2017/10/contributing-listenbrainz/">original objectives</a> for the independent study.</p>

<h2 id="running-my-own-listenbrainz">Running my own ListenBrainz&nbsp;<a class="hanchor" href="#running-my-own-listenbrainz" aria-label="Anchor link for: Running my own ListenBrainz">🔗</a></h2>
<p>The <a href="http://ritlug.com/">RIT Linux Users Group</a> hosts various virtual machines for our projects. I requested one to set up and host a &ldquo;production&rdquo; ListenBrainz site. The purpose of doing this was to…</p>
<ol>
<li>Test my changes in a &ldquo;production&rdquo; environment</li>
<li>Offer a service for the RIT Linux Users Group to poke around with</li>
</ol>
<p>I spent most of this time working with our system administrator to set up the machine and adjust its hardware specs for ListenBrainz. Once we resolved storage space and memory issues, getting ListenBrainz running was straightforward. My experience writing the <a href="https://listenbrainz.readthedocs.io/en/latest/dev/devel-env.html">development guide</a> paid off: it worked on the first run!</p>
<p>Now, <a href="http://listen.ritlug.com/">listen.ritlug.com</a> is live.</p>

<h4 id="figuring-out-https">Figuring out HTTPS&nbsp;<a class="hanchor" href="#figuring-out-https" aria-label="Anchor link for: Figuring out HTTPS">🔗</a></h4>
<p>My next challenge for the site is to set up HTTPS. I tried using a <a href="https://www.nginx.com/resources/admin-guide/nginx-https-upstreams/">reverse proxy in nginx</a> to set up HTTPS, but I received <em>502 Bad Gateway</em> errors. I realized I spent too much time figuring this out on my own and decided to <a href="https://community.metabrainz.org/t/how-does-metabrainz-use-https-on-listenbrainz/347319">ask for help</a> in the MetaBrainz community forums.</p>
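<p>For anyone attempting the same setup, here&rsquo;s a minimal sketch of an nginx HTTPS reverse proxy in front of the ListenBrainz web container. The upstream port and certificate paths are assumptions for illustration, not the values from my machine.</p>
<pre tabindex="0"><code>server {
    listen 443 ssl;
    server_name listen.ritlug.com;

    # Assumed certificate paths (e.g. from Let&#39;s Encrypt)
    ssl_certificate     /etc/letsencrypt/live/listen.ritlug.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/listen.ritlug.com/privkey.pem;

    location / {
        # Assumed host port where the ListenBrainz web container is published
        proxy_pass http://127.0.0.1:8000;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto https;
    }
}
</code></pre>
<p>One common cause of <em>502 Bad Gateway</em> with this layout on an SELinux-enforcing host is nginx being denied outbound network connections; <code>setsebool -P httpd_can_network_connect 1</code> is worth checking.</p>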

<h2 id="proposing-new-statistics">Proposing new statistics&nbsp;<a class="hanchor" href="#proposing-new-statistics" aria-label="Anchor link for: Proposing new statistics">🔗</a></h2>
<p>Halfway through the independent study, I realized I would fall short of my original objective of implementing basic statistics in ListenBrainz. As a compromise, I wrote a <a href="https://docs.google.com/document/d/1kByAgC9kbuDHNbsEJDkYkTMJ-wAoouWj0qNyi2UPb2Y/edit?usp=sharing">proposal for new statistics</a> for the project. The proposal surveys proprietary platforms that compete with ListenBrainz to see what statistics they offer, and adds some ideas of my own.</p>
<p>I <a href="https://community.metabrainz.org/t/feedback-needed-listenbrainz-statistics-proposal/347327">proposed this to the MetaBrainz community</a> on the community forums. I&rsquo;m awaiting feedback on my ideas. Once I get feedback, I plan to file new tickets for each statistic to track their implementation over time.</p>
<p>I don&rsquo;t expect statistics to be at the forefront of ListenBrainz for some time, since a lot of work is going towards other areas of the project. But later in 2018, I expect more focus on the user-facing side of the project.</p>

<h2 id="my-statistic-and-google-bigquery">My statistic and Google BigQuery&nbsp;<a class="hanchor" href="#my-statistic-and-google-bigquery" aria-label="Anchor link for: My statistic and Google BigQuery">🔗</a></h2>
<p>My biggest blocker over the last month was <a href="https://cloud.google.com/bigquery/">Google BigQuery</a>. I wrote a statistic to <a href="https://github.com/metabrainz/listenbrainz-server/pull/318/commits/c1c08ce7f8d207591daeb288087872616d5063a4">calculate play counts</a> over a time period, but I was asked to test it, and testing required real data to work with.</p>
<p>Originally, I tried using the <a href="https://github.com/tgwizard/sls">Simple Last.fm Scrobbler</a> to submit listens to the local IP address for my development environment, but I wasn&rsquo;t able to get the app to reach my ListenBrainz server. To get the data, I had to set up Google BigQuery credentials so I could make queries against data on the production site, <a href="https://listenbrainz.org/">listenbrainz.org</a>.</p>
<p>I tried working through the <a href="https://cloud.google.com/bigquery/docs/">Google BigQuery documentation</a>. There&rsquo;s a lot of documentation for using BigQuery as a developer, but it was hard to find the information I needed to set it up in my development environment. I tried creating a new project in the Google Cloud Platform, but it confused me by prompting me to upload my own data instead of letting me access data already in BigQuery.</p>
<p>Too late, I realized I had spent too much time working on my own instead of asking for help. I <a href="https://github.com/metabrainz/listenbrainz-server/pull/318">submitted a pull request</a> with the statistic I wrote and <a href="https://community.metabrainz.org/t/how-to-set-up-google-bigquery-in-a-listenbrainz-development-environment/347307">asked for help</a> in the MetaBrainz community. I also offered to write documentation for this setup once I learn how to do it.</p>

<h2 id="reflecting-back">Reflecting back&nbsp;<a class="hanchor" href="#reflecting-back" aria-label="Anchor link for: Reflecting back">🔗</a></h2>
<p>I looked back on my <a href="/blog/2017/10/contributing-listenbrainz/">original objectives</a> for the independent study and found reasons to be both satisfied and dissatisfied.</p>

<h4 id="not-enough-programming">Not enough programming&nbsp;<a class="hanchor" href="#not-enough-programming" aria-label="Anchor link for: Not enough programming">🔗</a></h4>
<p>I wanted this independent study to enhance my programming knowledge, especially in Python, because I wanted to become more familiar with the language. However, I actually didn&rsquo;t do much programming during the independent study, through my own fault.</p>
<p>My biggest challenge was that I bit off more than I could chew. I wanted to write code, and I set an ambitious goal before I knew the project&rsquo;s code base. Even now, I&rsquo;m still not completely comfortable with the code. It&rsquo;s a big project with a lot going on; I understood the parts I worked on, but there&rsquo;s still a lot to learn.</p>
<p>I realized that next time, I need to spend more time evaluating a project&rsquo;s code base before writing out my milestones. I wish I had set smaller, more realistic milestones for myself. My milestone of implementing basic reports was lofty given my existing programming knowledge.</p>

<h4 id="successes">Successes&nbsp;<a class="hanchor" href="#successes" aria-label="Anchor link for: Successes">🔗</a></h4>
<p>One of my other objectives was to write documentation for the project. I felt I succeeded in this milestone, and actually found it enjoyable and interesting to do! I helped separate out documentation from the README into the dedicated <a href="https://listenbrainz.readthedocs.io/en/latest/">ReadTheDocs site</a>. I wrote the <a href="https://listenbrainz.readthedocs.io/en/latest/dev/devel-env.html">development environment guide</a> and helped fix some build issues with the docs site. I also plan to write more for some of the other pain points I found, like Google BigQuery.</p>
<p>My last milestone was to create a use case for a data visualization course at RIT. While I didn&rsquo;t implement my basic reports, I did create the proposal and make an effort to write new statistics. There&rsquo;s a lot of potential now to work with the data in Google BigQuery and do front-end work with tools like <a href="https://d3js.org/">D3.js</a> and <a href="https://plot.ly/javascript/">Plotly.js</a>. I believe there&rsquo;s significant potential to use ListenBrainz as a hands-on project for students to explore data visualization with real data. I hope to support my independent study professor, Prof. Roberts, with questions and logistics of using it as a tool for learning in the future.</p>

<h4 id="unexpected-success">Unexpected success&nbsp;<a class="hanchor" href="#unexpected-success" aria-label="Anchor link for: Unexpected success">🔗</a></h4>
<p>I also had an unplanned success: I immersed myself in the ListenBrainz community. Over the last few months, I realized that many of my strengths are in community management and tooling. During my time in the community, I did the following:</p>
<ul>
<li><a href="https://github.com/metabrainz/listenbrainz-server/pull/290">Fixed SELinux labels in Docker</a></li>
<li><a href="https://github.com/metabrainz/listenbrainz-server/pull/288">Contributed a pull request template</a></li>
<li><a href="https://github.com/metabrainz/listenbrainz-server/pull/287">Drafted contributing guidelines</a></li>
<li><a href="https://github.com/metabrainz/listenbrainz-server/pull/294">Fixed a PostgreSQL bug</a></li>
<li><a href="https://github.com/metabrainz/listenbrainz-server/pulls?utf8=%E2%9C%93&amp;q=is%3Apr&#43;author%3Ajflory7&#43;">And more…</a></li>
</ul>

<h2 id="to-the-future">To the future!&nbsp;<a class="hanchor" href="#to-the-future" aria-label="Anchor link for: To the future!">🔗</a></h2>
<p>This ends my independent study with ListenBrainz, but it doesn&rsquo;t end my time contributing! I chose ListenBrainz because it&rsquo;s a project I&rsquo;m passionate about. An independent study allowed me to justify more time on it than a side project in my free time. I&rsquo;m happy to have that opportunity, but I don&rsquo;t want to end here!</p>
<p>I want to follow through on the statistics because I&rsquo;m passionate about understanding music listening trends. I think there&rsquo;s a lot of potential for psychological research through music data. To that end, I filed a ticket to request <a href="https://tickets.metabrainz.org/browse/LB-243">tagging listens with &ldquo;emotion&rdquo; words</a> that are synced back to <a href="https://musicbrainz.org/doc/MusicBrainz_Database">MusicBrainz entities</a>.</p>
<p>I won&rsquo;t have as much time to work on the project without the course credit, but I hope to stay involved for the future. I love the project and I love the community. I&rsquo;m thankful for the opportunity to work on this project as an independent study, and learn some things along the way.</p>]]></description></item><item><title>ListenBrainz community gardening and user statistics</title><link>https://jwheel.org/blog/2017/11/listenbrainz-community-user-statistics/</link><pubDate>Mon, 13 Nov 2017 00:00:00 +0000</pubDate><guid>https://jwheel.org/blog/2017/11/listenbrainz-community-user-statistics/</guid><description><![CDATA[<p><em>This post is part of a series of posts where I contribute to the ListenBrainz project for my independent study at the Rochester Institute of Technology in the fall 2017 semester. For more posts, find them in <a href="/tags/rit-2171/">this tag</a>.</em></p>
<hr>
<p>My progress with ListenBrainz slowed, but I am resuming the pace of contributing and advancing on my independent study timeline. This past week, I finished assigned tasks around contributor-facing documentation: a Code of Conduct, contributing guidelines, and a pull request template. I also began researching user statistics and found some already implemented. I wrote one of my own, but I need to learn more about Google BigQuery to advance further.</p>

<h2 id="paving-the-contributor-pathway">Paving the contributor pathway&nbsp;<a class="hanchor" href="#paving-the-contributor-pathway" aria-label="Anchor link for: Paving the contributor pathway">🔗</a></h2>
<p>
<figure>
  <img src="/blog/2017/11/Screenshot-from-2017-11-13-02-05-12.png" alt="Making it easier for people to contribute user statistics to ListenBrainz" loading="lazy">
  <figcaption>Making it easier for people to contribute to ListenBrainz with helpful contributing guidelines</figcaption>
</figure>
</p>
<p>Earlier, I identified weaknesses in the ListenBrainz contributor pathway and ways we could improve it. This started with the development environment documentation. Now, I&rsquo;ve helped draft first revisions of our <a href="https://github.com/metabrainz/listenbrainz-server/pull/287">contributor guidelines</a>, <a href="https://github.com/metabrainz/listenbrainz-server/pull/286">Code of Conduct reference</a>, and <a href="https://github.com/metabrainz/listenbrainz-server/pull/288">pull request template</a>. Together, these three documents have two goals.</p>
<ol>
<li><strong>Make it easier</strong> to contribute to ListenBrainz</li>
<li>Have a better experience and <strong>have fun</strong> contributing!</li>
</ol>
<p>Adding these documents addresses both goals, and the <a href="https://github.com/metabrainz/listenbrainz-server/community">GitHub community profile</a> highlights these deliverables as ways to meet them. After getting feedback and seeing what others think, we&rsquo;ll make more revisions later (with some trial runs).</p>

<h2 id="back-to-selinux-context-flags">Back to SELinux context flags&nbsp;<a class="hanchor" href="#back-to-selinux-context-flags" aria-label="Anchor link for: Back to SELinux context flags">🔗</a></h2>
<p>Recently, I set my desktop back up and installed Docker for the first time on this machine; however, the development environment still failed to start. When I ran the script, it would eventually error out because of a permission denial. The web server image for ListenBrainz was failing.</p>
<p>After debugging, I noticed that I had missed the SELinux volume tags for the ListenBrainz web server images in my original pull request, <a href="https://github.com/metabrainz/listenbrainz-server/pull/257">#257</a>. When I created that pull request, I might have had cached data that let my laptop run the development environment without a problem. In any case, it was an easy fix, and I recognized the issue as soon as it happened. I submitted the fix in <a href="https://github.com/metabrainz/listenbrainz-server/pull/290">#290</a>.</p>

<h2 id="writing-new-user-statistics">Writing new user statistics&nbsp;<a class="hanchor" href="#writing-new-user-statistics" aria-label="Anchor link for: Writing new user statistics">🔗</a></h2>
<p>The most interesting part of my independent study is working with the music data to build and generate interesting statistics. I finally began exploring the <a href="https://github.com/metabrainz/listenbrainz-server/tree/master/listenbrainz/stats">existing statistics</a> in ListenBrainz. The statistic queries use BigQuery standard SQL. BigQuery rapidly scans large datasets and scales queries for performance (I still have a lot to learn about it).</p>

<h4 id="two-types-of-statistics">Two types of statistics&nbsp;<a class="hanchor" href="#two-types-of-statistics" aria-label="Anchor link for: Two types of statistics">🔗</a></h4>
<p>Additionally, ListenBrainz generates <strong>two types</strong> of statistics:</p>
<ol>
<li>Site-wide statistics</li>
<li>User statistics</li>
</ol>
<p>Site-wide statistics are metrics non-specific to a single user. There is only <a href="https://github.com/metabrainz/listenbrainz-server/blob/master/listenbrainz/stats/sitewide.py">one site-wide query</a> now. It counts how many artists were ever submitted to this ListenBrainz instance and returns an integer. There&rsquo;s room for expansion in site-wide statistics.</p>
<p>On the other hand, user statistics are metrics specific to a single user. There&rsquo;s a <a href="https://github.com/metabrainz/listenbrainz-server/blob/master/listenbrainz/stats/user.py">fair number already</a>, like the top artists and songs in a time period and the number of artists you&rsquo;ve listened to. These are a little more complete and offer more room for expansion, like doing cool front-end work with something like <a href="https://d3js.org/">D3.js</a>.</p>
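<p>For a flavor of what these queries look like, a &ldquo;top artists&rdquo; user statistic in BigQuery standard SQL is roughly the following. Treat this as a sketch of the pattern, not the project&rsquo;s exact query; the column names are assumptions based on the listen data described above.</p>
<pre tabindex="0"><code>SELECT artist_name, COUNT(*) AS listen_count
  FROM {dataset_id}.{table_id}
 WHERE user_name = @musicbrainz_id
 GROUP BY artist_name
 ORDER BY listen_count DESC
 LIMIT 25;
</code></pre>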

<h4 id="writing-user-statistics">Writing user statistics&nbsp;<a class="hanchor" href="#writing-user-statistics" aria-label="Anchor link for: Writing user statistics">🔗</a></h4>
<p>Of course, I had to try writing my own. One helpful query I thought of was getting a count of the songs you listened to over a time period (e.g. &ldquo;you listened to 500 songs this week!&rdquo;). I haven&rsquo;t tested it yet, but I have this in a local branch and hope to test it with real data soon.</p>
<pre tabindex="0"><code>def get_play_count(musicbrainz_id, time_interval=None):
    &#34;&#34;&#34;Count a user&#39;s listens, optionally limited to a recent interval.&#34;&#34;&#34;

    filter_clause = &#34;&#34;
    if time_interval:
        # time_interval is a BigQuery interval, e.g. &#34;7 DAY&#34;; listened_at is
        # a timestamp, so compare it against CURRENT_TIMESTAMP()
        filter_clause = (
            &#34;AND listened_at &gt;= &#34;
            &#34;TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL {})&#34;
        ).format(time_interval)

    # COUNT() aggregates to a single row, so no LIMIT clause is needed
    query = &#34;&#34;&#34;SELECT COUNT(release_msid) AS listen_count
                 FROM {dataset_id}.{table_id}
                WHERE user_name = @musicbrainz_id
                {time_filter_clause}
           &#34;&#34;&#34;.format(
                   dataset_id=config.BIGQUERY_DATASET_ID,
                   table_id=config.BIGQUERY_TABLE_ID,
                   time_filter_clause=filter_clause,
           )

    parameters = [
        {
            &#39;type&#39;: &#39;STRING&#39;,
            &#39;name&#39;: &#39;musicbrainz_id&#39;,
            &#39;value&#39;: musicbrainz_id,
        }
    ]

    return stats.run_query(query, parameters)
</code></pre>
<h2 id="researching-google-bigquery">Researching Google BigQuery&nbsp;<a class="hanchor" href="#researching-google-bigquery" aria-label="Anchor link for: Researching Google BigQuery">🔗</a></h2>
<p>My next steps for the independent study are researching <a href="https://cloud.google.com/bigquery/docs/">Google BigQuery</a>. After going through the existing statistics and understanding how ListenBrainz generates them, an understanding of Google BigQuery is essential to writing effective queries. When I become more comfortable with the tooling and how it works, I want to map out a plan of statistics to generate and measure.</p>
<p>Until then, the hacking continues! As always, keep the FOSS flag high…</p>]]></description></item><item><title>Sign at the line: Deploying an app to CoreOS Tectonic</title><link>https://jwheel.org/blog/2017/08/deploying-app-tectonic/</link><pubDate>Fri, 04 Aug 2017 00:00:00 +0000</pubDate><guid>https://jwheel.org/blog/2017/08/deploying-app-tectonic/</guid><description><![CDATA[<p><em>This is a short series to introduce Kubernetes, what it does, and how to experiment with it on Fedora. This is a beginner-oriented series to help introduce some higher level concepts and give examples of using it on Fedora. In the first post, we covered <a href="https://fedoramagazine.org/introduction-kubernetes-fedora/">key concepts in Kubernetes</a>. The second post showed how to build a <a href="https://fedoramagazine.org/minikube-kubernetes/">single-node Kubernetes deployment</a> on your own computer. The last post and this post build on top of the Fedora Magazine series. The third post introduced how to <a href="/blog/2017/07/tectonic-amazon-web-services-aws/">deploy CoreOS Tectonic</a> to Amazon Web Services (AWS). This fourth post teaches how to deploy a simple web application to your Tectonic installation.</em></p>
<hr>
<p>Welcome back to the <strong>Kubernetes and Fedora</strong> series. Each week, we build on the previous articles in the series to help introduce you to using Kubernetes. This article picks up where we left off last time, when you installed Tectonic on Amazon Web Services (AWS). By the end of this article, you will…</p>
<ul>
<li>Start up <a href="https://redis.io/">Redis</a> master and slave pods</li>
<li>Start a front-end pod that interacts with the Redis pods</li>
<li>Deploy a simple web app for all of your friends to leave you messages</li>
</ul>
<p>Compared to previous articles, this article is a little more hands-on. Like before, it is based on an excellent tutorial in the <a href="https://github.com/kubernetes/kubernetes/tree/master/examples/guestbook">upstream Kubernetes documentation</a>. Let&rsquo;s get started!</p>

<h2 id="pre-requisites">Pre-requisites&nbsp;<a class="hanchor" href="#pre-requisites" aria-label="Anchor link for: Pre-requisites">🔗</a></h2>
<p>This tutorial assumes you followed the <a href="https://fedoramagazine.org/minikube-kubernetes/">Minikube how-to</a> earlier in this series and that you already <a href="https://fedoramagazine.org/tectonic-amazon-web-services-aws/">have a Tectonic installation</a> running (doesn&rsquo;t have to be on AWS). In case you&rsquo;re jumping in now, make sure you have the Kubernetes client tools installed on your Fedora system, like <code>kubectl</code>. If not, you can install them now.</p>
<pre tabindex="0"><code>$ sudo dnf install kubernetes-client
</code></pre>
<h2 id="configure-kubectl-for-tectonic">Configure <code>kubectl</code> for Tectonic&nbsp;<a class="hanchor" href="#configure-kubectl-for-tectonic" aria-label="Anchor link for: Configure kubectl for Tectonic">🔗</a></h2>
<p>To use <code>kubectl</code> with your Tectonic installation, you need to have a valid configuration in <code>~/.kube/config</code> for your cluster. This is how <code>kubectl</code> knows where and how to talk to Tectonic. To get these values, first log into the Tectonic Console you installed.</p>
<ol>
<li>Click <em>username</em> (usually <em>admin</em>) &gt; <em>My Account</em> on the bottom left.</li>
<li>Click <em>Download Configuration</em>.</li>
<li>When the <em>Set Up kubectl</em> window opens, click <em>Verify Identity</em>.</li>
<li>Enter your username and password, and click <em>Login</em>.</li>
<li>From the <em>Login Successful</em> screen, copy the provided code.</li>
<li>Switch back to Tectonic and enter the code in the field.</li>
</ol>
<p>Now you will be able to download <code>kubectl-config</code> from Tectonic. There are two ways to proceed from here.</p>

<h4 id="add-a-new-configuration">Add a new configuration&nbsp;<a class="hanchor" href="#add-a-new-configuration" aria-label="Anchor link for: Add a new configuration">🔗</a></h4>
<p>If this is your first time using <code>kubectl</code>, your configuration is likely empty. If it&rsquo;s empty or you don&rsquo;t care about overwriting an old configuration, you can run the following commands to add the configuration.</p>
<pre tabindex="0"><code>$ mkdir -p ~/.kube/
$ mv ~/Downloads/kubectl-config ~/.kube/config
$ chmod 600 ~/.kube/config
</code></pre>
<h4 id="append-to-an-existing-configuration">Append to an existing configuration&nbsp;<a class="hanchor" href="#append-to-an-existing-configuration" aria-label="Anchor link for: Append to an existing configuration">🔗</a></h4>
<p>If you already have a configuration, like from Minikube, you might not want to wipe it all out. In this case, you can merge the files manually together. You&rsquo;ll need to copy the <code>clusters</code>, <code>users</code>, and <code>contexts</code> from the Tectonic configuration into your existing one. The benefit of doing this is that you&rsquo;ll be able to change contexts to switch from one cluster to another.</p>
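<p>As a sketch, a merged <code>~/.kube/config</code> with both a Minikube and a Tectonic entry looks something like the following. The server addresses, paths, and credential placeholders here are illustrative, not values from a real cluster.</p>
<pre tabindex="0"><code>apiVersion: v1
kind: Config
clusters:
- name: minikube
  cluster:
    server: https://192.168.99.100:8443
    certificate-authority: /home/user/.minikube/ca.crt
- name: tectonic
  cluster:
    server: https://tectonic.example.com:443
    certificate-authority-data: &lt;base64-encoded CA bundle&gt;
users:
- name: minikube
  user:
    client-certificate: /home/user/.minikube/client.crt
    client-key: /home/user/.minikube/client.key
- name: tectonic-admin
  user:
    token: &lt;credentials from the Tectonic Console download&gt;
contexts:
- name: minikube
  context:
    cluster: minikube
    user: minikube
- name: tectonic
  context:
    cluster: tectonic
    user: tectonic-admin
current-context: tectonic
</code></pre>
<p>With both contexts in place, <code>kubectl config use-context minikube</code> and <code>kubectl config use-context tectonic</code> switch between clusters.</p>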

<h4 id="test-your-configuration">Test your configuration&nbsp;<a class="hanchor" href="#test-your-configuration" aria-label="Anchor link for: Test your configuration">🔗</a></h4>
<p>Once you&rsquo;ve finished your configuration, test to see if it works.</p>
<pre tabindex="0"><code>$ kubectl config use-context tectonic       # if you have multiple contexts in config
$ kubectl get nodes
NAME                                        STATUS    AGE
ip-10-0-0-59.us-east-2.compute.internal     Ready     1d
ip-10-0-23-239.us-east-2.compute.internal   Ready     1d
ip-10-0-44-211.us-east-2.compute.internal   Ready     1d
ip-10-0-61-218.us-east-2.compute.internal   Ready     1d
ip-10-0-67-239.us-east-2.compute.internal   Ready     1d
ip-10-0-95-51.us-east-2.compute.internal    Ready     1d
</code></pre><p>Huzzah! Now we&rsquo;re ready to get to work.</p>

<h2 id="getting-the-deployment-and-service-files">Getting the deployment and service files&nbsp;<a class="hanchor" href="#getting-the-deployment-and-service-files" aria-label="Anchor link for: Getting the deployment and service files">🔗</a></h2>
<p>All of the example files come from the official Kubernetes GitHub repo. You can find them in the <a href="https://github.com/kubernetes/kubernetes/tree/master/examples/guestbook">Guestbook example</a>. To get started, create a new directory and download all of the files.</p>
<pre tabindex="0"><code>$ wget https://raw.githubusercontent.com/kubernetes/kubernetes/master/examples/guestbook/redis-{master,slave}-{deployment,service}.yaml \
       https://raw.githubusercontent.com/kubernetes/kubernetes/master/examples/guestbook/frontend-{deployment,service}.yaml
</code></pre><p>We&rsquo;ll explain what all of these files do in the next steps. Each step starts with the command to run, followed by a short explanation of what&rsquo;s actually happening.</p>

<h2 id="start-the-redis-master">Start the Redis master&nbsp;<a class="hanchor" href="#start-the-redis-master" aria-label="Anchor link for: Start the Redis master">🔗</a></h2>
<pre tabindex="0"><code>$ kubectl create -f redis-master-service.yaml
service &#34;redis-master&#34; created
$ kubectl create -f redis-master-deployment.yaml
deployment &#34;redis-master&#34; created
</code></pre>
<h4 id="define-the-deployment">Define the deployment&nbsp;<a class="hanchor" href="#define-the-deployment" aria-label="Anchor link for: Define the deployment">🔗</a></h4>
<p>The <code>redis-master-deployment.yaml</code> file downloaded earlier defines the deployment and its characteristics. In this case, we have one pod that runs the Redis master in a container. Since we&rsquo;re using a deployment, if our pod goes down, Kubernetes will <strong>spin up a new pod</strong> to replace it. Worth noting in this example: if the pod <em>did</em> go down, there would be potential for data loss until the new pod replaces the old one (since the Redis master is not highly available, i.e. there is only one of it).</p>
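<p>As a reference point, the relevant part of <code>redis-master-deployment.yaml</code> looks roughly like this. Check the file you downloaded for the exact <code>apiVersion</code>, labels, and image; this excerpt is a sketch from memory of the upstream example.</p>
<pre tabindex="0"><code>kind: Deployment
metadata:
  name: redis-master
spec:
  replicas: 1            # a single Redis master pod
  template:
    metadata:
      labels:            # the service selects pods by these labels
        app: redis
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: gcr.io/google_containers/redis:e2e
        ports:
        - containerPort: 6379
</code></pre>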

<h4 id="define-the-service">Define the service&nbsp;<a class="hanchor" href="#define-the-service" aria-label="Anchor link for: Define the service">🔗</a></h4>
<p>Our service in this example is a <strong>named load balancer</strong> that <strong>proxies traffic</strong> across one or many containers. Even though we only have one Redis master pod, we still want a service. It gives us a deterministic, named route to the master even though pod IP addresses are dynamic (or elastic).</p>
<p>Labeling the pods is important in this case, as Kubernetes will use the pods&rsquo; labels to determine which pods receive the traffic sent to the service, and load balance it accordingly.</p>

<h4 id="create-the-service">Create the service&nbsp;<a class="hanchor" href="#create-the-service" aria-label="Anchor link for: Create the service">🔗</a></h4>
<p>The next important step is to create the service. Note that we&rsquo;re doing this <em>before</em> we create the deployment. It&rsquo;s best practice to create the service first, so it is already in place as the scheduler spreads the pods that back it across your cluster.</p>
<p>After creating the service, you can check its status by running this command. You should see similar output.</p>
<pre tabindex="0"><code>$ kubectl get services
NAME              CLUSTER-IP       EXTERNAL-IP       PORT(S)       AGE
redis-master      10.0.76.248      &lt;none&gt;            6379/TCP      1s
</code></pre><p>Now your Redis master service is up and running! The next step is to create the Redis master deployment.</p>
<p>If you look at the service configuration file, you&rsquo;ll notice <code>port</code> and <code>targetPort</code> are two defined variables. Once everything is up and running, these will be important for determining how the traffic from the slaves to the masters is routed.</p>
<ol>
<li>Redis slave connects to <code>port</code> on Redis master service</li>
<li>Traffic is forwarded from the service&rsquo;s <code>port</code> to <code>targetPort</code> on the pods the service listens to</li>
</ol>
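<p>Both values come from the service definition; the key part of <code>redis-master-service.yaml</code> looks roughly like this (again, consult the downloaded file for the exact values — this is a sketch):</p>
<pre tabindex="0"><code>apiVersion: v1
kind: Service
metadata:
  name: redis-master
spec:
  ports:
  - port: 6379        # port clients (the slaves) connect to on the service
    targetPort: 6379  # port the Redis container listens on inside the pod
  selector:           # traffic is routed to pods carrying these labels
    app: redis
    role: master
    tier: backend
</code></pre>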

<h4 id="create-the-deployment">Create the deployment&nbsp;<a class="hanchor" href="#create-the-deployment" aria-label="Anchor link for: Create the deployment">🔗</a></h4>
<p>With the <code>kubectl create</code> command above, we also created the Redis master deployment, which starts the Redis master pod in the cluster. To see the deployment and its pods, run the following commands.</p>
<pre tabindex="0"><code>$ kubectl get deployments
NAME           DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
redis-master   1         1         1            1           27s
</code></pre><pre tabindex="0"><code>$ kubectl get pods
NAME                            READY     STATUS    RESTARTS   AGE
redis-master-2353460263-1ecey   1/1       Running   0          1m
...
</code></pre><p>You should see all of the pods in your cluster so far. For now, that&rsquo;s just the Redis master. Let&rsquo;s give it some friends!</p>

<h2 id="start-the-redis-slaves">Start the Redis slaves&nbsp;<a class="hanchor" href="#start-the-redis-slaves" aria-label="Anchor link for: Start the Redis slaves">🔗</a></h2>
<pre tabindex="0"><code>$ kubectl create -f redis-slave-service.yaml
service &#34;redis-slave&#34; created
$ kubectl create -f redis-slave-deployment.yaml
deployment &#34;redis-slave&#34; created
</code></pre>
<h4 id="defining-the-deployment">Defining the deployment&nbsp;<a class="hanchor" href="#defining-the-deployment" aria-label="Anchor link for: Defining the deployment">🔗</a></h4>
<p>In the configuration file, we define two replicas, unlike the master. This tells Kubernetes that a minimum of two pods should always be running. If one of your pods goes down, Kubernetes automatically creates a new one to support the application. If you want, you can try killing the Docker process for one of your pods to watch this happen in real time.</p>

<h2 id="start-the-guestbook-front-end">Start the guestbook front-end&nbsp;<a class="hanchor" href="#start-the-guestbook-front-end" aria-label="Anchor link for: Start the guestbook front-end">🔗</a></h2>
<pre tabindex="0"><code>$ kubectl create -f frontend-service.yaml
service &#34;frontend&#34; created
$ kubectl create -f frontend-deployment.yaml
deployment &#34;frontend&#34; created
</code></pre><p>The front-end is a PHP application with an AJAX interface and an Angular-based UI. When you use the form in the front-end application, it talks to the Redis master or a slave, depending on whether it&rsquo;s writing or reading. Again, we&rsquo;re deploying the front-end with multiple replicas; in this case, three pods support the front-end.</p>

<h2 id="say-hello">Say hello!&nbsp;<a class="hanchor" href="#say-hello" aria-label="Anchor link for: Say hello!">🔗</a></h2>
<p>Once you&rsquo;ve finished deploying everything, your web app should now be accessible! To find the public URL that AWS assigned to it, run this command.</p>
<pre tabindex="0"><code>$ kubectl get deploy/frontend svc/frontend -o wide
NAME           CLUSTER-IP   EXTERNAL-IP                                                             PORT(S)        AGE       SELECTOR
svc/frontend   10.3.0.175   aaebd8247ef2311e6a045021d1620193-54019671.us-east-2.elb.amazonaws.com   80:31020/TCP   1m        k8s-app=guestbook,tier=frontend

NAME              DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
deploy/frontend   3         3         3            3           1m
</code></pre><p>Congratulations, we&rsquo;re all finished!</p>

<h2 id="cleaning-up">Cleaning up&nbsp;<a class="hanchor" href="#cleaning-up" aria-label="Anchor link for: Cleaning up">🔗</a></h2>
<p>Once you&rsquo;re finished with the guestbook, it&rsquo;s easy to tear down everything we created. Because we used labels, all of the deployments and services can be deleted with a single command.</p>
<pre tabindex="0"><code>$ kubectl delete deployments,services -l &#34;app in (redis, guestbook)&#34;
</code></pre><p>And now your guestbook application is offline. (It was nice while it lasted!)</p>
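<p>The <code>-l</code> flag takes a set-based label selector: any resource whose <code>app</code> label is either <code>redis</code> or <code>guestbook</code> matches and gets deleted. As a rough sketch of that matching logic (the label sets below are hypothetical stand-ins, not real resources in your cluster):</p>

```shell
# Illustrative only: simulates how 'app in (redis, guestbook)' selects resources.
for labels in "app=redis,role=master" "app=guestbook,tier=frontend" "app=mysql"; do
  # Pull out the value of the app label
  app=$(echo "$labels" | tr ',' '\n' | sed -n 's/^app=//p')
  case "$app" in
    redis|guestbook) echo "delete: $labels" ;;
    *)               echo "keep:   $labels" ;;
  esac
done
```

<p>Only the third, hypothetical <code>app=mysql</code> resource would survive the delete command above.</p>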

<h2 id="learn-more-about-kubernetes-and-tectonic">Learn more about Kubernetes and Tectonic&nbsp;<a class="hanchor" href="#learn-more-about-kubernetes-and-tectonic" aria-label="Anchor link for: Learn more about Kubernetes and Tectonic">🔗</a></h2>
<p>If you want to explore more about Kubernetes, you can read some of the earlier articles in this series. You can also read the original tutorial published by Kubernetes <a href="https://github.com/kubernetes/kubernetes/tree/master/examples/guestbook">on GitHub</a>. Additionally, the upstream documentation for <a href="https://kubernetes.io/docs/home/">Kubernetes</a> and <a href="https://coreos.com/tectonic/docs/latest/">Tectonic</a> is thorough and can help answer more advanced questions.</p>
<p>Questions, Tectonic stories, or tips for beginners? Add your comments below.</p>]]></description></item><item><title>Deploy CoreOS Tectonic to Amazon Web Services (AWS)</title><link>https://jwheel.org/blog/2017/07/tectonic-amazon-web-services-aws/</link><pubDate>Fri, 28 Jul 2017 00:00:00 +0000</pubDate><guid>https://jwheel.org/blog/2017/07/tectonic-amazon-web-services-aws/</guid><description><![CDATA[<p><em>This is a short series to introduce Kubernetes, what it does, and how to experiment with it on Fedora. This is a beginner-oriented series to help introduce some higher level concepts and give examples of using it on Fedora. In the first post, we covered <a href="https://fedoramagazine.org/introduction-kubernetes-fedora/">key concepts in Kubernetes</a>. The second post showed how to build a <a href="https://fedoramagazine.org/minikube-kubernetes/">single-node Kubernetes deployment</a> on your own computer. This post builds on top of the Fedora Magazine series by showing how to deploy CoreOS Tectonic to Amazon Web Services (AWS).</em></p>
<hr>
<p>Welcome back to the <strong>Kubernetes and Fedora</strong> series. Each week, we build on the previous articles in the series to help introduce you to using Kubernetes. This article takes off from running Kubernetes on your own hardware and moves us one step closer to the cloud. By the end of this article, you will…</p>
<ul>
<li>Understand what CoreOS Tectonic is</li>
<li>Set up Amazon Web Services (AWS) for Tectonic</li>
<li>Deploy Tectonic to AWS</li>
</ul>
<p>This article is also based on the excellent tutorial provided in the <a href="https://coreos.com/tectonic/docs/latest/tutorials/creating-aws.html">CoreOS documentation</a>. Let&rsquo;s get started!</p>

<h2 id="what-is-tectonic">What is Tectonic?&nbsp;<a class="hanchor" href="#what-is-tectonic" aria-label="Anchor link for: What is Tectonic?">🔗</a></h2>
<p>In the <a href="https://fedoramagazine.org/introduction-kubernetes-fedora/">first article</a>, we explained some of the key concepts of Kubernetes and why it&rsquo;s useful. Kubernetes automates the deployment and setup of your infrastructure across the three layers (users, masters, nodes). If you&rsquo;re working on your own at a small scale, Kubernetes by itself can be plenty to meet your needs. However, there is still a fair amount of human involvement in managing the different pieces of Kubernetes. If you&rsquo;re working with multiple people in a team and across different environments, vanilla Kubernetes can be a lot to manage. For an enterprise environment, there are still some unmet needs. This is where Tectonic steps in.</p>
<p>Tectonic is a commercial product offered by <a href="https://coreos.com/">CoreOS</a>, the providers of <a href="https://coreos.com/os/docs/latest">Container Linux</a> and the original developers of <code>etcd</code>, now one of the core components of Kubernetes. Tectonic takes all of the open source components and pre-packages them. The self-proclaimed goal of doing this is to let anyone build a Google-style infrastructure into a cloud or on-premise environment. The outcome for the user is that it&rsquo;s easy to install a Kubernetes infrastructure across many different environments. In addition to simplifying the installation of the various components of a Kubernetes stack, Tectonic also provides a management console, a container registry for building and sharing containers, additional tools for deployment, and a few other nice features.</p>
<p>If we think about Kubernetes as a cake like we did before with three layers, Tectonic is like the box you set it in. Now, you can take your cake anywhere, move it around, and stack it with other cakes-in-a-box. All of your cakes are in their own boxes and you don&rsquo;t have to worry about them accidentally being damaged. If you&rsquo;re still a little confused, this diagram might help make more sense of it.</p>
<p>
<figure>
  <img src="https://cdn.fedoramagazine.org/wp-content/uploads/2017/06/platform-features.png" alt="Understanding where CoreOS Tectonic fits into the Kubernetes puzzle" loading="lazy">
  <figcaption>Understanding where Tectonic fits into the Kubernetes puzzle. From coreos.com/tectonic (<a href="https://coreos.com/tectonic/" class="bare">https://coreos.com/tectonic/</a>)</figcaption>
</figure>
</p>
<p>Fortunately, Tectonic has a free license that lets you use it on up to ten nodes. In this example, we&rsquo;ll register, get a free license, and deploy it into AWS.</p>
<p>(<em>Note</em>: If you want to revert anything we do in this example, there&rsquo;s an easy way to dismantle it across AWS and bring your bill to $0.00.)</p>

<h2 id="pre-requisites">Pre-requisites&nbsp;<a class="hanchor" href="#pre-requisites" aria-label="Anchor link for: Pre-requisites">🔗</a></h2>
<p>In order to successfully follow this guide, there are a few things you&rsquo;ll need first.</p>
<ul>
<li><strong>Amazon Web Services (AWS) account</strong> (<em>free</em>)
<ul>
<li>Register <a href="https://aws.amazon.com">here</a></li>
</ul>
</li>
<li><strong>CoreOS Tectonic account and license</strong> (<em>free</em>)
<ul>
<li>Register <a href="https://account.coreos.com/">here</a></li>
</ul>
</li>
<li><strong>A root-level or sub-domain</strong> (<em>e.g. example.com or k8s.example.com</em>)
<ul>
<li>If you look around, you can probably find some for less than USD$1 a year if you need one</li>
</ul>
</li>
<li><strong>Curiosity</strong>!</li>
</ul>

<h2 id="setting-up-dns-with-route-53">Setting up DNS with Route 53&nbsp;<a class="hanchor" href="#setting-up-dns-with-route-53" aria-label="Anchor link for: Setting up DNS with Route 53">🔗</a></h2>
<p>The first thing we&rsquo;ll do is set up our domain with Route 53 in AWS. Route 53 can do many things, like DNS management, traffic management, availability monitoring, domain registration, and more. However, we&rsquo;re only going to use it for DNS management. Tectonic will use it to automatically provision DNS records for internal and external use.</p>

<h4 id="add-your-domain">Add your domain&nbsp;<a class="hanchor" href="#add-your-domain" aria-label="Anchor link for: Add your domain">🔗</a></h4>
<p>
<figure>
  <img src="https://cdn.fedoramagazine.org/wp-content/uploads/2017/06/tectonic-add-domain-route-53-283x300.png" alt="Adding a new domain to AWS Route 53 for Tectonic" loading="lazy">
  <figcaption>Adding a new domain to AWS Route 53 for Tectonic</figcaption>
</figure>
</p>
<p>To add your domain to Route 53, follow these steps from AWS.</p>
<ol>
<li>From <em>Services</em>, select <em>Networking &amp; Content Delivery</em> &gt; <em>Route 53</em>.</li>
<li>Select <em>Hosted zones</em> from the left pane and click <em>Create Hosted Zone</em>.</li>
<li>Enter your domain or sub-domain, add a comment if you want, and choose a Public Zone for the type.</li>
</ol>
<p>Once you&rsquo;ve done this, you can go ahead and click &ldquo;<em>Create</em>&rdquo;.</p>

<h4 id="change-the-nameservers">Change the nameservers&nbsp;<a class="hanchor" href="#change-the-nameservers" aria-label="Anchor link for: Change the nameservers">🔗</a></h4>
<p>After adding the hosted zone to Route 53, you&rsquo;ll need to change the nameservers for your domain via your domain registrar (whoever you bought the domain from). This setting is usually easy to find, but it varies among registrars. If you&rsquo;re having a hard time figuring out how to do this, try searching for a how-to or contacting your registrar&rsquo;s support.</p>
<p>Once the hosted zone is created, Route 53 displays four nameservers for it. Copy and paste them from Route 53 into your registrar&rsquo;s settings. Also note that if you&rsquo;re using a subdomain, the instructions are a little different; you can read how to handle that in the <a href="https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/creating-migrating.html">Route 53 documentation</a>.</p>
<p>The nameservers could take minutes or hours to update, depending on how lucky you are. If you&rsquo;re impatient and want to check, open a terminal and run this command. If you see the AWS nameservers in the output, your domain has propagated and is now usable by Route 53.</p>
<pre tabindex="0"><code>dig -t ns &lt;example.com&gt;
</code></pre>
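<p>If propagation is complete, the answer section lists four nameservers with <code>awsdns</code> in their names. Here is a small sketch of checking for that automatically; the sample answer line below is made up, so run the real <code>dig</code> command against your own domain:</p>

```shell
# Illustrative: grep the dig output for AWS nameservers.
# In real use: answer=$(dig -t ns example.com)
answer="example.com.  172800  IN  NS  ns-1234.awsdns-56.org."
if echo "$answer" | grep -q 'awsdns'; then
  echo "propagated"
else
  echo "still waiting"
fi
```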
<h2 id="configuring-ec2-with-ssh-key-pair">Configuring EC2 with SSH key pair&nbsp;<a class="hanchor" href="#configuring-ec2-with-ssh-key-pair" aria-label="Anchor link for: Configuring EC2 with SSH key pair">🔗</a></h2>
<p>This guide assumes you already have an SSH key pair created on your system. If you don&rsquo;t have one generated, you can read how to generate one <a href="https://www.cyberciti.biz/faq/how-to-set-up-ssh-keys-on-linux-unix/">here</a>.</p>
<p>The next step for us is to add an SSH key pair to EC2, one of the compute engine products offered by AWS. We&rsquo;ll import an existing key on your system into EC2.</p>
<ol>
<li>From AWS, go to <em>Services</em> &gt; <em>Compute</em> &gt; <em>EC2</em>.</li>
<li>Confirm that you are in the <strong>correct EC2 region</strong> by checking the location next to your name in the menu bar.</li>
<li>Under <em>Network &amp; Security</em>, click <em>Key Pairs</em>.</li>
<li>Click <em>Import Key Pair</em>.</li>
<li>Either upload your public key file (<code>~/.ssh/id_rsa.pub</code>) or paste it into the text field. Don&rsquo;t forget to give it a name.</li>
</ol>
<p>And that&rsquo;s all you need to do!</p>

<h2 id="assigning-aws-user-privileges">Assigning AWS user privileges&nbsp;<a class="hanchor" href="#assigning-aws-user-privileges" aria-label="Anchor link for: Assigning AWS user privileges">🔗</a></h2>
<p>Tectonic does the magic of setting up AWS for you, so you don&rsquo;t have to create the services manually from the web interface. To make this work, you need to create a user account that Tectonic can use for all of the provisioning it does. This means creating a new Access key ID and Secret access key pair in AWS.</p>
<ol>
<li>Select <em>Services</em> &gt; <em>Security, Identity &amp; Compliance</em> &gt; <em>IAM</em>.</li>
<li>From the left hand pane, click <em>Users</em>, then click <em>Add user</em>.</li>
<li>Set the user details:
<ol>
<li><em>User name</em> can be anything you like (I used <code>tectonic-mydomain.com</code>)</li>
<li><em>Access type</em> only needs to be <em>Programmatic access</em></li>
</ol>
</li>
<li>For permissions, click <em>Add user to group</em> and create a new group for your user.</li>
<li>When creating a new group, attach only the policies needed by Tectonic to operate correctly:
<ol>
<li><code>AmazonEC2FullAccess</code></li>
<li><code>IAMFullAccess</code></li>
<li><code>AmazonS3FullAccess</code></li>
<li><code>AmazonVPCFullAccess</code></li>
<li><code>AmazonRoute53FullAccess</code></li>
</ol>
</li>
<li>Finish creating the user. You&rsquo;ll then see the <em>Access key ID</em> and <em>Secret access key</em>. Hold onto these; you&rsquo;ll need them later, and you won&rsquo;t get to see the secret key again!</li>
</ol>
<p>Now we&rsquo;re ready to install Tectonic! Let&rsquo;s grab your credentials next.</p>

<h2 id="download-tectonic-credentials">Download Tectonic credentials&nbsp;<a class="hanchor" href="#download-tectonic-credentials" aria-label="Anchor link for: Download Tectonic credentials">🔗</a></h2>
<p>Jump back over to the <a href="https://account.coreos.com/">CoreOS accounts page</a>. When you&rsquo;re logged in, you&rsquo;ll see the <em>Account Assets</em> area. Download the CoreOS license file and pull secret. Later on in the installer, you&rsquo;ll need to insert these to finish the installation.</p>

<h2 id="running-the-installer">Running the installer&nbsp;<a class="hanchor" href="#running-the-installer" aria-label="Anchor link for: Running the installer">🔗</a></h2>
<p>Now things get interesting! We finally get to install and deploy Tectonic into AWS. The installer takes the form of a graphical installer in your web browser. To use the installer, you need to download the binary and run it. If you&rsquo;re curious, you can find the installer source code <a href="https://github.com/coreos/tectonic-installer">on GitHub</a>.</p>

<h4 id="download-and-run-installer">Download and run installer&nbsp;<a class="hanchor" href="#download-and-run-installer" aria-label="Anchor link for: Download and run installer">🔗</a></h4>
<p>First, open up a new terminal window and navigate to a directory you want to download the installer to. Even though you likely won&rsquo;t need to run the installer again, you will want to hang on to this directory if you ever want to easily dismantle everything in AWS later.</p>
<pre tabindex="0"><code>curl -O https://releases.tectonic.com/tectonic-1.6.4-tectonic.1.tar.gz
</code></pre><p>Next, extract the tarball and navigate into the directory.</p>
<pre tabindex="0"><code>tar -xzvf tectonic-1.6.4-tectonic.1.tar.gz
cd tectonic/tectonic-installer
</code></pre><p>Now execute the installer binary. After running this, a new browser window will open that features the graphical installer.</p>
<pre tabindex="0"><code>./linux/installer
</code></pre><p>
<figure>
  <img src="https://cdn.fedoramagazine.org/wp-content/uploads/2017/06/tectonic-installer-aws.png" alt="Now we&rsquo;re ready to deploy Tectonic into AWS!" loading="lazy">
  <figcaption>Now we’re ready to deploy Tectonic into AWS!</figcaption>
</figure>
</p>

<h4 id="running-the-installer-1">Running the installer&nbsp;<a class="hanchor" href="#running-the-installer-1" aria-label="Anchor link for: Running the installer">🔗</a></h4>
<p>The installer is thorough and assumes safe defaults for most of the steps. Be sure to have your AWS Access key ID and Secret access key on hand. You should be able to run through the installer without issue. If you&rsquo;re unsure what any of the values mean or want to make custom changes, you can read more in the <a href="https://coreos.com/tectonic/docs/latest/tutorials/installing-tectonic.html">upstream documentation</a>.</p>
<p>Once you&rsquo;re finished, congrats! You&rsquo;ve successfully installed Tectonic!</p>

<h2 id="check-out-your-tectonic-install">Check out your Tectonic install&nbsp;<a class="hanchor" href="#check-out-your-tectonic-install" aria-label="Anchor link for: Check out your Tectonic install">🔗</a></h2>
<p>Once the installation finishes successfully, your Tectonic installation will be accessible within AWS. Navigate to the domain you specified during the install to find it. Unless you configured certificates from a trusted certificate authority, your browser will probably complain about an invalid SSL certificate, but you can safely ignore the warning here. It might also take a few minutes before the URL is accessible, so if you were looking for a coffee or tea break, now would be a good time!</p>
<p>Once you&rsquo;re logged in, you should see something like this.</p>
<p>
<figure>
  <img src="https://cdn.fedoramagazine.org/wp-content/uploads/2017/06/tectonic-status-page.png" alt="Looking at a freshly installed Tectonic status page on AWS" loading="lazy">
  <figcaption>Looking at a freshly installed Tectonic status page on AWS</figcaption>
</figure>
</p>
<p>
<figure>
  <img src="https://cdn.fedoramagazine.org/wp-content/uploads/2017/06/prometheus-monitoring.png" alt="A more advanced use case of what Tectonic can do with monitoring" loading="lazy">
  <figcaption>A more advanced use case of what Tectonic can do with monitoring</figcaption>
</figure>
</p>

<h2 id="blow-it-all-away">Blow it all away!&nbsp;<a class="hanchor" href="#blow-it-all-away" aria-label="Anchor link for: Blow it all away!">🔗</a></h2>
<p>If you&rsquo;re like me, you might be frustrated by guides that tell you how to install things but never how to take them apart. Fortunately, this guide covers that too, and the Tectonic installer makes it easy. If you&rsquo;re sure you&rsquo;re done with Tectonic and don&rsquo;t want any leftovers to remain in AWS, this is a much better way to clean up than deleting everything manually from the AWS Console.</p>
<p>Every installation has a time-stamped folder in the <code>tectonic</code> directory we used earlier. Before running anything, navigate into the specific folder for the cluster you installed.</p>
<pre tabindex="0"><code>cd tectonic/tectonic-installer/linux/clusters/&lt;CLUSTERNAME&gt;
</code></pre><p><code>&lt;CLUSTERNAME&gt;</code> will be the time-stamped directory. Once you&rsquo;re in the folder, run this command to trigger the uninstaller. After running this, you&rsquo;ll see the installer slowly dismantle everything and delete any leftovers in AWS.</p>
<pre tabindex="0"><code>../../terraform destroy
</code></pre><p>Once it finishes, you should see an output message confirming how many AWS resources were destroyed. And now you&rsquo;re back to where you started.</p>

<h2 id="learn-more-about-tectonic">Learn more about Tectonic&nbsp;<a class="hanchor" href="#learn-more-about-tectonic" aria-label="Anchor link for: Learn more about Tectonic">🔗</a></h2>
<p>If you thought this was exciting and want to learn more, there is no shortage of resources for you to read. You can learn more about Tectonic from the <a href="https://coreos.com/tectonic/">CoreOS website</a> or the <a href="https://tectonic.com/blog/announcing-tectonic/">original release announcement</a>. You can also dig into the installer&rsquo;s source code <a href="https://github.com/coreos/tectonic-installer">on GitHub</a>. If you&rsquo;re still trying to wrap your head around Tectonic, there&rsquo;s a good write-up <a href="https://virtualizationreview.com/articles/2017/04/04/coreos-tectonic-to-shake-up-kubernetes.aspx">on virtualizationreview.com</a>.</p>
<p>Next week, we&rsquo;ll install a simple guestbook application to our Tectonic installation to see how it all works and what you can do with it. Stay tuned!</p>
<p>Questions, Tectonic stories, or tips for beginners? Add your comments below.</p>]]></description></item><item><title>Clustered computing on Fedora with Minikube</title><link>https://jwheel.org/blog/2017/07/minikube-kubernetes/</link><pubDate>Fri, 07 Jul 2017 00:00:00 +0000</pubDate><guid>https://jwheel.org/blog/2017/07/minikube-kubernetes/</guid><description><![CDATA[<p><em><strong>This article was originally published <a href="https://fedoramagazine.org/minikube-kubernetes/">on the Fedora Magazine</a>.</strong></em></p>
<hr>
<p><em>This is a short series to introduce Kubernetes, what it does, and how to experiment with it on Fedora. This is a beginner-oriented series to help introduce some higher level concepts and give examples of using it on Fedora. In the first post, we covered <a href="https://fedoramagazine.org/introduction-kubernetes-fedora/">key concepts in Kubernetes</a>. This second post shows you how to build a single-node Kubernetes deployment on your own computer.</em></p>
<hr>
<p>Once you have a better understanding of what the key concepts and terminology in Kubernetes are, getting started is easier. Like many programming tutorials, this tutorial shows you how to build a &ldquo;Hello World&rdquo; application and deploy it locally on your computer using Kubernetes. This is a simple tutorial because there aren&rsquo;t multiple nodes to work with. Instead, the only device we&rsquo;re using is a single node (a.k.a. your computer). By the end, you&rsquo;ll see how to deploy a Node.js application into a Kubernetes pod and manage it with a deployment on Fedora.</p>
<p>This tutorial isn&rsquo;t made from scratch. You can find the <a href="https://kubernetes.io/docs/tutorials/stateless-application/hello-minikube/">original tutorial</a> in the official Kubernetes documentation. This article adds some changes that will let you do the same thing on your own Fedora computer.</p>

<h2 id="introducing-minikube">Introducing Minikube&nbsp;<a class="hanchor" href="#introducing-minikube" aria-label="Anchor link for: Introducing Minikube">🔗</a></h2>
<p><a href="https://kubernetes.io/docs/getting-started-guides/minikube/">Minikube</a> is an official tool developed by the Kubernetes team to help make testing it out easier. It lets you run a single-node Kubernetes cluster through a virtual machine on your own hardware. Beyond using it to play around with or experiment for the first time, it&rsquo;s also useful as a testing tool if you&rsquo;re working with Kubernetes daily. It does support many of the features you&rsquo;d want in a production Kubernetes environment, like DNS, NodePorts, and container run-times.</p>

<h2 id="installation">Installation&nbsp;<a class="hanchor" href="#installation" aria-label="Anchor link for: Installation">🔗</a></h2>
<p>This tutorial requires virtual machine and container software. There are many options you can use. Minikube supports <code>virtualbox</code>, <code>vmwarefusion</code>, <code>kvm</code>, and <code>xhyve</code> drivers for virtualization. However, this guide will use KVM since it&rsquo;s already packaged and available in Fedora. We&rsquo;ll also use Node.js for building the application and Docker for putting it in a container.</p>

<h4 id="pre-requirements">Pre-requirements&nbsp;<a class="hanchor" href="#pre-requirements" aria-label="Anchor link for: Pre-requirements">🔗</a></h4>
<p>You can install the prerequisites with this command.</p>
<pre tabindex="0"><code>$ sudo dnf install kubernetes libvirt-daemon-kvm kvm nodejs docker
</code></pre><p>After installing these packages, you&rsquo;ll need to add your user to the right group to let you use KVM. The following commands will add your user to the group and then update your current session for the group change to take effect.</p>
<pre tabindex="0"><code>$ sudo usermod -a -G libvirt $(whoami)
$ newgrp libvirt
</code></pre>
<h4 id="docker-kvm-drivers">Docker KVM drivers&nbsp;<a class="hanchor" href="#docker-kvm-drivers" aria-label="Anchor link for: Docker KVM drivers">🔗</a></h4>
<p>If using KVM, you will also need to install the KVM drivers to work with Docker. You need to add <a href="https://github.com/docker/machine/releases">Docker Machine</a> and the <a href="https://github.com/dhiltgen/docker-machine-kvm/releases/">Docker Machine KVM Driver</a> to your local path. You can check their pages on GitHub for the latest versions, or you can run the following commands for specific versions. These were tested on a Fedora 25 installation.</p>

<h5 id="docker-machine">Docker Machine&nbsp;<a class="hanchor" href="#docker-machine" aria-label="Anchor link for: Docker Machine">🔗</a></h5>
<pre tabindex="0"><code>$ curl -L https://github.com/docker/machine/releases/download/v0.12.0/docker-machine-`uname -s`-`uname -m` &gt;/tmp/docker-machine
$ chmod +x /tmp/docker-machine
$ sudo cp /tmp/docker-machine /usr/local/bin/docker-machine
</code></pre>
<h5 id="docker-machine-kvm-driver">Docker Machine KVM Driver&nbsp;<a class="hanchor" href="#docker-machine-kvm-driver" aria-label="Anchor link for: Docker Machine KVM Driver">🔗</a></h5>
<p>This installs the CentOS 7 driver, but it also works with Fedora.</p>
<pre tabindex="0"><code>$ curl -L https://github.com/dhiltgen/docker-machine-kvm/releases/download/v0.10.0/docker-machine-driver-kvm-centos7 &gt;/tmp/docker-machine-driver-kvm
$ chmod +x /tmp/docker-machine-driver-kvm
$ sudo cp /tmp/docker-machine-driver-kvm /usr/local/bin/docker-machine-driver-kvm
</code></pre>
<h4 id="installing-minikube">Installing Minikube&nbsp;<a class="hanchor" href="#installing-minikube" aria-label="Anchor link for: Installing Minikube">🔗</a></h4>
<p>The final step for installation is getting Minikube itself. Currently, there is no Fedora package available, and the official documentation recommends grabbing the binary and moving it to your local path. To download the binary, make it executable, and move it into your path, run the following.</p>
<pre tabindex="0"><code>$ curl -Lo minikube https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64
$ chmod +x minikube
$ sudo mv minikube /usr/local/bin/
</code></pre><p>Now you&rsquo;re ready to build your cluster.</p>

<h2 id="create-the-minikube-cluster">Create the Minikube cluster&nbsp;<a class="hanchor" href="#create-the-minikube-cluster" aria-label="Anchor link for: Create the Minikube cluster">🔗</a></h2>
<p>Now that you have everything installed and in the right place, you can create your Minikube cluster and get started. To start Minikube, run this command.</p>
<pre tabindex="0"><code>$ minikube start --vm-driver=kvm
</code></pre><p>Next, you&rsquo;ll need to set the context. Context is how <code>kubectl</code> (the command-line interface for Kubernetes) knows what it&rsquo;s dealing with. To set the context for Minikube, run this command.</p>
<pre tabindex="0"><code>$ kubectl config use-context minikube
</code></pre><p>As a check, make sure that <code>kubectl</code> can communicate with your cluster by running this command.</p>
<pre tabindex="0"><code>$ kubectl cluster-info
Kubernetes master is running at https://192.168.99.100:8443
To further debug and diagnose cluster problems, use &#39;kubectl cluster-info dump&#39;.
</code></pre>
<h2 id="build-your-application">Build your application&nbsp;<a class="hanchor" href="#build-your-application" aria-label="Anchor link for: Build your application">🔗</a></h2>
<p>Now that Kubernetes is ready, we need to have an application to deploy in it. This article uses the same Node.js application as the official tutorial in the Kubernetes documentation. Create a folder called <code>hellonode</code> and create a new file called <code>server.js</code> with your favorite text editor.</p>
<pre tabindex="0"><code>var http = require(&#39;http&#39;);

var handleRequest = function(request, response) {
 console.log(&#39;Received request for URL: &#39; + request.url);
 response.writeHead(200);
 response.end(&#39;Hello world!&#39;);
};
var www = http.createServer(handleRequest);
www.listen(8080);
</code></pre><p>Now try out your application by running it.</p>
<pre tabindex="0"><code>$ node server.js
</code></pre><p>While it&rsquo;s running, you should be able to access it on <a href="http://localhost:8080/">localhost:8080</a>. Once you verify it&rsquo;s working, hit <code>Ctrl+C</code> to kill the process.</p>

<h2 id="create-docker-container">Create Docker container&nbsp;<a class="hanchor" href="#create-docker-container" aria-label="Anchor link for: Create Docker container">🔗</a></h2>
<p>Now you have an application to deploy! The next step is to get it packaged into a Docker container (that you&rsquo;ll pass to Kubernetes later). You&rsquo;ll need to create a <code>Dockerfile</code> in the same folder as your <code>server.js</code> file. This guide uses an existing Node.js Docker image. It exposes your application on port 8080, copies <code>server.js</code> to the image, and runs it as a server. Your <code>Dockerfile</code> should look like this.</p>
<pre tabindex="0"><code>FROM node:6.9.2
EXPOSE 8080
COPY server.js .
CMD node server.js
</code></pre><p>If you&rsquo;re familiar with Docker, you&rsquo;re likely used to pushing your image to a registry. In this case, since we&rsquo;re deploying it to Minikube, you can build it using the same Docker host as the Minikube virtual machine. For this to happen, you&rsquo;ll need to use the Minikube Docker daemon.</p>
<pre tabindex="0"><code>$ eval $(minikube docker-env)
</code></pre><p>Now you can build your Docker image with the Minikube Docker daemon.</p>
<pre tabindex="0"><code>$ docker build -t hello-node:v1 .
</code></pre><p>Huzzah! Now you have an image Minikube can run.</p>

<h2 id="create-minikube-deployment">Create Minikube deployment&nbsp;<a class="hanchor" href="#create-minikube-deployment" aria-label="Anchor link for: Create Minikube deployment">🔗</a></h2>
<p>If you remember from the <a href="https://fedoramagazine.org/introduction-kubernetes-fedora/">first part</a> of this series, deployments watch your application&rsquo;s health and reschedule it if it dies. Deployments are the supported way of creating and scaling pods. <code>kubectl run</code> creates a deployment to manage a pod. We&rsquo;ll create one that uses the <code>hello-node</code> Docker image we just built.</p>
<pre tabindex="0"><code>$ kubectl run hello-node --image=hello-node:v1 --port=8080
</code></pre><p>Next, check that the deployment was created successfully.</p>
<pre tabindex="0"><code>$ kubectl get deployments
NAME         DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
hello-node   1         1         1            1           30s
</code></pre><p>Creating the deployment also creates the pod where the application is running. You can view the pod with this command.</p>
<pre tabindex="0"><code>$ kubectl get pods
NAME                          READY     STATUS    RESTARTS   AGE
hello-node-1644695913-k2314   1/1       Running   0          3
</code></pre><p>Finally, let&rsquo;s see what the configuration looks like. If you&rsquo;re familiar with Ansible, the configuration files for Kubernetes also use easy-to-read YAML. You can see the full configuration with this command.</p>
<pre tabindex="0"><code>$ kubectl config view
</code></pre><p><code>kubectl</code> does many things. To read more about what you can do with it, you can read the <a href="https://kubernetes.io/docs/user-guide/kubectl-overview/">documentation</a>.</p>

<h2 id="create-service">Create service&nbsp;<a class="hanchor" href="#create-service" aria-label="Anchor link for: Create service">🔗</a></h2>
<p>Right now, the pod is only accessible inside of the Kubernetes cluster via its internal IP address. To reach it from a web browser, you&rsquo;ll need to expose it as a service. To do so, run this command.</p>
<pre tabindex="0"><code>$ kubectl expose deployment hello-node --type=LoadBalancer
</code></pre><p>The type was specified as a <code>LoadBalancer</code> because Kubernetes will expose the IP outside of the cluster. If you were running a load balancer in a cloud environment, this is how you&rsquo;d provision an external IP address. In this case, however, it exposes your application as a service in Minikube. And now, finally, you get to see your application. Running this command will open a new browser window with your application.</p>
<pre tabindex="0"><code>$ minikube service hello-node
</code></pre><p>
<figure>
  <img src="https://cdn.fedoramagazine.org/wp-content/uploads/2017/06/minikube-hello-world-browser-e1497995645454.png" alt="Minikube: Exposing Hello Minikube application in browser" loading="lazy">
</figure>
</p>
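<p>You can also verify the service from the command line. Under Minikube, the external IP of a <code>LoadBalancer</code> service stays in the <code>&lt;pending&gt;</code> state, which is expected; the IPs and port numbers in this sample output are only illustrative:</p>
<pre tabindex="0"><code>$ kubectl get services
NAME         CLUSTER-IP   EXTERNAL-IP   PORT(S)          AGE
hello-node   10.0.0.71    &lt;pending&gt;     8080:31420/TCP   1m
</code></pre>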
<p>Congratulations, you deployed your first containerized application via Kubernetes! But now, what if you need to update your small Hello World application?</p>

<h2 id="how-do-we-push-changes">How do we push changes?&nbsp;<a class="hanchor" href="#how-do-we-push-changes" aria-label="Anchor link for: How do we push changes?">🔗</a></h2>
<p>The time has come when you&rsquo;re ready to make an update and push it. Edit your <code>server.js</code> file and change &ldquo;Hello world!&rdquo; to &ldquo;Hello again, world!&rdquo;</p>
<pre tabindex="0"><code>response.end(&#39;Hello again, world!&#39;);
</code></pre><p>And we&rsquo;ll build another Docker image. Note the version bump.</p>
<pre tabindex="0"><code>$ docker build -t hello-node:v2 .
</code></pre><p>Next, you need to give Kubernetes the new image to deploy.</p>
<pre tabindex="0"><code>$ kubectl set image deployment/hello-node hello-node=hello-node:v2
</code></pre><p>And now, your update is pushed! Like before, run this command to have it open in a new browser window.</p>
<pre tabindex="0"><code>$ minikube service hello-node
</code></pre><p>If your application doesn&rsquo;t look any different, double-check that you updated the right image. You can troubleshoot by opening a shell in your pod with the following command. You can get the pod name from the command you ran earlier (<code>kubectl get pods</code>). Once you&rsquo;re in the shell, check whether the <code>server.js</code> file shows your changes.</p>
<pre tabindex="0"><code>$ kubectl exec -it &lt;pod-name&gt; bash
</code></pre>
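<p>The deployment itself can also help with troubleshooting. As a sketch, the first command below watches the rollout until it finishes, and the second rolls the deployment back to the previous revision if the update turned out to be broken:</p>
<pre tabindex="0"><code>$ kubectl rollout status deployment/hello-node
$ kubectl rollout undo deployment/hello-node
</code></pre>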
<h2 id="cleaning-up">Cleaning up&nbsp;<a class="hanchor" href="#cleaning-up" aria-label="Anchor link for: Cleaning up">🔗</a></h2>
<p>Now that we&rsquo;re done, we can clean up the environment. To remove the resources from your cluster, run these two commands.</p>
<pre tabindex="0"><code>$ kubectl delete service hello-node
$ kubectl delete deployment hello-node
</code></pre><p>If you&rsquo;re done playing with Minikube, you can also stop it.</p>
<pre tabindex="0"><code>$ minikube stop
</code></pre><p>If you&rsquo;re done using Minikube for a while, you can also unset the Minikube Docker environment variables that we set earlier in this guide.</p>
<pre tabindex="0"><code>$ eval $(minikube docker-env -u)
</code></pre>
<h2 id="learn-more-about-kubernetes">Learn more about Kubernetes&nbsp;<a class="hanchor" href="#learn-more-about-kubernetes" aria-label="Anchor link for: Learn more about Kubernetes">🔗</a></h2>
<p>You can find the <a href="https://kubernetes.io/docs/tutorials/stateless-application/hello-minikube/">original tutorial</a> in the Kubernetes documentation. If you want to read more, there&rsquo;s plenty of great information online. The <a href="https://kubernetes.io/docs/home/">documentation</a> provided by Kubernetes is thorough and comprehensive.</p>
<p>Questions, Minikube stories, or tips for beginners? Add your comments below.</p>]]></description></item><item><title>Introduction to Kubernetes with Fedora</title><link>https://jwheel.org/blog/2017/07/introduction-kubernetes-fedora/</link><pubDate>Mon, 03 Jul 2017 00:00:00 +0000</pubDate><guid>https://jwheel.org/blog/2017/07/introduction-kubernetes-fedora/</guid><description><![CDATA[<p><em><strong>This article was originally published <a href="https://fedoramagazine.org/introduction-kubernetes-fedora/">on the Fedora Magazine</a>.</strong></em></p>
<hr>
<p><em>This article is part of a short series that introduces Kubernetes. This beginner-oriented series covers some higher level concepts and gives examples of using Kubernetes on Fedora.</em></p>
<hr>
<p>The information technology world changes daily, and the demands of building scalable infrastructure become more important. Containers aren&rsquo;t anything new these days, and have various uses and implementations. But what about building scalable, containerized applications? By themselves, Docker and other tools don&rsquo;t quite cut it when it comes to building the infrastructure that supports containers. How do you deploy, scale, and manage containerized applications in your infrastructure? This is where tools such as Kubernetes come in. <a href="https://kubernetes.io/">Kubernetes</a> is an open source system that automates deployment, scaling, and management of containerized applications. Kubernetes was originally developed by Google before being donated to the <a href="https://en.wikipedia.org/wiki/Linux_Foundation#Cloud_Native_Computing_Foundation">Cloud Native Computing Foundation</a>, a project of the <a href="https://www.linuxfoundation.org/">Linux Foundation</a>. This article gives a quick introduction to what Kubernetes is and what some of the buzzwords really mean.</p>

<h2 id="what-is-kubernetes">What is Kubernetes?&nbsp;<a class="hanchor" href="#what-is-kubernetes" aria-label="Anchor link for: What is Kubernetes?">🔗</a></h2>
<p>Kubernetes simplifies and automates the process of deploying containerized applications at scale. Just like Ansible <a href="https://fedoramagazine.org/using-ansible-provision-vagrant-boxes/">orchestrates software</a>, Kubernetes orchestrates the deployment of the infrastructure that supports the software. There are various &ldquo;layers of the cake&rdquo; that make Kubernetes a strong solution for building resilient infrastructure. It also helps you build systems that can grow at scale. If your application faces increasing demands such as higher traffic, Kubernetes helps grow your environment to meet them. This is one reason why Kubernetes is helpful for building long-term solutions to complex problems (even if your problem isn&rsquo;t complex… yet).</p>
<p>
<figure>
  <img src="https://cdn.fedoramagazine.org/wp-content/uploads/2017/06/kubernetes-high-level-design.jpg" alt="Kubernetes: The high level design" loading="lazy">
  <figcaption>Kubernetes: The high level design. Daniel Smith, Robert Bailey, Kit Merker (<a href="https://www.slideshare.net/RohitJnagal/kubernetes-intro-public-kubernetes-meetup-4212015" class="bare">https://www.slideshare.net/RohitJnagal/kubernetes-intro-public-kubernetes-meetup-4212015</a>).</figcaption>
</figure>
</p>
<p>At a high level, imagine three different layers.</p>
<ul>
<li><strong>Users</strong>: People who deploy or create containerized applications to run in your infrastructure</li>
<li><strong>Master(s)</strong>: Manages and schedules your software across various other machines, for example in a clustered computing environment</li>
<li><strong>Nodes</strong>: Various machines that run your application; each node runs an agent called the <em>kubelet</em></li>
</ul>
<p>These three layers are orchestrated and automated by Kubernetes. One of the key pieces of the master (not included in the visual) is <strong>etcd</strong>. etcd is a lightweight and distributed key/value store that holds configuration data. Each node&rsquo;s kubelet can access this data in etcd through an HTTP/JSON API interface. The components of communication between master and node, such as etcd, are explained <a href="https://kubernetes.io/docs/concepts/architecture/master-node-communication/">in the official documentation</a>.</p>
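<p>As an illustration of that interface, on a machine with direct access to etcd you could list the top-level keys over plain HTTP. This sketch assumes etcd&rsquo;s v2 API on its default client port:</p>
<pre tabindex="0"><code>$ curl http://127.0.0.1:2379/v2/keys/
</code></pre>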
<p>Another important detail not shown in the diagram is that you might have many masters. In a high-availability (HA) set-up, you can keep your infrastructure resilient by having multiple masters in case one happens to go down.</p>

<h2 id="terminology">Terminology&nbsp;<a class="hanchor" href="#terminology" aria-label="Anchor link for: Terminology">🔗</a></h2>
<p>It&rsquo;s important to understand the concepts of Kubernetes before you start to play around with it. There are many core concepts in Kubernetes, such as services, volumes, secrets, daemon sets, and jobs. However, this article explains four that are helpful for the next exercise of building a mini Kubernetes cluster. The four concepts are <em>pods</em>, <em>labels</em>, <em>replica sets</em>, and <em>deployments</em>.</p>

<h4 id="pods"><a href="https://kubernetes.io/docs/concepts/workloads/pods/pod/">Pods</a>&nbsp;<a class="hanchor" href="#pods" aria-label="Anchor link for: Pods">🔗</a></h4>
<p>If you imagine Kubernetes as a Lego® castle, pods are the smallest block you can pick out. By themselves, they are the smallest unit you can deploy. The containers of an application fit into a pod. A pod can be one container, but it can also be as many as needed. Containers in a pod are unique since they share Linux namespaces and aren&rsquo;t isolated from each other. In a world before containers, this would be similar to running multiple processes on the same host machine.</p>
<p>When the pods share the same namespace, all the containers in a pod:</p>
<ul>
<li>Share an IP address</li>
<li>Share port space</li>
<li>Find each other over <em>localhost</em></li>
<li>Communicate over the IPC namespace</li>
<li>Have access to shared volumes</li>
</ul>
<p>But what&rsquo;s the point of having pods? The main purpose of pods is to have groups of &ldquo;helping&rdquo; containers on the same namespace (co-located) and integrated together (co-managed) along with the main application container. Some examples might be logging or monitoring tools that check the health of your application, or backup tools that act when certain data changes.</p>
<p>In the big picture, containers in a single pod are always scheduled together too. However, Kubernetes doesn&rsquo;t automatically reschedule them to a new node if the node dies (more on this later).</p>
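<p>To make this concrete, here&rsquo;s a sketch of a pod manifest with a main web container and a &ldquo;helping&rdquo; logging container sharing a volume. The names, images, and paths are only examples:</p>
<pre tabindex="0"><code>apiVersion: v1
kind: Pod
metadata:
  name: web-with-logger
spec:
  containers:
  # Main application container
  - name: web
    image: nginx
    volumeMounts:
    - name: logs
      mountPath: /var/log/nginx
  # Helping container that follows the logs
  - name: logger
    image: busybox
    command: ["sh", "-c", "tail -F /logs/access.log"]
    volumeMounts:
    - name: logs
      mountPath: /logs
  volumes:
  - name: logs
    emptyDir: {}
</code></pre>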

<h4 id="labels"><a href="https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/">Labels</a>&nbsp;<a class="hanchor" href="#labels" aria-label="Anchor link for: Labels">🔗</a></h4>
<p>Labels are a simple but important concept in Kubernetes. Labels are key/value pairs attached to <em>objects</em> in Kubernetes, like pods. They let you specify unique attributes of objects that actually mean something to humans. You can attach them when you create an object, and modify or add them later. Labels help you organize and select different sets of objects to interact with when performing actions inside of Kubernetes. For example, you can identify:</p>
<ul>
<li><strong>Software releases</strong>: Alpha, beta, stable</li>
<li><strong>Environments</strong>: Development, production</li>
<li><strong>Tiers</strong>: Front-end, back-end</li>
</ul>
<p>Labels are as flexible as you need them to be, and this list isn&rsquo;t comprehensive. Be creative when thinking of how to apply them.</p>
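<p>In a manifest, labels are just key/value pairs under <code>metadata</code>. The keys and values in this sketch are only examples:</p>
<pre tabindex="0"><code>metadata:
  name: front-end-pod
  labels:
    environment: production
    tier: front-end
</code></pre>
<p>You can then act on exactly that set of objects, for example with <code>kubectl get pods -l environment=production,tier=front-end</code>.</p>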

<h4 id="replica-sets"><a href="https://kubernetes.io/docs/concepts/workloads/controllers/replicaset/">Replica sets</a>&nbsp;<a class="hanchor" href="#replica-sets" aria-label="Anchor link for: Replica sets">🔗</a></h4>
<p>Replica sets are where some of the magic begins to happen with automatic scheduling or rescheduling. Replica sets ensure that a given number of pod instances (called <em>replicas</em>) are running at any moment. If your web application needs to constantly have four pods in the front-end and two in the back-end, replica sets are your insurance that those numbers are always maintained. This also makes Kubernetes great for scaling: if you need to scale up or down, you just change the number of replicas.</p>
<p>When reading about replica sets, you might also see <em>replication controllers</em>. They are somewhat interchangeable, but replication controllers are older, semi-deprecated, and less powerful than replica sets. The main difference is that replica sets work with the more advanced, set-based selectors &ndash; which goes back to labels. Ideally, you won&rsquo;t have to worry about this much today.</p>
<p>Even though replica sets are where the scheduling magic happens to help make your infrastructure resilient, you won&rsquo;t actually interact with them much. Replica sets are managed by deployments, so it&rsquo;s unusual to directly create or manipulate replica sets. And guess what&rsquo;s next?</p>
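<p>Since deployments manage replica sets for you, changing the replica count is usually done against the deployment rather than the replica set itself. For example, to scale a hypothetical <code>front-end</code> deployment to four replicas:</p>
<pre tabindex="0"><code>$ kubectl scale deployment/front-end --replicas=4
</code></pre>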

<h4 id="deployments"><a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/">Deployments</a>&nbsp;<a class="hanchor" href="#deployments" aria-label="Anchor link for: Deployments">🔗</a></h4>
<p>Deployments are another important concept inside of Kubernetes. Deployments are a declarative way to deploy and manage software. If you&rsquo;re familiar with Ansible, you can compare deployments to the playbooks of Ansible. If you&rsquo;re building your infrastructure out, you want to make sure it is easily reproducible without much manual work. Deployments are the way to do this.</p>
<p>Deployments offer functionality such as revision history, so it&rsquo;s always easy to roll back changes if something doesn&rsquo;t work out. They also manage any updates you push out to your application; if something isn&rsquo;t working, the deployment stops rolling out your update and reverts to the last working state. Deployments follow the mathematical property of <a href="https://en.wikipedia.org/wiki/Idempotence">idempotence</a>, which means you define your specs once and apply them many times to get the same result.</p>
<p>Deployments also touch on the difference between imperative and declarative ways to build infrastructure, but this explanation is a quick, fly-by overview. You can read more <a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/">detailed information</a> in the official documentation.</p>
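<p>As a sketch of the declarative style, a deployment can be written once as YAML and applied with <code>kubectl apply -f</code>. Every name in this example is hypothetical, and the <code>apiVersion</code> shown matches Kubernetes of this era:</p>
<pre tabindex="0"><code>apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: my-app
spec:
  # The replica set created by this deployment keeps two pods running
  replicas: 2
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: nginx
        ports:
        - containerPort: 80
</code></pre>
<p>Applying the same file again produces the same result &ndash; that&rsquo;s the idempotence mentioned above.</p>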

<h2 id="installing-on-fedora">Installing on Fedora&nbsp;<a class="hanchor" href="#installing-on-fedora" aria-label="Anchor link for: Installing on Fedora">🔗</a></h2>
<p>If you want to start playing with Kubernetes, install it and some useful tools from the Fedora repositories.</p>
<pre tabindex="0"><code>sudo dnf install kubernetes
</code></pre><p>This command provides the bare minimum needed to get started. You can also install other cool tools like <em>cockpit-kubernetes</em> (integration with <a href="http://cockpit-project.org/">Cockpit</a>) and <em>kubernetes-ansible</em> (provisioning Kubernetes with <a href="https://www.ansible.com/">Ansible</a> playbooks and roles).</p>
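<p>The package installs the individual Kubernetes services as systemd units. As a rough sketch for an all-in-one test machine &ndash; this assumes the <em>etcd</em> package is also installed, and the unit names can vary between Fedora releases:</p>
<pre tabindex="0"><code>$ sudo systemctl enable --now etcd kube-apiserver kube-controller-manager kube-scheduler kubelet kube-proxy
</code></pre>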

<h2 id="learn-more-about-kubernetes">Learn more about Kubernetes&nbsp;<a class="hanchor" href="#learn-more-about-kubernetes" aria-label="Anchor link for: Learn more about Kubernetes">🔗</a></h2>
<p>If you want to read more about Kubernetes or want to explore the concepts more, there&rsquo;s plenty of great information online. The <a href="https://kubernetes.io/docs/home/">documentation</a> provided by Kubernetes is fantastic, but there are also other helpful guides from <a href="https://www.digitalocean.com/community/tutorials/an-introduction-to-kubernetes">DigitalOcean</a> and <a href="https://blog.giantswarm.io/understanding-basic-kubernetes-concepts-i-introduction-to-pods-labels-replicas/">Giant Swarm</a>. The next article in the series will explore building a mini Kubernetes cluster on your own computer to see how it really works.</p>
<p>Questions, Kubernetes stories, or tips for beginners? Add your comments below.</p>]]></description></item></channel></rss>