<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Infrastructure</title><link>https://jwheel.org/tags/infrastructure/</link><description>Homepage of Justin Wheeler, an Open Source contributor and Free Software advocate from Georgia, USA.</description><generator>Hugo -- gohugo.io</generator><language>en-us</language><managingEditor>Justin Wheeler</managingEditor><lastBuildDate>Mon, 07 Feb 2022 00:00:00 +0000</lastBuildDate><atom:link href="https://jwheel.org/rss/tags/infrastructure/index.xml" rel="self" type="application/rss+xml"/><item><title>CrystalCraftMC forums sunset</title><link>https://jwheel.org/blog/2022/02/crystalcraftmc-forums-sunset/</link><pubDate>Mon, 07 Feb 2022 00:00:00 +0000</pubDate><guid>https://jwheel.org/blog/2022/02/crystalcraftmc-forums-sunset/</guid><description><![CDATA[<p>It is with a heavy heart that I share that the <code>crystalcraftmc.com</code> forums are permanently retired as of 6 February 2022. Nearly ten years ago, in August 2012, I founded a Minecraft multiplayer game server. It would eventually become known as CrystalCraftMC. I <a href="https://web.archive.org/web/20210103155344/https://crystalcraftmc.com/?page=2">passed the torch</a> for the game server in April 2018. CrystalCraftMC got a second life from the dedicated player community until December 2018, when it was <a href="https://web.archive.org/web/20210614054749/https://crystalcraftmc.com/threads/survival-and-creative-world-downloads-now-available.2708/">shuttered for good</a>. Today&rsquo;s news is sad for me, as it marks the official end of the online presence of CrystalCraftMC after nearly a decade.</p>
<p>If you are missing the memories, you can still find a good part of the forums archived via the Internet Archive&rsquo;s Wayback Machine. Click <strong><a href="https://web.archive.org/web/20211229160232/https://crystalcraftmc.com/">here</a></strong> to navigate to the last snapshot of the official forums. You can also find downloads of the survival and creative Minecraft worlds below:</p>
<ul>
<li>
<p>Creative world: <strong><a href="https://drive.proton.me/urls/B43RXQD9XC#dvddVplG0RcX">2015 Nov. 2 backup</a></strong> (~29MB)</p>
</li>
<li>
<p>Survival world: <strong><a href="https://drive.proton.me/urls/07P658D4J4#pLN3umpCGcwQ">2018 Sept. 22 backup</a></strong> (~43.6GB)</p>
</li>
<li>
<p>Survival world: <strong><a href="https://drive.proton.me/urls/WBA2ZZKX1G#SGvUqnavxnjO">2018 Dec. 3 backup</a></strong> (~44.1GB)</p>
</li>
</ul>

<h2 id="history-of-the-crystalcraftmc-forums">History of the CrystalCraftMC forums&nbsp;<a class="hanchor" href="#history-of-the-crystalcraftmc-forums" aria-label="Anchor link for: History of the CrystalCraftMC forums">🔗</a></h2>
<p>Since December 2013, I maintained the CrystalCraftMC forums on my own infrastructure and dime. Even after I passed the torch in April 2018, I continued to maintain and manage the forums. When the game server finally closed in December 2018, I kept the forums alive as a novelty for past players. Whenever I felt nostalgic, the forums gave me a place to remember good friends and memories from that era of my life.</p>
<p>If you are one of the few original folks who kept up with CrystalCraftMC in its lifetime, you may remember we used a different forum at one point, hosted by Enjin. Those forums are (miraculously) still alive, and you can still find them! Visit <em><a href="http://old.crystalcraftmc.com">old.crystalcraftmc.com</a></em> to get a nostalgia trip on the oldest memories of our Minecraft community.</p>

<h2 id="why-the-forums-closed">Why the forums closed&nbsp;<a class="hanchor" href="#why-the-forums-closed" aria-label="Anchor link for: Why the forums closed">🔗</a></h2>
<p>The forum software behind <code>crystalcraftmc.com</code> was called <a href="https://xenforo.com/">XenForo</a>. We used the 1.x versions of XenForo. XenForo was a great fit for us when CrystalCraftMC was in full swing, but the forums fell behind on updates. The forums eventually stopped receiving maintenance and security updates for XenForo 1.x when <a href="https://xenforo.com/community/threads/xenforo-2-0-0-add-ons-released.137930/">XenForo 2.0.0 was released</a>. Since I never renewed our license and didn&rsquo;t have time to complete the complicated upgrade, the CrystalCraftMC forums remained on XenForo 1.x.</p>
<p>This month, I took on a project to reduce the costs of my personal infrastructure. The total cost to run the CrystalCraftMC forums was &gt;$200 USD every month. I hoped to reduce the cost to less than $50 USD every month. My goal was to migrate the database for the forums to a competitive and more affordable hosting service. However, during the migration, I discovered that XenForo 1.x did not support newer versions of the MySQL database software. I would need to continue paying my bill for a hosting service that supported the legacy version of the database used by the CrystalCraftMC forums.</p>
<p>After careful consideration, I decided ten years was a good run and it was time to finally close things down. It was not the outcome I wanted, but it was what had to be done. Nevertheless, I took a final &ldquo;snapshot&rdquo; of the forums and database. I retained a final copy, and perhaps one day in the future, I may choose to put a read-only copy online again. But for now, the forums will remain offline until further notice.</p>
<p>(<em>This also doesn&rsquo;t even count the <strong>hundreds</strong> of spam, scam, and porn emails I received every day because the website&rsquo;s contact form had no CAPTCHA to stop spambots!</em>)</p>

<h2 id="what-now-for-crystalcraftmc">What now for CrystalCraftMC?&nbsp;<a class="hanchor" href="#what-now-for-crystalcraftmc" aria-label="Anchor link for: What now for CrystalCraftMC?">🔗</a></h2>
<p>I will continue to pay for the domain names of <code>crystalcraftmc.com</code> and <code>ccmc.pw</code>. Both domains will now redirect to this blog post, which might be how you ended up here. But otherwise, there are no more public archives of CrystalCraftMC other than our memories, YouTube &ldquo;Let&rsquo;s Plays&rdquo; and <a href="https://www.youtube.com/c/EchophoxGaming/playlists">Echophox video series</a>, and the <a href="http://old.crystalcraftmc.com">super-old forums</a>. This is the end of the road for CrystalCraftMC.</p>
<p>To all the players and the CrystalCraftMC community, you will always have a special place in my heart. CrystalCraftMC was one of those incredible things that happened in my life, especially when so many other things were honestly upside-down. I wish you all the best and hope you, too, have fond memories of building spawn points, running drop parties, building amazing forts and bases (while hoping they don&rsquo;t get raided), and hanging out with other folks who ended up becoming close friends.</p>
<p>Long live CrystalCraftMC.</p>
<hr>
<p><em>Photo by <a href="https://unsplash.com/@jplenio?utm_source=unsplash&amp;utm_medium=referral&amp;utm_content=creditCopyText">Johannes Plenio</a> on <a href="https://unsplash.com/s/photos/sunset?utm_source=unsplash&amp;utm_medium=referral&amp;utm_content=creditCopyText">Unsplash</a>. Modified by Justin Wheeler.</em></p>]]></description></item><item><title>DevConf CZ 2020: play by play</title><link>https://jwheel.org/blog/2020/02/devconf-cz-2020-play-by-play/</link><pubDate>Thu, 13 Feb 2020 00:00:00 +0000</pubDate><guid>https://jwheel.org/blog/2020/02/devconf-cz-2020-play-by-play/</guid><description><![CDATA[<p>DevConf CZ 2020 took place from Friday, January 24th to Sunday, January 26th in Brno, Czech Republic:</p>
<blockquote>
<p>DevConf.CZ 2020 is the 12th annual, free, Red Hat sponsored community conference for developers, admins, DevOps engineers, testers, documentation writers and other contributors to open source technologies. The conference includes topics on Linux, Middleware, Virtualization, Storage, Cloud and mobile. At DevConf.CZ, FLOSS communities sync, share, and hack on upstream projects together in the beautiful city of Brno, Czech Republic.</p>
<p><a href="https://www.devconf.info/cz/">devconf.info/cz/</a></p>
</blockquote>
<p>This was my third time attending DevConf CZ. I attended on behalf of <a href="https://fossrit.github.io/librecorps/">RIT LibreCorps</a> for professional development, before a week of work-related travel. DevConf CZ is also a great opportunity to meet friends and colleagues from across time zones. This year, I arrived hoping to better understand the future of Red Hat&rsquo;s technology, see how others are approaching complex problems in emerging technology and open source, and of course, to have yummy candy.</p>

<h2 id="sessions-play-by-play">Sessions: Play-by-play&nbsp;<a class="hanchor" href="#sessions-play-by-play" aria-label="Anchor link for: Sessions: Play-by-play">🔗</a></h2>
<p>Event reports take many forms. My form is an expanded version of my session notes along with key takeaways. Said another way, my event report is biased towards what is interesting to me. You can also skim the headings to find what interests you.</p>

<h3 id="diversity-and-inclusion-meet-up">Diversity and inclusion meet-up&nbsp;<a class="hanchor" href="#diversity-and-inclusion-meet-up" aria-label="Anchor link for: Diversity and inclusion meet-up">🔗</a></h3>
<blockquote>
<p>Would you like to meet other attendees who stand under the umbrella of &ldquo;Diversity and Inclusion&rdquo; or would you like a introduction into what Diversity and inclusion is and why it&rsquo;s a good thing? this is the session for you! All are welcome!</p>
<p><a href="https://devconfcz2020a.sched.com/event/YS2w/diversity-and-inclusion-meet-up">Imo Flood-Murphy</a></p>
</blockquote>
<p>This was a short, informal session run by Imo to network and get a high-level introduction to diversity and inclusion in open source. Everyone in the room introduced themselves and gave a short explanation of who they were or what projects they represent. I appreciated the opportunity to meet others and better understand how Red Hat approaches diversity and inclusion.</p>
<p>A suggestion for next time is to allow more unstructured time for conversations. I think fun icebreakers get folks comfortable in a short amount of time to help make connections for the rest of the weekend.</p>

<h3 id="lessons-learned-from-testing-over-200000-lines-of-infrastructure-code">Lessons learned from testing over 200,000 lines of Infrastructure Code&nbsp;<a class="hanchor" href="#lessons-learned-from-testing-over-200000-lines-of-infrastructure-code" aria-label="Anchor link for: Lessons learned from testing over 200,000 lines of Infrastructure Code">🔗</a></h3>
<blockquote>
<p>If we are talking that infrastructure is code, then we should reuse practices from development for infrastructure, i.e.</p>
<p>1. S.O.L.I.D. for Ansible.</p>
<p>2. Pair devops-ing as part of XP practices.</p>
<p>3. Infrastructure Testing Pyramid: static/unit/integration/e2e tests.</p>
<p><a href="https://devconfcz2020a.sched.com/event/YS73/lessons-learned-from-testing-over-200000-lines-of-infrastructure-code">Lev Goncharov</a></p>
</blockquote>
<p>Lev shared best practices on building sustainable, tested infrastructure. Infrastructure-as-Code (IaC) was important to how T-Systems scaled their infrastructure over time.</p>
<p>My key takeaways:</p>
<ol>
<li>Smaller components:
<ol>
<li>More sustainable</li>
<li>Easier to maintain</li>
<li>Easier to test</li>
</ol>
</li>
<li>Ansible Roles encourage best practices for Ansible</li>
<li>Spreading knowledge is essential (if nobody understands it, the code is broken)</li>
<li>Code review creates accountability</li>
<li>Use static analysis tools (<a href="https://github.com/koalaman/shellcheck">Shellcheck</a>, <a href="https://www.pylint.org/">Pylint</a>, <a href="https://docs.ansible.com/ansible-lint/">Ansible Lint</a>)</li>
<li>Write unit tests (<a href="https://github.com/kward/shunit2">shUnit2</a>, <a href="https://rspec.info/">Rspec</a>, <a href="https://docs.pytest.org/en/latest/">Pytest</a>, <a href="https://testinfra.readthedocs.io/en/latest/">Testinfra</a>, <a href="https://molecule.readthedocs.io/en/latest/">Ansible Molecule</a>)</li>
</ol>
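<p>Takeaway 6 (&ldquo;write unit tests&rdquo;) can be sketched in a few lines of pytest-style Python. The <code>render_motd</code> helper and its banner format are hypothetical, invented here purely to illustrate the pattern:</p>

```python
# A minimal sketch of the "write unit tests" takeaway, pytest style.
# render_motd is a hypothetical helper an Ansible role might use to
# template a login banner; the format string is invented for illustration.

def render_motd(hostname: str, env: str) -> str:
    """Build a message-of-the-day banner for a managed host."""
    return f"{hostname} [{env}] - managed by Ansible, do not edit by hand"

def test_render_motd_mentions_host_and_env():
    banner = render_motd("web01", "staging")
    assert banner.startswith("web01")
    assert "[staging]" in banner
```

<p>Run with <code>pytest</code>; the same pattern scales up to Testinfra checks that assert against real hosts instead of pure functions.</p>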

<h3 id="content-as-code-technical-writers-as-developers">Content as code, technical writers as developers&nbsp;<a class="hanchor" href="#content-as-code-technical-writers-as-developers" aria-label="Anchor link for: Content as code, technical writers as developers">🔗</a></h3>
<blockquote>
<p>In the open-source project <a href="http://kyma-project.io">Kyma</a>, documentation is an integral part of code delivery. We, the project&rsquo;s Information Developers, believe that using the same tools and methodology as your good old code developers, we can create comprehensive and accurate documentation. During our talk, we’ll share the whys and hows of our approach, showing you that the &ldquo;developer&rdquo; in &ldquo;Information Developer&rdquo; isn&rsquo;t there just because it sounds cool. We&rsquo;ll prove that creating documentation goes beyond linguistic shenanigans and salvaging whatever information there is from a trainwreck that is the developer&rsquo;s notes. Testing solutions, finding our way around Kubernetes, tweaking the website, engaging with the community are just a few examples of what keeps us busy every day.</p>
<p><a href="https://devconfcz2020a.sched.com/event/YOvj/content-as-code-technical-writers-as-developers">Barbara Czyz, Tomasz Papiernik</a></p>
</blockquote>
<p>&ldquo;Information Developers&rdquo; is a cool phrase I learned. Barbara and Tomasz explained the value of technical writing and asserted documentation should live close to project code.</p>
<p>My key takeaways:</p>
<ol>
<li>Documenting processes like release notes enables others to join with fewer barriers</li>
<li><strong>Docs-as-Code (DaC)</strong>: Visibility of docs across the development process is important
<ol>
<li>Placing docs with code encourages feedback loops and avoids silos</li>
</ol>
</li>
<li>Put links to docs in visible places (e.g. API messages, console messages)</li>
<li>Management aside: Emphasize/incentivize value of technical writing in your team</li>
<li>Remember that bridges across skill areas are possible (technical writers can be community-oriented people too)</li>
</ol>
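<p>Takeaway 3 above (&ldquo;put links to docs in visible places&rdquo;) is easy to sketch. The docs URL and error wording below are hypothetical, just to show the idea of surfacing documentation in error output:</p>

```python
# Sketch of surfacing docs links in error messages.
# DOCS_URL and the error text are hypothetical, for illustration only.
DOCS_URL = "https://example.com/docs/configuration"

def load_config(path_exists: bool) -> str:
    """Load configuration, pointing users at the docs on failure."""
    if not path_exists:
        raise FileNotFoundError(
            f"Config file not found. See {DOCS_URL} for setup instructions."
        )
    return "ok"
```

<p>The point is that the user who hits the error is exactly the user who needs the docs, so the error message is the most visible place the link can live.</p>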

<h3 id="uncharted-waters-documenting-emerging-technology">Uncharted waters: Documenting emerging technology&nbsp;<a class="hanchor" href="#uncharted-waters-documenting-emerging-technology" aria-label="Anchor link for: Uncharted waters: Documenting emerging technology">🔗</a></h3>
<blockquote>
<p>We can&rsquo;t help but feel the lure towards the hot new thing, especially when it comes to technology. Part of that lure is the breaking of ground, venturing into the unknown, and working on solutions to new problems. But a lot of the same things that make emerging technology fun and exciting to work on are exactly why it can be difficult to document. These challenges are quite different to those associated with mature products.</p>
<p>This talk is for anyone working on new products and emerging technology, or just interested in learning about fast-moving documentation. It is for the developer as much as it is for the writer, since it usually falls to them to write the early docs before a writer is added to the team.</p>
<p><a href="https://devconfcz2020a.sched.com/event/YOyU/uncharted-waters-documenting-emerging-technology">Andrew Burden</a></p>
</blockquote>
<p>This was the talk I didn&rsquo;t know I <strong><em>needed</em></strong> to go to.</p>
<p>Lately I work with &ldquo;emerging technology,&rdquo; which means different things to different people. Regardless of what emerging tech means to you, Andrew focused on how to write documentation in a fast-paced environment with &ldquo;pre-release&rdquo; technology, where things change fast and suddenly. Normally this is an excuse to <em>not</em> write docs, but Andrew showed, <em>yes</em>! It is possible to write good docs, even when context changes fast and often.</p>

<h4 id="key-considerations-of-fast-paced-technical-writers">Key considerations of fast-paced technical writers&nbsp;<a class="hanchor" href="#key-considerations-of-fast-paced-technical-writers" aria-label="Anchor link for: Key considerations of fast-paced technical writers">🔗</a></h4>
<p>An even balance of these considerations helps get into a user&rsquo;s mindset:</p>
<ol>
<li>Scope / scale of release</li>
<li>Release schedule</li>
<li>Developer meetings / face-time</li>
<li>Exposure with <code>$TECHNOLOGY</code></li>
<li>Deployment experience with <code>$TECHNOLOGY</code></li>
</ol>

<h4 id="surviving-the-information-wall">Surviving the information wall&nbsp;<a class="hanchor" href="#surviving-the-information-wall" aria-label="Anchor link for: Surviving the information wall">🔗</a></h4>
<p>The &ldquo;information wall&rdquo; is the endless wall of information and things to know about a project. If information is endless, how do technical writers survive?</p>
<ul>
<li>Take notes: Be like a scientist</li>
<li>Take notes about your notes</li>
<li>Be organized with your notes</li>
</ul>
<p>Obviously Andrew was getting at the value of note-taking. Practicing note-taking skills is critical to keep up with the pace of change.</p>

<h4 id="multi-version-syndrome">&ldquo;Multi-Version Syndrome&rdquo;&nbsp;<a class="hanchor" href="#multi-version-syndrome" aria-label="Anchor link for: &ldquo;Multi-Version Syndrome&rdquo;">🔗</a></h4>
<p>Sometimes you are documenting features that will not ship in the next release. There is a risk of losing information across multiple releases (e.g. publishing the wrong thing too soon, or the right thing too late). Clarify the release schedule as you go. A good safeguard against losing information is to rigorously understand release cycle cadence and priorities.</p>
<p>If your product isn&rsquo;t mature yet, anticipate change instead of avoiding it.</p>

<h4 id="access-to-technology-is-critical">Access to technology is critical&nbsp;<a class="hanchor" href="#access-to-technology-is-critical" aria-label="Anchor link for: Access to technology is critical">🔗</a></h4>
<p>Technical writers are often User 0. To understand the technology, you need access. There are interactive and non-interactive ways of getting access. Interactive ways are preferred because they are always reproducible.</p>
<ul>
<li>Interactive
<ul>
<li>Deploy your own</li>
<li>Get someone else to deploy it for you (but lose install context)</li>
</ul>
</li>
<li>Non-interactive
<ul>
<li>Live demos</li>
<li>Demo videos</li>
<li><a href="https://asciinema.org/">asciinema</a> (CLI-oriented)</li>
</ul>
</li>
</ul>

<h4 id="other-takeaways">Other takeaways&nbsp;<a class="hanchor" href="#other-takeaways" aria-label="Anchor link for: Other takeaways">🔗</a></h4>
<ul>
<li>Screenshots have a high maintenance cost; avoid if possible
<ul>
<li>Sometimes a good stop-gap until something more maintainable exists</li>
</ul>
</li>
<li>Where to begin? Make a table-of-contents for the Minimum Viable Product
<ul>
<li>Never underestimate outlines (<em>ahem, like how I wrote this blog post…</em>)</li>
</ul>
</li>
<li>Avoid documentation scramble near release day:
<ul>
<li>Make lists / check-lists</li>
<li>Take more notes</li>
<li>Pre-release checklist</li>
<li>Think now, and for the future</li>
</ul>
</li>
<li>Audit your docs: On-boarding new people is a powerful opportunity to test out your docs</li>
</ul>
<p>Thanks, Andrew, for a deep dive on this narrow but important topic.</p>

<h3 id="community-management-not-less-than-a-curry">Community management: not less than a curry&nbsp;<a class="hanchor" href="#community-management-not-less-than-a-curry" aria-label="Anchor link for: Community management: not less than a curry">🔗</a></h3>
<blockquote>
<p>Every volunteer joins an Open Source community for a reason. The reasons could range from technical gains to finding his/her/their passion. This community of diverse volunteers require a leader who can not just mentor them with their interests but also a manager managing the community activities in terms of community engagement and planning. A community manager is not less than a candle of light and in this presentation, I would be highlighting my learnings and experiences about starting a community from scratch around a project and maintaining a healthy community management practices.</p>
<p><a href="https://devconfcz2020a.sched.com/event/YOpX/community-management-not-less-than-a-curry">Prathamesh Chavan</a></p>
</blockquote>
<p>Prathamesh designed an activity to help the audience understand community management. My key takeaway was community management is about connecting and understanding others as their authentic self.</p>
<p>In the activity, Prathamesh passed papers and pens to the audience. His session had three steps. Between each step, all attendees traded papers with another attendee:</p>
<ol>
<li>Define a project idea (why, how, what)</li>
<li>Identify challenges to idea (i.e. questions)</li>
<li>Answer challenges</li>
</ol>
<p>It reminded me of a similar workshop I attended before. This inspired me to work on <a href="https://github.com/justwheel/logbook/blob/master/notes/identity/question-burst-better-questioners.adoc">my own workshop idea</a> for a future conference.</p>

<h3 id="cognitive-biases-blind-spots-and-inclusion">Cognitive biases, blind spots, and inclusion&nbsp;<a class="hanchor" href="#cognitive-biases-blind-spots-and-inclusion" aria-label="Anchor link for: Cognitive biases, blind spots, and inclusion">🔗</a></h3>
<blockquote>
<p>Open source thrives on diversity. The last couple of years has seen huge strides in that aspect with codes of conduct and initiatives like the Contributor Covenant. While these advancements are crucial, they are not enough. In order to truly be inclusive, it’s not enough for the community members to be welcoming and unbiased, the communities’ processes and procedures really support inclusiveness by not only making marginalized members welcome, but allowing them to fully participate.</p>
<p><a href="https://devconfcz2020a.sched.com/event/YOoH/cognitive-biases-blindspots-and-inclusion">Allon Mureinik</a></p>
</blockquote>
<p>Allon recognizes the importance of diversity, but asking for improved diversity is one side of the coin. A friend recently shared a powerful quote with me: &ldquo;If diversity is being invited to the party, inclusion is being invited <em>to</em> dance.&rdquo; Allon&rsquo;s message was to dig deeper on including marginalized people in our project communities.</p>
<p>He identified ways we accidentally make our communities less inclusive because of our cognitive/unconscious biases. Everyone has blind spots! Allon suggested ways to be more conscious about inclusion in open source:</p>
<ul>
<li><strong>Knowledge barriers</strong>
<ul>
<li>Procedural knowledge, not just technical
<ul>
<li>How do you submit code? File a bug? Make meaningful contributions? These need to be documented</li>
</ul>
</li>
<li>Documentation fosters inclusivity</li>
</ul>
</li>
<li><strong>Language barriers</strong>
<ul>
<li>Working proficiency in English not universal</li>
<li>Conversations have extra barriers (e.g. communicating complex ideas, understanding advanced words)</li>
</ul>
</li>
<li><strong>Time barriers</strong>
<ul>
<li>Work schedules no longer 9 to 5</li>
<li>Remember other folks in different time zones</li>
<li>On giving feedback: Fast is not a metric! Be smart
<ul>
<li>Avoid merging PRs while others are away, or shortly after opening them</li>
<li>Allow time for input on non-trivial changes</li>
</ul>
</li>
</ul>
</li>
<li><strong>Transparency barriers</strong>
<ul>
<li>If it didn&rsquo;t happen in the open, it didn&rsquo;t happen</li>
<li>Negative example: Contributor makes a PR, reviewer has face-to-face conversation with contributor, reviewer merges PR without public context</li>
<li>Default to open: in many ways
<ul>
<li>If you can&rsquo;t be open, at least be transparent</li>
</ul>
</li>
</ul>
</li>
</ul>

<h3 id="diversity-in-open-source-show-me-the-data">Diversity in open source: show me the data!&nbsp;<a class="hanchor" href="#diversity-in-open-source-show-me-the-data" aria-label="Anchor link for: Diversity in open source: show me the data!">🔗</a></h3>
<blockquote>
<p>How diverse is your work environment? Diverse communities are more effective, they allow us to share the strengths of the individuals who make up the community. Have you ever looked around and noticed that most of our Open Source communities are predominantly male? Why do you think that is? We’ll use gender diversity as a case study and share some intriguing data points. Let us convince you why it’s so important.</p>
<p>Regardless of your gender, we would love for you to join us! We will also give you some tips on how you can make a difference.</p>
<p><a href="https://devconfcz2020a.sched.com/event/YOtn/diversity-in-opensource-show-me-the-data">Serena Chechile Nichols, Denise Dumas</a></p>
</blockquote>
<p>Serena and Denise divided the talk into two sections: metrics and action. Their presentation first brought the audience onto the same page by walking through a variety of metrics, then transitioned to an empowering discussion about changing the trends we see.</p>
<p>Next time, I hope to see the messaging expanded by defining diversity beyond only women. Diversity was frequently tied to gender participation metrics in open source. While women are underrepresented, additional aspects of identity can compound discrimination. Race, socioeconomic status, nationality, sexual orientation, and more also play a part in understanding the challenges collectively faced in inclusion work.</p>

<h4 id="the-data">The data&nbsp;<a class="hanchor" href="#the-data" aria-label="Anchor link for: The data">🔗</a></h4>
<ul>
<li><strong>Gender differences by # of contributors</strong>:
<ul>
<li>GSoC 2018: 11.6% female-identifying contributors</li>
<li>OpenStack: 10.4% female-identifying contributors</li>
<li>Linux kernel: 9.9% female-identifying contributors</li>
</ul>
</li>
<li><strong>U.S. Dept. of Labor: 22.2% of technical roles filled by women</strong>
<ul>
<li>2014-2019: More women entering tech jobs at companies like Apple, Microsoft, Google, etc.</li>
</ul>
</li>
<li><strong>Years of experience by gender (&lt;9 years)</strong>:
<ul>
<li>66.2% female</li>
<li>52.9% non-binary/queer</li>
<li>50.1% male</li>
</ul>
</li>
<li><strong>GitHub user and developer survey</strong>:
<ul>
<li>95% male</li>
<li>3% female</li>
<li>1% non-binary</li>
</ul>
</li>
</ul>

<h4 id="lets-make-things-better">Let&rsquo;s make things better&nbsp;<a class="hanchor" href="#lets-make-things-better" aria-label="Anchor link for: Let&rsquo;s make things better">🔗</a></h4>
<p>Serena and Denise asserted that diversity creates change. All changes come with challenges. Diversity can increase the friction in the process, but that is okay. They emphasized the need for multiple perspectives to see past our initial biases (conveniently covered by Allon in the previous talk).</p>
<p>This transitioned to questions, comments, and thoughts from the audience. One interesting point was using the phrase &ldquo;<a href="http://www.thagomizer.com/blog/2017/09/29/we-don-t-do-that-here.html">we don&rsquo;t do that here</a>&rdquo; to create and set norms. I suggested looking at projects you already participate in to see whether a diversity and inclusion effort exists there. If there is, see if there are ways to help and get involved. If not, consider starting one (or network with the <a href="https://discourse.opensourcediversity.org/">Open Source Diversity community</a>).</p>
<p>To wrap up, one recurring theme of Serena and Denise&rsquo;s talk was to make time to step back and evaluate the bigger picture. Questioning our biases is an important skill to practice. We need the space and time to recompute!</p>

<h3 id="candy-swap">Candy Swap&nbsp;<a class="hanchor" href="#candy-swap" aria-label="Anchor link for: Candy Swap">🔗</a></h3>
<blockquote>
<p>Do you have a unique sweet dessert or candy from your country or hometown? Do you love to try new and exciting foods from around the world? Spend an hour with fellows as we share stories and candies from the world with each other. Participants are invited to bring a unique confectionary or candy from their country or city to share with multiple other people. Before going around to try yummy things, all participants explain what item they bring and any story about its origins or where it is normally used. After sharing, everyone who brought something rotates around to try candies brought by others. After all participants have had a chance to sample, the rest of the community is invited to come and try anything remaining.</p>
<p><a href="https://devconfcz2020a.sched.com/event/YS6U/candy-swap">Jona Azizaj, Justin Wheeler</a></p>
</blockquote>
<p>I <em>am</em> biased when I say this is one of my favorite parts of conferences I go to. Jona originally proposed the candy swap for DevConf CZ. After unexpectedly adding DevConf CZ to my travel list for 2020, we teamed up to bring the sweet tradition from Fedora Flock to DevConf CZ! It is one of my favorite conference traditions because I get to know other attendees in a context outside of technology. And food is always an easy way to win me over.</p>
<p>Instead of listening to me, see what other people have to say about it:</p>
<blockquote class="twitter-tweet" data-dnt="true"><p lang="en" dir="ltr">Time for the international candy swap! There are so many things to love about <a href="https://twitter.com/hashtag/DevConf_CZ?src=hash&amp;ref_src=twsrc%5Etfw">#DevConf_CZ</a> but the geographic diversity of attendees might be my favorite part. Thank you for organizing, <a href="https://twitter.com/jonatoni?ref_src=twsrc%5Etfw">@jonatoni</a> &amp; @jflory7! <a href="https://t.co/rU1ETp5aTa">pic.twitter.com/rU1ETp5aTa</a></p>&mdash; Mary Thengvall (she/her); mary-grace.bsky.social (@mary_grace) <a href="https://twitter.com/mary_grace/status/1221075300584448000?ref_src=twsrc%5Etfw">January 25, 2020</a></blockquote>
<script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>


<blockquote class="twitter-tweet" data-dnt="true"><p lang="en" dir="ltr">The &quot;sweetest&quot; activity of <a href="https://twitter.com/hashtag/devconf_cz?src=hash&amp;ref_src=twsrc%5Etfw">#devconf_cz</a>: Today at 3 PM! 🍬🍫<br>Join Candy Swap, share candies, sweets and stories with others from around the world! <a href="https://t.co/OlfdmgGa3a">https://t.co/OlfdmgGa3a</a> <a href="https://t.co/Jnlqi3lsaq">pic.twitter.com/Jnlqi3lsaq</a></p>&mdash; DevConf.CZ (@devconf_cz) <a href="https://twitter.com/devconf_cz/status/1221026710969298947?ref_src=twsrc%5Etfw">January 25, 2020</a></blockquote>


<blockquote class="twitter-tweet" data-dnt="true"><p lang="en" dir="ltr">Candy Swap time at <a href="https://twitter.com/hashtag/DevConf_CZ?src=hash&amp;ref_src=twsrc%5Etfw">#DevConf_CZ</a> 😍 <a href="https://t.co/zFCNnXZoJf">pic.twitter.com/zFCNnXZoJf</a></p>&mdash; Jona Azizaj👩🏻‍💻 🥑 @jonatoni@mastodon.social (@jonatoni) <a href="https://twitter.com/jonatoni/status/1221076375081062400?ref_src=twsrc%5Etfw">January 25, 2020</a></blockquote>


<blockquote class="twitter-tweet" data-dnt="true"><p lang="en" dir="ltr">Well I experienced this for the <a href="https://twitter.com/hashtag/Flock?src=hash&amp;ref_src=twsrc%5Etfw">#Flock</a> 2019. It&#39;s a great opportunity to know the tastebuds of diverse people and enjoy! :D</p>&mdash; Aal (Alisha)🌻 (@withloveaal) <a href="https://twitter.com/withloveaal/status/1221366223381778434?ref_src=twsrc%5Etfw">January 26, 2020</a></blockquote>



<h3 id="from-outreachy-to-cancer-research">From Outreachy to cancer research&nbsp;<a class="hanchor" href="#from-outreachy-to-cancer-research" aria-label="Anchor link for: From Outreachy to cancer research">🔗</a></h3>
<blockquote>
<p>Outreachy program is helping women and other underrepresented people to make first steps in tech career. Picking a project, making first open source contributions, working on assigned project and learning from advanced people. But what happens when this three months are over? Can Outreachy be a lifechanging experience?</p>
<p>I will share my story of conversion from a chemist and full time parent into a Fedora Outreachy intern and how I found my place as a junior software developer in cancer genomics research at IRB Barcelona.</p>
<p><a href="https://devconfcz2020a.sched.com/event/YOwh/from-outreachy-to-cancer-research">Lenka Segura</a></p>
</blockquote>
<p>This was a favorite of the weekend. &ldquo;Fedora Outreachy intern Lenka Segura on how Outreachy opened the door for her career to cancer research at IRB Barcelona!&rdquo;</p>
<p>I put effort into live-tweeting a Twitter thread. Get the full scoop there!</p>


<h3 id="connect-and-grow-your-community-through-meetups">Connect and grow your community through meetups&nbsp;<a class="hanchor" href="#connect-and-grow-your-community-through-meetups" aria-label="Anchor link for: Connect and grow your community through meetups">🔗</a></h3>
<blockquote>
<p>Open source communities collaborate in a multitude of ways - chatting on irc, submitting issues and contributing code on GitHub, discussing and sharing ideas on reddit and other social channels. Face to face gatherings add another dimension to that, where community members can learn and share their experiences. Local meetups provide a venue for people with similar interests to socialize and connect. However, organizing meetups is not trivial. How do we encourage and motivate the community to arrange meetups, and to keep the momentum? In my one year with the Ansible community, we have doubled the number of active meetups in Europe. These meetups are community driven, rather than Red Hat. Find out how we use metrics to analyze the situation and needs, and the steps we are taking to reach our goals of connecting with even more community members. Learn from our mistakes and challenges (100 RSVPs and only 20 turned up?), plus some tips to make your meetups more inclusive.</p>
<p><a href="https://devconfcz2020a.sched.com/event/YOr2/connect-and-grow-your-community-through-meetups">Carol Chen</a></p>
</blockquote>
<p>Carol explained the role of local meet-ups around the world in building communities around software projects. She emphasized that single metrics are not always useful, so it is more helpful to evaluate on multiple areas. The most useful takeaway for me was the 5 W&rsquo;s: why, who, what, when, where.</p>
<ul>
<li><strong>Why?</strong> Common curiosity (noticing something new in your community)</li>
<li><strong>Who?</strong> Connections and networking</li>
<li><strong>What?</strong> Concise, compelling content
<ul>
<li>Consider venue travel (how to make it worth their while?)</li>
<li>Provide alternatives to git-based submissions</li>
<li>Not all talks have to be technical! Diversify to appeal to wider audiences
<ul>
<li>Announcements for future events, work-life talks</li>
<li>We are more than just the technology we work with</li>
</ul>
</li>
</ul>
</li>
<li><strong>When?</strong> Consistency
<ul>
<li>Helps with venue scheduling</li>
<li>Helps retain attendee focus and build habits</li>
</ul>
</li>
</ul>
<p>Carol also gave suggestions for common points to think about for improved inclusion. All of these require active, not passive, inclusion.</p>
<ul>
<li>Special needs / disabilities</li>
<li>Food allergies</li>
<li>Beverage preference (often alcohol/non-alcoholic)</li>
<li>Language</li>
<li>Traffic-light communication stickers</li>
<li>Photography lanyards</li>
<li>Gender pronouns</li>
</ul>

<h2 id="beyond-devconf-cz">Beyond DevConf CZ&nbsp;<a class="hanchor" href="#beyond-devconf-cz" aria-label="Anchor link for: Beyond DevConf CZ">🔗</a></h2>
<p>While the sessions are excellent and fulfilling (and sometimes frustrating when you miss a good talk with a full room), DevConf is also more than the sessions. It&rsquo;s also the people and conversations that happen in the &ldquo;hallway track.&rdquo; It was nice to see many old friends and make new ones.</p>
<p>I spent a few extra days before and after DevConf CZ in Brno. In some of that time, my colleague <a href="https://nolski.rocks/">Mike Nolan</a> and I rehearsed in-person for our FOSDEM talk the following weekend (to come in a future blog post). I also enjoyed coffee and waffles with Marie, Sumantro, and Misc!</p>
<p>
<figure>
  <img src="/blog/2020/02/IMG_20200124_212601881_HDR-scaled.jpg" alt="" loading="lazy">
</figure>
</p>
<p>
<figure>
  <img src="/blog/2020/02/IMG_20200124_212616232-rotated.jpg" alt="" loading="lazy">
</figure>
</p>
<p>
<figure>
  <img src="/blog/2020/02/IMG_20200129_105148632_HDR-scaled.jpg" alt="" loading="lazy">
</figure>
</p>
<p>
<figure>
  <img src="/blog/2020/02/IMG_20200129_124253219.jpg" alt="" loading="lazy">
</figure>
</p>
<blockquote>
<p>A few memories of a great week in Brno</p>
</blockquote>

<h2 id="until-next-time">Until next time!&nbsp;<a class="hanchor" href="#until-next-time" aria-label="Anchor link for: Until next time!">🔗</a></h2>
<p>I learned a lot and had a lot of fun at DevConf CZ. I&rsquo;m happy I returned for a third year. Hats off to the organizers and volunteers who pulled it all off. Each year, DevConf gradually improves. It&rsquo;s nice to see inclusion prioritized across the board.</p>
<p>Thanks also go out to <a href="https://fossrit.github.io/librecorps/">RIT LibreCorps</a> for sponsoring my trip. Extra thanks to Jona Azizaj!</p>]]></description></item><item><title>How to automatically scale Kubernetes with Horizontal Pod Autoscaling</title><link>https://jwheel.org/blog/2018/03/kubernetes-horizontal-pod-autoscaling/</link><pubDate>Tue, 06 Mar 2018 00:00:00 +0000</pubDate><guid>https://jwheel.org/blog/2018/03/kubernetes-horizontal-pod-autoscaling/</guid><description><![CDATA[<p>Scale is a critical part of how we develop applications in today&rsquo;s world of infrastructure. Now, containers and container orchestration like Docker and <a href="https://jwfblog.wpenginepowered.com/2017/07/introduction-kubernetes-fedora/">Kubernetes</a> make it easier to think about scale. The potential of Kubernetes is fully realized when you have a sudden increase in load: your infrastructure scales up and grows to accommodate it. How does this work? With <strong>Horizontal Pod Autoscaling</strong>, Kubernetes adds more pods when you have more load and drops them once things return to normal.</p>
<p>This article covers Horizontal Pod Autoscaling, what it is, and how to try it out with the <a href="https://kubernetes.io/docs/tutorials/stateless-application/guestbook/">Kubernetes guestbook</a> example. By the end of this article, you will…</p>
<ul>
<li>Understand what Horizontal Pod Autoscaling (HPA) is</li>
<li>Be able to create an HPA in Kubernetes</li>
<li>Create an HPA for the Guestbook and watch it work with <a href="https://github.com/JoeDog/siege">Siege</a></li>
</ul>

<h2 id="what-is-horizontal-pod-autoscaling">What is Horizontal Pod Autoscaling?&nbsp;<a class="hanchor" href="#what-is-horizontal-pod-autoscaling" aria-label="Anchor link for: What is Horizontal Pod Autoscaling?">🔗</a></h2>
<p><a href="https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/">Horizontal Pod Autoscaling</a> (HPA) is a Kubernetes API resource to dynamically grow an environment. To help simplify things, consider it in three pieces:</p>
<ul>
<li><strong>Horizontal</strong>: Think of <em>horizontal</em> growth, i.e. adding more nodes to your available pool (unlike <em>vertical</em>, which would be adding more memory / CPU to your existing nodes)</li>
<li><strong>Pod</strong>: Your deployable units in Kubernetes</li>
<li><strong>Autoscaling</strong>: Automatically scaling out when needed</li>
</ul>
<p>
<figure>
  <img src="/blog/2017/08/k8s-hpa.png" alt="Diagram to explain how Horizontal Pod Autoscaler (HPA) works" loading="lazy">
  <figcaption>Diagram to explain how a Horizontal Pod Autoscaler (HPA) works. From Kubernetes documentation (<a href="https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/" class="bare">https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/</a>).</figcaption>
</figure>
</p>
<p>To help visualize it, imagine you have a <a href="http://flask.pocoo.org/">Python Flask</a> web server that reads and writes data to a <a href="https://redis.io/">Redis</a> back-end. Your web server is the front-end for all of your incoming traffic. You run it with three pods in Kubernetes, with 512MB of RAM and 50m of CPU. Now, suddenly, BuzzFeed writes an article about your app, Kanye West name drops the app in a TV interview, and the president of the United States retweets a link to your site.</p>
<p>Oops.</p>
<p>Now you have a serious problem on your hands, where your tiny application is overloaded. Three pods aren&rsquo;t cutting it anymore. You get woken up at 3:00am to hastily adjust the number of replicas and rapidly scale your infrastructure. While you&rsquo;re wondering <em>how this happened</em>, you also wonder… isn&rsquo;t there an easier way? Could I have avoided this panicked, pre-dawn scaling crisis? Yes, there is! At least, somewhat.</p>

<h4 id="building-to-scale">Building to scale&nbsp;<a class="hanchor" href="#building-to-scale" aria-label="Anchor link for: Building to scale">🔗</a></h4>
<p>By creating and managing your deployments with HPAs, your application grows horizontally to handle the load. As CPU utilization rises, HPAs trigger the addition of more pods automatically. For example, you could create a Horizontal Pod Autoscaler that begins scaling when cumulative CPU utilization reaches 60%. You could also tell it to scale to a maximum of 500 pods, but no fewer than three. So then, when the Apocalypse of Viral Sharing happened to your web application, it could have grown dynamically.</p>
<p>If you want to dive deeper into the technical implementation of HPAs, you can read more in the <a href="https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/">Kubernetes documentation</a>.</p>

<h2 id="create-a-horizontal-pod-autoscaler">Create a Horizontal Pod Autoscaler&nbsp;<a class="hanchor" href="#create-a-horizontal-pod-autoscaler" aria-label="Anchor link for: Create a Horizontal Pod Autoscaler">🔗</a></h2>
<p>Now that you understand how a Horizontal Pod Autoscaler (HPA) is helpful, how do you create one? Like any other resource in Kubernetes, define HPAs in a YAML definition file. Here&rsquo;s a template for getting started.</p>
<pre tabindex="0"><code>---
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: my-app-hpa
  namespace: my-app-space
  labels:
    app: my-app
    tier: frontend
spec:
  scaleTargetRef:
    apiVersion: apps/v1beta1
    kind: Deployment
    name: my-app-deployment
  minReplicas: 2
  maxReplicas: 20
  targetCPUUtilizationPercentage: 60
</code></pre><p>This is the minimal spec you need to deploy an HPA. It&rsquo;s not that different from other Kubernetes resources you may have seen.</p>

<h4 id="explaining-the-configuration">Explaining the configuration&nbsp;<a class="hanchor" href="#explaining-the-configuration" aria-label="Anchor link for: Explaining the configuration">🔗</a></h4>
<p>Let&rsquo;s look at what some of these specific lines do.</p>
<ul>
<li><code>spec.scaleTargetRef.name</code>: Name of resource to scale (e.g. name of a deployment)</li>
<li><code>spec.minReplicas</code>: Minimum number of replicas running when CPU use is minimal</li>
<li><code>spec.maxReplicas</code>: Maximum number of replicas running when CPU use peaks</li>
<li><code>spec.targetCPUUtilizationPercentage</code>: Percentage threshold when HPA begins scaling out pods</li>
</ul>
<p>When starting out, tweak these values based on the amount of traffic you expect to receive or what your budget is. Load testing your application is one way to see the HPAs do their job.</p>
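<p>The scaling decision itself follows a simple rule described in the Kubernetes documentation: the controller compares observed CPU utilization against the target and computes a desired replica count, clamped between <code>minReplicas</code> and <code>maxReplicas</code>. Here is a minimal Python sketch of that rule (illustrative only; the real controller adds tolerances and cooldown windows):</p>

```python
import math

def desired_replicas(current_replicas, current_cpu_pct, target_cpu_pct,
                     min_replicas, max_replicas):
    """Approximate the HPA scaling rule:
    desired = ceil(current * currentMetric / targetMetric),
    clamped to [minReplicas, maxReplicas]."""
    desired = math.ceil(current_replicas * current_cpu_pct / target_cpu_pct)
    return max(min_replicas, min(desired, max_replicas))

# With the template above (min 2, max 20, target 60%),
# four pods running at 120% CPU should roughly double:
print(desired_replicas(4, 120, 60, 2, 20))  # prints 8
```

<p>The clamping is why <code>minReplicas</code> and <code>maxReplicas</code> matter: they bound how far a traffic spike (or a lull) can move your replica count.</p>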

<h2 id="obliterating-the-guestbook">Obliterating the Guestbook&nbsp;<a class="hanchor" href="#obliterating-the-guestbook" aria-label="Anchor link for: Obliterating the Guestbook">🔗</a></h2>
<p>But this guide wouldn&rsquo;t be complete without a live demo to try. You can create one with an existing application and put it to the test. This section assumes you have a running <a href="https://kubernetes.io/docs/tutorials/stateless-application/guestbook/">Guestbook application</a> in your Kubernetes environment. As a quick refresher, the Guestbook is a three-part application:</p>
<ul>
<li>PHP web application for writing messages into a virtual guestbook</li>
<li>Primary Redis node for writing new messages from the web page</li>
<li>Replica Redis nodes for reading the data into the web page</li>
</ul>
<p>We&rsquo;ll add an HPA as a fourth part to scale the PHP web application for new traffic.</p>

<h4 id="create-the-hpa-for-guestbook">Create the HPA for Guestbook&nbsp;<a class="hanchor" href="#create-the-hpa-for-guestbook" aria-label="Anchor link for: Create the HPA for Guestbook">🔗</a></h4>
<p>Now, create a new HPA spec file for the guestbook.</p>
<pre tabindex="0"><code>---
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: guestbook-frontend
  namespace: guestbook
  labels:
    app: guestbook
    env: production
    tier: frontend
spec:
  scaleTargetRef:
    apiVersion: apps/v1beta1
    kind: Deployment
    name: guestbook-frontend
  minReplicas: 2
  maxReplicas: 10
  targetCPUUtilizationPercentage: 75
</code></pre><p>Put this into a file and create the HPA with <code>kubectl</code>.</p>
<pre tabindex="0"><code>$ kubectl apply --record -f guestbook-frontend-hpa.yaml
</code></pre><p>Now, the Horizontal Pod Autoscaler is operational and monitoring the CPU utilization of your deployment.</p>

<h4 id="load-test-with-siege">Load test with Siege&nbsp;<a class="hanchor" href="#load-test-with-siege" aria-label="Anchor link for: Load test with Siege">🔗</a></h4>
<p>To force the HPA into action, we&rsquo;ll use <a href="https://github.com/JoeDog/siege">Siege</a>, an HTTP load testing and benchmark utility. Siege is a multi-threaded load testing tool with a few other capabilities that make it a good option for putting some force onto a simple web app.</p>
<p>First, put various permutations of the URL in a plaintext file. With this file, Siege can run in &ldquo;Internet mode,&rdquo; randomly selecting a URL from the list for each request. This could look like the following&hellip;</p>
<pre tabindex="0"><code>http://my-guestbook.example.com/
http://my-guestbook.example.com/index.html
http://my-guestbook.example.com/guestbook.php
http://my-guestbook.example.com/guestbook.php?cmd=get&amp;key=messages
</code></pre><p>Once this is done, you can fire up Siege to begin load testing. In this case, to get fast results, we&rsquo;ll use 255 concurrent users for five minutes, using Internet and benchmark modes.</p>
<pre tabindex="0"><code>$ siege --verbose --benchmark --internet --concurrent 255 --time 5M --file siege-urls.txt
</code></pre><p>You should see Siege begin to rapidly send requests to your Guestbook application. While the test is in progress, you can observe your CPU utilization begin to climb. Watch it change with <code>watch</code>.</p>
<pre tabindex="0"><code>$ watch -d -n 2 -b -c kubectl get hpa -l app=guestbook
</code></pre><p>During the five-minute load test, you should notice CPU usage rise and new replicas appear. Depending on the deployment&rsquo;s original requests and limits, you will see different results. If nothing seems to happen while testing, try setting the deployment&rsquo;s requests / limits to lower values.</p>

<h2 id="learn-more-about-horizontal-pod-autoscaler">Learn more about Horizontal Pod Autoscaler&nbsp;<a class="hanchor" href="#learn-more-about-horizontal-pod-autoscaler" aria-label="Anchor link for: Learn more about Horizontal Pod Autoscaler">🔗</a></h2>
<p>Horizontal Pod Autoscalers are a stable resource in Kubernetes and are available for you to begin playing around with now. To learn more, read the <a href="https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/">documentation</a> or see another example in the <a href="https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/">official walkthrough</a>.</p>]]></description></item><item><title>Introducing InfluxDB: Time-series database stack</title><link>https://jwheel.org/blog/2017/08/influxdb-time-series-database/</link><pubDate>Tue, 15 Aug 2017 00:00:00 +0000</pubDate><guid>https://jwheel.org/blog/2017/08/influxdb-time-series-database/</guid><description><![CDATA[<p><a href="https://opensource.com/article/17/8/influxdb-time-series-database-stack"><em>Article originally published on Opensource.com.</em></a></p>
<hr>
<p>The needs and demands of infrastructure environments change every year. With time, systems become more complex and involved. But when infrastructure grows and becomes more complex, it&rsquo;s meaningless if we don&rsquo;t understand it and what&rsquo;s happening in our environment. This is why monitoring tools and software are often used in these environments, so operators and administrators can see problems and fix them in real time. But what if we want to predict problems before they happen? Collecting metrics and data about our environment gives us a window into how our infrastructure is performing and lets us make predictions based on data. When we know and understand what&rsquo;s happening, we can prevent problems before they start.</p>
<p>But how do we collect and store this data? For example, if we want to collect data on the CPU usage of 100 machines every ten seconds, we&rsquo;re generating a lot of data. On top of that, what if each machine is running fifteen containers? What if you want to generate data about each of those individual containers too? What about per process? This is where time-series data becomes helpful. Time-series databases store time-series data. But what does that mean? We&rsquo;ll explain all of this and introduce you to InfluxDB, an open source time-series database. By the end of this article, you will understand…</p>
<ul>
<li>What time-series data / databases are</li>
<li>Quick introduction to InfluxDB and the TICK stack</li>
<li>How to install InfluxDB and other tools</li>
</ul>

<h2 id="introducing-time-series-concepts">Introducing time-series concepts&nbsp;<a class="hanchor" href="#introducing-time-series-concepts" aria-label="Anchor link for: Introducing time-series concepts">🔗</a></h2>
<p>
<figure>
  <img src="/blog/2017/07/rbdms-table-example.gif" alt="Example of table, or how a RDBMS like MySQL stores data" loading="lazy">
  <figcaption>Example of table, or how a RDBMS like MySQL stores data. Image from DevShed (<a href="http://www.devshed.com/c/a/php/using-the-active-record-pattern-with-php-and-mysql/" class="bare">http://www.devshed.com/c/a/php/using-the-active-record-pattern-with-php-and-mysql/</a>).</figcaption>
</figure>
</p>
<p>If you&rsquo;re familiar with relational database management software (RDBMS), like MySQL, <a href="http://www.informit.com/articles/article.aspx?p=377067&amp;seqNum=3">tables, columns, and primary keys</a> are familiar terms. Everything is like a spreadsheet, with columns and rows. Some data might be unique; other parts might be the same as other rows. RDBMSs like MySQL are widely used and are great for <strong>reliable transactions</strong> that follow <a href="https://en.wikipedia.org/wiki/ACID">ACID</a> (Atomicity, Consistency, Isolation, Durability) properties.</p>
<p>With relational database software, you&rsquo;re usually working with data you could model in a table. You might update certain data by overwriting and replacing it. But what if you&rsquo;re collecting data on something that generates a lot of it and you want to watch it change over time? Take a self-driving car. The car is constantly collecting information about its environment. It takes this data and analyzes changes over time to behave correctly. The amount of data might be tens of gigabytes an hour. While you could use a relational database to collect this data, it isn&rsquo;t built for this. When it comes to scaling and usability of the data you&rsquo;re collecting, an RDBMS isn&rsquo;t the best tool for the job.</p>

<h4 id="why-time-series-is-a-good-fit">Why time-series is a good fit&nbsp;<a class="hanchor" href="#why-time-series-is-a-good-fit" aria-label="Anchor link for: Why time-series is a good fit">🔗</a></h4>
<p>And this is where time-series data makes sense. Let&rsquo;s say you&rsquo;re collecting data about city traffic, temperatures from farming equipment, or the production rate of an assembly line. Instead of going into a table with rows and columns, imagine pushing multiple rows of data that are uniquely sorted by timestamp. The visual below might help make sense of this.</p>
<p>
<figure>
  <img src="/blog/2017/07/picture-the-cloud.gif" alt="Imagine rows and rows of data, uniquely sorted by timestamps" loading="lazy">
  <figcaption>Imagine rows and rows of data, uniquely sorted by timestamps. Image from Timescale (<a href="https://blog.timescale.com/what-the-heck-is-time-series-data-and-why-do-i-need-a-time-series-database-dcf3b1b18563" class="bare">https://blog.timescale.com/what-the-heck-is-time-series-data-and-why-do-i-need-a-time-series-database-dcf3b1b18563</a>).</figcaption>
</figure>
</p>
<p>Having the data in this format makes it easier to track and watch change over time. When data accumulates, you can see how something behaved in the past, how it&rsquo;s behaving now, and how it might behave in the future. Your options to make smarter data decisions expand!</p>
<p>Curious how the data is stored and formatted? It depends on the time-series database (TSDB) you use. InfluxDB stores the data in the <a href="https://docs.influxdata.com/influxdb/v1.3/write_protocols/line_protocol_tutorial/">Line Protocol</a> format. <a href="https://docs.influxdata.com/influxdb/v1.3/tools/api/#query">Queries</a> return the data in JSON.</p>
<p>
<figure>
  <img src="/blog/2017/07/influxdb-data-format.jpg" alt="How InfluxDB stores time-series data in Line Protocol" loading="lazy">
  <figcaption>How InfluxDB stores time-series data in Line Protocol (<a href="https://docs.influxdata.com/influxdb/v1.3/write_protocols/line_protocol_tutorial/" class="bare">https://docs.influxdata.com/influxdb/v1.3/write_protocols/line_protocol_tutorial/</a>). Image from Roberto Gaudenzi (<a href="https://www.slideshare.net/RobertoGaudenzi1/introduction-to-influx-db" class="bare">https://www.slideshare.net/RobertoGaudenzi1/introduction-to-influx-db</a>).</figcaption>
</figure>
</p>
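<p>To make the Line Protocol layout concrete, here is a small Python sketch that formats one data point the way Line Protocol arranges it: measurement name, comma-separated tags, a space, the fields, a space, and a nanosecond timestamp. The measurement, tag, and field names here are hypothetical, and real client libraries also handle escaping and type suffixes:</p>

```python
def to_line_protocol(measurement, tags, fields, timestamp_ns):
    """Format one point as InfluxDB Line Protocol:
    measurement,tag=val field=val timestamp"""
    tag_str = ",".join(f"{k}={v}" for k, v in sorted(tags.items()))
    field_str = ",".join(f"{k}={v}" for k, v in sorted(fields.items()))
    return f"{measurement},{tag_str} {field_str} {timestamp_ns}"

# A hypothetical CPU sample from one host:
point = to_line_protocol(
    "cpu", {"host": "server01", "region": "us-west"},
    {"usage_idle": 92.6}, 1434055562000000000)
print(point)
# prints: cpu,host=server01,region=us-west usage_idle=92.6 1434055562000000000
```

<p>Notice that tags describe <em>what</em> was measured while fields carry the measured values, and the timestamp is what makes each row unique.</p>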
<p>If you&rsquo;re still confused or trying to understand time-series data or why you would want to use it over another solution, you can read an excellent, in-depth explanation from <a href="https://blog.timescale.com/what-the-heck-is-time-series-data-and-why-do-i-need-a-time-series-database-dcf3b1b18563">Timescale&rsquo;s blog</a> or <a href="https://www.influxdata.com/modern-time-series-platform/">InfluxData&rsquo;s blog</a>.</p>

<h2 id="influxdb-a-time-series-database">InfluxDB: A time-series database&nbsp;<a class="hanchor" href="#influxdb-a-time-series-database" aria-label="Anchor link for: InfluxDB: A time-series database">🔗</a></h2>
<p><a href="https://www.influxdata.com/time-series-platform/influxdb/">InfluxDB</a> is an open source time-series database software developed by <a href="https://www.influxdata.com/">InfluxData</a>. It&rsquo;s written in Go and compiles to a single binary, which means you can start using it without installing any dependencies. It supports multiple data ingestion protocols, such as <a href="https://www.influxdata.com/time-series-platform/telegraf/">Telegraf</a> (also from InfluxData), <a href="https://graphiteapp.org/">Graphite</a>, <a href="https://collectd.org/">collectd</a>, and <a href="http://opentsdb.net/">OpenTSDB</a>. This leaves you with flexible options for how you want to collect data and where you&rsquo;re pulling it from. It&rsquo;s also one of the <a href="https://db-engines.com/en/ranking/time&#43;series&#43;dbms">fastest-growing</a> time-series database software available. You can find the source code for InfluxDB on <a href="https://github.com/influxdata/influxdb">GitHub</a>.</p>
<p>This article will focus on three tools in InfluxData&rsquo;s TICK stack for how you can build a time-series database and begin collecting and processing data.</p>

<h4 id="tick-stack">TICK stack&nbsp;<a class="hanchor" href="#tick-stack" aria-label="Anchor link for: TICK stack">🔗</a></h4>
<p>InfluxData creates a platform based on four open source projects that work and play well with each other for time-series data. When used together, you can collect, store, process, and view the data easily. The four pieces of the platform are known as the <a href="https://www.influxdata.com/time-series-platform/">TICK stack</a>. This stands for…</p>
<ul>
<li><strong>T</strong>elegraf: Plugin-driven server agent for collecting / reporting metrics</li>
<li><strong>I</strong>nfluxDB: Scalable data store for metrics, events, and real-time analytics</li>
<li><strong>C</strong>hronograf: Monitoring / visualization UI for TICK stack (not covered in this article)</li>
<li><strong>K</strong>apacitor: Framework for processing, monitoring, and alerting on time-series data</li>
</ul>
<p>These tools work and integrate well with the other pieces by design. However, it&rsquo;s also easy to substitute one piece out for another tool of your choice. For this article, we&rsquo;ll explore three parts of the TICK stack: InfluxDB, Telegraf, and Kapacitor.</p>
<p>
<figure>
  <img src="/blog/2017/07/tick-stack-diagram.png" alt="Diagram of how the different components of the InfluxDB TICK stack connect with each other" loading="lazy">
  <figcaption>Diagram of how the different components of the TICK stack connect with each other. From influxdata.com (<a href="https://www.influxdata.com/time-series-platform/" class="bare">https://www.influxdata.com/time-series-platform/</a>).</figcaption>
</figure>
</p>

<h4 id="influxdb"><a href="https://docs.influxdata.com/influxdb/">InfluxDB</a>&nbsp;<a class="hanchor" href="#influxdb" aria-label="Anchor link for: InfluxDB">🔗</a></h4>
<p>As mentioned before, InfluxDB is the time-series database (TSDB) of the TICK stack. Data collected from your environment is stored into InfluxDB. There are a few things that stand out about InfluxDB from other time-series databases.</p>

<h6 id="emphasis-on-performance">Emphasis on performance&nbsp;<a class="hanchor" href="#emphasis-on-performance" aria-label="Anchor link for: Emphasis on performance">🔗</a></h6>
<p>InfluxDB is designed with performance as one of the top priorities. This allows you to use data quickly and easily, even under heavy loads. To do this, InfluxDB focuses on quickly ingesting the data and using compression to keep it manageable. To query and write data, it uses an HTTP(S) API.</p>
<p>The performance numbers are noteworthy considering the amount of data InfluxDB is capable of handling: up to a million data points per second, with timestamp precision down to the nanosecond.</p>

<h6 id="sql-like-queries">SQL-like queries&nbsp;<a class="hanchor" href="#sql-like-queries" aria-label="Anchor link for: SQL-like queries">🔗</a></h6>
<p>If you&rsquo;re familiar with SQL-like syntax, querying data from InfluxDB will feel familiar. It uses its own SQL-like syntax, <a href="https://docs.influxdata.com/influxdb/v1.3/query_language">InfluxQL</a>, for queries. As an example, imagine you&rsquo;re collecting data on used disk space on a machine. If you wanted to see that data, you could write a query that might look like this.</p>
<pre tabindex="0"><code>SELECT mean(diskspace_used) as mean_disk_used
FROM disk_stats
WHERE time() &gt;= 3m
GROUP BY time(10d)
</code></pre><p>If you&rsquo;re familiar with SQL syntax, this won&rsquo;t feel too different. The statement above pulls the mean values of used disk space over a three-month period, grouped into ten-day intervals.</p>

<h6 id="downsampling--data-retention">Downsampling / data retention&nbsp;<a class="hanchor" href="#downsampling--data-retention" aria-label="Anchor link for: Downsampling / data retention">🔗</a></h6>
<p>When working with large amounts of data, storing it becomes a concern. Over time, it can accumulate to huge sizes. With InfluxDB, you can <strong>downsample</strong> data into less precise but smaller metrics that you can store for longer periods of time. <strong>Data retention policies</strong> enable you to do this.</p>
<p>For example, pretend you have sensors collecting data on the amount of RAM in use on a number of machines. You might collect metrics on the amount of memory in use by multiple users, the system, cached memory, and more. While it might make sense to hang on to that data for thirty days to watch what&rsquo;s happening, after thirty days, you might not need it at that precision. Instead, you might only want the ratio of total memory to memory in use. Using data retention policies, you can tell InfluxDB to hang on to the precise data for all the different usages for thirty days. After thirty days, you can average the data into something less precise, and hold on to that for six months, forever, or however long you like. This compromise strikes a balance between keeping historical data and reducing disk usage.</p>
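<p>The idea behind downsampling is easy to picture in code. This Python sketch (a simplified illustration, not how InfluxDB implements it) averages raw timestamped samples into fixed-width time buckets, trading precision for storage:</p>

```python
def downsample(points, bucket_width):
    """Average (timestamp, value) samples into fixed-width time buckets,
    keyed by the start of each bucket -- the basic idea behind a
    downsampling retention policy."""
    buckets = {}
    for ts, value in points:
        # Snap each timestamp down to the start of its bucket.
        buckets.setdefault(ts - ts % bucket_width, []).append(value)
    return {start: sum(vals) / len(vals)
            for start, vals in sorted(buckets.items())}

# Six one-second samples averaged into two three-second buckets:
samples = [(0, 10.0), (1, 20.0), (2, 30.0), (3, 40.0), (4, 50.0), (5, 60.0)]
print(downsample(samples, 3))  # prints {0: 20.0, 3: 50.0}
```

<p>Six rows become two, yet the overall trend is preserved, which is exactly the trade a retention policy makes for older data.</p>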

<h4 id="telegraf"><a href="https://docs.influxdata.com/telegraf/">Telegraf</a>&nbsp;<a class="hanchor" href="#telegraf" aria-label="Anchor link for: Telegraf">🔗</a></h4>
<p>If InfluxDB is where all of your data is going, you need a way to collect and gather the data first. Telegraf is a metric collection daemon that gathers various metrics from system components, IoT sensors, and more. It&rsquo;s <a href="https://github.com/influxdata/telegraf">open source</a> and written completely in Go. Like InfluxDB, Telegraf is also written by the InfluxData team and is built to work with InfluxDB. It also includes support for different databases, such as MySQL / MariaDB, MongoDB, Redis, and more. You can read more about it on <a href="https://www.influxdata.com/time-series-platform/telegraf/">InfluxData&rsquo;s website</a>.</p>
<p>Telegraf is modular and heavily based on plugins. This means Telegraf can be as lean and minimal, or as full and complex, as you need. Out of the box, it supports over a hundred plugins for various input sources. This includes Apache, Ceph, Docker, IPTables, Kubernetes, NGINX, and Varnish, just to name a few. You can see all the plugins, including processing and output plugins, in their <a href="https://github.com/influxdata/telegraf#input-plugins">README</a>.</p>
<p>Even if you&rsquo;re not using InfluxDB as a data store, you may find Telegraf useful as a way to collect this data and information about your systems or sensors.</p>

<h4 id="kapacitor"><a href="https://docs.influxdata.com/kapacitor/">Kapacitor</a>&nbsp;<a class="hanchor" href="#kapacitor" aria-label="Anchor link for: Kapacitor">🔗</a></h4>
<p>Now we have a way to collect and store our data. But what about doing things with it? Kapacitor is the piece of the stack that lets you process and work with the data in a few different ways. It supports both stream and batch data. Stream data means you can actively work and shape the data in real-time, even before it makes it to your data store. Batch data means you retroactively perform actions on samples, or batches, of the data.</p>
<p>One of the biggest pluses for Kapacitor is that it enables you to have real-time alerts for events happening in your environment. CPU usage overloading or temperatures too high? You can set up several different alert systems, including but not limited to email, triggering a command, Slack, HipChat, OpsGenie, and many more. You can see the full list in the <a href="https://docs.influxdata.com/kapacitor/v1.3/nodes/alert_node/">documentation</a>.</p>
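<p>To give you a feel for it, alerts are defined in Kapacitor&rsquo;s TICKscript language. This hypothetical stream task (the e-mail address is a placeholder, and e-mail alerts assume SMTP is configured in <code>kapacitor.conf</code>) fires a critical alert whenever idle CPU drops below 10%:</p>
<pre tabindex="0"><code>stream
    |from()
        .measurement(&#39;cpu&#39;)
    |alert()
        .crit(lambda: &#34;usage_idle&#34; &lt; 10)
        .email(&#39;you@example.com&#39;)
</code></pre>
<p>Saved as <code>cpu_alert.tick</code>, the task is loaded with <code>kapacitor define cpu_alert -type stream -tick cpu_alert.tick -dbrp telegraf.autogen</code> and activated with <code>kapacitor enable cpu_alert</code>.</p>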
<p>Like the previous tools, Kapacitor is also <a href="https://github.com/influxdata/kapacitor">open source</a> and you can read more about the project in their <a href="https://github.com/influxdata/kapacitor/blob/master/README.md">README</a>.</p>

<h2 id="installing-the-tick-stack">Installing the TICK stack&nbsp;<a class="hanchor" href="#installing-the-tick-stack" aria-label="Anchor link for: Installing the TICK stack">🔗</a></h2>
<p>Packages are available for nearly every distribution, and you can install them from the command line. Follow the instructions for your distribution below.</p>

<h4 id="fedora">Fedora&nbsp;<a class="hanchor" href="#fedora" aria-label="Anchor link for: Fedora">🔗</a></h4>
<pre tabindex="0"><code>sudo dnf install https://dl.influxdata.com/influxdb/releases/influxdb-1.3.1.x86_64.rpm \
https://dl.influxdata.com/telegraf/releases/telegraf-1.3.4-1.x86_64.rpm \
https://dl.influxdata.com/kapacitor/releases/kapacitor-1.3.1.x86_64.rpm
</code></pre>
<h4 id="centos-7--rhel-7">CentOS 7 / RHEL 7&nbsp;<a class="hanchor" href="#centos-7--rhel-7" aria-label="Anchor link for: CentOS 7 / RHEL 7">🔗</a></h4>
<pre tabindex="0"><code>sudo yum install https://dl.influxdata.com/influxdb/releases/influxdb-1.3.1.x86_64.rpm \
https://dl.influxdata.com/telegraf/releases/telegraf-1.3.4-1.x86_64.rpm \
https://dl.influxdata.com/kapacitor/releases/kapacitor-1.3.1.x86_64.rpm
</code></pre>
<h4 id="ubuntu--debian">Ubuntu / Debian&nbsp;<a class="hanchor" href="#ubuntu--debian" aria-label="Anchor link for: Ubuntu / Debian">🔗</a></h4>
<pre tabindex="0"><code>wget https://dl.influxdata.com/influxdb/releases/influxdb_1.3.1_amd64.deb \
https://dl.influxdata.com/telegraf/releases/telegraf_1.3.4-1_amd64.deb \
https://dl.influxdata.com/kapacitor/releases/kapacitor_1.3.1_amd64.deb
sudo dpkg -i influxdb_1.3.1_amd64.deb telegraf_1.3.4-1_amd64.deb kapacitor_1.3.1_amd64.deb
</code></pre>
<h4 id="other-distributions">Other distributions&nbsp;<a class="hanchor" href="#other-distributions" aria-label="Anchor link for: Other distributions">🔗</a></h4>
<p>For help with other distributions, see the <a href="https://portal.influxdata.com/downloads">Downloads</a> page.</p>
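<p>Whichever distribution you used, the packages ship systemd units named after each service. Once installed, start and enable the daemons, then check that they came up cleanly:</p>
<pre tabindex="0"><code>sudo systemctl start influxdb telegraf kapacitor
sudo systemctl enable influxdb telegraf kapacitor
systemctl status influxdb telegraf kapacitor
</code></pre>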

<h2 id="see-the-data-be-the-data">See the data, be the data&nbsp;<a class="hanchor" href="#see-the-data-be-the-data" aria-label="Anchor link for: See the data, be the data">🔗</a></h2>
<p>Now that you have the tools installed, you can start experimenting with them. There&rsquo;s plenty of upstream documentation on all three projects. You can find the docs here:</p>
<ul>
<li><a href="https://docs.influxdata.com/influxdb/">InfluxDB documentation</a></li>
<li><a href="https://docs.influxdata.com/telegraf/">Telegraf documentation</a></li>
<li><a href="https://docs.influxdata.com/kapacitor/">Kapacitor documentation</a></li>
</ul>
<p>Additionally, for more help, you can visit the <a href="https://community.influxdata.com/">InfluxData community forums</a>. Happy hacking!</p>]]></description></item><item><title>Sign at the line: Deploying an app to CoreOS Tectonic</title><link>https://jwheel.org/blog/2017/08/deploying-app-tectonic/</link><pubDate>Fri, 04 Aug 2017 00:00:00 +0000</pubDate><guid>https://jwheel.org/blog/2017/08/deploying-app-tectonic/</guid><description><![CDATA[<p><em>This is a short series to introduce Kubernetes, what it does, and how to experiment with it on Fedora. This is a beginner-oriented series to help introduce some higher level concepts and give examples of using it on Fedora. In the first post, we covered <a href="https://fedoramagazine.org/introduction-kubernetes-fedora/">key concepts in Kubernetes</a>. The second post showed how to build a <a href="https://fedoramagazine.org/minikube-kubernetes/">single-node Kubernetes deployment</a> on your own computer. The last post and this post build on top of the Fedora Magazine series. The third post introduced how to <a href="https://jwfblog.wpenginepowered.com/2017/07/tectonic-amazon-web-services-aws/">deploy CoreOS Tectonic</a> to Amazon Web Services (AWS). This fourth post teaches how to deploy a simple web application to your Tectonic installation.</em></p>
<hr>
<p>Welcome back to the <strong>Kubernetes and Fedora</strong> series. Each week, we build on the previous articles in the series to help introduce you to using Kubernetes. This article picks up where we left off last time, when you installed Tectonic on Amazon Web Services (AWS). By the end of this article, you will…</p>
<ul>
<li>Start up <a href="https://redis.io/">Redis</a> master and slave pods</li>
<li>Start a front-end pod that interacts with the Redis pods</li>
<li>Deploy a simple web app for all of your friends to leave you messages</li>
</ul>
<p>Compared to previous articles, this article will be a little more hands-on. Also like before, this is based on an excellent tutorial in the <a href="https://github.com/kubernetes/kubernetes/tree/master/examples/guestbook">upstream Kubernetes documentation</a>. Let&rsquo;s get started!</p>

<h2 id="pre-requisites">Pre-requisites&nbsp;<a class="hanchor" href="#pre-requisites" aria-label="Anchor link for: Pre-requisites">🔗</a></h2>
<p>This tutorial assumes you followed the <a href="https://fedoramagazine.org/minikube-kubernetes/">Minikube how-to</a> earlier in this series and that you already <a href="https://fedoramagazine.org/tectonic-amazon-web-services-aws/">have a Tectonic installation</a> running (doesn&rsquo;t have to be on AWS). In case you&rsquo;re jumping in now, make sure you have the Kubernetes client tools installed on your Fedora system, like <code>kubectl</code>. If not, you can install them now.</p>
<pre tabindex="0"><code>$ sudo dnf install kubernetes-client
</code></pre>
<h2 id="configure-kubectl-for-tectonic">Configure <code>kubectl</code> for Tectonic&nbsp;<a class="hanchor" href="#configure-kubectl-for-tectonic" aria-label="Anchor link for: Configure kubectl for Tectonic">🔗</a></h2>
<p>To use <code>kubectl</code> with your Tectonic installation, you need to have a valid configuration in <code>~/.kube/config</code> for your cluster. This is how <code>kubectl</code> knows where and how to talk to Tectonic. To get these values, first log into the Tectonic Console you installed.</p>
<ol>
<li>Click <em>username</em> (usually <em>admin</em>) &gt; <em>My Account</em> on the bottom left.</li>
<li>Click <em>Download Configuration</em>.</li>
<li>When the <em>Set Up kubectl</em> window opens, click <em>Verify Identity</em>.</li>
<li>Enter your username and password, and click <em>Login</em>.</li>
<li>From the <em>Login Successful</em> screen, copy the provided code.</li>
<li>Switch back to Tectonic and enter the code in the field.</li>
</ol>
<p>Now you will be able to download <code>kubectl-config</code> from Tectonic. There are two ways to proceed from here.</p>

<h4 id="add-a-new-configuration">Add a new configuration&nbsp;<a class="hanchor" href="#add-a-new-configuration" aria-label="Anchor link for: Add a new configuration">🔗</a></h4>
<p>If this is your first time using <code>kubectl</code>, your configuration is likely empty. If it&rsquo;s empty or you don&rsquo;t care about overwriting an old configuration, you can run the following commands to add the configuration.</p>
<pre tabindex="0"><code>$ mkdir -p ~/.kube/
$ mv ~/Downloads/kubectl-config ~/.kube/config
$ chmod 600 ~/.kube/config
</code></pre>
<h4 id="append-to-an-existing-configuration">Append to an existing configuration&nbsp;<a class="hanchor" href="#append-to-an-existing-configuration" aria-label="Anchor link for: Append to an existing configuration">🔗</a></h4>
<p>If you already have a configuration, like from Minikube, you might not want to wipe it all out. In this case, you can merge the files manually together. You&rsquo;ll need to copy the <code>clusters</code>, <code>users</code>, and <code>contexts</code> from the Tectonic configuration into your existing one. The benefit of doing this is that you&rsquo;ll be able to change contexts to switch from one cluster to another.</p>
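<p>One low-risk way to do the merge, assuming the downloaded file landed at <code>~/Downloads/kubectl-config</code> (adjust the path to wherever your browser saved it), is to let <code>kubectl</code> flatten both files together:</p>
<pre tabindex="0"><code>$ KUBECONFIG=~/.kube/config:~/Downloads/kubectl-config \
      kubectl config view --flatten &gt; /tmp/merged-config
$ mv /tmp/merged-config ~/.kube/config
$ chmod 600 ~/.kube/config
</code></pre>
<p>Writing to a temporary file first matters: redirecting straight into <code>~/.kube/config</code> would truncate the file before <code>kubectl</code> reads it.</p>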

<h4 id="test-your-configuration">Test your configuration&nbsp;<a class="hanchor" href="#test-your-configuration" aria-label="Anchor link for: Test your configuration">🔗</a></h4>
<p>Once you&rsquo;ve finished your configuration, test that it works.</p>
<pre tabindex="0"><code>$ kubectl config use-context tectonic       # if you have multiple contexts in config
$ kubectl get nodes
NAME                                        STATUS    AGE
ip-10-0-0-59.us-east-2.compute.internal     Ready     1d
ip-10-0-23-239.us-east-2.compute.internal   Ready     1d
ip-10-0-44-211.us-east-2.compute.internal   Ready     1d
ip-10-0-61-218.us-east-2.compute.internal   Ready     1d
ip-10-0-67-239.us-east-2.compute.internal   Ready     1d
ip-10-0-95-51.us-east-2.compute.internal    Ready     1d
</code></pre><p>Huzzah! Now we&rsquo;re ready to get to work.</p>

<h2 id="getting-the-deployment-and-service-files">Getting the deployment and service files&nbsp;<a class="hanchor" href="#getting-the-deployment-and-service-files" aria-label="Anchor link for: Getting the deployment and service files">🔗</a></h2>
<p>All of the example files come from the official Kubernetes GitHub repo. You can find them in the <a href="https://github.com/kubernetes/kubernetes/tree/master/examples/guestbook">Guestbook example</a>. To get started, create a new directory and download all of the files.</p>
<pre tabindex="0"><code>$ wget https://raw.githubusercontent.com/kubernetes/kubernetes/master/examples/guestbook/redis-{master,slave}-{deployment,service}.yaml \
       https://raw.githubusercontent.com/kubernetes/kubernetes/master/examples/guestbook/frontend-{deployment,service}.yaml
</code></pre><p>We&rsquo;ll explain what each of these does in the following steps. Each step starts with the command to run, followed by a short explanation of what&rsquo;s actually happening.</p>

<h2 id="start-the-redis-master">Start the Redis master&nbsp;<a class="hanchor" href="#start-the-redis-master" aria-label="Anchor link for: Start the Redis master">🔗</a></h2>
<pre tabindex="0"><code>$ kubectl create -f redis-master-service.yaml
service &#34;redis-master&#34; created
$ kubectl create -f redis-master-deployment.yaml
deployment &#34;redis-master&#34; created
</code></pre>
<h4 id="define-the-deployment">Define the deployment&nbsp;<a class="hanchor" href="#define-the-deployment" aria-label="Anchor link for: Define the deployment">🔗</a></h4>
<p>The <code>redis-master-deployment.yaml</code> file downloaded earlier defines the deployment and its characteristics. In this case, we have one pod that runs the Redis master in a container. Since we&rsquo;re using a deployment, that means if our pod goes down, Kubernetes will <strong>spin up a new pod</strong> to replace it. Note that in this example, if the pod <em>did</em> go down, there is a window for potential data loss until the new pod replaces the old one (the Redis master is not highly available, since only a single instance of it runs).</p>

<h4 id="define-the-service">Define the service&nbsp;<a class="hanchor" href="#define-the-service" aria-label="Anchor link for: Define the service">🔗</a></h4>
<p>Our service in this example is a <strong>named load balancer</strong> that <strong>proxies traffic</strong> across one or many containers. Even though we only have one Redis master pod, we still want to use a service. It gives us a stable, named route to the master, even though the underlying pod&rsquo;s IP address is dynamic (or elastic).</p>
<p>Labeling the pods is important in this case, as Kubernetes will use the pods&rsquo; labels to determine which pods receive the traffic sent to the service, and load balance it accordingly.</p>

<h4 id="create-the-service">Create the service&nbsp;<a class="hanchor" href="#create-the-service" aria-label="Anchor link for: Create the service">🔗</a></h4>
<p>The next important step is to create the service. Note that we&rsquo;re doing this <em>before</em> we create the deployment. It&rsquo;s best practice to create the service first, so the scheduler can spread the pods that back the service as the deployments supporting your application are created.</p>
<p>After creating the service, you can check its status by running this command. You should see similar output.</p>
<pre tabindex="0"><code>$ kubectl get services
NAME              CLUSTER-IP       EXTERNAL-IP       PORT(S)       AGE
redis-master      10.0.76.248      &lt;none&gt;            6379/TCP      1s
</code></pre><p>Now your Redis master service is up and running! The next piece to look at is the Redis master deployment.</p>
<p>If you look at the service configuration file, you&rsquo;ll notice <code>port</code> and <code>targetPort</code> are two defined variables. Once everything is up and running, these will be important for determining how the traffic from the slaves to the masters is routed.</p>
<ol>
<li>Redis slave connects to <code>port</code> on Redis master service</li>
<li>The service forwards traffic from its <code>port</code> to the <code>targetPort</code> on the pods it routes to</li>
</ol>
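<p>For reference, the important parts of <code>redis-master-service.yaml</code> look roughly like this (abridged from the upstream example):</p>
<pre tabindex="0"><code>apiVersion: v1
kind: Service
metadata:
  name: redis-master
  labels:
    app: redis
    role: master
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: redis
    role: master
    tier: backend
</code></pre>
<p>Any pod carrying the labels listed under <code>selector</code> receives a share of the traffic sent to the service&rsquo;s port 6379.</p>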

<h4 id="create-the-deployment">Create the deployment&nbsp;<a class="hanchor" href="#create-the-deployment" aria-label="Anchor link for: Create the deployment">🔗</a></h4>
<p>The deployment command we ran earlier created the Redis master pod in the cluster. To see the deployment and its pods, run the following commands.</p>
<pre tabindex="0"><code>$ kubectl get deployments
NAME           DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
redis-master   1         1         1            1           27s
</code></pre><pre tabindex="0"><code>$ kubectl get pods
NAME                            READY     STATUS    RESTARTS   AGE
redis-master-2353460263-1ecey   1/1       Running   0          1m
...
</code></pre><p>You should see all of the pods in your cluster so far. For now, that&rsquo;s just the Redis master. Let&rsquo;s give it some friends!</p>

<h2 id="start-the-redis-slaves">Start the Redis slaves&nbsp;<a class="hanchor" href="#start-the-redis-slaves" aria-label="Anchor link for: Start the Redis slaves">🔗</a></h2>
<pre tabindex="0"><code>$ kubectl create -f redis-slave-service.yaml
service &#34;redis-slave&#34; created
$ kubectl create -f redis-slave-deployment.yaml
deployment &#34;redis-slave&#34; created
</code></pre>
<h4 id="defining-the-deployment">Defining the deployment&nbsp;<a class="hanchor" href="#defining-the-deployment" aria-label="Anchor link for: Defining the deployment">🔗</a></h4>
<p>In the configuration file, we defined two replicas, unlike the master. This tells Kubernetes that the desired number of pods running at all times is two. If one of your pods goes down, Kubernetes automatically creates a new one to support the application. If you want, you can try killing the Docker process for one of your pods to see it happen in real time.</p>
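<p>The line in <code>redis-slave-deployment.yaml</code> doing the work here is simply:</p>
<pre tabindex="0"><code>spec:
  replicas: 2
</code></pre>
<p>You can also change the count on a live deployment with <code>kubectl scale deployment/redis-slave --replicas=3</code>, and Kubernetes will converge on the new number.</p>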

<h2 id="start-the-guestbook-front-end">Start the guestbook front-end&nbsp;<a class="hanchor" href="#start-the-guestbook-front-end" aria-label="Anchor link for: Start the guestbook front-end">🔗</a></h2>
<pre tabindex="0"><code>$ kubectl create -f frontend-service.yaml
service &#34;frontend&#34; created
$ kubectl create -f frontend-deployment.yaml
deployment &#34;frontend&#34; created
</code></pre><p>The front-end is a PHP application with an AJAX interface and Angular-based UI. When using the form on the front-end application, it talks to the Redis master or slave, depending on if it&rsquo;s reading or writing to Redis. Again, we&rsquo;re deploying the front-end with multiple replicas. In this case, there will be three pods to support the front-end.</p>
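<p>One detail worth knowing: the external AWS domain you&rsquo;ll see in the next step only appears because the front-end service is of type <code>LoadBalancer</code>. In the upstream <code>frontend-service.yaml</code> this line ships commented out, with a note to uncomment it on cloud providers (abridged):</p>
<pre tabindex="0"><code>spec:
  type: LoadBalancer
  ports:
  - port: 80
</code></pre>
<p>On AWS, this asks Kubernetes to provision an Elastic Load Balancer with a public DNS name in front of the three front-end pods.</p>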

<h2 id="say-hello">Say hello!&nbsp;<a class="hanchor" href="#say-hello" aria-label="Anchor link for: Say hello!">🔗</a></h2>
<p>Once you&rsquo;ve finished deploying everything, your web app should now be accessible! To get the full domain from AWS, run this command to figure out where to look.</p>
<pre tabindex="0"><code>$ kubectl get deploy/frontend svc/frontend -o wide
NAME           CLUSTER-IP   EXTERNAL-IP                                                             PORT(S)        AGE       SELECTOR
svc/frontend   10.3.0.175   aaebd8247ef2311e6a045021d1620193-54019671.us-east-2.elb.amazonaws.com   80:31020/TCP   1m        k8s-app=guestbook,tier=frontend

NAME              DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
deploy/frontend   3         3         3            3           1m
</code></pre><p>Congratulations, we&rsquo;re all finished!</p>

<h2 id="cleaning-up">Cleaning up&nbsp;<a class="hanchor" href="#cleaning-up" aria-label="Anchor link for: Cleaning up">🔗</a></h2>
<p>Once you&rsquo;re finished or when you want to stop running the guestbook, it&rsquo;s easy to get rid of the deployments and services we created. Using labels, all the deployments and services can be deleted with one command.</p>
<pre tabindex="0"><code>$ kubectl delete deployments,services -l &#34;app in (redis, guestbook)&#34;
</code></pre><p>And now your guestbook application is offline. (It was nice while it lasted!)</p>

<h2 id="learn-more-about-kubernetes-and-tectonic">Learn more about Kubernetes and Tectonic&nbsp;<a class="hanchor" href="#learn-more-about-kubernetes-and-tectonic" aria-label="Anchor link for: Learn more about Kubernetes and Tectonic">🔗</a></h2>
<p>If you want to explore more about Kubernetes, you can read some of the earlier articles in this series. You can also read the original tutorial published by Kubernetes <a href="https://github.com/kubernetes/kubernetes/tree/master/examples/guestbook">on GitHub</a>. Additionally, the upstream documentation for <a href="https://kubernetes.io/docs/home/">Kubernetes</a> and <a href="https://coreos.com/tectonic/docs/latest/">Tectonic</a> is thorough and can help answer more advanced questions.</p>
<p>Questions, Tectonic stories, or tips for beginners? Add your comments below.</p>]]></description></item><item><title>Deploy CoreOS Tectonic to Amazon Web Services (AWS)</title><link>https://jwheel.org/blog/2017/07/tectonic-amazon-web-services-aws/</link><pubDate>Fri, 28 Jul 2017 00:00:00 +0000</pubDate><guid>https://jwheel.org/blog/2017/07/tectonic-amazon-web-services-aws/</guid><description><![CDATA[<p><em>This is a short series to introduce Kubernetes, what it does, and how to experiment with it on Fedora. This is a beginner-oriented series to help introduce some higher level concepts and give examples of using it on Fedora. In the first post, we covered <a href="https://fedoramagazine.org/introduction-kubernetes-fedora/">key concepts in Kubernetes</a>. The second post showed how to build a <a href="https://fedoramagazine.org/minikube-kubernetes/">single-node Kubernetes deployment</a> on your own computer. This post builds on top of the Fedora Magazine series by showing how to deploy CoreOS Tectonic to Amazon Web Services (AWS).</em></p>
<hr>
<p>Welcome back to the <strong>Kubernetes and Fedora</strong> series. Each week, we build on the previous articles in the series to help introduce you to using Kubernetes. This article takes off from running Kubernetes on your own hardware and moves us one step closer to the cloud. By the end of this article, you will…</p>
<ul>
<li>Understand what CoreOS Tectonic is</li>
<li>Set up Amazon Web Services (AWS) for Tectonic</li>
<li>Deploy Tectonic to AWS</li>
</ul>
<p>This article is also based on the excellent tutorial provided in the <a href="https://coreos.com/tectonic/docs/latest/tutorials/creating-aws.html">CoreOS documentation</a>. Let&rsquo;s get started!</p>

<h2 id="what-is-tectonic">What is Tectonic?&nbsp;<a class="hanchor" href="#what-is-tectonic" aria-label="Anchor link for: What is Tectonic?">🔗</a></h2>
<p>In the <a href="https://fedoramagazine.org/introduction-kubernetes-fedora/">first article</a>, we explained some of the key concepts of Kubernetes and why it&rsquo;s useful. Kubernetes automates the deployment and setup of your infrastructure across the three layers (users, masters, nodes). If you&rsquo;re working on your own at a small scale, Kubernetes itself can be plenty to meet your needs. However, there is still a decent amount of human involvement in managing the different pieces of Kubernetes. If you&rsquo;re working with multiple people in a team and across different environments, vanilla Kubernetes can be a lot to manage. For an enterprise environment, there are still some unmet needs. This is where Tectonic steps in.</p>
<p>Tectonic is a commercial product offered by <a href="https://coreos.com/">CoreOS</a>, the providers of <a href="https://coreos.com/os/docs/latest">Container Linux</a> and the original developers of <code>etcd</code>, now one of the core components of Kubernetes. Tectonic takes all of the open source components and pre-packages them. The self-proclaimed goal of doing this is to let anyone build a Google-style infrastructure into a cloud or on-premise environment. The outcome for the user is that it&rsquo;s easy to install a Kubernetes infrastructure across many different environments. In addition to simplifying the installation of the various components of a Kubernetes stack, Tectonic also provides a management console, a container registry for building and sharing containers, additional tools for deployment, and a few other nice features.</p>
<p>If we think about Kubernetes as a cake like we did before with three layers, Tectonic is like the box you set it in. Now, you can take your cake anywhere, move it around, and stack it with other cakes-in-a-box. All of your cakes are in their own boxes and you don&rsquo;t have to worry about them accidentally being damaged. If you&rsquo;re still a little confused, this diagram might help make more sense of it.</p>
<p>
<figure>
  <img src="https://cdn.fedoramagazine.org/wp-content/uploads/2017/06/platform-features.png" alt="Understanding where CoreOS Tectonic fits into the Kubernetes puzzle" loading="lazy">
  <figcaption>Understanding where Tectonic fits into the Kubernetes puzzle. From coreos.com/tectonic (<a href="https://coreos.com/tectonic/" class="bare">https://coreos.com/tectonic/</a>)</figcaption>
</figure>
</p>
<p>Fortunately, Tectonic has a free license that lets you use it for ten nodes. In this example, we&rsquo;ll register, get a free license, and deploy it into AWS.</p>
<p>(<em>Note</em>: If you want to revert anything we do in this example, there&rsquo;s an easy way to dismantle it across AWS and bring your bill to $0.00.)</p>

<h2 id="pre-requisites">Pre-requisites&nbsp;<a class="hanchor" href="#pre-requisites" aria-label="Anchor link for: Pre-requisites">🔗</a></h2>
<p>In order to successfully run this guide, there&rsquo;s a few things you&rsquo;ll need first.</p>
<ul>
<li><strong>Amazon Web Services (AWS) account</strong> (<em>free</em>)
<ul>
<li>Register <a href="https://aws.amazon.com">here</a></li>
</ul>
</li>
<li><strong>CoreOS Tectonic account and license</strong> (<em>free</em>)
<ul>
<li>Register <a href="https://account.coreos.com/">here</a></li>
</ul>
</li>
<li><strong>A root-level or sub-domain</strong> (<em>e.g. example.com or k8s.example.com</em>)
<ul>
<li>If you look around, you can probably find some for less than USD$1 a year if you need one</li>
</ul>
</li>
<li><strong>Curiosity</strong>!</li>
</ul>

<h2 id="setting-up-dns-with-route-53">Setting up DNS with Route 53&nbsp;<a class="hanchor" href="#setting-up-dns-with-route-53" aria-label="Anchor link for: Setting up DNS with Route 53">🔗</a></h2>
<p>The first thing we&rsquo;ll do is set up our domain with Route 53 in AWS. Route 53 can do a lot of things, like DNS management, traffic management, availability monitoring, domain registration, and more. However, we&rsquo;re only going to use it for DNS management. Tectonic will use this to automatically provision DNS records for internal and external use.</p>

<h4 id="add-your-domain">Add your domain&nbsp;<a class="hanchor" href="#add-your-domain" aria-label="Anchor link for: Add your domain">🔗</a></h4>
<p>
<figure>
  <img src="https://cdn.fedoramagazine.org/wp-content/uploads/2017/06/tectonic-add-domain-route-53-283x300.png" alt="Adding a new domain to AWS Route 53 for Tectonic" loading="lazy">
  <figcaption>Adding a new domain to AWS Route 53 for Tectonic</figcaption>
</figure>
</p>
<p>To add your domain to Route 53, follow these steps from AWS.</p>
<ol>
<li>From <em>Services</em>, select <em>Networking &amp; Content Delivery</em> &gt; <em>Route 53</em>.</li>
<li>Select <em>Hosted zones</em> from the left pane and click <em>Create Hosted Zone</em>.</li>
<li>Enter your domain or sub-domain, add a comment if you want, and choose a Public Zone for the type.</li>
</ol>
<p>Once you&rsquo;ve done this, you can go ahead and click &ldquo;<em>Create</em>&rdquo;.</p>

<h4 id="change-the-nameservers">Change the nameservers&nbsp;<a class="hanchor" href="#change-the-nameservers" aria-label="Anchor link for: Change the nameservers">🔗</a></h4>
<p>After adding the hosted zone to Route 53, you&rsquo;ll need to change the nameservers for your domain via the domain registrar (whoever you bought the domain from). Usually this setting is easy to find, but it varies among registrars. If you&rsquo;re having a hard time figuring out how to do this, try searching for a how-to or contacting your registrar&rsquo;s support.</p>
<p>After you added the hosted zone, you should see the nameservers in Route 53. There will be four nameservers provided there. You can copy and paste them from Route 53 to your domain registrar. Also note that if you&rsquo;re using a subdomain, the instructions might be a little different. You can read how to do this in the <a href="https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/creating-migrating.html">Route 53 documentation</a>.</p>
<p>The nameservers could take minutes or hours to update, depending on how lucky you are. If you&rsquo;re impatient and want to check, open up a terminal and run this command. If you see the AWS nameservers in the output, then your domain has propagated and is now usable by Route 53.</p>
<pre tabindex="0"><code>dig -t ns &lt;example.com&gt;
</code></pre>
<h2 id="configuring-ec2-with-ssh-key-pair">Configuring EC2 with SSH key pair&nbsp;<a class="hanchor" href="#configuring-ec2-with-ssh-key-pair" aria-label="Anchor link for: Configuring EC2 with SSH key pair">🔗</a></h2>
<p>This guide assumes you already have an SSH key pair created on your system. If you don&rsquo;t have one generated, you can read how to generate one <a href="https://www.cyberciti.biz/faq/how-to-set-up-ssh-keys-on-linux-unix/">here</a>.</p>
<p>The next step for us is to add an SSH key pair to EC2, one of the compute engine products offered by AWS. We&rsquo;ll import an existing key on your system into EC2.</p>
<ol>
<li>From AWS, go to <em>Services</em> &gt; <em>Compute</em> &gt; <em>EC2</em>.</li>
<li>Confirm that you are in the <strong>correct EC2 region</strong> by checking the location next to your name in the menu bar.</li>
<li>Under <em>Network &amp; Security</em>, click <em>Key Pairs</em>.</li>
<li>Click <em>Import Key Pair</em>.</li>
<li>Either upload your public key file (<code>~/.ssh/id_rsa.pub</code>) or paste it into the text field. Don&rsquo;t forget to give it a name.</li>
</ol>
<p>And that&rsquo;s all you need to do!</p>

<h2 id="assigning-aws-user-privileges">Assigning AWS user privileges&nbsp;<a class="hanchor" href="#assigning-aws-user-privileges" aria-label="Anchor link for: Assigning AWS user privileges">🔗</a></h2>
<p>Tectonic does the magic of setting up AWS for you, so you don&rsquo;t have to manually add and create the services from the web interface. In order to do this, you need to add a user account that Tectonic can use to do all of the provisioning it needs. To do this, you&rsquo;ll need to create a new Access ID and Secret key pair from AWS.</p>
<ol>
<li>Select <em>Services</em> &gt; <em>Security, Identity &amp; Compliance</em> &gt; <em>IAM</em>.</li>
<li>From the left hand pane, click <em>Users</em>, then click <em>Add user</em>.</li>
<li>Set the user details:
<ol>
<li><em>User name</em> can be anything you like (I used <code>tectonic-mydomain.com</code>)</li>
<li><em>Access type</em> only needs to be <em>Programmatic access</em></li>
</ol>
</li>
<li>For permissions, click <em>Add user to group</em> and create a new group for your user.</li>
<li>When creating a new group, attach only the policies needed by Tectonic to operate correctly:
<ol>
<li><code>AmazonEC2FullAccess</code></li>
<li><code>IAMFullAccess</code></li>
<li><code>AmazonS3FullAccess</code></li>
<li><code>AmazonVPCFullAccess</code></li>
<li><code>AmazonRoute53FullAccess</code></li>
</ol>
</li>
<li>Finish creating the user. You&rsquo;ll then see the <em>Access key ID</em> and <em>Secret access key</em>. Hold onto these, you&rsquo;ll need them later. You won&rsquo;t get to see the secret key again!</li>
</ol>
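<p>If you have the AWS CLI installed and configured, the same user and group can be created from a terminal. The names below are illustrative, so substitute your own:</p>
<pre tabindex="0"><code>aws iam create-group --group-name tectonic-installers
for policy in AmazonEC2FullAccess IAMFullAccess AmazonS3FullAccess \
              AmazonVPCFullAccess AmazonRoute53FullAccess; do
    aws iam attach-group-policy --group-name tectonic-installers \
        --policy-arn arn:aws:iam::aws:policy/$policy
done
aws iam create-user --user-name tectonic-mydomain.com
aws iam add-user-to-group --group-name tectonic-installers \
    --user-name tectonic-mydomain.com
aws iam create-access-key --user-name tectonic-mydomain.com
</code></pre>
<p>The last command prints the <em>Access key ID</em> and <em>Secret access key</em>. Just like in the console flow, save the secret now, because it can&rsquo;t be retrieved again later.</p>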
<p>Now we&rsquo;re ready to install Tectonic! Let&rsquo;s grab your credentials next.</p>

<h2 id="download-tectonic-credentials">Download Tectonic credentials&nbsp;<a class="hanchor" href="#download-tectonic-credentials" aria-label="Anchor link for: Download Tectonic credentials">🔗</a></h2>
<p>Jump back over to the <a href="https://account.coreos.com/">CoreOS accounts page</a>. When you&rsquo;re logged in, you&rsquo;ll see the <em>Account Assets</em> area. Download the CoreOS license file and pull secret. Later on in the installer, you&rsquo;ll need to insert these to finish the installation.</p>

<h2 id="running-the-installer">Running the installer&nbsp;<a class="hanchor" href="#running-the-installer" aria-label="Anchor link for: Running the installer">🔗</a></h2>
<p>Now things get interesting! We finally get to install and deploy Tectonic into AWS. The installer takes the form of a graphical installer in your web browser. To use the installer, you need to download the binary and run it. If you&rsquo;re curious, you can find the installer source code <a href="https://github.com/coreos/tectonic-installer">on GitHub</a>.</p>

<h4 id="download-and-run-installer">Download and run installer&nbsp;<a class="hanchor" href="#download-and-run-installer" aria-label="Anchor link for: Download and run installer">🔗</a></h4>
<p>First, open up a new terminal window and navigate to a directory you want to download the installer to. Even though you likely won&rsquo;t need to run the installer again, you will want to hang on to this if you ever want to easily dismantle everything in AWS later.</p>
<pre tabindex="0"><code>curl -O https://releases.tectonic.com/tectonic-1.6.4-tectonic.1.tar.gz
</code></pre><p>Next, extract the tarball and navigate into the directory.</p>
<pre tabindex="0"><code>tar -xzvf tectonic-1.6.4-tectonic.1.tar.gz
cd tectonic/tectonic-installer
</code></pre><p>Now execute the installer binary. After running this, a new browser window will open that features the graphical installer.</p>
<pre tabindex="0"><code>./linux/installer
</code></pre><p>
<figure>
  <img src="https://cdn.fedoramagazine.org/wp-content/uploads/2017/06/tectonic-installer-aws.png" alt="Now we&rsquo;re ready to deploy Tectonic into AWS!" loading="lazy">
  <figcaption>Now we’re ready to deploy Tectonic into AWS!</figcaption>
</figure>
</p>

<h4 id="running-the-installer-1">Running the installer&nbsp;<a class="hanchor" href="#running-the-installer-1" aria-label="Anchor link for: Running the installer">🔗</a></h4>
<p>The installer is thorough and assumes safe defaults for most of the steps. Be sure to have your AWS Access and Secret ID keys on hand. You should be able to run through the installer without issue. If you&rsquo;re confused about what any of the values mean or want to make custom changes, you can read more in the <a href="https://coreos.com/tectonic/docs/latest/tutorials/installing-tectonic.html">upstream documentation</a>.</p>
<p>Once you&rsquo;re finished, congrats! You&rsquo;ve successfully installed Tectonic!</p>

<h2 id="check-out-your-tectonic-install">Check out your Tectonic install&nbsp;<a class="hanchor" href="#check-out-your-tectonic-install" aria-label="Anchor link for: Check out your Tectonic install">🔗</a></h2>
<p>Once you finish the installation successfully, your Tectonic installation will be accessible within AWS. You can navigate to the domain you specified during the install to find it. Unless you provided your own certificate authority and certificates, your browser will probably complain about invalid SSL certificates, but you can safely ignore the warning. It might also take a few minutes before the URL is accessible, so if you were looking for a coffee or tea break, now would be a good time!</p>
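<p>If you&rsquo;d rather poll from the command line than refresh your browser, you can check the console with <code>curl</code>. This is just a convenience sketch: substitute the domain you chose during the install (the one below is made up), and note that <code>-k</code> tells curl to accept the self-signed certificate.</p>
<pre tabindex="0"><code>curl -k -s -o /dev/null -w "%{http_code}\n" https://tectonic.example.com
</code></pre><p>Once this prints an HTTP status code instead of an error, the console is up.</p>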
<p>Once you&rsquo;re logged in, you should see something like this.</p>
<p>
<figure>
  <img src="https://cdn.fedoramagazine.org/wp-content/uploads/2017/06/tectonic-status-page.png" alt="Looking at a freshly installed Tectonic status page on AWS" loading="lazy">
  <figcaption>Looking at a freshly installed Tectonic status page on AWS</figcaption>
</figure>
</p>
<p>
<figure>
  <img src="https://cdn.fedoramagazine.org/wp-content/uploads/2017/06/prometheus-monitoring.png" alt="A more advanced use case of what Tectonic can do with monitoring" loading="lazy">
  <figcaption>A more advanced use case of what Tectonic can do with monitoring</figcaption>
</figure>
</p>

<h2 id="blow-it-all-away">Blow it all away!&nbsp;<a class="hanchor" href="#blow-it-all-away" aria-label="Anchor link for: Blow it all away!">🔗</a></h2>
<p>If you&rsquo;re like me, you might be frustrated by guides that tell you how to install things but not how to take them apart. Fortunately, this guide covers teardown too, and the Tectonic installer makes it easy. If you&rsquo;re sure that you&rsquo;re done with Tectonic and don&rsquo;t want any leftovers to remain in AWS, this is a far better approach than deleting everything manually from the AWS Console.</p>
<p>Every installation has a time-stamped folder in the <code>tectonic</code> directory we used earlier. First, navigate into the specific folder for the cluster you installed; it&rsquo;s important to run the next commands from inside this directory.</p>
<pre tabindex="0"><code>cd tectonic/tectonic-installer/linux/clusters/&lt;CLUSTERNAME&gt;
</code></pre><p><code>&lt;CLUSTERNAME&gt;</code> will be the time-stamped directory. Once you&rsquo;re in the folder, run this command to trigger the uninstaller. After running this, you&rsquo;ll see the installer slowly dismantle everything and delete any leftovers in AWS.</p>
<pre tabindex="0"><code>../../terraform destroy
</code></pre><p>Once it finishes, you should see an output message confirming how many AWS resources were destroyed. And now you&rsquo;re back to where you started.</p>
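<p>If you want extra reassurance that nothing was left behind, you can ask Terraform to print the state it still manages. This assumes you&rsquo;re still inside the cluster directory from the previous step; after a successful destroy, the output should be empty.</p>
<pre tabindex="0"><code>../../terraform show
</code></pre>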

<h2 id="learn-more-about-tectonic">Learn more about Tectonic&nbsp;<a class="hanchor" href="#learn-more-about-tectonic" aria-label="Anchor link for: Learn more about Tectonic">🔗</a></h2>
<p>If you thought this was exciting and want to learn more, there is no shortage of resources for you to read. You can learn more about Tectonic from the <a href="https://coreos.com/tectonic/">CoreOS website</a> or the <a href="https://tectonic.com/blog/announcing-tectonic/">original release announcement</a>. You can also dig into the installer&rsquo;s source code <a href="https://github.com/coreos/tectonic-installer">on GitHub</a>. If you&rsquo;re still trying to wrap your head around Tectonic, there&rsquo;s a good write-up <a href="https://virtualizationreview.com/articles/2017/04/04/coreos-tectonic-to-shake-up-kubernetes.aspx">on virtualizationreview.com</a>.</p>
<p>Next week, we&rsquo;ll install a simple guestbook application to our Tectonic installation to see how it all works and what you can do with it. Stay tuned!</p>
<p>Questions, Tectonic stories, or tips for beginners? Add your comments below.</p>]]></description></item><item><title>Clustered computing on Fedora with Minikube</title><link>https://jwheel.org/blog/2017/07/minikube-kubernetes/</link><pubDate>Fri, 07 Jul 2017 00:00:00 +0000</pubDate><guid>https://jwheel.org/blog/2017/07/minikube-kubernetes/</guid><description><![CDATA[<p><em><strong>This article was originally published <a href="https://fedoramagazine.org/minikube-kubernetes/">on the Fedora Magazine</a>.</strong></em></p>
<hr>
<p><em>This is a short series to introduce Kubernetes, what it does, and how to experiment with it on Fedora. This is a beginner-oriented series to help introduce some higher level concepts and give examples of using it on Fedora. In the first post, we covered <a href="https://fedoramagazine.org/introduction-kubernetes-fedora/">key concepts in Kubernetes</a>. This second post shows you how to build a single-node Kubernetes deployment on your own computer.</em></p>
<hr>
<p>Once you have a better understanding of what the key concepts and terminology in Kubernetes are, getting started is easier. Like many programming tutorials, this tutorial shows you how to build a &ldquo;Hello World&rdquo; application and deploy it locally on your computer using Kubernetes. This is a simple tutorial because there aren&rsquo;t multiple nodes to work with. Instead, the only device we&rsquo;re using is a single node (a.k.a. your computer). By the end, you&rsquo;ll see how to deploy a Node.js application into a Kubernetes pod and manage it with a deployment on Fedora.</p>
<p>This tutorial isn&rsquo;t made from scratch. You can find the <a href="https://kubernetes.io/docs/tutorials/stateless-application/hello-minikube/">original tutorial</a> in the official Kubernetes documentation. This article adds some changes that will let you do the same thing on your own Fedora computer.</p>

<h2 id="introducing-minikube">Introducing Minikube&nbsp;<a class="hanchor" href="#introducing-minikube" aria-label="Anchor link for: Introducing Minikube">🔗</a></h2>
<p><a href="https://kubernetes.io/docs/getting-started-guides/minikube/">Minikube</a> is an official tool developed by the Kubernetes team to make trying it out easier. It lets you run a single-node Kubernetes cluster inside a virtual machine on your own hardware. Beyond using it to play around with or experiment for the first time, it&rsquo;s also useful as a testing tool if you&rsquo;re working with Kubernetes daily. It supports many of the features you&rsquo;d want in a production Kubernetes environment, like DNS, NodePorts, and container runtimes.</p>

<h2 id="installation">Installation&nbsp;<a class="hanchor" href="#installation" aria-label="Anchor link for: Installation">🔗</a></h2>
<p>This tutorial requires virtualization and container software. There are many options you can use. Minikube supports the <code>virtualbox</code>, <code>vmwarefusion</code>, <code>kvm</code>, and <code>xhyve</code> drivers for virtualization. However, this guide uses KVM, since it&rsquo;s already packaged and available in Fedora. We&rsquo;ll also use Node.js for building the application and Docker for putting it in a container.</p>

<h4 id="pre-requirements">Pre-requirements&nbsp;<a class="hanchor" href="#pre-requirements" aria-label="Anchor link for: Pre-requirements">🔗</a></h4>
<p>You can install the prerequisites with this command.</p>
<pre tabindex="0"><code>$ sudo dnf install kubernetes libvirt-daemon-kvm kvm nodejs docker
</code></pre><p>After installing these packages, you&rsquo;ll need to add your user to the right group to let you use KVM. The following commands will add your user to the group and then update your current session for the group change to take effect.</p>
<pre tabindex="0"><code>$ sudo usermod -a -G libvirt $(whoami)
$ newgrp libvirt
</code></pre>
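<p>Before moving on, it&rsquo;s worth confirming that your user can talk to the system libvirt daemon. One quick sanity check (assuming the <code>virsh</code> client is installed and <code>libvirtd</code> is running) is listing the defined virtual machines, which should succeed without a permissions error.</p>
<pre tabindex="0"><code>$ virsh --connect qemu:///system list --all
</code></pre>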
<h4 id="docker-kvm-drivers">Docker KVM drivers&nbsp;<a class="hanchor" href="#docker-kvm-drivers" aria-label="Anchor link for: Docker KVM drivers">🔗</a></h4>
<p>If using KVM, you will also need to install the KVM drivers to work with Docker. You need to add <a href="https://github.com/docker/machine/releases">Docker Machine</a> and the <a href="https://github.com/dhiltgen/docker-machine-kvm/releases/">Docker Machine KVM Driver</a> to your local path. You can check their pages on GitHub for the latest versions, or you can run the following commands for specific versions. These were tested on a Fedora 25 installation.</p>

<h5 id="docker-machine">Docker Machine&nbsp;<a class="hanchor" href="#docker-machine" aria-label="Anchor link for: Docker Machine">🔗</a></h5>
<pre tabindex="0"><code>$ curl -L https://github.com/docker/machine/releases/download/v0.12.0/docker-machine-`uname -s`-`uname -m` &gt;/tmp/docker-machine
$ chmod +x /tmp/docker-machine
$ sudo cp /tmp/docker-machine /usr/local/bin/docker-machine
</code></pre>
<h5 id="docker-machine-kvm-driver">Docker Machine KVM Driver&nbsp;<a class="hanchor" href="#docker-machine-kvm-driver" aria-label="Anchor link for: Docker Machine KVM Driver">🔗</a></h5>
<p>This installs the CentOS 7 driver, but it also works with Fedora.</p>
<pre tabindex="0"><code>$ curl -L https://github.com/dhiltgen/docker-machine-kvm/releases/download/v0.10.0/docker-machine-driver-kvm-centos7 &gt;/tmp/docker-machine-driver-kvm
$ chmod +x /tmp/docker-machine-driver-kvm
$ sudo cp /tmp/docker-machine-driver-kvm /usr/local/bin/docker-machine-driver-kvm
</code></pre>
<h4 id="installing-minikube">Installing Minikube&nbsp;<a class="hanchor" href="#installing-minikube" aria-label="Anchor link for: Installing Minikube">🔗</a></h4>
<p>The final step for installation is getting Minikube itself. Currently, there is no Fedora package available, and the official documentation recommends grabbing the binary and moving it into your local path. To download the binary, make it executable, and move it into your path, run the following.</p>
<pre tabindex="0"><code>$ curl -Lo minikube https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64
$ chmod +x minikube
$ sudo mv minikube /usr/local/bin/
</code></pre><p>Now you&rsquo;re ready to build your cluster.</p>
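<p>As a quick check that the binary is on your path and executable, ask it for its version.</p>
<pre tabindex="0"><code>$ minikube version
</code></pre>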

<h2 id="create-the-minikube-cluster">Create the Minikube cluster&nbsp;<a class="hanchor" href="#create-the-minikube-cluster" aria-label="Anchor link for: Create the Minikube cluster">🔗</a></h2>
<p>Now that you have everything installed and in the right place, you can create your Minikube cluster and get started. To start Minikube, run this command.</p>
<pre tabindex="0"><code>$ minikube start --vm-driver=kvm
</code></pre><p>Next, you&rsquo;ll need to set the context. Context is how <code>kubectl</code> (the command-line interface for Kubernetes) knows what it&rsquo;s dealing with. To set the context for Minikube, run this command.</p>
<pre tabindex="0"><code>$ kubectl config use-context minikube
</code></pre><p>As a check, make sure that <code>kubectl</code> can communicate with your cluster by running this command.</p>
<pre tabindex="0"><code>$ kubectl cluster-info
Kubernetes master is running at https://192.168.99.100:8443
To further debug and diagnose cluster problems, use &#39;kubectl cluster-info dump&#39;.
</code></pre>
<h2 id="build-your-application">Build your application&nbsp;<a class="hanchor" href="#build-your-application" aria-label="Anchor link for: Build your application">🔗</a></h2>
<p>Now that Kubernetes is ready, we need an application to deploy to it. This article uses the same Node.js application as the official tutorial in the Kubernetes documentation. Create a folder called <code>hellonode</code>, and inside it create a new file called <code>server.js</code> with your favorite text editor.</p>
<pre tabindex="0"><code>var http = require(&#39;http&#39;);

var handleRequest = function(request, response) {
 console.log(&#39;Received request for URL: &#39; + request.url);
 response.writeHead(200);
 response.end(&#39;Hello world!&#39;);
};
var www = http.createServer(handleRequest);
www.listen(8080);
</code></pre><p>Now run your application to try it out.</p>
<pre tabindex="0"><code>$ node server.js
</code></pre><p>While it&rsquo;s running, you should be able to access it on <a href="http://localhost:8080/">localhost:8080</a>. Once you verify it&rsquo;s working, hit <code>Ctrl+C</code> to kill the process.</p>
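<p>If you prefer the terminal to a browser, you can also test the server with <code>curl</code> from a second terminal while it&rsquo;s running. You should get the greeting back.</p>
<pre tabindex="0"><code>$ curl http://localhost:8080/
Hello world!
</code></pre>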

<h2 id="create-docker-container">Create Docker container&nbsp;<a class="hanchor" href="#create-docker-container" aria-label="Anchor link for: Create Docker container">🔗</a></h2>
<p>Now you have an application to deploy! The next step is to get it packaged into a Docker container (that you&rsquo;ll pass to Kubernetes later). You&rsquo;ll need to create a <code>Dockerfile</code> in the same folder as your <code>server.js</code> file. This guide uses an existing Node.js Docker image. It exposes your application on port 8080, copies <code>server.js</code> to the image, and runs it as a server. Your <code>Dockerfile</code> should look like this.</p>
<pre tabindex="0"><code>FROM node:6.9.2
EXPOSE 8080
COPY server.js .
CMD node server.js
</code></pre><p>If you&rsquo;re familiar with Docker, you&rsquo;re likely used to pushing your image to a registry. In this case, since we&rsquo;re deploying it to Minikube, you can build it using the same Docker host as the Minikube virtual machine. For this to happen, you&rsquo;ll need to use the Minikube Docker daemon.</p>
<pre tabindex="0"><code>$ eval $(minikube docker-env)
</code></pre><p>Now you can build your Docker image with the Minikube Docker daemon.</p>
<pre tabindex="0"><code>$ docker build -t hello-node:v1 .
</code></pre><p>Huzzah! Now you have an image Minikube can run.</p>
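<p>If you want to confirm the image landed in the Minikube Docker daemon, list it by name.</p>
<pre tabindex="0"><code>$ docker images hello-node
</code></pre>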

<h2 id="create-minikube-deployment">Create Minikube deployment&nbsp;<a class="hanchor" href="#create-minikube-deployment" aria-label="Anchor link for: Create Minikube deployment">🔗</a></h2>
<p>If you remember from the <a href="https://fedoramagazine.org/introduction-kubernetes-fedora/">first part</a> of this series, deployments watch your application&rsquo;s health and reschedule it if it dies. Deployments are the supported way of creating and scaling pods. <code>kubectl run</code> creates a deployment to manage a pod. We&rsquo;ll create one that uses the <code>hello-node</code> Docker image we just built.</p>
<pre tabindex="0"><code>$ kubectl run hello-node --image=hello-node:v1 --port=8080
</code></pre><p>Next, check that the deployment was created successfully.</p>
<pre tabindex="0"><code>$ kubectl get deployments
NAME         DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
hello-node   1         1         1            1           30s
</code></pre><p>Creating the deployment also creates the pod where the application is running. You can view the pod with this command.</p>
<pre tabindex="0"><code>$ kubectl get pods
NAME                          READY     STATUS    RESTARTS   AGE
hello-node-1644695913-k2314   1/1       Running   0          3
</code></pre><p>Finally, let&rsquo;s look at the configuration. If you&rsquo;re familiar with Ansible, the configuration files for Kubernetes also use easy-to-read YAML. You can see the full configuration with this command.</p>
<pre tabindex="0"><code>$ kubectl config view
</code></pre><p><code>kubectl</code> does many things. To read more about what you can do with it, you can read the <a href="https://kubernetes.io/docs/user-guide/kubectl-overview/">documentation</a>.</p>

<h2 id="create-service">Create service&nbsp;<a class="hanchor" href="#create-service" aria-label="Anchor link for: Create service">🔗</a></h2>
<p>Right now, the pod is only accessible inside the Kubernetes cluster via its internal IP address. To see it in a web browser, you&rsquo;ll need to expose it as a service by running this command.</p>
<pre tabindex="0"><code>$ kubectl expose deployment hello-node --type=LoadBalancer
</code></pre><p>The type was specified as a <code>LoadBalancer</code> because Kubernetes will expose the IP outside of the cluster. If you were running a load balancer in a cloud environment, this is how you&rsquo;d provision an external IP address. However, in this case, it exposes your application as a service in Minikube. And now, finally, you get to see your application. Running this command will open a new browser window with your application.</p>
<pre tabindex="0"><code>$ minikube service hello-node
</code></pre><p>
<figure>
  <img src="https://cdn.fedoramagazine.org/wp-content/uploads/2017/06/minikube-hello-world-browser-e1497995645454.png" alt="Minikube: Exposing Hello Minikube application in browser" loading="lazy">
</figure>
</p>
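<p>You can also inspect the service from the command line. Since Minikube has no cloud load balancer behind it, the external IP will show as pending, and the service is actually reachable through a node port.</p>
<pre tabindex="0"><code>$ kubectl get services hello-node
</code></pre>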
<p>Congratulations, you deployed your first containerized application via Kubernetes! But now, what if you need to update your small Hello World application?</p>

<h2 id="how-do-we-push-changes">How do we push changes?&nbsp;<a class="hanchor" href="#how-do-we-push-changes" aria-label="Anchor link for: How do we push changes?">🔗</a></h2>
<p>The time has come when you&rsquo;re ready to make an update and push it. Edit your <code>server.js</code> file and change &ldquo;Hello world!&rdquo; to &ldquo;Hello again, world!&rdquo;</p>
<pre tabindex="0"><code>response.end(&#39;Hello again, world!&#39;);
</code></pre><p>And we&rsquo;ll build another Docker image. Note the version bump.</p>
<pre tabindex="0"><code>$ docker build -t hello-node:v2 .
</code></pre><p>Next, you need to give Kubernetes the new image to deploy.</p>
<pre tabindex="0"><code>$ kubectl set image deployment/hello-node hello-node=hello-node:v2
</code></pre><p>And now, your update is pushed! Like before, run this command to have it open in a new browser window.</p>
<pre tabindex="0"><code>$ minikube service hello-node
</code></pre><p>If your application doesn&rsquo;t look any different, double-check that you updated the right image. You can troubleshoot by getting a shell into your pod with the following command. Get the pod name from the command run earlier (<code>kubectl get pods</code>). Once you&rsquo;re in the shell, check whether the <code>server.js</code> file shows your changes.</p>
<pre tabindex="0"><code>$ kubectl exec -it &lt;pod-name&gt; bash
</code></pre>
<h2 id="cleaning-up">Cleaning up&nbsp;<a class="hanchor" href="#cleaning-up" aria-label="Anchor link for: Cleaning up">🔗</a></h2>
<p>Now that we&rsquo;re done, we can clean up the environment. To remove the resources from your cluster, run these two commands.</p>
<pre tabindex="0"><code>$ kubectl delete service hello-node
$ kubectl delete deployment hello-node
</code></pre><p>If you&rsquo;re done playing with Minikube, you can also stop it.</p>
<pre tabindex="0"><code>$ minikube stop
</code></pre><p>If you&rsquo;re done using Minikube for a while, you can also unset the Minikube Docker daemon environment that we set earlier in this guide.</p>
<pre tabindex="0"><code>$ eval $(minikube docker-env -u)
</code></pre>
<h2 id="learn-more-about-kubernetes">Learn more about Kubernetes&nbsp;<a class="hanchor" href="#learn-more-about-kubernetes" aria-label="Anchor link for: Learn more about Kubernetes">🔗</a></h2>
<p>You can find the <a href="https://kubernetes.io/docs/tutorials/stateless-application/hello-minikube/">original tutorial</a> in the Kubernetes documentation. If you want to read more, there&rsquo;s plenty of great information online. The <a href="https://kubernetes.io/docs/home/">documentation</a> provided by Kubernetes is thorough and comprehensive.</p>
<p>Questions, Minikube stories, or tips for beginners? Add your comments below.</p>]]></description></item><item><title>Introduction to Kubernetes with Fedora</title><link>https://jwheel.org/blog/2017/07/introduction-kubernetes-fedora/</link><pubDate>Mon, 03 Jul 2017 00:00:00 +0000</pubDate><guid>https://jwheel.org/blog/2017/07/introduction-kubernetes-fedora/</guid><description><![CDATA[<p><em><strong>This article was originally published <a href="https://fedoramagazine.org/introduction-kubernetes-fedora/">on the Fedora Magazine</a>.</strong></em></p>
<hr>
<p><em>This article is part of a short series that introduces Kubernetes. This beginner-oriented series covers some higher level concepts and gives examples of using Kubernetes on Fedora.</em></p>
<hr>
<p>The information technology world changes daily, and the demands of building scalable infrastructure become more important. Containers aren&rsquo;t anything new these days, and have various uses and implementations. But what about building scalable, containerized applications? By themselves, Docker and other tools don&rsquo;t quite cut it when it comes to building the infrastructure to support containers. How do you deploy, scale, and manage containerized applications in your infrastructure? This is where tools such as Kubernetes come in. <a href="https://kubernetes.io/">Kubernetes</a> is an open source system that automates deployment, scaling, and management of containerized applications. Kubernetes was originally developed by Google before being donated to the <a href="https://en.wikipedia.org/wiki/Linux_Foundation#Cloud_Native_Computing_Foundation">Cloud Native Computing Foundation</a>, a project of the <a href="https://www.linuxfoundation.org/">Linux Foundation</a>. This article gives a quick precursor to what Kubernetes is and what some of the buzzwords really mean.</p>

<h2 id="what-is-kubernetes">What is Kubernetes?&nbsp;<a class="hanchor" href="#what-is-kubernetes" aria-label="Anchor link for: What is Kubernetes?">🔗</a></h2>
<p>Kubernetes simplifies and automates the process of deploying containerized applications at scale. Just like Ansible <a href="https://fedoramagazine.org/using-ansible-provision-vagrant-boxes/">orchestrates software</a>, Kubernetes orchestrates deploying infrastructure that supports the software. There are various &ldquo;layers of the cake&rdquo; that make Kubernetes a strong solution for building resilient infrastructure. It also assists with making systems that can grow at scale. If your application has increasing demands such as higher traffic, Kubernetes helps grow your environment to support increasing demands. This is one reason why Kubernetes is helpful for building long-term solutions for complex problems (even if it&rsquo;s not complex… yet).</p>
<p>
<figure>
  <img src="https://cdn.fedoramagazine.org/wp-content/uploads/2017/06/kubernetes-high-level-design.jpg" alt="Kubernetes: The high level design" loading="lazy">
  <figcaption>Kubernetes: The high level design. Daniel Smith, Robert Bailey, Kit Merker (<a href="https://www.slideshare.net/RohitJnagal/kubernetes-intro-public-kubernetes-meetup-4212015" class="bare">https://www.slideshare.net/RohitJnagal/kubernetes-intro-public-kubernetes-meetup-4212015</a>).</figcaption>
</figure>
</p>
<p>At a high level overview, imagine three different layers.</p>
<ul>
<li><strong>Users</strong>: People who deploy or create containerized applications to run in your infrastructure</li>
<li><strong>Master(s)</strong>: Manages and schedules your software across various other machines, for example in a clustered computing environment</li>
<li><strong>Nodes</strong>: Various machines that run the application; each runs a node agent called the <em>kubelet</em></li>
</ul>
<p>These three layers are orchestrated and automated by Kubernetes. One of the key pieces of the master (not included in the visual) is <strong>etcd</strong>. etcd is a lightweight and distributed key/value store that holds configuration data. Each node&rsquo;s kubelet can access this data in etcd through an HTTP/JSON API interface. The communication paths between the master and nodes, including etcd, are explained <a href="https://kubernetes.io/docs/concepts/architecture/master-node-communication/">in the official documentation</a>.</p>
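<p>To make etcd feel less abstract, here&rsquo;s what that key/value interface looks like in practice. This is an illustrative sketch using etcd&rsquo;s v2 HTTP/JSON API against a local instance; the key name is made up, and you wouldn&rsquo;t normally poke at a cluster&rsquo;s etcd directly like this.</p>
<pre tabindex="0"><code>curl http://127.0.0.1:2379/v2/keys/message -XPUT -d value="hello"
curl http://127.0.0.1:2379/v2/keys/message
</code></pre><p>The second request returns a JSON document containing the stored value. Kubernetes components read and write cluster state through this same store, by way of the API server.</p>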
<p>Another important detail not shown in the diagram is that you might have many masters. In a high-availability (HA) set-up, you can keep your infrastructure resilient by having multiple masters in case one happens to go down.</p>

<h2 id="terminology">Terminology&nbsp;<a class="hanchor" href="#terminology" aria-label="Anchor link for: Terminology">🔗</a></h2>
<p>It&rsquo;s important to understand the concepts of Kubernetes before you start to play around with it. There are many core concepts in Kubernetes, such as services, volumes, secrets, daemon sets, and jobs. However, this article explains four that are helpful for the next exercise of building a mini Kubernetes cluster: <em>pods</em>, <em>labels</em>, <em>replica sets</em>, and <em>deployments</em>.</p>

<h4 id="pods"><a href="https://kubernetes.io/docs/concepts/workloads/pods/pod/">Pods</a>&nbsp;<a class="hanchor" href="#pods" aria-label="Anchor link for: Pods">🔗</a></h4>
<p>If you imagine Kubernetes as a Lego® castle, pods are the smallest block you can pick out. By themselves, they are the smallest unit you can deploy. The containers of an application fit into a pod. A pod can be a single container, but it can also hold as many as needed. Containers in a pod are unique since they share Linux namespaces and aren&rsquo;t isolated from each other. In a world before containers, this would be similar to running the applications together on the same host machine.</p>
<p>Because they share the same namespaces, all the containers in a pod:</p>
<ul>
<li>Share an IP address</li>
<li>Share port space</li>
<li>Find each other over <em>localhost</em></li>
<li>Communicate over IPC namespace</li>
<li>Have access to shared volumes</li>
</ul>
<p>But what&rsquo;s the point of having pods? The main purpose of pods is to have groups of &ldquo;helping&rdquo; containers on the same namespace (co-located) and integrated together (co-managed) along with the main application container. Some examples might be logging or monitoring tools that check the health of your application, or backup tools that act when certain data changes.</p>
<p>In the big picture, all containers in a single pod are always scheduled together. However, Kubernetes doesn&rsquo;t automatically reschedule them to a new node if that node dies (more on this later).</p>

<h4 id="labels"><a href="https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/">Labels</a>&nbsp;<a class="hanchor" href="#labels" aria-label="Anchor link for: Labels">🔗</a></h4>
<p>Labels are a simple but important concept in Kubernetes. Labels are key/value pairs attached to <em>objects</em> in Kubernetes, like pods. They let you specify unique attributes of objects that actually mean something to humans. You can attach them when you create an object, and modify or add them later. Labels help you organize and select different sets of objects to interact with when performing actions inside of Kubernetes. For example, you can identify:</p>
<ul>
<li><strong>Software releases</strong>: Alpha, beta, stable</li>
<li><strong>Environments</strong>: Development, production</li>
<li><strong>Tiers</strong>: Front-end, back-end</li>
</ul>
<p>Labels are as flexible as you need them to be, and this list isn&rsquo;t comprehensive. Be creative when thinking of how to apply them.</p>
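<p>Labels come to life through <code>kubectl</code>&rsquo;s selectors. As a quick sketch (the pod name, label keys, and values here are made-up examples), you could tag a pod and then query by its labels.</p>
<pre tabindex="0"><code>kubectl label pod my-pod environment=production tier=front-end
kubectl get pods -l environment=production,tier=front-end
</code></pre>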

<h4 id="replica-sets"><a href="https://kubernetes.io/docs/concepts/workloads/controllers/replicaset/">Replica sets</a>&nbsp;<a class="hanchor" href="#replica-sets" aria-label="Anchor link for: Replica sets">🔗</a></h4>
<p>Replica sets are where some of the magic begins to happen with automatic scheduling or rescheduling. Replica sets ensure that a number of pod instances (called <em>replicas</em>) are running at any moment. If your web application needs to constantly have four pods in the front-end and two in the back-end, the replica sets are your insurance that those numbers are always maintained. This also makes Kubernetes great for scaling. If you need to scale up or down, change the number of replicas.</p>
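<p>Even though replica sets do the watching, you normally resize them through the deployment that owns them. As an example with a hypothetical deployment name, scaling is a one-liner.</p>
<pre tabindex="0"><code>kubectl scale deployment front-end --replicas=4
</code></pre><p>Kubernetes then adjusts the underlying replica set, creating or removing pods until four replicas are running.</p>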
<p>When reading about replica sets, you might also see <em>replication controllers</em>. They are somewhat interchangeable, but replication controllers are older, semi-deprecated, and less powerful than replica sets. The main difference is that replica sets support more advanced, set-based selectors &ndash; which goes back to labels. Ideally, you won&rsquo;t have to worry about this much today.</p>
<p>Even though replica sets are where the scheduling magic happens to help make your infrastructure resilient, you won&rsquo;t actually interact with them much. Replica sets are managed by deployments, so it&rsquo;s unusual to directly create or manipulate replica sets. And guess what&rsquo;s next?</p>

<h4 id="deployments"><a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/">Deployments</a>&nbsp;<a class="hanchor" href="#deployments" aria-label="Anchor link for: Deployments">🔗</a></h4>
<p>Deployments are another important concept inside of Kubernetes. Deployments are a declarative way to deploy and manage software. If you&rsquo;re familiar with Ansible, you can compare deployments to the playbooks of Ansible. If you&rsquo;re building your infrastructure out, you want to make sure it is easily reproducible without much manual work. Deployments are the way to do this.</p>
<p>Deployments offer functionality such as revision history, so it&rsquo;s always easy to roll back changes if something doesn&rsquo;t work out. They also manage any updates you push out to your application; if something isn&rsquo;t working, Kubernetes stops rolling out the update and reverts to the last working state. Deployments follow the mathematical property of <a href="https://en.wikipedia.org/wiki/Idempotence">idempotence</a>: you define your specs once and use them many times to get the same result.</p>
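<p>The revision history and rollback features mentioned above map to concrete commands. Assuming a hypothetical deployment named <code>front-end</code>, you can list past revisions and roll back to the previous one.</p>
<pre tabindex="0"><code>kubectl rollout history deployment/front-end
kubectl rollout undo deployment/front-end
</code></pre>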
<p>Deployments also get into imperative and declarative ways to build infrastructure, but this explanation is a quick, fly-by overview. You can read more <a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/">detailed information</a> in the official documentation.</p>

<h2 id="installing-on-fedora">Installing on Fedora&nbsp;<a class="hanchor" href="#installing-on-fedora" aria-label="Anchor link for: Installing on Fedora">🔗</a></h2>
<p>If you want to start playing with Kubernetes, install it and some useful tools from the Fedora repositories.</p>
<pre tabindex="0"><code>sudo dnf install kubernetes
</code></pre><p>This command provides the bare minimum needed to get started. You can also install other cool tools like <em>cockpit-kubernetes</em> (integration with <a href="http://cockpit-project.org/">Cockpit</a>) and <em>kubernetes-ansible</em> (provisioning Kubernetes with <a href="https://www.ansible.com/">Ansible</a> playbooks and roles).</p>

<h2 id="learn-more-about-kubernetes">Learn more about Kubernetes&nbsp;<a class="hanchor" href="#learn-more-about-kubernetes" aria-label="Anchor link for: Learn more about Kubernetes">🔗</a></h2>
<p>If you want to read more about Kubernetes or want to explore the concepts more, there&rsquo;s plenty of great information online. The <a href="https://kubernetes.io/docs/home/">documentation</a> provided by Kubernetes is fantastic, but there are also other helpful guides from <a href="https://www.digitalocean.com/community/tutorials/an-introduction-to-kubernetes">DigitalOcean</a> and <a href="https://blog.giantswarm.io/understanding-basic-kubernetes-concepts-i-introduction-to-pods-labels-replicas/">Giant Swarm</a>. The next article in the series will explore building a mini Kubernetes cluster on your own computer to see how it really works.</p>
<p>Questions, Kubernetes stories, or tips for beginners? Add your comments below.</p>]]></description></item><item><title>Netflix and Linux: The First 60 Seconds</title><link>https://jwheel.org/blog/2015/12/netflix-and-linux-the-first-60-seconds/</link><pubDate>Wed, 02 Dec 2015 00:00:00 +0000</pubDate><guid>https://jwheel.org/blog/2015/12/netflix-and-linux-the-first-60-seconds/</guid><description><![CDATA[<p>
<figure>
  <img src="/blog/2015/12/Linux-and-Netflix.jpg" alt="Netflix and Linux, friends at last" loading="lazy">
  <figcaption>Netflix and Linux may not agree on the desktop, but they do in the cloud. <em>Source</em>: blockless.com (<a href="https://blog.blockless.com/how-to-switch-to-linux-without-losing-your-netflix-access/" class="bare">https://blog.blockless.com/how-to-switch-to-linux-without-losing-your-netflix-access/</a>)</figcaption>
</figure>
</p>

<h2 id="netflix-linux-and-60-seconds">Netflix, Linux, and 60 seconds&nbsp;<a class="hanchor" href="#netflix-linux-and-60-seconds" aria-label="Anchor link for: Netflix, Linux, and 60 seconds">🔗</a></h2>
<p>While they have their differences on the desktop, there is one place where Netflix and Linux get along beautifully: the cloud. Netflix performance engineer <a href="https://plus.google.com/112610724469645265130">Brendan Gregg</a> recently published an article on the <a href="http://techblog.netflix.com/2015/11/linux-performance-analysis-in-60s.html">Netflix blog</a> about what their engineers do in the first 60 seconds of analyzing a performance issue. You should <a href="http://techblog.netflix.com/2015/11/linux-performance-analysis-in-60s.html">give it a read</a>!</p>
]]></description></item></channel></rss>