<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Scalability</title><link>https://jwheel.org/tags/scalability/</link><description>Homepage of Justin Wheeler, an Open Source contributor and Free Software advocate from Georgia, USA.</description><generator>Hugo -- gohugo.io</generator><language>en-us</language><managingEditor>Justin Wheeler</managingEditor><lastBuildDate>Thu, 06 Oct 2022 00:00:00 +0000</lastBuildDate><atom:link href="https://jwheel.org/rss/tags/scalability/index.xml" rel="self" type="application/rss+xml"/><item><title>XPOST: Spurring new Digital Public Goods</title><link>https://jwheel.org/blog/2022/10/new-digital-public-goods/</link><pubDate>Thu, 06 Oct 2022 00:00:00 +0000</pubDate><guid>https://jwheel.org/blog/2022/10/new-digital-public-goods/</guid><description><![CDATA[<p><a href="https://www.unicef.org/innovation/stories/spurring-new-digital-public-goods"><em>Originally published on 27 September 2022 via unicef.org</em>.</a></p>
<hr>
<p>This year, the <a href="https://www.unicefinnovationfund.org/">UNICEF Venture Fund</a> celebrates <a href="https://www.unicef.org/innovation/venturefund/blockchain-financial-inclusion-graduation">five graduating companies</a> from a recent investment round. For the first time, many of these companies are exiting from the Venture Fund having already earned recognition as <a href="https://digitalpublicgoods.net/registry/">Digital Public Goods (DPGs)</a>. With the support of a cross-sectional team of mentors, these graduating companies worked to achieve compliance with the <a href="https://digitalpublicgoods.net/standard/">DPG Standard</a>.</p>
<p>The <a href="https://digitalpublicgoods.net/standard/">Digital Public Good Standard</a> offers a nine-point baseline for evaluation and recognition of Open Source software, content, data, and standards that adhere to privacy and other applicable laws and best practices, do no harm by design, and help attain the Sustainable Development Goals (SDGs). Once a solution is recognised as a digital public good it is discoverable on the <a href="https://digitalpublicgoods.net/registry/">DPG Registry</a>.</p>
<p>This recognition acknowledges their use of vetted Open Source licenses, useful documentation, and adherence to relevant best practices and local data protection laws. What makes this achievement a first for the Venture Fund is that these recognitions were achieved by the companies during the investment round. Typically, companies that go on from the Venture Fund achieve recognition a year or more after graduation. This shift is made possible by the growing investment in Technical Assistance at the Venture Fund and the leadership of a robust team of mentors.</p>
<p>This article introduces the Technical Assistance mentoring programmes offered by the UNICEF Venture Fund, the addition of new mentors in the last year, the shift of mentor focus around the DPG Standard, and the results achieved to date from the latest graduating Venture Fund cohort.</p>

<h1 id="origins-of-technical-assistance-at-the-venture-fund">Origins of Technical Assistance at the Venture Fund&nbsp;<a class="hanchor" href="#origins-of-technical-assistance-at-the-venture-fund" aria-label="Anchor link for: Origins of Technical Assistance at the Venture Fund">🔗</a></h1>
<p>The Venture Fund offers different areas of Technical Assistance to start-up companies that apply and are selected to receive early-stage seed investment from UNICEF. When they began in 2018, the Technical Assistance programmes included only Business Development and Open Source. Over the years, we have piloted and pivoted mentorship models with input from our portfolio of startups. Today, the Technical Assistance programmes cover a range of topics across an experienced team of mentors, depending on the relevance to the start-up companies:</p>
<ul>
<li>Blockchain with Arun Maharajan and Alex Sherbuck (former)</li>
<li>Business Development with Jamil Wyne and Philippa Martinelli (former)</li>
<li>Evidence of Impact with Milena Bacalja Perianes and Jennifer Sawyer</li>
<li>Data Privacy &amp; Security with Lydia Kwong</li>
<li>Data Science &amp; A.I. with Daniel Alvarez</li>
<li>Open Source with Justin Wheeler, Abigail Cabunoc Mayes (former), and Vipul Siddharth</li>
<li>Software Development with Iván Perdomo</li>
</ul>
<p>The mentors work closely with the experienced team of portfolio managers (Meghan Warner, Kennedy Kitheka, and Madison Marks) to guide and coach Venture Fund companies to achieve their targets and success indicators during the investment round.</p>
<p>Starting in 2021, the Venture Fund broadened the Technical Assistance programmes to include Software Development, Data Science &amp; A.I., Data Privacy &amp; Security, and Evidence of Impact. This was a marked change in growing the support and expertise made available to start-up companies during their investment round. However, as the team of mentors and Technical Assistance offerings expanded, there was a growing need to bring a common rallying point across all programmes. How could the mentors ensure their Technical Assistance programmes complemented one another without duplicating topics or repeating conversations?</p>
<p>Further complementing the core Technical Assistance programme, <a href="https://www.unicefinnovationfund.org/broadcast/expert-posts/unicef-innovation-fund-blockchain-cohort-onboarding-workshops">specialized workshops</a> were held by like-minded institutions outside the Venture Fund’s core team of mentors, along with personalized mentorship sessions. The recent Blockchain Cohort, for example, benefitted from targeted mentorship from AW3L, a blockchain consulting firm that shares many of UNICEF&rsquo;s values around leveraging blockchain for social impact.</p>
<blockquote>
<p>“Blockchain has immense potential, but it remains just a tool and its impact is dependent on what we do with it. That&rsquo;s why it is crucial to have local entrepreneurs on the ground building use-cases that solve real problems unique to their geography. We are therefore extremely happy and proud to support UNICEF and its portfolio companies to tackle real-world problems in emerging markets by utilizing blockchain technology.”</p>
<p>Martijn van de Weerdt, Founder, AW3L</p>
</blockquote>

<h1 id="how-the-dpg-standard-unified-the-mentoring-streams">How the DPG Standard unified the mentoring streams&nbsp;<a class="hanchor" href="#how-the-dpg-standard-unified-the-mentoring-streams" aria-label="Anchor link for: How the DPG Standard unified the mentoring streams">🔗</a></h1>
<p>The DPG Standard became a common rallying point for the UNICEF Technical Assistance programmes. As our mentoring programmes increased and topic areas broadened, we needed coordination and a synchronized stream of Technical Assistance programmes. In the last year, the Venture Fund reviewed its workplan development and strategy to enable more solutions to achieve recognition as a digital public good at or near the graduation point for a Venture Fund portfolio. The most recent graduating cohort, the <a href="https://www.unicef.org/innovation/venturefund/blockchain-financial-inclusion-graduation">2021 Blockchain cohort</a>, represents this improved alignment, with 4 of 5 companies receiving recognition of their products as digital public goods by their graduation this year.</p>
<p>How does recognition of an open solution as a Digital Public Good help Venture Fund startups? It is an acknowledgment by the Digital Public Goods Alliance of their adherence to best practices and of the steps taken to protect data privacy and do no harm. Additionally, recognition as a DPG unlocks stronger potential for adoption and deployment of the solution by global stakeholders by providing greater visibility in a public roster of open solutions that adhere to best practices and standards. The recognition of 80% of an off-boarding Venture Fund portfolio speaks to both the intrinsic capabilities of the companies and the value of the Technical Assistance programmes and mentorship provided to them by the Venture Fund.</p>
<p>While past Venture Fund companies have received recognition as digital public goods before, this is the first time that companies achieved the recognition at the time of their graduation from the Venture Fund. Aligning the Technical Assistance programmes around the DPG Standard provided common frameworks and mental models for the diverse team of mentors to support the companies and help them achieve the Standard as an important part of their product development lifecycle.</p>
<blockquote>
<p>“As an early-stage startup, we struggled with a clear business model. Especially in the last six months of the investment, support from the mentor network helped in building clear business growth and impact metric plans. Also a year ago, we were very heavy on the tech side but lacked considerable planning on network and visibility growth. We have developed a customer persona and a pricing model, and now have a clearer vision of our Total Available Market, Serviceable Available Market, and Serviceable Obtainable Market (TAM, SAM, and SOM) models.”</p>
<p>Rumee Singh, Co-Founder, Rumsan</p>
</blockquote>

<h1 id="further-farther-together">Further, farther, together&nbsp;<a class="hanchor" href="#further-farther-together" aria-label="Anchor link for: Further, farther, together">🔗</a></h1>
<p>What comes next? The Technical Assistance programmes at the UNICEF Venture Fund are gearing up for additional cohorts benefiting from our seed-stage investment: a <a href="https://www.unicef.org/innovation/innovationfund/ai-ds-learning-health-2022">Data Science &amp; A.I. cohort</a> and an upcoming Blockchain cohort. These early-stage companies undergo a technical assistance programme involving a technical and strategic workshop series and monthly mentorship meetings. Graduates of our seed-stage investment that have received additional capital through our <a href="https://www.unicef.org/innovation/growth-funding">Growth Funding</a>, taking their solution to the next level of impact, also benefit from customized mentorship to support their evolution from promising prototypes to solutions that can be implemented and scaled, with sustainable business models and proven pilots.</p>
<p>Additionally, mentors are developing digital toolkits to enable Venture Fund companies, and anyone else, to study best practices for building and sustaining digital public goods. Most of these toolkits will be released online under Open Source licenses. You can find three of these toolkits below:</p>
<ul>
<li><a href="https://unicef.github.io/ooi-toolkit-ds/">Data Science &amp; A.I.</a></li>
<li><a href="https://unicef.github.io/drone-4sdgtoolkit/">Drones</a></li>
<li><a href="https://unicef.github.io/inventory/">Open Source</a></li>
</ul>
<p>Since the first Technical Assistance programmes were launched in 2018, the Venture Fund has seen improved results that correlate with the Technical Assistance programmes. In the <a href="https://www.unicef.org/innovation/venturefund/blockchain-financial-inclusion-graduation">most recent Blockchain 2021 cohort</a>, across 500+ hours of mentoring, the cohort collectively reached over 700,000 beneficiaries, raised $4M in follow-on funding, and 4 of 5 graduating companies were recognized as a digital public good before graduation. This also marked a new record of external contributors, with a total of 39 people who contributed to repositories across all portfolio companies. The expert guidance and coaching provided by the team of UNICEF mentors aids the start-ups in achieving new record heights.</p>
<blockquote>
<p>“UNICEF’s support helped Xcapit build value, with a premium put on discovery, iteration, survey, and experimentation with the end user. The guidance at the right time is priceless. It prevented us from facing a major problem in the future when our blockchain UNICEF mentor guided us when we were deciding the technology to create our wallet. Changing our mindset to become a fully open source company was also challenging. We had the best guidance we could ask, and we successfully overcame the difficulties and doubts, understanding the benefits of open collaboration.”</p>
<p>Antonella Perrone, COO, Xcapit</p>
</blockquote>

<h1 id="contribute-to-technical-assistance-knowledge-and-mentoring">Contribute to Technical Assistance knowledge and mentoring&nbsp;<a class="hanchor" href="#contribute-to-technical-assistance-knowledge-and-mentoring" aria-label="Anchor link for: Contribute to Technical Assistance knowledge and mentoring">🔗</a></h1>
<p>The UNICEF mentor toolkits are open source and you can also participate. The toolkits are currently accepting contributions for UI/UX and front-end development, as well as content curation and authorship. Get involved with the toolkits by participating via GitHub:</p>
<ul>
<li><a href="https://github.com/unicef/inventory-hugo-theme">UNICEF Inventory theme</a> (see “<a href="https://github.com/unicef/inventory-hugo-theme/issues?q=is%3Aissue&#43;is%3Aopen&#43;label%3A%22I%3A&#43;good&#43;first&#43;issue%22&#43;no%3Aassignee">good first issues</a>”)</li>
<li><a href="https://github.com/unicef/inventory">UNICEF Open Source Inventory</a></li>
<li><a href="https://github.com/unicef/ooi-toolkit-ds">UNICEF Data Science &amp; A.I. toolkit</a></li>
</ul>
<p>With the Digital Public Goods Alliance, we built upon our learnings and successes from portfolio companies and created the <a href="https://unicef.github.io/publicgoods-accelerator-guide/">DPG Accelerator Guide</a> as a collection of resources for accelerators to also support local ventures in developing digital public goods, setting them up for scale and impact.</p>
]]></description></item><item><title>Sustain OSS 2018: quick rewind</title><link>https://jwheel.org/blog/2018/11/sustain-oss-2018-quick-rewind/</link><pubDate>Tue, 13 Nov 2018 00:00:00 +0000</pubDate><guid>https://jwheel.org/blog/2018/11/sustain-oss-2018-quick-rewind/</guid><description><![CDATA[<p>This year, I attended the second edition of the <a href="https://sustainoss.org/">Sustain Open Source Summit</a> (a.k.a. Sustain OSS) on October 25th, 2018 in London. Sustain OSS is a one-day discussion on various topics about sustainability in open source ecosystems. It&rsquo;s also a collection of diverse roles across the world of open source. From small project maintainers to open source program managers at the largest tech companies in the world, designers to government employees, there is a mix of backgrounds in the room. Yet there is a shared context around the most systemic problems faced by open source projects, communities, and people around the world.</p>
<p>The shared context is the most valuable piece of the conference. As a first-time attendee, I was blown away by the depth and range of topics covered by attendees. This blog post covers a narrow perspective of Sustain OSS through the sessions I participated in and co-facilitated.</p>

<h2 id="speed-breakout-groups">Speed breakout groups&nbsp;<a class="hanchor" href="#speed-breakout-groups" aria-label="Anchor link for: Speed breakout groups">🔗</a></h2>
<p>The morning started with speed breakout groups of six to twelve people. Several attendees acted as facilitators for discussion on special topics. Every attendee could visit about half of all groups. I took extensive notes in the following groups:</p>
<ul>
<li>Charitable participation in open source</li>
<li>Diversity and inclusion</li>
<li>Turning open source projects into sustainable projects / companies</li>
<li>Design in open source</li>
<li>Open source financial sustainability models</li>
</ul>

<h3 id="sustain-oss-high-level-takeaways">Sustain OSS: High-level takeaways&nbsp;<a class="hanchor" href="#sustain-oss-high-level-takeaways" aria-label="Anchor link for: Sustain OSS: High-level takeaways">🔗</a></h3>
<p>To save you time, these are my high-level takeaways across all breakout groups I participated in:</p>
<ul>
<li>
<p>Open source isn&rsquo;t something just done in people&rsquo;s free time</p>
</li>
<li>
<p>Complex systems can enable systemic bias in terms of what &ldquo;open source&rdquo; means</p>
</li>
<li>
<p>Sustainability as a topic of first priority / consideration, not an afterthought</p>
</li>
<li>
<p>There is no &ldquo;silver bullet&rdquo; solution to any of these challenges; they all require adaptation to work across communities, projects, and organizations</p>
</li>
</ul>

<h3 id="charitable-participation-in-open-source">Charitable participation in open source&nbsp;<a class="hanchor" href="#charitable-participation-in-open-source" aria-label="Anchor link for: Charitable participation in open source">🔗</a></h3>
<p>This breakout group focused on the connection between charitable organizations and free software projects. It was facilitated by the esteemed <a href="https://twitter.com/o0karen0o">Karen Sandler</a> of the <a href="https://sfconservancy.org/">Software Freedom Conservancy</a>.</p>
<p>Overall, the conversation was split among creating ethical software, finding sustainable funding models, and balancing how much control to relinquish as a managing organization of an open source project. Some felt pride and ideology were strong drivers for contributors to ideological projects (which also mirrors my experience at <a href="http://unicefstories.org/magicbox/">UNICEF</a>). These could be key contributor motivations to understand. Additionally, the challenge around sustainable funding models was common across charitable foundations focused on free software. Grant funding is a common strategy employed by charitable organizations, but the short-term nature of grants puts additional strain on resources to continue searching for new funding. Lastly, for charitable organizations overseeing or supporting free software projects, there was uncertainty over how much control should be left to projects. Attendees generally expressed a desire to let projects do what they want, but this sometimes came at the risk of additional overhead for the organization when every project does a bit of everything. The concern over toxic communities came up, and how some issues remain buried until farther along in a relationship with a project. One successful solution employed was to hold monthly meetings among all member projects of an organization to address difficulties.</p>
<p>One interesting detail that captured my attention: one attendee noted how extensive effort into fundraising campaigns targeted to members of a foundation actually increased member engagement with the foundation.</p>

<h3 id="diversity-and-inclusion">Diversity and inclusion&nbsp;<a class="hanchor" href="#diversity-and-inclusion" aria-label="Anchor link for: Diversity and inclusion">🔗</a></h3>
<p>My biggest takeaway from this session was the danger in thinking of open source as something we do in our free time. This framing can exclude people across genders, races, and socioeconomic statuses. Some &ldquo;free time&rdquo; is more equal than others. The actionable piece for me is to be more conscious in building and growing communities to support different levels of contribution.</p>
<p>The question I wanted to explore after reflecting is to ask of those who feel disadvantaged:</p>
<ul>
<li>What factors make a project more or less inviting for you?</li>
<li>What can we do better when designing for participation in our communities?</li>
</ul>

<h3 id="turning-open-source-projects-into-sustainable-ones">Turning open source projects into sustainable ones&nbsp;<a class="hanchor" href="#turning-open-source-projects-into-sustainable-ones" aria-label="Anchor link for: Turning open source projects into sustainable ones">🔗</a></h3>
<p>My notes weren&rsquo;t thorough on this session, but there was an interesting point on trademark that came up during discussion of the <a href="https://commonsclause.com/">Commons Clause</a>. One participant was pursuing trademark law to enforce commercial protections and sustainability. They gave an example of a large corporation advertising support with a major open source project (e.g. a major software/hardware vendor supporting a specific NodeJS version). They wanted to use this as a way to create a more financially sustainable model for some projects.</p>

<h3 id="design-in-open-source">Design in open source&nbsp;<a class="hanchor" href="#design-in-open-source" aria-label="Anchor link for: Design in open source">🔗</a></h3>
<p>This breakout group focused on sustainable design and design practices in open source communities. The role of designers in technical projects was also discussed and how we can build technical communities to be more inclusive for designers. It was facilitated by <a href="https://elioqoshi.me/about-me/">Elio Qoshi</a>.</p>
<p>My takeaways from this breakout were that established ways of working can be unfriendly to designers and there is a need to emphasize diversity across different roles in a project or organization. Certain tools, platforms, or other mechanisms for contributing have poor user interfaces. They can push people away by making contribution a frustrating experience. Next, the need for diversity in roles was noted, with an example of engineers leading project management. Sometimes the biases or blind spots of engineers accidentally exclude others like designers or writers from contributing to a project. We should endeavor for people to spend more time on their preferred and most effective methods of contribution.</p>

<h3 id="financial-sustainability-models">Financial sustainability models&nbsp;<a class="hanchor" href="#financial-sustainability-models" aria-label="Anchor link for: Financial sustainability models">🔗</a></h3>
<p>This breakout session focused on the traditional sense of sustainability: in finances and resources. Attendees discussed different models used to fund open source projects and foundations. The session was facilitated by the founder of the <a href="https://musicbrainz.org/doc/About">MusicBrainz</a> project, <a href="https://twitter.com/MayhemBCN">Robert Kaye</a>.</p>
<p>The model used by <a href="https://metabrainz.org/about">MetaBrainz</a>, which essentially acts as a data broker, was interesting and unique. MetaBrainz offers commercial data usage at a cost, and the companies using their data have a strong need for it and see value in it. Since changing their model three years ago, they have seen significant gains in revenue and were able to increase paid staff working on the projects.</p>
<p>The Amazon invoice cake is also an amusing story, but you should ask Robert directly about it.</p>


<h2 id="hour-breakout-sessions">Hour breakout sessions&nbsp;<a class="hanchor" href="#hour-breakout-sessions" aria-label="Anchor link for: Hour breakout sessions">🔗</a></h2>
<p>After lunch, attendees participated in two hour-long breakout sessions to explore specific topics in greater detail.</p>

<h3 id="human-aspect-of-governance">Human aspect of governance&nbsp;<a class="hanchor" href="#human-aspect-of-governance" aria-label="Anchor link for: Human aspect of governance">🔗</a></h3>
<p>Longer form notes are available below. I won&rsquo;t go into detail since it has its own document with notes and highlights.</p>
<p><a href="/docs/Open-source-human-governance-Sustain-OSS-London-2018.pdf">Human aspects of open source governance - Sustain OSS London 2018</a> (<a href="/docs/Open-source-human-governance-Sustain-OSS-London-2018.pdf">Download</a>)</p>

<h3 id="university-engagement">University engagement&nbsp;<a class="hanchor" href="#university-engagement" aria-label="Anchor link for: University engagement">🔗</a></h3>
<p>Together with <a href="https://twitter.com/epistemographer">Josh Greenberg</a> of the <a href="https://sloan.org/">Alfred P. Sloan Foundation</a>, we co-facilitated a spontaneous session on how universities can engage with open source communities and vice versa.</p>
<p>In our session, two major topics were discussed:</p>
<ul>
<li>
<p>Education (e.g. curriculum, institutions, programs, etc.)</p>
</li>
<li>
<p>Research</p>
</li>
</ul>
<p>We asked all participants why they decided to participate and what questions they had, even though we weren&rsquo;t able to answer all of them:</p>
<ol>
<li>How do we get the word out?</li>
<li>What research is most valuable for open source?</li>
<li>How to sustain projects over the long term?</li>
<li>How to actually do and support research?</li>
<li>How to engage both students and faculty?</li>
<li>How to harness / enable institutions to make positive contributions to the ecosystem?</li>
</ol>
<p>For education, we agreed that introducing and teaching open source in curriculum better serves students and the institution (both financially and in career satisfaction). Many technology companies today are participating in open source, and it is an important skill for students entering the workforce. For research, students are already doing research and proposing topics, so stronger student engagement in open source also benefits research.</p>
<p>Our takeaways were to better engage with existing organizations working on these problems for years already (e.g. <a href="http://teachingopensource.org/POSSE/">POSSE</a>), shifting the perspective of universities to be stewards of FOSS, and using collegiate hackathons as a way to better engage with undergraduate students.</p>
<p>One additional point that stood out to me was the emphasis across all breakout participants on the need for good communication skills to be successful. In many cases, the companies hiring top tech talent (according to our breakout attendees) listed this as the most desirable skill. Technology and new skills can be learned, but good communication and collaborative work habits are not so easily taught.</p>

<h2 id="other-takeaways">Other takeaways&nbsp;<a class="hanchor" href="#other-takeaways" aria-label="Anchor link for: Other takeaways">🔗</a></h2>
<p>One takeaway I couldn&rsquo;t fit elsewhere was my changed perspective on &ldquo;technical&rdquo; vs. &ldquo;non-technical&rdquo; work. The phrase &ldquo;non-technical work&rdquo; implies an &ldquo;other space where development does not occur&rdquo;. Does the phrase place unequal priority on technical work? One action item is to avoid using &ldquo;non-technical work&rdquo; as an umbrella term, and instead call these areas by what they are: design, documentation, writing, marketing, community building, etc.</p>
<p>For me, I still want an umbrella term for these things, but I&rsquo;m open-minded for better alternatives to non-technical.</p>

<h3 id="skill-share-conflict-resolution">Skill share: conflict resolution&nbsp;<a class="hanchor" href="#skill-share-conflict-resolution" aria-label="Anchor link for: Skill share: conflict resolution">🔗</a></h3>
<p>The last event of Sustain OSS was a 1x1 skill share. Roughly half of the attendees identified a &ldquo;skill&rdquo; they could teach someone else in the room. The other half of attendees paired with someone teaching a skill they wanted to learn more about. I paired with <a href="https://www.jonobacon.com/about/bio/">Jono Bacon</a> on a short breakout on conflict resolution.</p>
<p>Jono detailed steps of working through and resolving conflict, including how to identify root problems, how to make steps to resolve them, and some personal philosophy of how we build and maintain relationships with others.</p>
<p>An important first step is to identify the critical point: this could be an ongoing crisis, dealing with interpersonal conflict, or dealing with burnout. When someone is explaining a problem, listen fully to them and understand what they are saying. Let them get it off their chest. Is there something else causing this behavior? Tap into the cloud of ranting and determine what the root cause is.</p>
<p>Once common ground is established, make a plan to resolve it. Jono&rsquo;s advice was to create written next steps and be explicit about expectations. This way, everyone is on the same page of what the next steps are and everyone involved has signed off on these next steps (this creates a sense of commitment and the next steps become written as &ldquo;law&rdquo;). Encourage others to restate the goals of conflict resolution in their own words. Once you have written goals and expectations, the crucial next step is follow-up. Check in on a regular basis with the person or people involved. Try to be neutral and unbiased when listening to others in these conversations. Go in with an open mind.</p>
<p>Lastly, we contextualized conflict resolution in personal philosophy of how we build and maintain relationships with others – both in and out of our open source projects. Sometimes the best way to address difficult interpersonal problems is to stop avoiding them and simply address them. Much easier said than done, but otherwise there is no escaping the perpetuated cycle of conflict if someone doesn&rsquo;t make a first step.</p>
<p>It&rsquo;s not just about code.</p>

<h2 id="thank-you">Thank you&nbsp;<a class="hanchor" href="#thank-you" aria-label="Anchor link for: Thank you">🔗</a></h2>
<p>To wrap up this Sustain OSS report, a few obligatory thank-yous are needed:</p>
<ul>
<li>
<p><strong><a href="https://sloan.org/">Sloan Foundation</a> / <a href="https://www.fordfoundation.org/">Ford Foundation</a></strong>: For the financial support I needed to attend and participate in the event – this is never something I take for granted and I am happy to have received a scholarship to attend and participate</p>
</li>
<li>
<p><strong><a href="https://twitter.com/epistemographer">Josh Greenberg</a> @ <a href="https://sloan.org/">Sloan Foundation</a></strong>: For helping me get over some imposter syndrome and co-facilitate the university engagement breakout session with me – thanks for the gentle push</p>
</li>
<li>
<p><strong><a href="https://twitter.com/MayhemBCN">Robert Kaye</a> @ <a href="https://metabrainz.org/">MetaBrainz</a></strong>: For being generally awesome and finally giving me someone to nerd out about all these crazy ideas of how free culture and music can actually be related!</p>
</li>
<li>
<p><strong><a href="https://www.rit.edu/gccis/stephen-jacobs">Stephen Jacobs</a></strong>: For always being supportive for yet another trip abroad and helping me map a strategy to get the most out of Sustain OSS</p>
</li>
</ul>
<p>Sustain OSS gave me a lot to think about and consider. I&rsquo;m glad and fortunate to have attended. I hope this event report gives additional visibility to some of the conversations held in London this year.</p>]]></description></item><item><title>How to automatically scale Kubernetes with Horizontal Pod Autoscaling</title><link>https://jwheel.org/blog/2018/03/kubernetes-horizontal-pod-autoscaling/</link><pubDate>Tue, 06 Mar 2018 00:00:00 +0000</pubDate><guid>https://jwheel.org/blog/2018/03/kubernetes-horizontal-pod-autoscaling/</guid><description><![CDATA[<p>Scale is a critical part of how we develop applications in today&rsquo;s world of infrastructure. Now, containers and container orchestration like Docker and <a href="https://jwfblog.wpenginepowered.com/2017/07/introduction-kubernetes-fedora/">Kubernetes</a> make it easier to think about scale. One of the &ldquo;magical&rdquo; things about Kubernetes is that its potential is fully realized when you have a sudden increase in load and your infrastructure scales up to accommodate it. How does this work? With <strong>Horizontal Pod Autoscaling</strong>, Kubernetes adds more pods when you have more load and drops them once things return to normal.</p>
<p>This article covers Horizontal Pod Autoscaling, what it is, and how to try it out with the <a href="https://kubernetes.io/docs/tutorials/stateless-application/guestbook/">Kubernetes guestbook</a> example. By the end of this article, you will…</p>
<ul>
<li>Understand what Horizontal Pod Autoscaling (HPA) is</li>
<li>Be able to create an HPA in Kubernetes</li>
<li>Create an HPA for the Guestbook and watch it work with <a href="https://github.com/JoeDog/siege">Siege</a></li>
</ul>

<h2 id="what-is-horizontal-pod-autoscaling">What is Horizontal Pod Autoscaling?&nbsp;<a class="hanchor" href="#what-is-horizontal-pod-autoscaling" aria-label="Anchor link for: What is Horizontal Pod Autoscaling?">🔗</a></h2>
<p><a href="https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/">Horizontal Pod Autoscaling</a> (HPA) is a Kubernetes API resource to dynamically grow an environment. To help simplify things, consider it in three pieces:</p>
<ul>
<li><strong>Horizontal</strong>: Think of <em>horizontal</em> growth, i.e. adding more nodes to your available pool (unlike <em>vertical</em>, which would be adding more memory / CPU to your existing nodes)</li>
<li><strong>Pod</strong>: Your deployable units in Kubernetes</li>
<li><strong>Autoscaling</strong>: Automatically scaling out when needed</li>
</ul>
<p>
<figure>
  <img src="/blog/2017/08/k8s-hpa.png" alt="Diagram to explain how Horizontal Pod Autoscaler (HPA) works" loading="lazy">
  <figcaption>Diagram to explain how a Horizontal Pod Autoscaler (HPA) works. From Kubernetes documentation (<a href="https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/" class="bare">https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/</a>).</figcaption>
</figure>
</p>
<p>To help visualize it, imagine you have a <a href="http://flask.pocoo.org/">Python Flask</a> web server that reads and writes data to a <a href="https://redis.io/">Redis</a> back-end. Your web server is the front-end for all of your incoming traffic. You run it with three pods in Kubernetes, with 512MB of RAM and 50m of CPU. Now, suddenly, BuzzFeed writes an article about your app, Kanye West name drops the app in a TV interview, and the president of the United States retweets a link to your site.</p>
<p>Oops.</p>
<p>Now you have a serious problem on your hands, where your tiny application is overloaded. Three pods aren&rsquo;t cutting it anymore. You get woken up at 3:00am to hastily adjust the number of replicas and rapidly scale your infrastructure. While you&rsquo;re wondering <em>how this happened</em>, you also wonder… isn&rsquo;t there an easier way? Could I have avoided this panicked, pre-dawn scaling crisis? Yes, there is! At least, somewhat.</p>

<h4 id="building-to-scale">Building to scale&nbsp;<a class="hanchor" href="#building-to-scale" aria-label="Anchor link for: Building to scale">🔗</a></h4>
<p>By creating and managing your deployments with HPAs, your application grows horizontally to handle the load. As CPU utilization rises, the HPA triggers the addition of more pods automatically. For example, you could create a Horizontal Pod Autoscaler that begins scaling when average CPU utilization reaches 60%. You could also tell it to scale to a maximum of 500 pods, but never fewer than three. So then, when the Apocalypse of Viral Sharing happened to your web application, it could have grown dynamically.</p>
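<p>The scaling decision itself follows a simple rule documented by Kubernetes: the desired replica count is the current count scaled by the ratio of the current metric to the target metric, rounded up, then clamped to the configured bounds. Here is a minimal Python sketch of that control loop (the function name and example numbers are illustrative, not from this article):</p>

```python
import math

def desired_replicas(current_replicas, current_cpu, target_cpu,
                     min_replicas, max_replicas):
    """Approximate the HPA rule from the Kubernetes docs:
    desiredReplicas = ceil(currentReplicas * currentMetric / targetMetric),
    clamped to the configured min/max bounds. Illustrative sketch only."""
    desired = math.ceil(current_replicas * current_cpu / target_cpu)
    return max(min_replicas, min(max_replicas, desired))

# Three pods averaging 90% CPU against a 60% target scale up to five pods.
print(desired_replicas(3, 90, 60, 3, 500))
```

<p>Once the load subsides (say, 10% average utilization), the same calculation falls back to the three-pod floor rather than scaling to zero.</p>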
<p>If you want to dive deeper into the technical implementation of HPAs, you can read more in the <a href="https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/">Kubernetes documentation</a>.</p>

<h2 id="create-a-horizontal-pod-autoscaler">Create a Horizontal Pod Autoscaler&nbsp;<a class="hanchor" href="#create-a-horizontal-pod-autoscaler" aria-label="Anchor link for: Create a Horizontal Pod Autoscaler">🔗</a></h2>
<p>Now that you understand how a Horizontal Pod Autoscaler (HPA) is helpful, how do you create one? Like any other resource in Kubernetes, define HPAs in a YAML definition file. Here&rsquo;s a template for getting started.</p>
<pre tabindex="0"><code>---
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: my-app-hpa
  namespace: my-app-space
  labels:
    app: my-app
    tier: frontend
spec:
  scaleTargetRef:
    apiVersion: apps/v1beta1
    kind: Deployment
    name: my-app-deployment
  minReplicas: 2
  maxReplicas: 20
  targetCPUUtilizationPercentage: 60
</code></pre><p>This is the minimal spec you need to deploy an HPA. It&rsquo;s not that different from other Kubernetes resources you may have seen.</p>

<h4 id="explaining-the-configuration">Explaining the configuration&nbsp;<a class="hanchor" href="#explaining-the-configuration" aria-label="Anchor link for: Explaining the configuration">🔗</a></h4>
<p>Let&rsquo;s look at what some of the specific lines mean.</p>
<ul>
<li><code>spec.scaleTargetRef.name</code>: Name of resource to scale (e.g. name of a deployment)</li>
<li><code>spec.minReplicas</code>: Minimum number of replicas running when CPU use is minimal</li>
<li><code>spec.maxReplicas</code>: Maximum number of replicas running when CPU use peaks</li>
<li><code>spec.targetCPUUtilizationPercentage</code>: Percentage threshold when HPA begins scaling out pods</li>
</ul>
<p>When starting out, tweak these values based on how much traffic you expect to receive and what your budget allows. Load testing your application is a good way to watch the HPA do its job.</p>
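<p>When tweaking these values, it helps to sanity-check them against each other before applying the spec. The following Python sketch is not the official Kubernetes validation logic, just a loose, illustrative check of the constraints implied by the fields above:</p>

```python
def check_hpa_spec(spec):
    """Loose sanity checks for the illustrative HPA fields above.
    Not the official Kubernetes validation logic."""
    errors = []
    if spec["minReplicas"] < 1:
        errors.append("minReplicas must be at least 1")
    if spec["maxReplicas"] < spec["minReplicas"]:
        errors.append("maxReplicas must be >= minReplicas")
    if spec["targetCPUUtilizationPercentage"] <= 0:
        errors.append("targetCPUUtilizationPercentage must be positive")
    return errors

spec = {"minReplicas": 2, "maxReplicas": 20,
        "targetCPUUtilizationPercentage": 60}
print(check_hpa_spec(spec))  # an empty list means the values are consistent
```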

<h2 id="obliterating-the-guestbook">Obliterating the Guestbook&nbsp;<a class="hanchor" href="#obliterating-the-guestbook" aria-label="Anchor link for: Obliterating the Guestbook">🔗</a></h2>
<p>But this guide wouldn&rsquo;t be complete without a live demo to try. You can create one with an existing application and put it to the test. This section assumes you have a running <a href="https://kubernetes.io/docs/tutorials/stateless-application/guestbook/">Guestbook application</a> in your Kubernetes environment. As a quick refresher, the Guestbook is a three-part application:</p>
<ul>
<li>PHP web application for writing messages into a virtual guestbook</li>
<li>Primary Redis node for writing new messages from the web page</li>
<li>Replica Redis nodes for reading the data into the web page</li>
</ul>
<p>We&rsquo;ll add an HPA as a fourth part to scale the PHP web application for new traffic.</p>

<h4 id="create-the-hpa-for-guestbook">Create the HPA for Guestbook&nbsp;<a class="hanchor" href="#create-the-hpa-for-guestbook" aria-label="Anchor link for: Create the HPA for Guestbook">🔗</a></h4>
<p>Now, create a new HPA spec file for the guestbook.</p>
<pre tabindex="0"><code>---
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: guestbook-frontend
  namespace: guestbook
  labels:
    app: guestbook
    env: production
    tier: frontend
spec:
  scaleTargetRef:
    apiVersion: apps/v1beta1
    kind: Deployment
    name: guestbook-frontend
  minReplicas: 2
  maxReplicas: 10
  targetCPUUtilizationPercentage: 75
</code></pre><p>Put this into a file and create the HPA with <code>kubectl</code>.</p>
<pre tabindex="0"><code>$ kubectl apply --record -f guestbook-frontend-hpa.yaml
</code></pre><p>Now, the Horizontal Pod Autoscaler is operational and monitoring the CPU utilization of your deployment.</p>

<h4 id="load-test-with-siege">Load test with Siege&nbsp;<a class="hanchor" href="#load-test-with-siege" aria-label="Anchor link for: Load test with Siege">🔗</a></h4>
<p>To force the HPA into action, we&rsquo;ll use <a href="https://github.com/JoeDog/siege">Siege</a>, an HTTP load testing and benchmarking utility. Siege is a multi-threaded load testing tool with a few extra capabilities that make it a good option for putting pressure on a simple web app.</p>
<p>First, put several permutations of the URL into a plaintext file. In &ldquo;Internet mode&rdquo;, Siege randomly selects a URL from this file for each request. The file could look like the following&hellip;</p>
<pre tabindex="0"><code>http://my-guestbook.example.com/
http://my-guestbook.example.com/index.html
http://my-guestbook.example.com/guestbook.php
http://my-guestbook.example.com/guestbook.php?cmd=get&amp;key=messages
</code></pre><p>Once this is done, you can fire up Siege to begin load testing. In this case, to get fast results, we&rsquo;ll use 255 concurrent users for five minutes, using Internet and benchmark modes.</p>
<pre tabindex="0"><code>$ siege --verbose --benchmark --internet --concurrent 255 --time 5M --file siege-urls.txt
</code></pre><p>You should see Siege begin to rapidly send requests to your Guestbook application. Now that the action is in progress, you can observe your CPU utilization begin to climb. Watch it change in real time using <code>watch</code>.</p>
<pre tabindex="0"><code>$ watch -d -n 2 -b -c kubectl get hpa -l app=guestbook
</code></pre><p>During the five minute load test, you should see CPU usage rise and new replicas appear. Depending on your deployment&rsquo;s original requests and limits, you will see different results. If nothing seems to happen while testing, try setting the deployment&rsquo;s requests / limits to lower values.</p>
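<p>If you&rsquo;d rather track the scale-up programmatically than eyeball the <code>watch</code> output, you could parse the table that <code>kubectl get hpa</code> prints. The sketch below assumes a hypothetical row with columns NAME, REFERENCE, TARGETS, MINPODS, MAXPODS, REPLICAS, AGE; the exact layout varies by Kubernetes version, so treat this as illustrative:</p>

```python
def parse_hpa_row(row):
    """Parse one data row of hypothetical `kubectl get hpa` output.
    Assumed columns: NAME REFERENCE TARGETS MINPODS MAXPODS REPLICAS AGE."""
    name, _reference, targets, minpods, maxpods, replicas, _age = row.split()
    # TARGETS looks like "82%/75%" (current utilization / target utilization).
    current, target = (value.rstrip("%") for value in targets.split("/"))
    return {
        "name": name,
        "current_cpu": int(current),
        "target_cpu": int(target),
        "min": int(minpods),
        "max": int(maxpods),
        "replicas": int(replicas),
    }

row = "guestbook-frontend Deployment/guestbook-frontend 82%/75% 2 10 6 5m"
print(parse_hpa_row(row))
```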

<h2 id="learn-more-about-horizontal-pod-autoscaler">Learn more about Horizontal Pod Autoscaler&nbsp;<a class="hanchor" href="#learn-more-about-horizontal-pod-autoscaler" aria-label="Anchor link for: Learn more about Horizontal Pod Autoscaler">🔗</a></h2>
<p>Horizontal Pod Autoscalers are a stable resource in Kubernetes and are available for you to begin playing around with now. To learn more, read the <a href="https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/">documentation</a> or see another example in the <a href="https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/">official walkthrough</a>.</p>]]></description></item><item><title>Sign at the line: Deploying an app to CoreOS Tectonic</title><link>https://jwheel.org/blog/2017/08/deploying-app-tectonic/</link><pubDate>Fri, 04 Aug 2017 00:00:00 +0000</pubDate><guid>https://jwheel.org/blog/2017/08/deploying-app-tectonic/</guid><description><![CDATA[<p><em>This is a short series to introduce Kubernetes, what it does, and how to experiment with it on Fedora. This is a beginner-oriented series to help introduce some higher level concepts and give examples of using it on Fedora. In the first post, we covered <a href="https://fedoramagazine.org/introduction-kubernetes-fedora/">key concepts in Kubernetes</a>. The second post showed how to build a <a href="https://fedoramagazine.org/minikube-kubernetes/">single-node Kubernetes deployment</a> on your own computer. The last post and this post build on top of the Fedora Magazine series. The third post introduced how to <a href="https://jwfblog.wpenginepowered.com/2017/07/tectonic-amazon-web-services-aws/">deploy CoreOS Tectonic</a> to Amazon Web Services (AWS). This fourth post teaches how to deploy a simple web application to your Tectonic installation.</em></p>
<hr>
<p>Welcome back to the <strong>Kubernetes and Fedora</strong> series. Each week, we build on the previous articles in the series to help introduce you to using Kubernetes. This article picks up from where we left off last when you installed Tectonic to Amazon Web Services (AWS). By the end of this article, you will…</p>
<ul>
<li>Start up <a href="https://redis.io/">Redis</a> master and slave pods</li>
<li>Start a front-end pod that interacts with the Redis pods</li>
<li>Deploy a simple web app for all of your friends to leave you messages</li>
</ul>
<p>Compared to previous articles, this article will be a little more hands-on. Like before, it is based on an excellent tutorial in the <a href="https://github.com/kubernetes/kubernetes/tree/master/examples/guestbook">upstream Kubernetes documentation</a>. Let&rsquo;s get started!</p>

<h2 id="pre-requisites">Pre-requisites&nbsp;<a class="hanchor" href="#pre-requisites" aria-label="Anchor link for: Pre-requisites">🔗</a></h2>
<p>This tutorial assumes you followed the <a href="https://fedoramagazine.org/minikube-kubernetes/">Minikube how-to</a> earlier in this series and that you already <a href="https://fedoramagazine.org/tectonic-amazon-web-services-aws/">have a Tectonic installation</a> running (doesn&rsquo;t have to be on AWS). In case you&rsquo;re jumping in now, make sure you have the Kubernetes client tools installed on your Fedora system, like <code>kubectl</code>. If not, you can install them now.</p>
<pre tabindex="0"><code>$ sudo dnf install kubernetes-client
</code></pre>
<h2 id="configure-kubectl-for-tectonic">Configure <code>kubectl</code> for Tectonic&nbsp;<a class="hanchor" href="#configure-kubectl-for-tectonic" aria-label="Anchor link for: Configure kubectl for Tectonic">🔗</a></h2>
<p>To use <code>kubectl</code> with your Tectonic installation, you need to have a valid configuration in <code>~/.kube/config</code> for your cluster. This is how <code>kubectl</code> knows where and how to talk to Tectonic. To get these values, first log into the Tectonic Console you installed.</p>
<ol>
<li>Click <em>username</em> (usually <em>admin</em>) &gt; <em>My Account</em> on the bottom left.</li>
<li>Click <em>Download Configuration</em>.</li>
<li>When the <em>Set Up kubectl</em> window opens, click <em>Verify Identity</em>.</li>
<li>Enter your username and password, and click <em>Login</em>.</li>
<li>From the <em>Login Successful</em> screen, copy the provided code.</li>
<li>Switch back to Tectonic and enter the code in the field.</li>
</ol>
<p>Now you will be able to download <code>kubectl-config</code> from Tectonic. There are two ways to proceed from here.</p>

<h4 id="add-a-new-configuration">Add a new configuration&nbsp;<a class="hanchor" href="#add-a-new-configuration" aria-label="Anchor link for: Add a new configuration">🔗</a></h4>
<p>If this is your first time using <code>kubectl</code>, your configuration is likely empty. If it&rsquo;s empty or you don&rsquo;t care about overwriting an old configuration, you can run the following commands to add the configuration.</p>
<pre tabindex="0"><code>$ mkdir ~/.kube/
$ mv ~/Downloads/kubectl-config ~/.kube/config
$ chmod 600 ~/.kube/config
</code></pre>
<h4 id="append-to-an-existing-configuration">Append to an existing configuration&nbsp;<a class="hanchor" href="#append-to-an-existing-configuration" aria-label="Anchor link for: Append to an existing configuration">🔗</a></h4>
<p>If you already have a configuration, like from Minikube, you might not want to wipe it all out. In this case, you can merge the files manually together. You&rsquo;ll need to copy the <code>clusters</code>, <code>users</code>, and <code>contexts</code> from the Tectonic configuration into your existing one. The benefit of doing this is that you&rsquo;ll be able to change contexts to switch from one cluster to another.</p>
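<p>As a rough illustration of that manual merge, here is a Python sketch operating on two already-parsed kubeconfig files represented as dicts (the entry names are invented, and real kubeconfig entries carry more fields than shown):</p>

```python
def merge_kubeconfig(existing, new):
    """Append the clusters/users/contexts entries from `new` onto
    `existing`, skipping names that already exist. Illustrative only."""
    merged = dict(existing)
    for section in ("clusters", "users", "contexts"):
        seen = {entry["name"] for entry in existing.get(section, [])}
        merged[section] = existing.get(section, []) + [
            entry for entry in new.get(section, []) if entry["name"] not in seen
        ]
    return merged

minikube = {"clusters": [{"name": "minikube"}],
            "users": [{"name": "minikube"}],
            "contexts": [{"name": "minikube"}]}
tectonic = {"clusters": [{"name": "tectonic"}],
            "users": [{"name": "tectonic-admin"}],
            "contexts": [{"name": "tectonic"}]}
combined = merge_kubeconfig(minikube, tectonic)
print([context["name"] for context in combined["contexts"]])
```

<p>Once both contexts live in one file, <code>kubectl config use-context</code> lets you hop between clusters.</p>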

<h4 id="test-your-configuration">Test your configuration&nbsp;<a class="hanchor" href="#test-your-configuration" aria-label="Anchor link for: Test your configuration">🔗</a></h4>
<p>Once you finished your configuration, test to see if it works.</p>
<pre tabindex="0"><code>$ kubectl config use-context tectonic       # if you have multiple contexts in config
$ kubectl get nodes
NAME                                        STATUS    AGE
ip-10-0-0-59.us-east-2.compute.internal     Ready     1d
ip-10-0-23-239.us-east-2.compute.internal   Ready     1d
ip-10-0-44-211.us-east-2.compute.internal   Ready     1d
ip-10-0-61-218.us-east-2.compute.internal   Ready     1d
ip-10-0-67-239.us-east-2.compute.internal   Ready     1d
ip-10-0-95-51.us-east-2.compute.internal    Ready     1d
</code></pre><p>Huzzah! Now we&rsquo;re ready to get to work.</p>

<h2 id="getting-the-deployment-and-service-files">Getting the deployment and service files&nbsp;<a class="hanchor" href="#getting-the-deployment-and-service-files" aria-label="Anchor link for: Getting the deployment and service files">🔗</a></h2>
<p>All of the example files come from the official Kubernetes GitHub repo. You can find them in the <a href="https://github.com/kubernetes/kubernetes/tree/master/examples/guestbook">Guestbook example</a>. To get started, create a new directory and download all of the files.</p>
<pre tabindex="0"><code>$ wget https://raw.githubusercontent.com/kubernetes/kubernetes/master/examples/guestbook/redis-{master,slave}-{deployment,service}.yaml \
       https://raw.githubusercontent.com/kubernetes/kubernetes/master/examples/guestbook/frontend-{deployment,service}.yaml
</code></pre><p>We&rsquo;ll explain what each of these does in the following steps. Each step starts with the command to run, followed by a short explanation of what&rsquo;s actually happening.</p>

<h2 id="start-the-redis-master">Start the Redis master&nbsp;<a class="hanchor" href="#start-the-redis-master" aria-label="Anchor link for: Start the Redis master">🔗</a></h2>
<pre tabindex="0"><code>$ kubectl create -f redis-master-service.yaml
service &#34;redis-master&#34; created
$ kubectl create -f redis-master-deployment.yaml
deployment &#34;redis-master&#34; created
</code></pre>
<h4 id="define-the-deployment">Define the deployment&nbsp;<a class="hanchor" href="#define-the-deployment" aria-label="Anchor link for: Define the deployment">🔗</a></h4>
<p>The <code>redis-master-deployment.yaml</code> file downloaded earlier defines the deployment and its characteristics. In this case, we have one pod that runs the Redis master in a container. Since we&rsquo;re using a deployment, if our pod goes down, Kubernetes will <strong>spin up a new pod</strong> to replace it. Worth noting in this example: if the pod <em>did</em> go down, there would be potential for data loss until the new pod replaces the old one (since the Redis master is not highly available, i.e. there is only a single instance).</p>

<h4 id="define-the-service">Define the service&nbsp;<a class="hanchor" href="#define-the-service" aria-label="Anchor link for: Define the service">🔗</a></h4>
<p>Our service in this example is a <strong>named load balancer</strong> that <strong>proxies traffic</strong> across one or many containers. Even though we only have one Redis master pod, we still want to use a service. It gives us a deterministic, stable route to the master, even though the pod&rsquo;s IP address is dynamic (or elastic).</p>
<p>Labeling the pods is important in this case, as Kubernetes will use the pods&rsquo; labels to determine which pods receive the traffic sent to the service, and load balance it accordingly.</p>
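<p>Conceptually, the selector is just a subset match on each pod&rsquo;s labels: a pod receives the service&rsquo;s traffic only if every key/value pair in the selector appears in its labels. A toy sketch (pod names and labels are invented for illustration):</p>

```python
def select_pods(pods, selector):
    """Return names of pods whose labels contain every key/value pair in
    the selector, mimicking how a service picks its traffic targets."""
    return [pod["name"] for pod in pods
            if all(pod["labels"].get(key) == value
                   for key, value in selector.items())]

pods = [
    {"name": "redis-master-1", "labels": {"app": "redis", "role": "master"}},
    {"name": "redis-slave-1", "labels": {"app": "redis", "role": "slave"}},
    {"name": "frontend-1", "labels": {"app": "guestbook", "tier": "frontend"}},
]
print(select_pods(pods, {"app": "redis", "role": "master"}))
```

<p>A looser selector such as <code>{"app": "redis"}</code> would match both Redis pods, which is exactly how one service can load balance across many replicas.</p>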

<h4 id="create-the-service">Create the service&nbsp;<a class="hanchor" href="#create-the-service" aria-label="Anchor link for: Create the service">🔗</a></h4>
<p>The next important step is to create the service. Note that we&rsquo;re doing this <em>before</em> we create the deployment. It&rsquo;s best practice to create the service first, so that the pods later created by the deployment can discover it from the start.</p>
<p>After creating the service, you can check its status by running this command. You should see similar output.</p>
<pre tabindex="0"><code>$ kubectl get services
NAME              CLUSTER-IP       EXTERNAL-IP       PORT(S)       AGE
redis-master      10.0.76.248      &lt;none&gt;            6379/TCP      1s
</code></pre><p>Now your Redis master service is up and running! The next step will be to create the Redis master deployment.</p>
<p>If you look at the service configuration file, you&rsquo;ll notice <code>port</code> and <code>targetPort</code> are two defined variables. Once everything is up and running, these determine how traffic from the slaves to the master is routed.</p>
<ol>
<li>Redis slave connects to <code>port</code> on Redis master service</li>
<li>Traffic is forwarded from the service&rsquo;s <code>port</code> to the <code>targetPort</code> on the pod behind the service</li>
</ol>

<h4 id="create-the-deployment">Create the deployment&nbsp;<a class="hanchor" href="#create-the-deployment" aria-label="Anchor link for: Create the deployment">🔗</a></h4>
<p>Earlier, we created the Redis master deployment in the cluster. To see the deployment and its pods, run the following commands.</p>
<pre tabindex="0"><code>$ kubectl get deployments
NAME           DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
redis-master   1         1         1            1           27s
</code></pre><pre tabindex="0"><code>$ kubectl get pods
NAME                            READY     STATUS    RESTARTS   AGE
redis-master-2353460263-1ecey   1/1       Running   0          1m
...
</code></pre><p>You should see all of the pods in your cluster so far. For now, that&rsquo;s just the Redis master. Let&rsquo;s give it some friends!</p>

<h2 id="start-the-redis-slaves">Start the Redis slaves&nbsp;<a class="hanchor" href="#start-the-redis-slaves" aria-label="Anchor link for: Start the Redis slaves">🔗</a></h2>
<pre tabindex="0"><code>$ kubectl create -f redis-slave-service.yaml
service &#34;redis-slave&#34; created
$ kubectl create -f redis-slave-deployment.yaml
deployment &#34;redis-slave&#34; created
</code></pre>
<h4 id="defining-the-deployment">Defining the deployment&nbsp;<a class="hanchor" href="#defining-the-deployment" aria-label="Anchor link for: Defining the deployment">🔗</a></h4>
<p>In the configuration file, we define two replicas, unlike the master. This tells Kubernetes that at least two of these pods should always be running. If one of your pods goes down, Kubernetes automatically creates a new one to support the application. If you want, you can try killing the Docker process for one of your pods to see it happen in real time.</p>
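<p>The behavior described above is a reconciliation loop: compare the desired replica count against what is actually running, and make up the difference. A toy sketch of one reconcile step (the pod naming is invented; real replacement pods get random suffixes):</p>

```python
def reconcile(desired_count, running_pods):
    """Toy reconcile step: create replacement pods until the running
    count matches the desired replica count. Illustrative only."""
    running = list(running_pods)
    replacement = 0
    while len(running) < desired_count:
        replacement += 1
        running.append(f"redis-slave-replacement-{replacement}")
    return running

# Two replicas are desired but one pod died, so one replacement appears.
print(reconcile(2, ["redis-slave-abc12"]))
```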

<h2 id="start-the-guestbook-front-end">Start the guestbook front-end&nbsp;<a class="hanchor" href="#start-the-guestbook-front-end" aria-label="Anchor link for: Start the guestbook front-end">🔗</a></h2>
<pre tabindex="0"><code>$ kubectl create -f frontend-service.yaml
service &#34;frontend&#34; created
$ kubectl create -f frontend-deployment.yaml
deployment &#34;frontend&#34; created
</code></pre><p>The front-end is a PHP application with an AJAX interface and an Angular-based UI. When you use the form on the front-end, the application talks to the Redis master or a slave, depending on whether it&rsquo;s reading from or writing to Redis. Again, we&rsquo;re deploying the front-end with multiple replicas. In this case, there will be three pods to support the front-end.</p>

<h2 id="say-hello">Say hello!&nbsp;<a class="hanchor" href="#say-hello" aria-label="Anchor link for: Say hello!">🔗</a></h2>
<p>Once you&rsquo;ve finished deploying everything, your web app should now be accessible! To get the full domain from AWS, run this command to figure out where to look.</p>
<pre tabindex="0"><code>$ kubectl get deploy/frontend svc/frontend -o wide
NAME           CLUSTER-IP   EXTERNAL-IP                                                             PORT(S)        AGE       SELECTOR
svc/frontend   10.3.0.175   aaebd8247ef2311e6a045021d1620193-54019671.us-east-2.elb.amazonaws.com   80:31020/TCP   1m        k8s-app=guestbook,tier=frontend

NAME              DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
deploy/frontend   3         3         3            3           1m
</code></pre><p>Congratulations, we&rsquo;re all finished!</p>

<h2 id="cleaning-up">Cleaning up&nbsp;<a class="hanchor" href="#cleaning-up" aria-label="Anchor link for: Cleaning up">🔗</a></h2>
<p>Once you&rsquo;re finished or when you want to stop running the guestbook, it&rsquo;s easy to get rid of the deployments and services we created. Using labels, all the deployments and services can be deleted with one command.</p>
<pre tabindex="0"><code>$ kubectl delete deployments,services -l &#34;app in (redis, guestbook)&#34;
</code></pre><p>And now your guestbook application is offline. (It was nice while it lasted!)</p>

<h2 id="learn-more-about-kubernetes-and-tectonic">Learn more about Kubernetes and Tectonic&nbsp;<a class="hanchor" href="#learn-more-about-kubernetes-and-tectonic" aria-label="Anchor link for: Learn more about Kubernetes and Tectonic">🔗</a></h2>
<p>If you want to explore more about Kubernetes, you can read some of the earlier articles in this series. You can also read the original tutorial published by Kubernetes <a href="https://github.com/kubernetes/kubernetes/tree/master/examples/guestbook">on GitHub</a>. Additionally, the upstream documentation for <a href="https://kubernetes.io/docs/home/">Kubernetes</a> and <a href="https://coreos.com/tectonic/docs/latest/">Tectonic</a> is thorough and can help answer more advanced questions.</p>
<p>Questions, Tectonic stories, or tips for beginners? Add your comments below.</p>]]></description></item><item><title>Deploy CoreOS Tectonic to Amazon Web Services (AWS)</title><link>https://jwheel.org/blog/2017/07/tectonic-amazon-web-services-aws/</link><pubDate>Fri, 28 Jul 2017 00:00:00 +0000</pubDate><guid>https://jwheel.org/blog/2017/07/tectonic-amazon-web-services-aws/</guid><description><![CDATA[<p><em>This is a short series to introduce Kubernetes, what it does, and how to experiment with it on Fedora. This is a beginner-oriented series to help introduce some higher level concepts and give examples of using it on Fedora. In the first post, we covered <a href="https://fedoramagazine.org/introduction-kubernetes-fedora/">key concepts in Kubernetes</a>. The second post showed how to build a <a href="https://fedoramagazine.org/minikube-kubernetes/">single-node Kubernetes deployment</a> on your own computer. This post builds on top of the Fedora Magazine series by showing how to deploy CoreOS Tectonic to Amazon Web Services (AWS).</em></p>
<hr>
<p>Welcome back to the <strong>Kubernetes and Fedora</strong> series. Each week, we build on the previous articles in the series to help introduce you to using Kubernetes. This article takes off from running Kubernetes on your own hardware and moves us one step closer to the cloud. By the end of this article, you will…</p>
<ul>
<li>Understand what CoreOS Tectonic is</li>
<li>Set up Amazon Web Services (AWS) for Tectonic</li>
<li>Deploy Tectonic to AWS</li>
</ul>
<p>This article is also based off of the excellent tutorial provided in the <a href="https://coreos.com/tectonic/docs/latest/tutorials/creating-aws.html">CoreOS documentation</a>. Let&rsquo;s get started!</p>

<h2 id="what-is-tectonic">What is Tectonic?&nbsp;<a class="hanchor" href="#what-is-tectonic" aria-label="Anchor link for: What is Tectonic?">🔗</a></h2>
<p>In the <a href="https://fedoramagazine.org/introduction-kubernetes-fedora/">first article</a>, we explained some of the key concepts of Kubernetes and why it&rsquo;s useful. Kubernetes automates deploying and setting up your infrastructure across the three layers (users, masters, nodes). If you&rsquo;re working on your own at a small scale, Kubernetes itself can be plenty to meet your needs. However, there is still a decent amount of human involvement in managing the different pieces of Kubernetes. If you&rsquo;re working with multiple people in a team and across different environments, vanilla Kubernetes can be a lot to manage. For an enterprise environment, there are still some unmet needs. This is where Tectonic steps in.</p>
<p>Tectonic is a commercial product offered by <a href="https://coreos.com/">CoreOS</a>, the providers of <a href="https://coreos.com/os/docs/latest">Container Linux</a> and the original developers of <code>etcd</code>, now one of the core components of Kubernetes. Tectonic takes all of the open source components and pre-packages them. The self-proclaimed goal of doing this is to let anyone build a Google-style infrastructure into a cloud or on-premise environment. The outcome for the user is that it&rsquo;s easy to install a Kubernetes infrastructure across many different environments. In addition to simplifying the installation of the various components of a Kubernetes stack, Tectonic also provides a management console, a container registry for building and sharing containers, additional tools for deployment, and a few other nice features.</p>
<p>If we think about Kubernetes as a cake like we did before with three layers, Tectonic is like the box you set it in. Now, you can take your cake anywhere, move it around, and stack it with other cakes-in-a-box. All of your cakes are in their own boxes and you don&rsquo;t have to worry about them accidentally being damaged. If you&rsquo;re still a little confused, this diagram might help make more sense of it.</p>
<p>
<figure>
  <img src="https://cdn.fedoramagazine.org/wp-content/uploads/2017/06/platform-features.png" alt="Understanding where CoreOS Tectonic fits into the Kubernetes puzzle" loading="lazy">
  <figcaption>Understanding where Tectonic fits into the Kubernetes puzzle. From coreos.com/tectonic (<a href="https://coreos.com/tectonic/" class="bare">https://coreos.com/tectonic/</a>)</figcaption>
</figure>
</p>
<p>Fortunately, Tectonic has a free license that lets you use it for ten nodes. In this example, we&rsquo;ll register, get a free license, and deploy it into AWS.</p>
<p>(<em>Note</em>: If you want to revert anything we do in this example, there&rsquo;s an easy way to dismantle it across AWS and bring your bill to $0.00.)</p>

<h2 id="pre-requisites">Pre-requisites&nbsp;<a class="hanchor" href="#pre-requisites" aria-label="Anchor link for: Pre-requisites">🔗</a></h2>
<p>In order to successfully run this guide, there&rsquo;s a few things you&rsquo;ll need first.</p>
<ul>
<li><strong>Amazon Web Services (AWS) account</strong> (<em>free</em>)
<ul>
<li>Register <a href="https://aws.amazon.com">here</a></li>
</ul>
</li>
<li><strong>CoreOS Tectonic account and license</strong> (<em>free</em>)
<ul>
<li>Register <a href="https://account.coreos.com/">here</a></li>
</ul>
</li>
<li><strong>A root-level or sub-domain</strong> (<em>e.g. example.com or k8s.example.com</em>)
<ul>
<li>If you look around, you can probably find some for less than USD$1 a year if you need one</li>
</ul>
</li>
<li><strong>Curiosity</strong>!</li>
</ul>

<h2 id="setting-up-dns-with-route-53">Setting up DNS with Route 53&nbsp;<a class="hanchor" href="#setting-up-dns-with-route-53" aria-label="Anchor link for: Setting up DNS with Route 53">🔗</a></h2>
<p>The first thing we&rsquo;ll do is set up our domain with Route 53 in AWS. Route 53 can do a lot of things, like DNS management, traffic management, availability monitoring, domain registration, and more. However, we&rsquo;re only going to be using it for DNS management. Tectonic will use this to automatically provision DNS records for internal and external use.</p>

<h4 id="add-your-domain">Add your domain&nbsp;<a class="hanchor" href="#add-your-domain" aria-label="Anchor link for: Add your domain">🔗</a></h4>
<p>
<figure>
  <img src="https://cdn.fedoramagazine.org/wp-content/uploads/2017/06/tectonic-add-domain-route-53-283x300.png" alt="Adding a new domain to AWS Route 53 for Tectonic" loading="lazy">
  <figcaption>Adding a new domain to AWS Route 53 for Tectonic</figcaption>
</figure>
</p>
<p>To add your domain to Route 53, follow these steps from AWS.</p>
<ol>
<li>From <em>Services</em>, select <em>Networking &amp; Content Delivery</em> &gt; <em>Route 53</em>.</li>
<li>Select <em>Hosted zones</em> from the left pane and click <em>Create Hosted Zone</em>.</li>
<li>Enter your domain or sub-domain, add a comment if you want, and choose a Public Zone for the type.</li>
</ol>
<p>Once you&rsquo;ve done this, you can go ahead and click &ldquo;<em>Create</em>&rdquo;.</p>

<h4 id="change-the-nameservers">Change the nameservers&nbsp;<a class="hanchor" href="#change-the-nameservers" aria-label="Anchor link for: Change the nameservers">🔗</a></h4>
<p>After adding the hosted zone to Route 53, you&rsquo;ll need to change the nameservers for your domain via the domain registrar (whoever you bought the domain from). Usually this setting is easy to find, but it varies among registrars. If you&rsquo;re having a hard time figuring out how to do this, try searching for a how-to or contacting your registrar&rsquo;s support.</p>
<p>Once the hosted zone is created, Route 53 lists four nameservers for it. Copy and paste these from Route 53 into your registrar&rsquo;s nameserver settings. Also note that if you&rsquo;re using a subdomain, the instructions are a little different. You can read how to handle that in the <a href="https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/creating-migrating.html">Route 53 documentation</a>.</p>
<p>The nameservers could take minutes or hours to update, depending on how lucky you are. If you&rsquo;re impatient and want to check, open up a terminal and run this command. If you see the AWS nameservers in the output, then your domain has propagated and is now usable by Route 53.</p>
<pre tabindex="0"><code>dig -t ns &lt;example.com&gt;
</code></pre>
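<p>To avoid re-running that check by hand, you can wrap it in a small helper. This is just a sketch: the <code>awsdns</code> substring match relies on the naming convention Route 53 uses for its nameserver hostnames.</p>

```shell
# Helper for the propagation check: succeeds once the domain's NS records
# point at Route 53 (whose server hostnames contain "awsdns").
ns_propagated() {
    # $1 = the domain to check
    dig +short -t ns "$1" | grep -q awsdns
}

# Example usage (a network call, so shown commented out here):
#   until ns_propagated example.com; do sleep 30; done
#   echo "Route 53 is now authoritative."
```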
<h2 id="configuring-ec2-with-ssh-key-pair">Configuring EC2 with SSH key pair&nbsp;<a class="hanchor" href="#configuring-ec2-with-ssh-key-pair" aria-label="Anchor link for: Configuring EC2 with SSH key pair">🔗</a></h2>
<p>This guide assumes you already have an SSH key pair created on your system. If you don&rsquo;t have one generated, you can read how to generate one <a href="https://www.cyberciti.biz/faq/how-to-set-up-ssh-keys-on-linux-unix/">here</a>.</p>
<p>The next step for us is to add an SSH key pair to EC2, the compute service offered by AWS. We&rsquo;ll import an existing key on your system into EC2.</p>
<ol>
<li>From AWS, go to <em>Services</em> &gt; <em>Compute</em> &gt; <em>EC2</em>.</li>
<li>Confirm that you are in the <strong>correct EC2 region</strong> by checking the location next to your name in the menu bar.</li>
<li>Under <em>Network &amp; Security</em>, click <em>Key Pairs</em>.</li>
<li>Click <em>Import Key Pair</em>.</li>
<li>Either upload your public key file (<code>~/.ssh/id_rsa.pub</code>) or paste it into the text field. Don&rsquo;t forget to give it a name.</li>
</ol>
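<p>If you ended up here without a key pair, the generation step boils down to something like this. It&rsquo;s a sketch: the file path and the empty passphrase are illustrative defaults, not recommendations.</p>

```shell
# Generate an RSA key pair for EC2 import if one doesn't exist yet.
KEYFILE="${KEYFILE:-$HOME/.ssh/id_rsa}"
mkdir -p "$(dirname "$KEYFILE")"
if [ ! -f "$KEYFILE" ]; then
    # -N "" means no passphrase; use a real passphrase for anything serious
    ssh-keygen -t rsa -b 4096 -N "" -f "$KEYFILE"
fi
# This is the text to paste into the Import Key Pair dialog:
cat "${KEYFILE}.pub"
```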
<p>And that&rsquo;s all you need to do!</p>

<h2 id="assigning-aws-user-privileges">Assigning AWS user privileges&nbsp;<a class="hanchor" href="#assigning-aws-user-privileges" aria-label="Anchor link for: Assigning AWS user privileges">🔗</a></h2>
<p>Tectonic does the magic of setting up AWS for you, so you don&rsquo;t have to manually add and create the services from the web interface. To enable this, you need to add a user account that Tectonic can use to do all of the provisioning it needs, which means creating a new Access ID and Secret key pair from AWS.</p>
<ol>
<li>Select <em>Services</em> &gt; <em>Security, Identity &amp; Compliance</em> &gt; <em>IAM</em>.</li>
<li>From the left hand pane, click <em>Users</em>, then click <em>Add user</em>.</li>
<li>Set the user details:
<ol>
<li><em>User name</em> can be anything you like (I used <code>tectonic-mydomain.com</code>)</li>
<li><em>Access type</em> only needs to be <em>Programmatic access</em></li>
</ol>
</li>
<li>For permissions, click <em>Add user to group</em> and create a new group for your user.</li>
<li>When creating a new group, attach only the policies needed by Tectonic to operate correctly:
<ol>
<li><code>AmazonEC2FullAccess</code></li>
<li><code>IAMFullAccess</code></li>
<li><code>AmazonS3FullAccess</code></li>
<li><code>AmazonVPCFullAccess</code></li>
<li><code>AmazonRoute53FullAccess</code></li>
</ol>
</li>
<li>Finish creating the user. You&rsquo;ll then see the <em>Access key ID</em> and <em>Secret access key</em>. Hold onto these; you&rsquo;ll need them later. You won&rsquo;t get to see the secret key again!</li>
</ol>
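<p>If you prefer the command line, the same setup can be scripted with the AWS CLI. The sketch below assumes you have the <code>aws</code> command installed and configured; the group and user names are illustrative, and the commands are echoed rather than executed, since they need valid credentials to run.</p>

```shell
# Dry-run sketch of the IAM setup above using the AWS CLI.
GROUP="tectonic-installer"            # illustrative group name
IAM_USER="tectonic-mydomain.com"      # illustrative user name
POLICIES="AmazonEC2FullAccess IAMFullAccess AmazonS3FullAccess AmazonVPCFullAccess AmazonRoute53FullAccess"

run() { echo "+ $*"; }                # drop the echo to actually execute

run aws iam create-group --group-name "$GROUP"
for p in $POLICIES; do
    run aws iam attach-group-policy --group-name "$GROUP" \
        --policy-arn "arn:aws:iam::aws:policy/$p"
done
run aws iam create-user --user-name "$IAM_USER"
run aws iam add-user-to-group --group-name "$GROUP" --user-name "$IAM_USER"
run aws iam create-access-key --user-name "$IAM_USER"
```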
<p>Now we&rsquo;re ready to install Tectonic! Let&rsquo;s grab your credentials next.</p>

<h2 id="download-tectonic-credentials">Download Tectonic credentials&nbsp;<a class="hanchor" href="#download-tectonic-credentials" aria-label="Anchor link for: Download Tectonic credentials">🔗</a></h2>
<p>Jump back over to the <a href="https://account.coreos.com/">CoreOS accounts page</a>. When you&rsquo;re logged in, you&rsquo;ll see the <em>Account Assets</em> area. Download the CoreOS license file and pull secret. Later on in the installer, you&rsquo;ll need to insert these to finish the installation.</p>

<h2 id="running-the-installer">Running the installer&nbsp;<a class="hanchor" href="#running-the-installer" aria-label="Anchor link for: Running the installer">🔗</a></h2>
<p>Now things get interesting! We finally get to install and deploy Tectonic into AWS. The installer takes the form of a graphical installer in your web browser. To use the installer, you need to download the binary and run it. If you&rsquo;re curious, you can find the installer source code <a href="https://github.com/coreos/tectonic-installer">on GitHub</a>.</p>

<h4 id="download-and-run-installer">Download and run installer&nbsp;<a class="hanchor" href="#download-and-run-installer" aria-label="Anchor link for: Download and run installer">🔗</a></h4>
<p>First, open up a new terminal window and navigate to a directory you want to download the installer to. Even though you likely won&rsquo;t need to run the installer again, you will want to hang on to this if you ever want to easily dismantle everything in AWS later.</p>
<pre tabindex="0"><code>curl -O https://releases.tectonic.com/tectonic-1.6.4-tectonic.1.tar.gz
</code></pre><p>Next, extract the tarball and navigate into the directory.</p>
<pre tabindex="0"><code>tar -xzvf tectonic-1.6.4-tectonic.1.tar.gz
cd tectonic/tectonic-installer
</code></pre><p>Now execute the installer binary. After running this, a new browser window will open that features the graphical installer.</p>
<pre tabindex="0"><code>./linux/installer
</code></pre><p>
<figure>
  <img src="https://cdn.fedoramagazine.org/wp-content/uploads/2017/06/tectonic-installer-aws.png" alt="Now we&rsquo;re ready to deploy Tectonic into AWS!" loading="lazy">
  <figcaption>Now we’re ready to deploy Tectonic into AWS!</figcaption>
</figure>
</p>

<h4 id="running-the-installer-1">Running the installer&nbsp;<a class="hanchor" href="#running-the-installer-1" aria-label="Anchor link for: Running the installer">🔗</a></h4>
<p>The installer is thorough and assumes safe defaults for most of the steps. Be sure to have your AWS Access and Secret ID keys on hand. You should be able to run through the installer without issue. If you&rsquo;re confused about what any of the values mean or want to make custom changes, you can read more in the <a href="https://coreos.com/tectonic/docs/latest/tutorials/installing-tectonic.html">upstream documentation</a>.</p>
<p>Once you&rsquo;re finished, congrats! You&rsquo;ve successfully installed Tectonic!</p>

<h2 id="check-out-your-tectonic-install">Check out your Tectonic install&nbsp;<a class="hanchor" href="#check-out-your-tectonic-install" aria-label="Anchor link for: Check out your Tectonic install">🔗</a></h2>
<p>Once you finish the installation successfully, your Tectonic installation will be accessible within AWS. You can navigate to the domain you specified during the install to find it. Unless you added a certificate authority and certificates during the install, your browser will probably complain about an invalid SSL certificate, but you can safely ignore the warning. It might also take a few minutes before the URL is accessible, so if you were looking for a coffee or tea break, now would be a good time!</p>
<p>Once you&rsquo;re logged in, you should see something like this.</p>
<p>
<figure>
  <img src="https://cdn.fedoramagazine.org/wp-content/uploads/2017/06/tectonic-status-page.png" alt="Looking at a freshly installed Tectonic status page on AWS" loading="lazy">
  <figcaption>Looking at a freshly installed Tectonic status page on AWS</figcaption>
</figure>
</p>
<p>
<figure>
  <img src="https://cdn.fedoramagazine.org/wp-content/uploads/2017/06/prometheus-monitoring.png" alt="A more advanced use case of what Tectonic can do with monitoring" loading="lazy">
  <figcaption>A more advanced use case of what Tectonic can do with monitoring</figcaption>
</figure>
</p>

<h2 id="blow-it-all-away">Blow it all away!&nbsp;<a class="hanchor" href="#blow-it-all-away" aria-label="Anchor link for: Blow it all away!">🔗</a></h2>
<p>If you&rsquo;re like me, you might be frustrated by guides that tell you how to install things but not how to take it all apart. Fortunately, this guide not only tells you how to do that, but the Tectonic installer also makes it super easy to do. If you&rsquo;re sure that you&rsquo;re done with Tectonic and don&rsquo;t want any leftovers to remain in AWS, this is the best way to do it, instead of deleting everything manually from the AWS Console.</p>
<p>Every installation has a time-stamped folder in the <code>tectonic</code> directory we used earlier. First, you need to navigate into the specific folder for the cluster you installed. It&rsquo;s important to be inside of this directory first.</p>
<pre tabindex="0"><code>cd tectonic/tectonic-installer/linux/clusters/&lt;CLUSTERNAME&gt;
</code></pre><p><code>&lt;CLUSTERNAME&gt;</code> will be the time-stamped directory. Once you&rsquo;re in the folder, run this command to trigger the uninstaller. After running this, you&rsquo;ll see the installer slowly dismantle everything and delete any leftovers in AWS.</p>
<pre tabindex="0"><code>../../terraform destroy
</code></pre><p>Once it finishes, you should see an output message confirming how many AWS resources were destroyed. And now you&rsquo;re back to where you started.</p>
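<p>Since <code>terraform destroy</code> is unforgiving, a small guard can save you from running it in the wrong place. This is a sketch; <code>terraform.tfstate</code> is Terraform&rsquo;s default state file name, which the installer&rsquo;s cluster folder should contain.</p>

```shell
# Refuse to destroy unless the current directory holds Terraform state.
safe_destroy() {
    if [ ! -f terraform.tfstate ]; then
        echo "No terraform.tfstate here -- wrong directory?" >&2
        return 1
    fi
    ../../terraform destroy "$@"
}
```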

<h2 id="learn-more-about-tectonic">Learn more about Tectonic&nbsp;<a class="hanchor" href="#learn-more-about-tectonic" aria-label="Anchor link for: Learn more about Tectonic">🔗</a></h2>
<p>If you thought this was exciting and want to learn more, there is no shortage of resources for you to read. You can learn more about Tectonic from the <a href="https://coreos.com/tectonic/">CoreOS website</a> or the <a href="https://tectonic.com/blog/announcing-tectonic/">original release announcement</a>. You can also dig into the installer&rsquo;s source code <a href="https://github.com/coreos/tectonic-installer">on GitHub</a>. If you&rsquo;re still trying to wrap your head around Tectonic, there&rsquo;s a good write-up <a href="https://virtualizationreview.com/articles/2017/04/04/coreos-tectonic-to-shake-up-kubernetes.aspx">on virtualizationreview.com</a>.</p>
<p>Next week, we&rsquo;ll install a simple guestbook application to our Tectonic installation to see how it all works and what you can do with it. Stay tuned!</p>
<p>Questions, Tectonic stories, or tips for beginners? Add your comments below.</p>]]></description></item><item><title>Clustered computing on Fedora with Minikube</title><link>https://jwheel.org/blog/2017/07/minikube-kubernetes/</link><pubDate>Fri, 07 Jul 2017 00:00:00 +0000</pubDate><guid>https://jwheel.org/blog/2017/07/minikube-kubernetes/</guid><description><![CDATA[<p><em><strong>This article was originally published <a href="https://fedoramagazine.org/minikube-kubernetes/">on the Fedora Magazine</a>.</strong></em></p>
<hr>
<p><em>This is a short series to introduce Kubernetes, what it does, and how to experiment with it on Fedora. This is a beginner-oriented series to help introduce some higher level concepts and give examples of using it on Fedora. In the first post, we covered <a href="https://fedoramagazine.org/introduction-kubernetes-fedora/">key concepts in Kubernetes</a>. This second post shows you how to build a single-node Kubernetes deployment on your own computer.</em></p>
<hr>
<p>Once you have a better understanding of what the key concepts and terminology in Kubernetes are, getting started is easier. Like many programming tutorials, this tutorial shows you how to build a &ldquo;Hello World&rdquo; application and deploy it locally on your computer using Kubernetes. This is a simple tutorial because there aren&rsquo;t multiple nodes to work with. Instead, the only device we&rsquo;re using is a single node (a.k.a. your computer). By the end, you&rsquo;ll see how to deploy a Node.js application into a Kubernetes pod and manage it with a deployment on Fedora.</p>
<p>This tutorial isn&rsquo;t made from scratch. You can find the <a href="https://kubernetes.io/docs/tutorials/stateless-application/hello-minikube/">original tutorial</a> in the official Kubernetes documentation. This article adds some changes that will let you do the same thing on your own Fedora computer.</p>

<h2 id="introducing-minikube">Introducing Minikube&nbsp;<a class="hanchor" href="#introducing-minikube" aria-label="Anchor link for: Introducing Minikube">🔗</a></h2>
<p><a href="https://kubernetes.io/docs/getting-started-guides/minikube/">Minikube</a> is an official tool developed by the Kubernetes team to help make testing it out easier. It lets you run a single-node Kubernetes cluster through a virtual machine on your own hardware. Beyond using it to play around with or experiment for the first time, it&rsquo;s also useful as a testing tool if you&rsquo;re working with Kubernetes daily. It does support many of the features you&rsquo;d want in a production Kubernetes environment, like DNS, NodePorts, and container run-times.</p>

<h2 id="installation">Installation&nbsp;<a class="hanchor" href="#installation" aria-label="Anchor link for: Installation">🔗</a></h2>
<p>This tutorial requires virtual machine and container software. There are many options you can use. Minikube supports <code>virtualbox</code>, <code>vmwarefusion</code>, <code>kvm</code>, and <code>xhyve</code> drivers for virtualization. However, this guide will use KVM since it&rsquo;s already packaged and available in Fedora. We&rsquo;ll also use Node.js for building the application and Docker for putting it in a container.</p>

<h4 id="pre-requirements">Pre-requirements&nbsp;<a class="hanchor" href="#pre-requirements" aria-label="Anchor link for: Pre-requirements">🔗</a></h4>
<p>You can install the prerequisites with this command.</p>
<pre tabindex="0"><code>$ sudo dnf install kubernetes libvirt-daemon-kvm kvm nodejs docker
</code></pre><p>After installing these packages, you&rsquo;ll need to add your user to the right group to let you use KVM. The following commands will add your user to the group and then update your current session for the group change to take effect.</p>
<pre tabindex="0"><code>$ sudo usermod -a -G libvirt $(whoami)
$ newgrp libvirt
</code></pre>
<h4 id="docker-kvm-drivers">Docker KVM drivers&nbsp;<a class="hanchor" href="#docker-kvm-drivers" aria-label="Anchor link for: Docker KVM drivers">🔗</a></h4>
<p>If using KVM, you will also need to install the KVM drivers to work with Docker. You need to add <a href="https://github.com/docker/machine/releases">Docker Machine</a> and the <a href="https://github.com/dhiltgen/docker-machine-kvm/releases/">Docker Machine KVM Driver</a> to your local path. You can check their pages on GitHub for the latest versions, or you can run the following commands for specific versions. These were tested on a Fedora 25 installation.</p>

<h5 id="docker-machine">Docker Machine&nbsp;<a class="hanchor" href="#docker-machine" aria-label="Anchor link for: Docker Machine">🔗</a></h5>
<pre tabindex="0"><code>$ curl -L https://github.com/docker/machine/releases/download/v0.12.0/docker-machine-`uname -s`-`uname -m` &gt;/tmp/docker-machine
$ chmod +x /tmp/docker-machine
$ sudo cp /tmp/docker-machine /usr/local/bin/docker-machine
</code></pre>
<h5 id="docker-machine-kvm-driver">Docker Machine KVM Driver&nbsp;<a class="hanchor" href="#docker-machine-kvm-driver" aria-label="Anchor link for: Docker Machine KVM Driver">🔗</a></h5>
<p>This installs the CentOS 7 driver, but it also works with Fedora.</p>
<pre tabindex="0"><code>$ curl -L https://github.com/dhiltgen/docker-machine-kvm/releases/download/v0.10.0/docker-machine-driver-kvm-centos7 &gt;/tmp/docker-machine-driver-kvm
$ chmod +x /tmp/docker-machine-driver-kvm
$ sudo cp /tmp/docker-machine-driver-kvm /usr/local/bin/docker-machine-driver-kvm
</code></pre>
<h4 id="installing-minikube">Installing Minikube&nbsp;<a class="hanchor" href="#installing-minikube" aria-label="Anchor link for: Installing Minikube">🔗</a></h4>
<p>The final step for installation is getting Minikube itself. Currently, there is no Fedora package available, and the official documentation recommends grabbing the binary and moving it to your local path. To download the binary, make it executable, and move it to your path, run the following.</p>
<pre tabindex="0"><code>$ curl -Lo minikube https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64
$ chmod +x minikube
$ sudo mv minikube /usr/local/bin/
</code></pre><p>Now you&rsquo;re ready to build your cluster.</p>

<h2 id="create-the-minikube-cluster">Create the Minikube cluster&nbsp;<a class="hanchor" href="#create-the-minikube-cluster" aria-label="Anchor link for: Create the Minikube cluster">🔗</a></h2>
<p>Now that you have everything installed and in the right place, you can create your Minikube cluster and get started. To start Minikube, run this command.</p>
<pre tabindex="0"><code>$ minikube start --vm-driver=kvm
</code></pre><p>Next, you&rsquo;ll need to set the context. Context is how <code>kubectl</code> (the command-line interface for Kubernetes) knows what it&rsquo;s dealing with. To set the context for Minikube, run this command.</p>
<pre tabindex="0"><code>$ kubectl config use-context minikube
</code></pre><p>As a check, make sure that <code>kubectl</code> can communicate with your cluster by running this command.</p>
<pre tabindex="0"><code>$ kubectl cluster-info
Kubernetes master is running at https://192.168.99.100:8443
To further debug and diagnose cluster problems, use &#39;kubectl cluster-info dump&#39;.
</code></pre>
<h2 id="build-your-application">Build your application&nbsp;<a class="hanchor" href="#build-your-application" aria-label="Anchor link for: Build your application">🔗</a></h2>
<p>Now that Kubernetes is ready, we need to have an application to deploy in it. This article uses the same Node.js application as the official tutorial in the Kubernetes documentation. Create a folder called <code>hellonode</code> and create a new file called <code>server.js</code> with your favorite text editor.</p>
<pre tabindex="0"><code>var http = require(&#39;http&#39;);

var handleRequest = function(request, response) {
 console.log(&#39;Received request for URL: &#39; + request.url);
 response.writeHead(200);
 response.end(&#39;Hello world!&#39;);
};
var www = http.createServer(handleRequest);
www.listen(8080);
</code></pre><p>Now try running your application.</p>
<pre tabindex="0"><code>$ node server.js
</code></pre><p>While it&rsquo;s running, you should be able to access it on <a href="http://localhost:8080/">localhost:8080</a>. Once you verify it&rsquo;s working, hit <code>Ctrl+C</code> to kill the process.</p>

<h2 id="create-docker-container">Create Docker container&nbsp;<a class="hanchor" href="#create-docker-container" aria-label="Anchor link for: Create Docker container">🔗</a></h2>
<p>Now you have an application to deploy! The next step is to get it packaged into a Docker container (that you&rsquo;ll pass to Kubernetes later). You&rsquo;ll need to create a <code>Dockerfile</code> in the same folder as your <code>server.js</code> file. This guide uses an existing Node.js Docker image. It exposes your application on port 8080, copies <code>server.js</code> to the image, and runs it as a server. Your <code>Dockerfile</code> should look like this.</p>
<pre tabindex="0"><code>FROM node:6.9.2
EXPOSE 8080
COPY server.js .
CMD node server.js
</code></pre><p>If you&rsquo;re familiar with Docker, you&rsquo;re likely used to pushing your image to a registry. In this case, since we&rsquo;re deploying it to Minikube, you can build it using the same Docker host as the Minikube virtual machine. For this to happen, you&rsquo;ll need to use the Minikube Docker daemon.</p>
<pre tabindex="0"><code>$ eval $(minikube docker-env)
</code></pre><p>Now you can build your Docker image with the Minikube Docker daemon.</p>
<pre tabindex="0"><code>$ docker build -t hello-node:v1 .
</code></pre><p>Huzzah! Now you have an image Minikube can run.</p>

<h2 id="create-minikube-deployment">Create Minikube deployment&nbsp;<a class="hanchor" href="#create-minikube-deployment" aria-label="Anchor link for: Create Minikube deployment">🔗</a></h2>
<p>If you remember from the <a href="https://fedoramagazine.org/introduction-kubernetes-fedora/">first part</a> of this series, deployments watch your application&rsquo;s health and reschedule it if it dies. Deployments are the supported way of creating and scaling pods. <code>kubectl run</code> creates a deployment to manage a pod. We&rsquo;ll create one that uses the <code>hello-node</code> Docker image we just built.</p>
<pre tabindex="0"><code>$ kubectl run hello-node --image=hello-node:v1 --port=8080
</code></pre><p>Next, check that the deployment was created successfully.</p>
<pre tabindex="0"><code>$ kubectl get deployments
NAME         DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
hello-node   1         1         1            1           30s
</code></pre><p>Creating the deployment also creates the pod where the application is running. You can view the pod with this command.</p>
<pre tabindex="0"><code>$ kubectl get pods
NAME                          READY     STATUS    RESTARTS   AGE
hello-node-1644695913-k2314   1/1       Running   0          3
</code></pre><p>Finally, let&rsquo;s see what the configuration looks like. If you&rsquo;re familiar with Ansible, the configuration files for Kubernetes also use easy-to-read YAML. You can see the full configuration with this command.</p>
<pre tabindex="0"><code>$ kubectl config view
</code></pre><p><code>kubectl</code> does many things. To read more about what you can do with it, you can read the <a href="https://kubernetes.io/docs/user-guide/kubectl-overview/">documentation</a>.</p>

<h2 id="create-service">Create service&nbsp;<a class="hanchor" href="#create-service" aria-label="Anchor link for: Create service">🔗</a></h2>
<p>Right now, the pod is only accessible inside of the Kubernetes cluster with its internal IP address. To see it in a web browser, you&rsquo;ll need to expose it as a service. To do that, run this command.</p>
<pre tabindex="0"><code>$ kubectl expose deployment hello-node --type=LoadBalancer
</code></pre><p>The type was specified as a <code>LoadBalancer</code> because Kubernetes will expose the IP outside of the cluster. If you were running a load balancer in a cloud environment, this is how you&rsquo;d provision an external IP address. However, in this case, it exposes your application as a service in Minikube. And now, finally, you get to see your application. Running this command will open a new browser window with your application.</p>
<pre tabindex="0"><code>$ minikube service hello-node
</code></pre><p>
<figure>
  <img src="https://cdn.fedoramagazine.org/wp-content/uploads/2017/06/minikube-hello-world-browser-e1497995645454.png" alt="Minikube: Exposing Hello Minikube application in browser" loading="lazy">
</figure>
</p>
<p>Congratulations, you deployed your first containerized application via Kubernetes! But now, what if you need to update our small Hello World application?</p>

<h2 id="how-do-we-push-changes">How do we push changes?&nbsp;<a class="hanchor" href="#how-do-we-push-changes" aria-label="Anchor link for: How do we push changes?">🔗</a></h2>
<p>The time has come when you&rsquo;re ready to make an update and push it. Edit your <code>server.js</code> file and change &ldquo;Hello world!&rdquo; to &ldquo;Hello again, world!&rdquo;</p>
<pre tabindex="0"><code>response.end(&#39;Hello again, world!&#39;);
</code></pre><p>And we&rsquo;ll build another Docker image. Note the version bump.</p>
<pre tabindex="0"><code>$ docker build -t hello-node:v2 .
</code></pre><p>Next, you need to give Kubernetes the new image to deploy.</p>
<pre tabindex="0"><code>$ kubectl set image deployment/hello-node hello-node=hello-node:v2
</code></pre><p>And now, your update is pushed! Like before, run this command to have it open in a new browser window.</p>
<pre tabindex="0"><code>$ minikube service hello-node
</code></pre><p>If your application doesn&rsquo;t come up any different, double-check that you updated the right image. To troubleshoot, you can get a shell into your pod by running the following command. You can get the pod name from the command you ran earlier (<code>kubectl get pods</code>). Once you&rsquo;re in the shell, check whether the <code>server.js</code> file shows your changes.</p>
<pre tabindex="0"><code>$ kubectl exec -it &lt;pod-name&gt; bash
</code></pre>
<h2 id="cleaning-up">Cleaning up&nbsp;<a class="hanchor" href="#cleaning-up" aria-label="Anchor link for: Cleaning up">🔗</a></h2>
<p>Now that we&rsquo;re done, we can clean up the environment. To clear up the resources in your cluster, run these two commands.</p>
<pre tabindex="0"><code>$ kubectl delete service hello-node
$ kubectl delete deployment hello-node
</code></pre><p>If you&rsquo;re done playing with Minikube, you can also stop it.</p>
<pre tabindex="0"><code>$ minikube stop
</code></pre><p>If you&rsquo;re done using Minikube for a while, you can also unset the Minikube Docker daemon environment that we set earlier in this guide.</p>
<pre tabindex="0"><code>$ eval $(minikube docker-env -u)
</code></pre>
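<p>If you&rsquo;re curious what those <code>eval</code> lines actually do: <code>minikube docker-env</code> just prints shell exports that point your local Docker client at the VM&rsquo;s daemon. Roughly like this &ndash; the IP and cert path below are illustrative, and the real values come from your Minikube VM.</p>

```shell
# Approximately what `minikube docker-env` emits (values illustrative):
export DOCKER_TLS_VERIFY="1"
export DOCKER_HOST="tcp://192.168.99.100:2376"
export DOCKER_CERT_PATH="$HOME/.minikube/certs"
# ...and `minikube docker-env -u` emits the matching unset lines:
#   unset DOCKER_TLS_VERIFY DOCKER_HOST DOCKER_CERT_PATH
```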
<h2 id="learn-more-about-kubernetes">Learn more about Kubernetes&nbsp;<a class="hanchor" href="#learn-more-about-kubernetes" aria-label="Anchor link for: Learn more about Kubernetes">🔗</a></h2>
<p>You can find the <a href="https://kubernetes.io/docs/tutorials/stateless-application/hello-minikube/">original tutorial</a> in the Kubernetes documentation. If you want to read more, there&rsquo;s plenty of great information online. The <a href="https://kubernetes.io/docs/home/">documentation</a> provided by Kubernetes is thorough and comprehensive.</p>
<p>Questions, Minikube stories, or tips for beginners? Add your comments below.</p>]]></description></item><item><title>Introduction to Kubernetes with Fedora</title><link>https://jwheel.org/blog/2017/07/introduction-kubernetes-fedora/</link><pubDate>Mon, 03 Jul 2017 00:00:00 +0000</pubDate><guid>https://jwheel.org/blog/2017/07/introduction-kubernetes-fedora/</guid><description><![CDATA[<p><em><strong>This article was originally published <a href="https://fedoramagazine.org/introduction-kubernetes-fedora/">on the Fedora Magazine</a>.</strong></em></p>
<hr>
<p><em>This article is part of a short series that introduces Kubernetes. This beginner-oriented series covers some higher level concepts and gives examples of using Kubernetes on Fedora.</em></p>
<hr>
<p>The information technology world changes daily, and the demands of building scalable infrastructure become more important. Containers aren&rsquo;t anything new these days, and have various uses and implementations. But what about building scalable, containerized applications? By themselves, Docker and other tools don&rsquo;t quite cut it when it comes to building the infrastructure to support containers. How do you deploy, scale, and manage containerized applications in your infrastructure? This is where tools such as Kubernetes come in. <a href="https://kubernetes.io/">Kubernetes</a> is an open source system that automates deployment, scaling, and management of containerized applications. Kubernetes was originally developed by Google before being donated to the <a href="https://en.wikipedia.org/wiki/Linux_Foundation#Cloud_Native_Computing_Foundation">Cloud Native Computing Foundation</a>, a project of the <a href="https://www.linuxfoundation.org/">Linux Foundation</a>. This article gives a quick primer on what Kubernetes is and what some of the buzzwords really mean.</p>

<h2 id="what-is-kubernetes">What is Kubernetes?&nbsp;<a class="hanchor" href="#what-is-kubernetes" aria-label="Anchor link for: What is Kubernetes?">🔗</a></h2>
<p>Kubernetes simplifies and automates the process of deploying containerized applications at scale. Just like Ansible <a href="https://fedoramagazine.org/using-ansible-provision-vagrant-boxes/">orchestrates software</a>, Kubernetes orchestrates the infrastructure that supports the software. There are various &ldquo;layers of the cake&rdquo; that make Kubernetes a strong solution for building resilient infrastructure. It also helps you build systems that can grow at scale. If your application faces increasing demands, such as higher traffic, Kubernetes helps grow your environment to meet them. This is one reason why Kubernetes is helpful for building long-term solutions to complex problems (even if your problem isn&rsquo;t complex&hellip; yet).</p>
<p>
<figure>
  <img src="https://cdn.fedoramagazine.org/wp-content/uploads/2017/06/kubernetes-high-level-design.jpg" alt="Kubernetes: The high level design" loading="lazy">
  <figcaption>Kubernetes: The high level design. Daniel Smith, Robert Bailey, Kit Merker (<a href="https://www.slideshare.net/RohitJnagal/kubernetes-intro-public-kubernetes-meetup-4212015" class="bare">https://www.slideshare.net/RohitJnagal/kubernetes-intro-public-kubernetes-meetup-4212015</a>).</figcaption>
</figure>
</p>
<p>At a high level overview, imagine three different layers.</p>
<ul>
<li><strong>Users</strong>: People who deploy or create containerized applications to run in your infrastructure</li>
<li><strong>Master(s)</strong>: Manages and schedules your software across various other machines, for example in a clustered computing environment</li>
<li><strong>Nodes</strong>: Various machines that run and support the application; each node runs an agent called the <em>kubelet</em></li>
</ul>
<p>These three layers are orchestrated and automated by Kubernetes. One of the key pieces of the master (not included in the visual) is <strong>etcd</strong>. etcd is a lightweight and distributed key/value store that holds configuration data. Each node can access this data in etcd through an HTTP/JSON API. The components of communication between master and node, such as etcd, are explained <a href="https://kubernetes.io/docs/concepts/architecture/master-node-communication/">in the official documentation</a>.</p>
<p>Another important detail not shown in the diagram is that you might have many masters. In a high-availability (HA) set-up, you can keep your infrastructure resilient by having multiple masters in case one happens to go down.</p>

<h2 id="terminology">Terminology&nbsp;<a class="hanchor" href="#terminology" aria-label="Anchor link for: Terminology">🔗</a></h2>
<p>It&rsquo;s important to understand the concepts of Kubernetes before you start to play around with it. There are many core concepts in Kubernetes, such as services, volumes, secrets, daemon sets, and jobs. However, this article explains four that are helpful for the next exercise of building a mini Kubernetes cluster. The four concepts are <em>pods</em>, <em>labels</em>, <em>replica sets</em>, and <em>deployments</em>.</p>

<h4 id="pods"><a href="https://kubernetes.io/docs/concepts/workloads/pods/pod/">Pods</a>&nbsp;<a class="hanchor" href="#pods" aria-label="Anchor link for: Pods">🔗</a></h4>
<p>If you imagine Kubernetes as a Lego® castle, pods are the smallest blocks you can pick out &ndash; the smallest unit you can deploy. The containers of an application fit into a pod, which can hold a single container or as many as needed. Containers in a pod are unique in that they share Linux namespaces and aren&rsquo;t isolated from each other. In a world before containers, this would be similar to running an application on the same host machine.</p>
<p>Because they share the same namespace, all the containers in a pod:</p>
<ul>
<li>Share an IP address</li>
<li>Share port space</li>
<li>Find each other over <em>localhost</em></li>
<li>Communicate over IPC namespace</li>
<li>Have access to shared volumes</li>
</ul>
<p>But what&rsquo;s the point of having pods? The main purpose of pods is to have groups of &ldquo;helping&rdquo; containers on the same namespace (co-located) and integrated together (co-managed) along with the main application container. Some examples might be logging or monitoring tools that check the health of your application, or backup tools that act when certain data changes.</p>
<p>Containers in a single pod are also always scheduled onto the same node together. However, Kubernetes doesn&rsquo;t automatically reschedule a bare pod to a new node if its node dies (more on this later).</p>
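<p>To make this concrete, here is a minimal sketch of a pod definition with a main web container and a &ldquo;helping&rdquo; logging container that share a volume. The names and images are illustrative, not prescriptive:</p>
<pre tabindex="0"><code>apiVersion: v1
kind: Pod
metadata:
  name: web-with-logger
spec:
  containers:
  - name: web                  # main application container
    image: nginx
    ports:
    - containerPort: 80
    volumeMounts:
    - name: logs
      mountPath: /var/log/nginx
  - name: logger               # co-located "helping" container
    image: busybox
    command: ["sh", "-c", "tail -F /var/log/nginx/access.log"]
    volumeMounts:
    - name: logs
      mountPath: /var/log/nginx
  volumes:
  - name: logs                 # shared volume both containers can access
    emptyDir: {}
</code></pre><p>Both containers land on the same node, share an IP address, and see the same <em>emptyDir</em> volume, so the logger can tail the web server&rsquo;s log file directly.</p>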

<h4 id="labels"><a href="https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/">Labels</a>&nbsp;<a class="hanchor" href="#labels" aria-label="Anchor link for: Labels">🔗</a></h4>
<p>Labels are a simple but important concept in Kubernetes. Labels are key/value pairs attached to <em>objects</em> in Kubernetes, like pods. They let you specify unique attributes of objects that actually mean something to humans. You can attach them when you create an object, and modify or add them later. Labels help you organize and select different sets of objects to interact with when performing actions inside of Kubernetes. For example, you can identify:</p>
<ul>
<li><strong>Software releases</strong>: Alpha, beta, stable</li>
<li><strong>Environments</strong>: Development, production</li>
<li><strong>Tiers</strong>: Front-end, back-end</li>
</ul>
<p>Labels are as flexible as you need them to be, and this list isn&rsquo;t comprehensive. Be creative when thinking of how to apply them.</p>
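<p>In practice, labels are just key/value pairs under an object&rsquo;s metadata. As a quick sketch (the keys and values here are only examples):</p>
<pre tabindex="0"><code>metadata:
  labels:
    environment: production
    tier: front-end
    release: stable
</code></pre><p>You can then select matching objects with a label selector, for example <em>kubectl get pods -l environment=production,tier=front-end</em>.</p>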

<h4 id="replica-sets"><a href="https://kubernetes.io/docs/concepts/workloads/controllers/replicaset/">Replica sets</a>&nbsp;<a class="hanchor" href="#replica-sets" aria-label="Anchor link for: Replica sets">🔗</a></h4>
<p>Replica sets are where some of the magic begins to happen with automatic scheduling and rescheduling. Replica sets ensure that a given number of pod instances (called <em>replicas</em>) is running at any moment. If your web application needs to constantly have four pods in the front-end and two in the back-end, replica sets are your insurance that those numbers are always maintained. This also makes Kubernetes great for scaling: if you need to scale up or down, you just change the number of replicas.</p>
<p>When reading about replica sets, you might also see <em>replication controllers</em>. The two are somewhat interchangeable, but replication controllers are older, semi-deprecated, and less powerful than replica sets. The main difference is that replica sets support more advanced, set-based selectors &ndash; which goes back to labels. Ideally, you won&rsquo;t have to worry about this much today.</p>
<p>Even though replica sets are where the scheduling magic happens to help make your infrastructure resilient, you won&rsquo;t actually interact with them much. Replica sets are managed by deployments, so it&rsquo;s unusual to directly create or manipulate replica sets. And guess what&rsquo;s next?</p>
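<p>For reference, a replica set specification looks like the sketch below. It uses a label selector to decide which pods it owns and a pod template to create replacements; the names and image are illustrative:</p>
<pre tabindex="0"><code>apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: front-end
spec:
  replicas: 4                # always keep four pods running
  selector:
    matchLabels:
      tier: front-end        # this replica set owns pods with this label
  template:                  # pod template used to create new replicas
    metadata:
      labels:
        tier: front-end
    spec:
      containers:
      - name: web
        image: nginx
</code></pre><p>Scaling is then just a matter of changing the replica count, for example with <em>kubectl scale replicaset front-end --replicas=6</em>.</p>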

<h4 id="deployments"><a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/">Deployments</a>&nbsp;<a class="hanchor" href="#deployments" aria-label="Anchor link for: Deployments">🔗</a></h4>
<p>Deployments are another important concept inside of Kubernetes. Deployments are a declarative way to deploy and manage software. If you&rsquo;re familiar with Ansible, you can compare deployments to the playbooks of Ansible. If you&rsquo;re building your infrastructure out, you want to make sure it is easily reproducible without much manual work. Deployments are the way to do this.</p>
<p>Deployments offer functionality such as revision history, so it&rsquo;s always easy to roll back changes if something doesn&rsquo;t work out. They also manage any updates you push out to your application: if an update isn&rsquo;t working, the deployment stops rolling it out and reverts to the last working state. Deployments follow the mathematical property of <a href="https://en.wikipedia.org/wiki/Idempotence">idempotence</a>: you define your spec once, and applying it many times gives the same result as applying it once.</p>
<p>Deployments also touch on the difference between imperative and declarative ways to build infrastructure, but this explanation is a quick, fly-by overview. You can find more <a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/">detailed information</a> in the official documentation.</p>
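<p>A deployment specification looks almost identical to a replica set, because the deployment creates and manages the replica set for you. A minimal sketch, again with illustrative names:</p>
<pre tabindex="0"><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: front-end
spec:
  replicas: 4
  selector:
    matchLabels:
      tier: front-end
  template:
    metadata:
      labels:
        tier: front-end
    spec:
      containers:
      - name: web
        image: nginx:1.25    # change this tag to roll out an update
</code></pre><p>You can apply the same file as many times as you like with <em>kubectl apply -f deployment.yaml</em> and always end up in the same state, and <em>kubectl rollout undo deployment/front-end</em> steps back to the previous revision if an update misbehaves.</p>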

<h2 id="installing-on-fedora">Installing on Fedora&nbsp;<a class="hanchor" href="#installing-on-fedora" aria-label="Anchor link for: Installing on Fedora">🔗</a></h2>
<p>If you want to start playing with Kubernetes, install it and some useful tools from the Fedora repositories.</p>
<pre tabindex="0"><code>sudo dnf install kubernetes
</code></pre><p>This command provides the bare minimum needed to get started. You can also install other cool tools like <em>cockpit-kubernetes</em> (integration with <a href="http://cockpit-project.org/">Cockpit</a>) and <em>kubernetes-ansible</em> (provisioning Kubernetes with <a href="https://www.ansible.com/">Ansible</a> playbooks and roles).</p>

<h2 id="learn-more-about-kubernetes">Learn more about Kubernetes&nbsp;<a class="hanchor" href="#learn-more-about-kubernetes" aria-label="Anchor link for: Learn more about Kubernetes">🔗</a></h2>
<p>If you want to read more about Kubernetes or want to explore the concepts more, there&rsquo;s plenty of great information online. The <a href="https://kubernetes.io/docs/home/">documentation</a> provided by Kubernetes is fantastic, but there are also other helpful guides from <a href="https://www.digitalocean.com/community/tutorials/an-introduction-to-kubernetes">DigitalOcean</a> and <a href="https://blog.giantswarm.io/understanding-basic-kubernetes-concepts-i-introduction-to-pods-labels-replicas/">Giant Swarm</a>. The next article in the series will explore building a mini Kubernetes cluster on your own computer to see how it really works.</p>
<p>Questions, Kubernetes stories, or tips for beginners? Add your comments below.</p>]]></description></item></channel></rss>