<?xml version="1.0" encoding="utf-8"?>
<feed xmlns="http://www.w3.org/2005/Atom">
	<title>Wondering Chimp</title>
	<subtitle>A feed of the latest posts from my blog.</subtitle>
	<link href="https://wonderingchimp.com/feed.xml" rel="self"/>
	<link href="https://wonderingchimp.com/"/>
	<updated>2026-04-04T00:00:00Z</updated>
	<id>https://wonderingchimp.com</id>
	<author>
		<name>Marjan Bugarinovic</name>
		<email>wondering.chimp@tuta.com</email>
	</author>
	
		
		<entry>
			<title>Reducing resource footprint with TimescaleDB compression</title>
			<link href="https://wonderingchimp.com/posts/reducing-resource-footprint-with-timescaledb-compression/"/>
			<updated>2026-04-04T00:00:00Z</updated>
			<id>https://wonderingchimp.com/posts/reducing-resource-footprint-with-timescaledb-compression/</id>
			<content type="html"><![CDATA[
				<p>When we speak about digital sustainability, we often talk in terms of decreasing resource utilisation. By resources we mainly mean memory and CPU, as the main drivers of electricity usage. However, one aspect that often gets neglected is data and storage usage.</p>
<p>Writing to and reading from storage also uses a certain amount of electricity, which flies under the radar, hidden behind the all-time favourite memory and nowadays hot-topic CPU/GPU resources. One of the reasons is that the amount of electricity used is not as massive as with CPUs and GPUs. Even though <a href="https://github.com/timescale/timescaledb/issues/6186#issuecomment-1761211781">not massive</a>, it's still there, and important!</p>
<p>Having this in mind led me to a realisation - OMG, we did this storage optimisation on the project I'm currently working on, by introducing compression on our database tables! And it was quite a big decrease, which I'll describe later on.</p>
<p>So, why don't I write down what I've learned on the matter? Yeah, why not. It will also be a good way to revisit my knowledge of the basics of TimescaleDB.</p>
<p>This is how we did it - the <em>compressed</em> version. Pun intended.</p>
<h2>Setting the stage</h2>
<p>As a database solution on our project, we use the tried and tested PostgreSQL. Now, without going too much into detail about the data stored there, running PostgreSQL for the kind of data we have (mostly metrics) is not optimal. Not wanting to do too much rework, data restructuring, schema refinement, and a move to a more appropriate solution, we decided - let's try TimescaleDB on top of PostgreSQL.</p>
<p>This is where simplicity threw its arms in the air and went out of the room.</p>
<p>All jokes aside, we did reduce the performance hit quite a bit by using TimescaleDB, even though it's another layer of complexity on top of the existing one(s). But more on that later.</p>
<h2>What is TimescaleDB?</h2>
<p>As usual, let's start from the beginning.</p>
<p>In short - TimescaleDB is an open-source PostgreSQL extension that adds time-series functionality to a PostgreSQL database. It is based on PostgreSQL and has full SQL support.</p>
<p>If you want your tables in PostgreSQL to be automatically partitioned, cleaned up, compressed, or aggregated - TimescaleDB is the way to go!</p>
<p>On our project, the first reason we started investigating TimescaleDB was the complexity of our data retention scripts on PostgreSQL; then we realised it's much more than that!</p>
<h2>What are hypertables?</h2>
<p>The main feature of TimescaleDB is hypertables. These are PostgreSQL tables that automatically partition time-series data by time and, optionally, by other dimensions. When running a query against a hypertable, TimescaleDB identifies the correct partition (in TimescaleDB terminology - a <em>chunk</em>) and runs the query against it, instead of against the entire table.</p>
<p>Therefore, hypertables improve performance and enable better data management - no need for long and trying PostgreSQL scripts!</p>
<p>The real beauty, besides the performance improvements, lies in the compression of these tables. When properly done, it can bring down the table size quite significantly, as well as improve query speed.</p>
<p>If you want to learn more about hypertables, follow <a href="https://github.com/timescale/timescaledb/issues/6186#issuecomment-1761211781">this link</a>.</p>
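<p>As a rough sketch of what this looks like in practice - a hypothetical metrics table turned into a hypertable, partitioned by its time column (the names here are made up, not from our project):</p>
<pre><code class="language-sql">-- a regular PostgreSQL table for time-series data
CREATE TABLE metrics (
    time      TIMESTAMPTZ NOT NULL,
    device_id INT         NOT NULL,
    value     DOUBLE PRECISION
);

-- turn it into a hypertable, partitioned into chunks by the time column
SELECT create_hypertable('metrics', 'time');
</code></pre>
<p>From that point on, chunking happens behind the scenes - you keep querying <code>metrics</code> as a normal table.</p>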
<h2>Simple things about compression</h2>
<p>Enabling compression is quite simple and it consists of two steps:</p>
<ol>
<li>Enable compression and decide on the segment-by column for the table.</li>
</ol>
<pre><code class="language-sql">ALTER TABLE example SET ( timescaledb.compress, timescaledb.compress_segmentby = 'device_id' );
</code></pre>
<ol start="2">
<li>Add a compression policy.</li>
</ol>
<pre><code class="language-sql">SELECT add_compression_policy('example', INTERVAL '7 days');
</code></pre>
<h2>Not so simple things about the compression</h2>
<p>What is not as simple as running the two SQL queries against the DB is deciding on the ordering and segmenting of the data. That is - if you're not that familiar with your data/table structure.</p>
<p>Here is what happens when you enable compression:</p>
<blockquote>
<p>When you enable compression, the data in your hypertable is compressed chunk by chunk. When the chunk is compressed, multiple records are grouped into a single row. The columns of this row hold an array-like structure that stores all the data. This means that instead of using lots of rows to store the data, it stores the same data in a single row. Because a single row takes up less disk space than many rows, it decreases the amount of disk space required, and can also speed up your queries.</p>
</blockquote>
<p>The ordering and segmenting are important here because they have a great impact on the compression ratio and on the performance of queries against the hypertables.</p>
<blockquote>
<p>Segmenting the compressed data should be based on the way you access the data. Basically, you want to segment your data in such a way that you can make it easier for your queries to fetch the right data at the right time. That is to say, your queries should dictate how you segment the data so they can be optimised and yield even better query performance.</p>
</blockquote>
<p>A great document that explains this in detail, and which I have referenced a lot here, can be found <a href="https://www.tigerdata.com/docs/use-timescale/latest/compression/about-compression">here</a>.</p>
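<p>For completeness, here is a sketch of how ordering is configured alongside segmenting - this assumes the table has a <code>time</code> column, which your schema may or may not match:</p>
<pre><code class="language-sql">ALTER TABLE example SET (
    timescaledb.compress,
    timescaledb.compress_segmentby = 'device_id',
    timescaledb.compress_orderby   = 'time DESC'
);
</code></pre>
<p>With this, rows inside each compressed batch are grouped by <code>device_id</code> and ordered by <code>time</code>, newest first - which tends to suit queries that filter on a device and a recent time range.</p>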
<h2>Our experience</h2>
<p>We started having a look into compression when the DBAs reached out to us and told us - these tables are too big for the current DB instance, let's explore and use compression on them.</p>
<p>Our first thought was - wouldn't that add additional performance toll on the DB host? Maybe. Let's test it out and see.</p>
<p>Our second thought - okay, let's see which tables we need to compress. We selected four of the biggest ones and started our analysis.</p>
<p>For reference purposes, I'll use an example table, which is a hypertable with 90 days of retention and more than 1TB in size. I'll call this table, creatively enough, <code>table-0</code>.</p>
<p>This table had the following columns:</p>
<ul>
<li>id (int4),</li>
<li>sub_id (int4),</li>
<li>value (float4),</li>
<li>name (varchar(255)),</li>
<li>sub_name (varchar(255)),</li>
<li>status (int2) and</li>
<li>type (int2).</li>
</ul>
<h3>Setting the proper segmentation</h3>
<p>In the beginning we struggled to set the proper segmentation. We noticed a significant difference in the size-to-performance ratio between segmenting on more granular but rarely queried columns, and segmenting on less granular but frequently accessed ones.</p>
<p>The first option gave more compression, but degraded performance. The second option gave less compression, but better performance.</p>
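<p>To compare options, we looked at the before/after table sizes. TimescaleDB ships a helper function for exactly that - a sketch of how to check it, using the <code>example</code> table from above:</p>
<pre><code class="language-sql">-- before/after sizes of the compressed chunks of a hypertable
SELECT * FROM hypertable_compression_stats('example');
</code></pre>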
<h3>Wrong column type</h3>
<p>Another problem we experienced was that certain queries were failing with the error below:</p>
<pre><code>SQL Error [XX000]: ERROR: a variable with non-vectorizable type character varying is marked as vectorized
  Detail: Assertion 'is_vector_type(var-&gt;vartype)' failed.
</code></pre>
<p>Although not explicitly stated in the documentation, going through the GitHub issues we discovered that segmenting by a varchar column is not recommended, because of the way TimescaleDB implements compression - the column type should be changed to text instead.</p>
<blockquote>
<p>in some places its bad practice for postgres too. You should use text instead.</p>
</blockquote>
<p>We found info about this <a href="https://github.com/timescale/timescaledb/pull/8693">here</a>, <a href="https://github.com/timescale/timescaledb/issues/1755">here</a>, and <a href="https://github.com/timescale/timescaledb/issues/6186#issuecomment-1761211781">here</a>.</p>
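<p>The fix itself is a one-liner - a sketch assuming the segment-by column is <code>name</code>; note that on an already-compressed hypertable you may need to decompress the chunks first:</p>
<pre><code class="language-sql">ALTER TABLE example ALTER COLUMN name TYPE text;
</code></pre>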
<h2>What are the results we achieved?</h2>
<p>The biggest improvement was that the underlying hypertable shrank from more than 1TB to ~200GB! We were amazed!</p>
<p>The second improvement was a performance boost we didn't think was possible. But it was, because of the way TimescaleDB implements compression. We noticed this improvement in query speed as well as in resource consumption, specifically CPU.</p>
<p>In the future I might prepare a case study showing actual, or close to actual, results of implementing compression on big hypertables. At this point, I'm not able to share more.</p>
<h2>How does all this tie into sustainability?</h2>
<p>It was in the title, no? Jokes aside, making systems more efficient with fewer resources, and less storage, is the goal of any sustainable improvement and design. With this change, we freed up quite a big amount of storage and more than a couple of CPU cores. I would count that as a win!</p>
<p>In case you're interested in finding out more about compression, check out the links I've shared above, or <a href="https://www.tigerdata.com/docs/use-timescale/latest/compression/compression-design">this one</a>, which covers how to design tables for compression.</p>
<p>See you in the next article!</p>

			]]></content>
		</entry>
	
		
		<entry>
			<title>A lot has changed...</title>
			<link href="https://wonderingchimp.com/posts/a-lot-has-changed/"/>
			<updated>2026-03-07T00:00:00Z</updated>
			<id>https://wonderingchimp.com/posts/a-lot-has-changed/</id>
			<content type="html"><![CDATA[
				<p>The first article I wrote and published under <a href="https://wonderingchimp.com">Wondering Chimp</a> was more than 4 years ago. Wow! I could not have imagined that it would last this long...</p>
<p>From the beginning, I started quite simple, some would say lazy, having in mind my technical background - I wrote down some thoughts and decided to post them online. I chose to host everything on <a href="https://ghost.org">Ghost.org</a> - a site that helps you easily publish your thoughts, ideas, and so on, without the hassle of building it up yourself. What I like about them is that they were, and still are, independent from the corporate claws and really true to themselves and the world. From then on, I learned a lot. I deepened existing passions and discovered others. In a nutshell, I think I did well!</p>
<p>As you may or may not know, one of the things that I started exploring and discovered I really care about is digital sustainability. I worked on my website behind the scenes to make it as minimal as I could, having in mind I was using a web hosting platform. Then I thought - why don't I build a website myself? I am not some professional blogger, journalist, writer. I do this for myself, and a handful of others, why not play with it a bit more?</p>
<p>This thought couldn't have come at a more convenient time in my life - we recently got a new family member, so I had plenty of time to dedicate to side-projects. Of course not. Nevertheless, I started tinkering around with things.</p>
<p>Since the whole website is quite simple - text, images, nothing fancy - I opted to explore static websites. I searched around and found the best solution for me - <a href="https://www.11ty.dev/">Eleventy</a>. A JavaScript static site generator that is minimal, performant, and works with Markdown. What more could I wish for?</p>
<p>Then I went hopping around as one not knowledgeable enough would - I could try this, or maybe this, or even this? But wait, what's this? I don't know this! And so on and so forth... Then I decided to scrap all these questions and focus on the basics. Luckily, I found a great starting (and ending) point to help me with that - the <a href="https://learn-eleventy.pages.dev/">Learn Eleventy Course</a> by Andy Bell. I started slowly, building the demo website until it started looking like something I could publish. The whole demo is a full-blown website, and my use case is just one side of that, so I built everything, then removed the things I didn't need and customised others. And I really like the result! I hope you will too!</p>
<h2>What has changed?</h2>
<p>A couple of things have changed and I'll list them below:</p>
<ul>
<li>moved hosting from Ghost.org to a local Raspberry Pi (yes, an actual rpi!),</li>
<li>built the website with Eleventy,</li>
<li>moved subscribers to <a href="https://buttondown.com/">buttondown</a>,</li>
<li>learned a bit of front-end development,</li>
<li>decided not to post sound recordings of me reading the posts anymore (for now),</li>
<li>set up a Cloudflare tunnel towards my rpi (super simple, btw!),</li>
<li>fixed, updated, adjusted all my previous posts to proper markdown format,</li>
<li>and published this website!</li>
</ul>
<h2>What has stayed the same?</h2>
<p><em>Me writing the articles with minimal AI usage.</em> I'm not against it, that's not it, I just don't want to use it for writing. So far I have used it for minor polishing of images' ALT text, but that's all. I read somewhere something like the following, paraphrased:</p>
<blockquote>
<p>If you used AI to write the article and spent next to no amount of time doing it, why would I spend my time reading it?</p>
</blockquote>
<p>So I decided that this should be my motto in writing here. If I say something interesting, funny, deep, shallow, incorrect, at least I'll know it's me that said that!</p>
<p><em>Me sending e-mails with the articles to subscribers.</em> I moved subscribed people from Ghost to Buttondown and I'll be sending e-mails with articles in their inboxes. <em>No spam, ever!</em> Unless you count my articles as spam...</p>
<p><em>Comments on the posts</em> - you can still comment, you just need to be subscribed and reply to the e-mail I send you. Or, if you like RSS - feel free to subscribe to the <a href="https://wonderingchimp.com/feed.xml">RSS Feed</a> of the website.</p>
<h2>Plans ahead?</h2>
<p>Having my own personal website, built and deployed by me, means that I can do with it whatever I like and can think of. Some of the things I'll implement in the following months are:</p>
<ul>
<li><a href="https://www.thegreenwebfoundation.org/tools/grid-aware-websites/">Grid awareness</a> to the website - one of the main reasons I decided to build a website on my own in the first place.</li>
<li>Search functionality - useful thing every website should have.</li>
<li>Measure and report electricity usage, directly.</li>
<li>Measure and report resource usage.</li>
<li>And many more things I'll probably come up with in future.</li>
</ul>
<p>A side note - one could say that moving a website from a green hosting provider to my apartment in Belgrade, Serbia couldn't be counted as a <em>sustainable</em> move, but the site is more lightweight and doesn't have any components beyond Markdown, HTML, CSS, and some JS. Aaand, I can always unplug it!</p>
<h2>Summary</h2>
<p>Thanks for staying with me this long! In the end, I only have to say that I'll continue to write here! In some of the future articles, I'll write up more about the whole setup, how I build, deploy, and run everything. Spoiler alert - it's not on Kubernetes!</p>
<p>Feel free to share this article or any other article here if you find it useful!</p>
<p>See you in the next one!</p>

			]]></content>
		</entry>
	
		
		<entry>
			<title>One look at failure after unlocking the Pixel 2 bootloader</title>
			<link href="https://wonderingchimp.com/posts/one-look-at-failure-after-unlocking-the-pixel-2-bootloader/"/>
			<updated>2026-02-09T00:00:00Z</updated>
			<id>https://wonderingchimp.com/posts/one-look-at-failure-after-unlocking-the-pixel-2-bootloader/</id>
			<content type="html"><![CDATA[
				<p>Albert Einstein famously said:</p>
<blockquote>
<p>The definition of insanity is doing the same thing over and over again and expecting a different result.</p>
</blockquote>
<p>I can relate to that, in recent days. Sort of. Would I consider myself insane? Sort of. I don't know. I'm not the expert who can tell.</p>
<p>Anyhow, I had an old Pixel 2 lying around my place, gathering dust, and I wanted to make use of it. My plan was to put Linux on it and spin up a web server there. Aaand, move hosting of my blog there, if all went well. But it didn't. Let me tell you why, and how.</p>
<p>The idea came to me from my friend, who sent me the <a href="https://far.computer/">following link</a>.</p>
<p>He told me - this might be interesting to you, give it a try!</p>
<p>And it certainly was. Or is.</p>
<p>In a nutshell - the person hosting that website put it on an old repurposed phone - a Fairphone 2 - and wrote comprehensive instructions about it. So I thought - let's follow the same instructions, put postmarketOS on a Pixel 2, and try to host my website there.</p>
<p>The overview of my current hosting is quite simple - I am a subscriber to ghost.org. I am quite satisfied with how much I pay and what I get in exchange. But, since this blog is about digital sustainability, I thought it might be cool to spin up a simple web server and host my blog on a repurposed phone. And since I don't have many readers, migrating to the phone seemed feasible.</p>
<p>Long story short for all of those too busy to read through - Google decided to block me in my quest. So I decided to drop this, for now.</p>
<h2>How it all started?</h2>
<p>A moment of <em>I can also do it!</em> - the link above and a bit of free time during my paternity leave ignited the fire. I found my old Pixel 2, cleaned some dust off it, and started to learn about installing Linux on an Android phone.</p>
<p>First I needed to unlock the bootloader. This sounded about right. I reset the phone and enabled <em>Developer options</em>, with the <em>OEM Unlocking</em> toggle turned on. In the meantime, on my laptop I installed <code>fastboot</code> and <code>adb</code> to interact with the phone and the bootloader.</p>
<p><code>fastboot</code> is an Android tool that allows you to modify the bootloader, and <code>adb</code> is the tool used for debugging.</p>
<p>So far, it was nice and easy.</p>
<p>Then I moved on to unlocking the bootloader. It should have been quite simple - restart the phone into the bootloader and run the following commands from the laptop, keeping the device connected:</p>
<pre><code class="language-shell"># check to see if the device is seen by fastboot
fastboot devices

# unlock the bootloader
fastboot flashing unlock
</code></pre>
<p>Then I got this error message:</p>
<pre><code class="language-shell">FAILED (remote: 'Flashing Unlock is not allowed')  
fastboot: error: Command failed
</code></pre>
<p>Hm, strange. Let's try it a couple more times, why not?</p>
<p>Same result - <code>Command failed</code>.</p>
<p>Then I started reading the forums online and found out the following, paraphrased:</p>
<ul>
<li>Google needs to activate the phone while the <code>OEM Unlocking</code> toggle is enabled before you can unlock the bootloader.</li>
<li>It can take anywhere from 24h to 72h for Google to activate the phone - keep the device connected.</li>
<li>And so on, and so forth.</li>
</ul>
<p>Then I noticed that the OEM Unlocking toggle was now disabled, so I reset the phone to factory settings. And decided to wait 24h.</p>
<p>After 24h - same result - <code>Command failed</code>.</p>
<p>A smart person would think - okay, this is obviously not possible, I'll drop it. Me - hold my beer! Let's try it once again.</p>
<p>Same workflow as before - factory reset, wait for 72h instead of 24h. Result - same!</p>
<p>Back to the drawing board.</p>
<p>I did some additional research and found these neat instructions in <a href="https://source.android.com/docs/setup/test/running#booting-into-fastboot-mode">Android docs</a>.</p>
<p>There I found out that you can <em>force</em> a device check-in with Google by dialling <code>*#*#CHECKIN#*#*</code> (<code>*#*#2432546#*#*</code>) on your phone. So I thought - I'm getting closer.</p>
<p>So, again - factory reset, this time with a manual check-in of the device, and then run the <code>fastboot</code> command from above.</p>
<p>Result - <code>Command failed</code>.</p>
<p>Oh, c'mon! Now that I've completed everything, it failed again?! Something is not right with the device, definitely.</p>
<p>I searched for the device serial number and model name, and all the information said the bootloader could be unlocked. Why, oh why, was I seeing the opposite?</p>
<p>Then I stumbled upon a support thread on Pixel Phone Help, dating back to 2019, with the title - <em>Google Refurbished Pixel 2 is Always Defective (bootloader unlock)</em>. Going through the thread, I found the following:</p>
<blockquote>
<p>I have been informed by high-level Google support staff that all refurbished Pixel 2 phones coming from the Google refurbishment center are effectively the same as the Pixel 2 Verizon edition. (Carrier is still unlocked but the <a href="https://support.google.com/pixelphone/thread/14920605">bootloader is locked</a>.)</p>
</blockquote>
<p>So, maybe my phone was the refurbished one? It's possible. I bought it more than 10 years ago in some store in Belgrade. I bought it as original, but, who knows with the market here. Even if it looks legit, it is often a fine-grained shade of grey on the border with black.</p>
<p>This is when I decided to <em>call it a day</em>. In the end, I gave the phone to people who might have more luck in making use of it (<a href="https://pionir.org/">Pionir free school</a>). And I'm going to wait for some other old device that I can refurbish to host my blog.</p>
<h2>Things I've learned?</h2>
<p>First and foremost - never trust Google. Even though they have a plethora of services and great documentation, oftentimes they have something - some setting as hidden as this one - that can block you in your work.</p>
<p>And people are not stupid for saying:</p>
<blockquote>
<p>When some product or service is free, more often than not, you are the product.</p>
</blockquote>
<p>Second thing I've learned, after hours and hours spent on the same problem - persistence is a good quality, but too much of it leads to stubbornness, which can prevent you from progressing or adapting. Sometimes it's good to let things go after trying a couple of times and getting the same results.</p>
<p>Last but not least - even though my plan has failed, and I wasn't able to host my blog on some old smartphone, I'm not backing out! I'll find some other phone and try to do it there. Hopefully I'll migrate all the things written here to it, and have it <em>simpler</em> with just me and my thoughts.</p>

			]]></content>
		</entry>
	
		
		<entry>
			<title>How to scale pods based on Carbon Intensity?</title>
			<link href="https://wonderingchimp.com/posts/how-to-scale-pods-based-on-carbon-intensity/"/>
			<updated>2026-01-05T00:00:00Z</updated>
			<id>https://wonderingchimp.com/posts/how-to-scale-pods-based-on-carbon-intensity/</id>
			<content type="html"><![CDATA[
				<p>Like every other spam e-mail of the service you've forgotten you signed up to, I want to start this article with:</p>
<p><em>Hi there, it's been a while!</em></p>
<p>And it certainly is. Or was. I don't know. English is not my mother tongue. But that's not important.</p>
<p>What's important is the article you're reading. The idea for it came to me quite organically, sort of like a <em>duh</em> moment. I gave a talk recently at Heapcon 2025 in Belgrade, titled <em>From Kubernetes to Low Carbonetes: Optimizing Infrastructure and Workloads for Sustainability</em>. There I briefly demoed a setup where I scaled an application based on the Carbon intensity of the grid. Here is where the <em>duh</em> moment happened - I could write a deeper dive into what I've done. And here we are.</p>
<p>One short piece of trivia before we continue - if you are a keen follower of my writing, you might have noticed that the title of the talk sounds familiar. Yes, I've already written on this topic some time ago. In that <a href="https://www.wonderingchimp.com/posts/from-kubernetes-to-low-carbon-netes-optimizing-k8s-infrastructure-and-workloads-for-sustainability/">article</a>, as in my talk this October, I covered some basics and practical steps on how you can reduce your Carbon footprint within your Kubernetes clusters.</p>
<p>With this article, however, I want to document the demo I had prepared for the Heapcon talk. Here we'll go into a deeper explanation of the things I used for the demo, and how I set everything up on my local machine. Feel free to scroll down to the bottom for the link to the repository where I put everything demo-related.</p>
<h2>Overview</h2>
<p>Every good 8-second cooking recipe starts with an overview of the dish you're going to prepare. I will try to do a similar thing with the following screenshot, and I hope it will be enough to show you in a nutshell what I've created.</p>
<p><img src="../images/posts/0071-scale-pods-on-clean-energy-01.png" alt="Split-screen terminal view comparing Kubernetes pod behavior under different carbon intensity conditions. Left panel shows K9s interface with one running pod (sample-app-8f46fb84f-7jdcr) and logs displaying &quot;Carbon intensity updated: 331 gCO2eq/kWh&quot; - representing high carbon conditions with minimal pod replicas. Right panel shows the same cluster scaled to three running pods with logs showing &quot;Carbon intensity updated: 137 gCO2eq/kWh&quot; - demonstrating increased pod replicas during lower carbon intensity periods. This visualization demonstrates carbon-aware autoscaling where Kubernetes dynamically adjusts workload replicas based on grid carbon intensity data." title="Source: Local k8s setup"></p>
<p>On the left-hand side you can see one replica of the resource and, below it, the Carbon intensity data. On the right-hand side, you can see three replicas, while the Carbon intensity is lower.</p>
<p>In a nutshell, I'll show how I used data from the Electricity Maps API to create a scaling point for resources, based on the overall Carbon intensity of the grid.</p>
<h2>Ingredients</h2>
<p>Keeping the 8-second cooking recipe story from above alive, in the next bullet points I'll show you what I used and to what extent.</p>
<ul>
<li>1 x Locally run k3s cluster.</li>
<li>1 x Prometheus deployment (kube-prometheus-stack).</li>
<li>1 x KEDA installation and configuration.</li>
<li>1 x Electricity Maps API account.</li>
<li>1 x Simple Node application.</li>
</ul>
<h3>Locally run k3s cluster</h3>
<p>k3s is a lightweight Kubernetes distribution that you can easily spin up on your machine by following the quite thorough instructions on the official k3s website.</p>
<p>Since I needed the cluster only for the demo purposes, I followed the <a href="https://docs.k3s.io/quick-start">quick start</a>.</p>
<h3>Prometheus deployment</h3>
<p>Now, this was a no-brainer for me. I've been using the Prometheus/Grafana monitoring stack on Kubernetes quite extensively, so I installed a default setup of the <code>kube-prometheus-stack</code> Helm chart.</p>
<p><a href="https://github.com/prometheus-community/helm-charts/tree/main/charts/kube-prometheus-stack">This</a> is a collection of Kubernetes manifests, Grafana dashboards, and Prometheus rules combined with Prometheus using Prometheus Operator. It was easy to follow and to set it up quickly.</p>
<h3>KEDA installation</h3>
<p>In order to be able to scale on external events, I needed to add KEDA - Kubernetes Event-driven Autoscaling. In a nutshell, this tool monitors external event sources - say, the Carbon intensity of the current power grid - and scales your resources based on the thresholds you've defined.</p>
<p>Again, the documentation is quite good, and the whole setup was easy to follow from this <a href="https://keda.sh/docs/2.18/deploy/">link</a>.</p>
<h3>Electricity Maps API Account</h3>
<p>Electricity Maps is an extremely useful platform for getting electricity grid data. With the free account, you can get the relevant electricity data for one region. Which makes sense, because you can use this opportunity to test out the platform and see how you can use it in your workflow.</p>
<p>To create an account, just visit the following <a href="https://portal.electricitymaps.com/auth/login">link</a> and follow the sign-up instructions.</p>
<p>After you've created an account, you can play around on the platform. The most important part for us is the <em>Developer Hub</em> tab. Here you can find your API authentication token, which you can use in the code.</p>
<p><img src="../images/posts/0071-scale-pods-on-clean-energy-02.png" alt="Electricity Maps API Playground interface showing a test API request for carbon intensity data. The left panel displays configuration options including Data Type set to &quot;Carbon Intensity&quot;, Temporality set to &quot;Latest&quot;, Region tab selected with &quot;Germany&quot; chosen, and optional parameters for emission factor type set to &quot;Lifecycle&quot; with hourly temporal granularity. The right panel shows a curl request to the carbon intensity API endpoint with an authentication token (highlighted in green), and below it a JSON response displaying Germany's current carbon intensity of 324 gCO2eq/kWh along with timestamps and a test mode disclaimer noting the data is intentionally inaccurate for integration testing purposes. The interface is in dark mode with blue accent highlighting indicating test mode is active." title="Source: Electricity Maps Developer Portal"></p>
<h3>Simple Node application</h3>
<p>The last step in the process is to create an application that can use this data. I've created a simple JavaScript server that fetches the latest Carbon intensity data from Electricity Maps, logs the numbers to standard output, and exposes the values as Prometheus metrics.</p>
<p>This is an important step, because the scaling will depend on metrics from this service.</p>
<p>The application is located in the repository at <code>/app/server.js</code>.</p>
<p>I've created a Docker image and deployed the application as a <code>Deployment</code> in the <code>default</code> namespace.</p>
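<p>To give you an idea of what the app does, here is a minimal sketch of the metrics side - the function and metric names here are my own for illustration; the actual <code>server.js</code> in the repository may differ:</p>
<pre><code class="language-javascript">// format the latest reading in the Prometheus text exposition format
function formatCarbonMetric(gco2PerKwh) {
  return [
    '# HELP carbon_intensity_gco2_per_kwh Latest grid carbon intensity.',
    '# TYPE carbon_intensity_gco2_per_kwh gauge',
    'carbon_intensity_gco2_per_kwh ' + gco2PerKwh,
    '',
  ].join('\n');
}

// a bare-bones /metrics endpoint using only Node's standard library
// (call server.listen(3000) to actually serve it)
const http = require('http');

let latestIntensity = 0; // would be refreshed periodically from the Electricity Maps API

const server = http.createServer(function (req, res) {
  if (req.url === '/metrics') {
    res.writeHead(200, { 'Content-Type': 'text/plain; version=0.0.4' });
    res.end(formatCarbonMetric(latestIntensity));
  } else {
    res.writeHead(404);
    res.end();
  }
});
</code></pre>
<p>Prometheus then scrapes this endpoint, and the gauge value becomes the signal KEDA acts on.</p>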
<h2>Mixing it all together</h2>
<p>Having deployed everything from above brings us two steps closer to the solution from the beginning.</p>
<p>There are two steps that we're missing here:</p>
<ol>
<li><code>ServiceMonitor</code> - a resource that tells Prometheus to scrape the metrics from the <code>sample-app:80/metrics</code> URL.</li>
<li><code>ScaledObject</code> - a resource that tells KEDA what to scale and on what grounds. This is the part I struggled with the most.</li>
</ol>
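<p>A sketch of the <code>ServiceMonitor</code> I mean - the label selector and port name below are assumptions that need to match your actual Service and Prometheus Operator setup:</p>
<pre><code class="language-yaml">apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: sample-app
  namespace: default
  labels:
    release: prometheus   # must match the Prometheus Operator's serviceMonitorSelector
spec:
  selector:
    matchLabels:
      app: sample-app     # labels on the sample-app Service
  endpoints:
    - port: http          # the named port on the Service
      path: /metrics
      interval: 60s
</code></pre>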
<h3>Configuring correct scaling</h3>
<p>The <code>ScaledObject</code> resource is quite powerful and, if not set up correctly, it can lead to various problems. For example - a single request hits the application, resource usage increases just a bit, and the workload gets scaled out to n replicas.</p>
<p>In my example, I wanted to scale up when the Carbon intensity is lower, and scale down when it's higher. This required inverted logic in the <code>ScaledObject</code>, because by default KEDA adds replicas as the metric value increases.</p>
<p>Below you can see the example of the <code>ScaledObject</code> I used.</p>
<pre><code class="language-yaml">apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: carbon-intensity
  namespace: default  # Adjust to your namespace
spec:
  scaleTargetRef:
    name: sample-app  # Replace with your deployment name
  minReplicaCount: 1
  maxReplicaCount: 10
  triggers:
    - type: prometheus
      metadata:
        serverAddress: http://prometheus-kube-prometheus-prometheus.prometheus.svc.cluster.local:9090
        metricName: carbon_intensity_green_energy
        threshold: '50'
        activationThreshold: '1'
        # Inverted logic: higher value when carbon intensity is lower (greener)
        query: (250 - avg(carbon_intensity_gco2_per_kwh))
</code></pre>
<p>As you can see in the example above, the inverted logic is handled by the <code>query</code> option. This is how it works:</p>
<ul>
<li>When carbon intensity = 150 gCO2eq/kWh - the query returns (250-150) = 100 -&gt; scale to ceil(100/50) = 2 replicas.</li>
<li>When carbon intensity = 300 gCO2eq/kWh - the query returns (250-300) = -50 -&gt; KEDA treats this as 0 and scales to <code>minReplicaCount</code> = 1 replica.</li>
</ul>
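<p>The arithmetic above can be sketched as a small helper - assuming the <code>ceil(metricValue / threshold)</code> behaviour shown in the examples, clamped between the min and max replica counts. The function name and option names are mine, for illustration only:</p>

```javascript
// Sketch of the replica count KEDA would derive from the inverted query
// (250 - carbon intensity), scaling to ceil(metricValue / threshold).
function desiredReplicas(carbonIntensity, { baseline = 250, threshold = 50, min = 1, max = 10 } = {}) {
  const metricValue = baseline - carbonIntensity; // the inverted query
  if (metricValue <= 0) return min;               // negatives are treated as 0
  return Math.min(max, Math.max(min, Math.ceil(metricValue / threshold)));
}
```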
<h2>Summary</h2>
<p>It was quite easy for me to set this up by following quick-start guides, which are so good these days. However, your use case might, and probably will, differ. Fortunately, the Electricity Maps API offers quite a lot of different data that you can use and act upon. In my example, I've used only the basics.</p>
<p>For more information about the code and infrastructure I've deployed, check out the following <a href="https://github.com/alternaivan/co2eq-autoscale-demo/tree/main">repository</a>.</p>
<p>Hopefully you can test it out yourself and can provide some feedback on the topic!</p>
<p>See you in the next article!</p>

			]]></content>
		</entry>
	
		
		<entry>
<title>Reading the book - Building Green Software</title>
			<link href="https://wonderingchimp.com/posts/reading-the-book-building-green-software/"/>
			<updated>2025-08-25T00:00:00Z</updated>
			<id>https://wonderingchimp.com/posts/reading-the-book-building-green-software/</id>
			<content type="html"><![CDATA[
				<p>Some of you know, others probably don't, but I'm quite the nerd when it comes to the things that are important to me and close to my heart. Reading <em>Building Green Software</em> was one of those things.</p>
<p>In this article, I'll write about the most important things I learned from it AND, as a bonus, I'll attach all the questions and answers I collected while reading this book.</p>
<p>This will not be one of those catchy, clickbaity articles. It is meant to spark curiosity and perhaps interest you in reading the book yourself. I found it full of insights, and it is a great starting point if you want to learn more about <strong>green software</strong>.</p>
<h2>What is this book about?</h2>
<p>This book - <em>Building Green Software</em>, as the title says, is about building green software - software that causes minimal carbon emissions while running. It looks at the three core principles of green computing:</p>
<ol>
<li>Energy efficiency - use less energy to do the same job.</li>
<li>Hardware efficiency - use less hardware to do the same job.</li>
<li>Carbon awareness - adjust operational and runtime aspects of an application based on the current carbon emissions.</li>
</ol>
<p>Besides the above principles, this book also covers networking emissions; what we can do about the <em>greenness</em> of ML, AI, and LLMs; how we can measure and monitor emissions; the benefits of these approaches for all of us; and the green software maturity matrix - a way to gauge how mature you are when it comes to green software.</p>
<h2>Who wrote it?</h2>
<p>This book was written by three authors - Anne Currie, Sarah Hsu, and Sara Bergman.</p>
<p>Anne Currie is a techie with 30 years of experience and a writer. She writes about tech, climate, ethics, AI and surveillance.</p>
<p>Sarah Hsu is a Google Site Reliability Engineer and a strong advocate for green and sustainable software. She is a regular speaker and writer on the subject, and is the chair of <a href="https://learn.greensoftware.foundation/">Green Software Course</a> project for Green Software Foundation.</p>
<p>Sara Bergman is a Senior Software Engineer working in the Microsoft ecosystem. She is an advocate for green software and speaks about it publicly at conferences, in podcasts, and at meetups. She is also a contributor to the Green Software Foundation.</p>
<h2>What are some important points I noted?</h2>
<p>As I mentioned above, I think this book is a great start if you want to dive into the ecosystem of building sustainable software. There are many things I've noted reading this book, and I wanted to share here the ones that I find the most important.</p>
<p>This book has also been an inspiration for a couple of blog posts I've written so far, and I've used various concepts in some presentations I held on the topic of green software.</p>
<h3>Difference between Climate Change and Global Warming?</h3>
<p>Climate change is the change in the Earth's local, regional, and global climate, based on the longer variations of weather patterns. Climate has always changed throughout the Earth's history, but the recent change has been faster than the usual cycles.</p>
<p>Global warming, on the other hand, is the continuous warming of Earth's surface and oceans, since the preindustrial age.</p>
<p>So, climate change is a normal Earth process that happens in cycles. What is not normal, however, is the current speed of change caused by global warming. We often see these two terms used interchangeably.</p>
<h3>What is efficient code, and what are some common design patterns for code efficiency?</h3>
<p>Efficient code is code that doesn't do more work than necessary to achieve its designed functionality. Common design patterns to improve code efficiency are:</p>
<ul>
<li>Avoid too many layers, so we are not doubling up on the work done by our platform or creating wasteful layers.</li>
<li>Be mindful when using microservices - send fewer and larger messages using RPC rather than JSON-based communication, and carefully plan the architecture and inter-service calls.</li>
<li>Replace inefficient services and libraries - use performance profiling to find the bottlenecks (slow services and libraries).</li>
<li>Don't do or save too much - don't implement features you don't need, or save data you don't need or use.</li>
<li>Leverage client devices - use devices to the fullest and make them last as long as possible.</li>
<li>Manage machine learning - reduce data collection and model training time, and train models on green energy.</li>
</ul>
<h3>What is operational efficiency and which techniques can be used to improve it?</h3>
<p>Operational efficiency means achieving the same functional results from an application or service while using fewer hardware resources - servers, disks, and CPUs. Some techniques that can be used to improve operational efficiency are:</p>
<ul>
<li>Turn things off when not used or hardly used (e.g. test or dev systems during the weekend).</li>
<li>Don't over-provision - use various approaches to battle this (rightsizing, auto-scaling, burstable instances in the cloud); it is okay to scale up, but scale down as well.</li>
<li>Cut your bills by inspecting cloud provider cost tools - cheaper is almost always greener.</li>
<li>Use containerized microservices only where introducing them won't add unnecessary complexity or over-provisioning (e.g. avoid a Kubernetes cluster for a simple SPA).</li>
<li>When running in the cloud - choose instance types that give the most flexibility, pre-optimized instance types (e.g. managed DBs), or spot instances.</li>
<li>Embrace multitenancy - from shared VMs to managed Platforms.</li>
</ul>
<h3>What is the Jevons paradox?</h3>
<p>Improving the efficiency of doing something makes us do it even more. For example - if we improve the energy efficiency of data centres, we'll want more of them, and end up consuming more energy than when we started.</p>
<h3>What are some ways to make deployment of ML models greener?</h3>
<p>We could decrease the size of the model in use - deployment becomes cheaper, and smaller devices can run the model.</p>
<p>There are also several Machine Learning techniques that can make our models greener.</p>
<h4>Quantization</h4>
<p>This is an ML optimisation technique that reduces the computational load and memory footprint of neural networks without significantly impacting model accuracy. It involves converting 32-bit floating point data to a lower precision such as 8-bit integers, performing the critical operations at that precision, and converting the lower-precision output back to 32-bit floating point at the end.</p>
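<p>As a toy illustration (mine, not from the book), affine quantization of an array of floats to 8-bit integers and back could look like this:</p>

```javascript
// Toy affine quantization sketch: map floats to 0..255 integers and back.
// Real frameworks do this per tensor or per channel, with calibrated ranges.
function quantize(values) {
  const min = Math.min(...values);
  const max = Math.max(...values);
  const scale = (max - min) / 255 || 1;  // avoid division by zero
  const q = values.map(v => Math.round((v - min) / scale)); // 8-bit integers
  return { q, scale, min };
}

function dequantize({ q, scale, min }) {
  return q.map(v => v * scale + min);    // approximate original floats
}
```

<p>The round-trip error is bounded by the scale factor - that is the precision we trade away for the smaller memory footprint and cheaper integer arithmetic.</p>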
<h4>Knowledge Distillation</h4>
<p>The technique of transferring the &quot;knowledge&quot; from a large, complex model (the &quot;teacher&quot;) into a smaller, more efficient model (the &quot;student&quot;). The goal is to train the student model to mimic the behaviour and replicate the performance of the teacher.</p>
<h4>Model Pruning</h4>
<p>Pruning is a technique of &quot;removing&quot; weights in a neural network - setting them to zero. We can prune randomly, or remove the least important weights.</p>
<p>A great article on model compression and optimisation techniques that covers all three mentioned above can be found at this <a href="https://towardsdatascience.com/model-compression-make-your-machine-learning-models-lighter-and-faster/">link</a>.</p>
<h2>Summary</h2>
<p>If you wanted to start your journey in digital sustainability - this is the book for you. If you want to brush up your knowledge on greening the IT - this is the book for you. If you are curious about how we can help build a sustainable IT for the future - this is the book for you!</p>
<p>Following is the long anticipated <a href="https://www.oreilly.com/library/view/building-green-software/9781098150617/">link</a> towards the book. Enjoy!</p>
<h2>Bonus Content</h2>
<p>When I read something for learning, I often gather my notes in the form of questions and answers. Now, this is the approach that makes me learn the most, and it is the one that <strong>always makes me go back to my notes</strong>.</p>
<p>In the past, whenever I was reading something and taking notes, those notes were sooner rather than later long forgotten in some notebook or in a directory I now know nothing about. So I changed things a bit and started using the questions-and-answers approach - flashcards.</p>
<p>For this, and many other learnings, I use the <a href="https://apps.ankiweb.net/"><em>Anki</em></a> application, which allows me to create flashcards, share them across my devices, review them occasionally, and learn better. The flashcards are organised in decks, and below you can find the deck I created while reading the <em>Building Green Software</em> book. I hope you will find it as useful as I do.</p>

			]]></content>
		</entry>
	
		
		<entry>
			<title>Is subscription-based life sustainable?</title>
			<link href="https://wonderingchimp.com/posts/is-subscription-based-life-sustainable/"/>
			<updated>2025-05-04T00:00:00Z</updated>
			<id>https://wonderingchimp.com/posts/is-subscription-based-life-sustainable/</id>
			<content type="html"><![CDATA[
				<p>Recently, I had an opportunity to watch the movie <em>Bicentennial Man</em> with Robin Williams, one of my favourite actors. It is based on a novelette by Isaac Asimov from the <em>Robot Series</em>, and it is about an android that has human characteristics - he can feel sadness, happiness, enjoyment in his work, etc.</p>
<p>The story goes on, and the movie is quite nice. It seems like a <em>feel good</em> movie to me; I didn't watch it until the end, so I can't know for sure. But it led me to think about one important question. And it is not the most obvious question asked throughout the whole movie - can androids (or AI) replace humans in the near future. Nope. But you can probably guess from the title which question I asked myself.</p>
<p>Anyhow, there is a scene in the movie where the android - <em>Andrew</em> - falls from a window and gets broken. He then enters the house and says to his worried owner something like - <em>I have a self-maintenance mode, I can repair myself</em>. He then goes into the basement and fixes his broken self. And this is where I thought - well, that wouldn't work in today's subscription-based-model world, right? So, the question that got me thinking was the one about the longevity of (these) devices.</p>
<p>The movie was released in 1999, 26 years ago. Not that long ago, at least from my perspective. But from that time, a lot has changed. The world has changed. The subscription-based model became sort of a norm for selling many products and services.</p>
<p>The subscription-based model is quite present in our lives. It represents a <em>strategy</em> where customers pay a recurring price at regular intervals to access a product or service. According to Wikipedia, it <a href="https://en.wikipedia.org/wiki/Subscription_business_model">dates back to the 17th century</a>!</p>
<p>I come from Serbia, where, in those days, every tech-related breakthrough was delayed a couple of years. Thankfully. So I remember dial-up internet, for example, and how we used to create <em>configurations</em> when buying a computer. It was a list of hardware parts - processor, motherboard, storage, CD-ROM, case, etc. - that we bought and assembled into a working computer. Later in life I discovered that I could put Linux on those machines as well, but that's a story of its own.</p>
<p>When I look back now, it was a great learning experience, and for sure one of the main ways that got me, and a bunch of other people, into technology. Whenever something broke and we thought we could fix it - we tried to do it. Seldom was it fixed, but we tried it nevertheless. Nowadays, you'll break the device just by trying to open the case that holds it.</p>
<p>Why do I feel like that meme - old man yells at cloud? Never mind.</p>
<p>It makes sense to me that the subscription-based model is one way to sell services, but why do we need to put everything under some subscription? Why do we need a subscription that allows us to use certain devices? Are we then actual owners of these devices or just users of them?</p>
<p>For some devices I think we're actually users, and this I learned the hard way. Let me share what I experienced with you.</p>
<h2>Do we own our device now or are we just users?</h2>
<p>Quite a while ago, I bought an <a href="https://en.wikipedia.org/wiki/Amazon_Kindle">Amazon Kindle</a> 4. It is a model from 2011, and I think I bought it somewhere around 2012 or 2013, from a local reseller in a technology-deprived third-world country. It could only read PDFs and certain Amazon-specific file formats (.mobi and .azw2). That didn't bother me at the time, because I could easily convert between the formats. I used it for quite a while, and I think I read the whole <em>Wheel Of Time</em> series on this device.</p>
<p>Recently, I learned that Amazon decided to <a href="https://www.geeky-gadgets.com/amazon-kindle-download-policy-change/">remove the option to download your Kindle books</a>, and this is where I lost it. But in a good way! First I copied all my books on my laptop, disconnected Kindle from the internet, and unregistered the device from Amazon. Then, I decided to <em>jail break</em> it. In other words - remove the restriction to install software on a device that is not official.</p>
<p>So I spent a couple of hours <em>jail breaking</em> my Kindle, and I was finally able to read different formats on it without any conversion! I can share the instructions I used - let me know in the comment section below if you're interested.</p>
<p>You might ask yourself - why didn't I just continue to convert books to Kindle formats? Well, first, I didn't want to. Second, I wanted to tinker with it and make it work to suit my needs, not the needs of some big corp.</p>
<p>How did I discover I wasn't the actual owner but a mere user of it?</p>
<p>When I reconnected the Kindle to the internet. I wanted to send some books to it, and upon connecting I got a message that the application I had installed was locked and I couldn't use it anymore. I could only use the Kindle-specific formats again. I was amazed at the extent to which they decided to <em>protect</em> themselves. Never mind, I'll go on and <em>jail break</em> my Kindle again, and use it without internet access. Books don't have access to the internet, why should my Kindle?</p>
<h2>Summary</h2>
<p>So, is the subscription-model any good? Maybe for the services, but I think it shouldn't apply to devices. Devices should be a one-time purchase, and you shouldn't subscribe to get more storage, memory, speed, better car extensions and whatnot.</p>
<p>In an ideal world, you should be able to buy a device and actually own it - be able to repair it, and not be forced to buy a new one when the old one breaks. You should be able to use standard and open formats on it, not only device-specific ones.</p>
<p>Okay, I get it, we're all greedy, we want to make more money. But, is that really a sustainable way? In the short run - maybe - you earn more money, ergo you're sustainable. But what about in the long run? Is this really the best option for us?</p>

			]]></content>
		</entry>
	
		
		<entry>
			<title>From Kubernetes to Low Carbon-netes: Optimizing K8s Infrastructure and Workloads for Sustainability</title>
			<link href="https://wonderingchimp.com/posts/from-kubernetes-to-low-carbon-netes-optimizing-k8s-infrastructure-and-workloads-for-sustainability/"/>
			<updated>2025-03-31T00:00:00Z</updated>
			<id>https://wonderingchimp.com/posts/from-kubernetes-to-low-carbon-netes-optimizing-k8s-infrastructure-and-workloads-for-sustainability/</id>
			<content type="html"><![CDATA[
<p>Before I start the main story, I just want to quickly reflect on the current situation in Serbia. If you don't already know, since November last year students have been actively protesting against the corruption in the government that was the main cause of the tragedy at the main train station in Novi Sad. As with many people here, this situation has taken its toll on me and my writing as well. With this little paragraph I first want to thank the students for their persistence and to say this - although small and without many readers, this blog and its author stand with the students!</p>
<p>Now back to the main article.</p>
<p>Some time ago I listened to an episode of the <a href="https://podcasts.castplus.fm/e/4n9v2qr8-the-week-in-green-software-new-research-horizons">Environmental Variables</a> podcast about <em>New Research Horizons</em>, and the host Chris Adams used the term - <em>from Kubernetes to Low Carbon-netes</em> (around the 42nd minute). So I jotted this sentence down somewhere, to be used in some future article. The time has finally come for that article to be published. So, thanks Chris for the great idea!</p>
<p>We will start this article by describing how we can measure the energy usage of a Kubernetes cluster, and with it, the carbon footprint of the cluster(s).</p>
<h2>How can we measure emissions?</h2>
<blockquote>
<p>If you can't measure it, you can't manage it.</p>
</blockquote>
<p><em>Peter Drucker</em></p>
<p>The first step in any effort that is related to reduction, especially Carbon emissions, is to measure what our current usage is. In this case - what is our Carbon footprint.</p>
<p>Being the first step, it is for sure one of the hardest, because we can't say with exact certainty whether the numbers are accurate. The overall carbon emissions of our infrastructure mainly come from:</p>
<ul>
<li>the amount of CO2 emitted in the production and delivery of the hardware to our premises (e.g. a data centre) - known as <strong>embodied carbon</strong> - and</li>
<li>the amount of electricity this hardware uses while running - known as <strong>operational emissions</strong>.</li>
</ul>
<p>Calculating the embodied carbon can be really difficult, so we need to rely on the data from our manufacturers and distributors. The embodied carbon has already been emitted to the atmosphere, so we cannot do much about it other than <strong>using our hardware for a longer period</strong> and <strong>opting for re-use rather than buying new hardware</strong>. To find out more about embodied or embedded carbon, check out one of my <a href="https://www.wonderingchimp.com/posts/why-you-dont-need-that-new-and-cool-device-everyone-is-talking-about/">previous articles</a>.</p>
<p>In this article however, we'll focus on the second point. On the things we can do while operating our hardware.</p>
<p>If most of the electricity we're using comes from renewable sources - that is great! However, a lot of the time the source of electricity is rather dynamic - we cannot be 100% certain <a href="https://en.wikipedia.org/wiki/Variable_renewable_energy">from which source we're getting the power</a>. What we can do is measure the power our infrastructure uses and, based on the watt-hours (Wh) consumed, make calculations and predictions.</p>
<p>For the Kubernetes infrastructure, and other infrastructure for that matter, we can use one of the following tools:</p>
<ul>
<li><a href="https://github.com/hubblo-org/scaphandre">Scaphandre</a> - a monitoring agent that keeps track of energy consumption of your system.</li>
<li><a href="https://github.com/sustainable-computing-io/kepler">Kepler</a> - a Prometheus exporter that uses eBPF to probe energy-related system stats and exports them as metrics.</li>
</ul>
<p>And guess what - I've written about both of these tools here! For Scaphandre, check out <a href="https://www.wonderingchimp.com/posts/demoing-scaphandre/">this link</a>, and <a href="https://www.wonderingchimp.com/posts/demoing-kepler-exporter/">this link</a> shows you the demo I made about Kepler.</p>
<p>These tools mainly export power usage metrics that we can later convert into CO2 emissions based on the data from our power source provider. You can <a href="https://www.wonderingchimp.com/posts/exploring-the-green-apis/">check my article</a> from a while back, where I'm discussing the emission data sources, or <em>Green APIs</em> and how you can use them.</p>
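<p>The conversion itself is just a multiplication - the energy used times the grid's carbon intensity for the same period. A tiny sketch, with a function name of my own choosing:</p>

```javascript
// Convert measured energy (watt-hours) into operational emissions (gCO2eq)
// using the grid carbon intensity reported by a Green API for that period.
function operationalEmissions(wattHours, gco2PerKwh) {
  const kWh = wattHours / 1000;
  return kWh * gco2PerKwh; // grams of CO2 equivalent
}
```

<p>For example, 500 Wh consumed at a carbon intensity of 324 gCO2eq/kWh works out to 162 gCO2eq of operational emissions.</p>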
<blockquote>
<p>There might be some other tools that also provide these kinds of metrics. I am only aware of these two. If you have something in mind, feel free to add your recommendations in the comment section below.</p>
</blockquote>
<h2>What can we do to reduce the emissions?</h2>
<p>So, we have completed the first step - measuring. We now know where we stand. Or at least have some idea where we stand. Next, let's go through some possible emission reduction steps starting from the easier ones first towards the more complicated ones.</p>
<h3>Adding resource requests and limits</h3>
<p>By default, workloads in a Kubernetes cluster run without any resource limitations. Because of this, as a first step, we should always have requests and limits defined on them. It is good practice, and a recommended step, for two reasons:</p>
<ol>
<li>Adding them curbs uncontrolled resource usage by cluster workloads.</li>
<li>Adding them allows Kubernetes to better apply its <a href="https://kubernetes.io/docs/tasks/configure-pod-container/quality-service-pod/">Quality of Service classes</a>.</li>
</ol>
<p>The YAML definition should look something like the following.</p>
<pre><code class="language-yaml">    resources:
      requests:
        memory: &quot;64Mi&quot;
        cpu: &quot;250m&quot;
        ephemeral-storage: &quot;100M&quot;
      limits:
        memory: &quot;128Mi&quot;
        cpu: &quot;500m&quot;
        ephemeral-storage: &quot;500M&quot;
</code></pre>
<p>More information can be found on the <a href="https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/">following link</a> from Kubernetes documentation.</p>
<h3>Add limit ranges and resource quotas</h3>
<p>We all know users cannot be trusted (be they end users, developers, system administrators, etc.). Therefore, there are mechanisms we can apply to make sure that resource requests and limits are always defined.</p>
<p>First, if we want to ensure that no workload runs unbounded in the cluster, we can add a <code>LimitRange</code> to the namespace. This resource allows us to:</p>
<ul>
<li>enforce min/max resource usage per Pod or Container,</li>
<li>enforce min/max storage request per Persistent Volume Claim,</li>
<li>enforce ratio between request and limit for a resource,</li>
<li>set default request/limit for resources and automatically inject them to Containers at runtime.</li>
</ul>
<p>The main thing to remember here is that <code>LimitRange</code> works inside a namespace we define. It won't work on cluster level, so we might apply this resource by default to every namespace that is created.</p>
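<p>A <code>LimitRange</code> that injects defaults and caps per-container usage could look like the sketch below. The namespace and the values are illustrative, not a recommendation:</p>
<pre><code class="language-yaml">apiVersion: v1
kind: LimitRange
metadata:
  name: default-limits
  namespace: my-namespace  # hypothetical namespace
spec:
  limits:
    - type: Container
      defaultRequest:      # injected as request when none is set
        cpu: 250m
        memory: 64Mi
      default:             # injected as limit when none is set
        cpu: 500m
        memory: 128Mi
      max:                 # hard ceiling per container
        cpu: &quot;1&quot;
        memory: 512Mi
</code></pre>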
<p>To find out more about limit ranges, go to <a href="https://kubernetes.io/docs/concepts/policy/limit-range/">this link</a>.</p>
<p>If the resources on our cluster(s) are limited, we might go one step further and enable a <code>ResourceQuota</code> on each namespace. This makes sure that no workload takes too much of the available resources. If a workload being deployed requests resources that cross the quota, the deployment will be blocked. This way you can safeguard the limited resources of your cluster.</p>
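<p>A sketch of such a quota, with illustrative namespace and values, might look like this:</p>
<pre><code class="language-yaml">apiVersion: v1
kind: ResourceQuota
metadata:
  name: compute-quota
  namespace: my-namespace  # hypothetical namespace
spec:
  hard:
    requests.cpu: &quot;4&quot;      # sum of all CPU requests in the namespace
    requests.memory: 8Gi
    limits.cpu: &quot;8&quot;        # sum of all CPU limits in the namespace
    limits.memory: 16Gi
    pods: &quot;20&quot;
</code></pre>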
<p>To find out more about resource quotas, check out <a href="https://kubernetes.io/docs/concepts/policy/resource-quotas/">this link</a>.</p>
<h3>Application changes</h3>
<p>Not all applications handle containerization the same way. For example, when calculating the default heap size, <a href="https://developers.redhat.com/articles/2022/04/19/java-17-whats-new-openjdks-container-awareness#">Java applications (before version 17)</a> didn't read resource limits at the container level, but at the node level - they didn't properly handle container awareness. This is a problem because the application crashes with <code>java.lang.OutOfMemoryError</code> and you don't know why. To fix this, you would need to add <code>-Xms</code> and <code>-Xmx</code> arguments to your <code>JAVA_OPTS</code>.</p>
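<p>Assuming the container image's entrypoint picks up a <code>JAVA_OPTS</code> environment variable, the fix could look like the container-spec snippet below. The values are illustrative; newer, container-aware JVMs also offer relative flags such as <code>-XX:MaxRAMPercentage</code>:</p>
<pre><code class="language-yaml">    env:
      - name: JAVA_OPTS
        value: &quot;-Xms64m -Xmx96m&quot;  # keep -Xmx safely below the container memory limit
</code></pre>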
<h3>Turn on/off workloads when not used</h3>
<p>One of the most-used fixes for an endless number of IT problems - turning it off and on again - can also be quite effective when it comes to reducing CO2 emissions.</p>
<p>Some data shows that simply turning off machines when they're not used can save us a lot of energy, and therefore reduce carbon emissions. This is known as <a href="https://www.infoq.com/news/2023/03/stop-cloud-zombies-qcon/">LightSwitchOps</a>.</p>
<p>To do this in a Kubernetes cluster environments, there are two approaches:</p>
<ol>
<li>manually scale up/down resources when not used</li>
<li>automatically (on a schedule) turn off/on resources when not used with <a href="https://github.com/kube-green/kube-green">kube-green</a>.</li>
</ol>
<p>The first option is quite simple, but rather painful. If you have multiple resources in the cluster, as you probably do, it can be quite taxing and boring to go from one resource to another and scale each down when not used.</p>
<p>Another option is to have a sort of schedule on which your resources automatically scale down and up. This can be done with the help of the <code>kube-green</code> tool. It is quite easy and simple to set up. And guess what - I already mentioned it in one of my <a href="https://www.wonderingchimp.com/posts/turning-the-lights-on-off-in-kubernetes-clusters/">previous articles</a>.</p>
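<p>Based on my reading of the kube-green documentation, a <code>SleepInfo</code> resource that powers a dev namespace down outside working hours could look like this. The namespace, times, and time zone are illustrative - check the project docs for the exact schema:</p>
<pre><code class="language-yaml">apiVersion: kube-green.com/v1alpha1
kind: SleepInfo
metadata:
  name: working-hours
  namespace: dev           # the namespace to put to sleep
spec:
  weekdays: &quot;1-5&quot;          # Monday to Friday
  sleepAt: &quot;20:00&quot;
  wakeUpAt: &quot;08:00&quot;
  timeZone: &quot;Europe/Belgrade&quot;
</code></pre>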
<h3>Run batch jobs when energy is greener</h3>
<p>Based on the metrics in the first part, you can easily determine when your energy is greener - coming from renewable sources, and when it's not. You can take this information and reschedule your <code>CronJob</code> resources or any batch jobs to run on electricity from renewable sources.</p>
<p>The simple solution would be to adjust the schedule of your cron jobs to run when the energy is greener. The problem with this approach is the dynamic nature of power sources - we can't easily predict which sources the energy comes from all the time. Therefore, we need some automation to help us here. We discuss this in the following section.</p>
<h3>Dynamically scale resources based on carbon intensity</h3>
<p>Due to their dynamic nature, energy sources cannot be predicted 100% of the time. This is where you can do what is called <em>event-based scheduling</em>. There is a tool that allows you to do just that - <a href="https://keda.sh/">KEDA, the Kubernetes-based Event Driven Autoscaler</a>. It works as a dynamic autoscaler of workloads based on certain events.</p>
<p>The following is the process you can apply to dynamically autoscale workloads:</p>
<ol>
<li>Call one of the <a href="https://www.wonderingchimp.com/posts/exploring-the-green-apis/">Green APIs</a> (e.g. Electricity Maps) to check the carbon intensity of your location.</li>
<li>If the Carbon Intensity is currently low, trigger the scale up of certain workloads through KEDA.</li>
<li>If the Carbon Intensity is getting high, trigger the scale down of the workloads in the same way.</li>
</ol>
<p>This solution is, from my point of view, the most complicated one to configure. I am going to spend some time in the coming weeks trying it out, and I'll write up a demo in one of my next articles.</p>
<h2>Summary</h2>
<p>When looking into reducing your Kubernetes electricity and carbon footprint, you need to start with a baseline - how much electricity you use and how much CO2 you emit. Then you can move on to actually reducing the footprint. The steps I mentioned here are the following:</p>
<ul>
<li>adding resource requests and limits, either through individual definitions or limit ranges,</li>
<li>rejecting workloads that request more than is available, with resource quotas,</li>
<li>making application changes (e.g. <code>-Xmx</code> and <code>-Xms</code> options for Java),</li>
<li>turning workloads off when not used,</li>
<li>running batch jobs when energy is greener,</li>
<li>dynamically scaling resources based on the location's carbon intensity.</li>
</ul>
<p>All the above options should help you reduce the carbon and energy footprint of your cluster. Let me know if you have tried some of them and what you have noticed. If you have something else I didn't consider here, even better! Write down your thoughts and feedback in the comment section below - I am eager to find out more about this topic!</p>

			]]></content>
		</entry>
	
		
		<entry>
			<title>Does running on green provider make our software green?</title>
			<link href="https://wonderingchimp.com/posts/does-running-on-green-provider-make-our-software-green/"/>
			<updated>2025-02-25T00:00:00Z</updated>
			<id>https://wonderingchimp.com/posts/does-running-on-green-provider-make-our-software-green/</id>
			<content type="html"><![CDATA[
<p>In my previous article, I raised a question about the need to make software more efficient if we're running on a <em>green provider</em>. Due to the popularity of my writing, I didn't get any insights from the readers, but that will not stop me from writing the answer myself. Or at least from trying to.</p>
<p>Why do we want to make our software efficient even when we're running on <em>green provider</em>?</p>
<p>At first glance - we don't need to, right? Running on a <em>green provider</em> makes our software green. Sure, why not! Well, as with most things in life, the answer is not that simple. In this article, we'll explore why.</p>
<h2>What is a <em>green (cloud) provider</em>?</h2>
<p>It is something every (cloud) provider strives to be. It is a company that offers (cloud) computing services while prioritising sustainability and renewable energy usage.</p>
<p>Some of the key aspects that qualify a provider as <em>green</em> are:</p>
<ul>
<li>the power supply to the data centres coming primarily from renewable energy sources,</li>
<li>using advanced cooling systems, optimising server utilisation, and deploying energy-efficient hardware to minimise power consumption,</li>
<li>actively tracking and working on reducing their carbon emissions,</li>
<li>being transparent with power usage and carbon emissions data,</li>
<li>opting to extend device life cycles by re-using and refurbishing hardware.</li>
</ul>
<p>I've already written some time ago about the various levels of <em>greenness</em> of the most popular cloud provider trifecta - <a href="https://www.wonderingchimp.com/posts/what-are-the-greenest-regions-in-the-aws/">AWS</a>, <a href="https://www.wonderingchimp.com/posts/what-are-the-greenest-regions-in-azure/">Azure</a>, and <a href="https://www.wonderingchimp.com/posts/what-are-the-greenest-regions-in-gcp/">GCP</a>.</p>
<p>So, I won't go into details about whether or not they are <em>green providers</em>. Do they consider themselves to be? Probably yes.</p>
<p>There are a couple of <em>smaller</em> providers that have done much more sustainability-wise than the big players above. I've mentioned one of them in <a href="https://www.wonderingchimp.com/posts/data-centres-and-an-increase-in-energy-consumption/">previous articles</a> - <em>Scaleway</em>. I found it really interesting how they show real-time data centre PUE dashboards on their website.</p>
<p>A couple of other providers to consider are OVHcloud, Switch, and Green Mountain data centres.</p>
<h2>So, does running on green providers also make our software green?</h2>
<p>In short - not really, because digital sustainability has two dimensions:</p>
<ol>
<li>the operational dimension, and</li>
<li>the manufacturing dimension.</li>
</ol>
<h3>Operational dimension</h3>
<p>When looking into data centre efficiency metrics, for example <em>Power Usage Effectiveness</em>, we focus only on the operational dimension. That is - how much energy is spent running the hardware, and with it, your software.</p>
<p>Quick reminder: <em>PUE</em> is the standard efficiency metric for power consumption in data centres. It is the ratio of total facility energy to the energy used by the IT equipment. A PUE of 1.2, for example, means that for every 1 kWh delivered to the IT equipment, another 0.2 kWh goes to cooling, lighting, and other overhead.</p>
<p>Don't get me wrong, having <em>PUE</em> and striving to bring it down is good. However, <em>PUE</em> alone is not enough, because it shows only one dimension.</p>
<h3>Manufacturing dimension</h3>
<p>What we are missing is the manufacturing dimension - how much energy was spent manufacturing the hardware and the data centre buildings? This is really hard to answer, but rather important to keep in mind.</p>
<p>The main reason your software doesn't automatically become <em>green</em> when we switch it to a <em>green cloud provider</em> is the manufacturing cost of hardware. If we optimise our software solution and make sure it runs on the existing hardware, moving to a <em>green provider</em> can make it <em>green</em> to an extent. But if we don't optimise, and with each new release or product increment we require different, more powerful hardware - running on <em>green providers</em> cannot make our software <em>green</em>.</p>
<h2>Conclusion</h2>
<p>In order to have and run <em>green</em> software, we need to look at both aspects of digital sustainability - operational and manufacturing. Focusing only on the operational one can bring us up to a certain point. To have a real impact, we need to take the manufacturing aspect into account as well, and incorporate it into our requirements process.</p>
<p>The good news is that we can significantly improve both of them by making our software (code) more efficient. And how do we do that? There are several ways:</p>
<ul>
<li>Delete stuff that is not used.</li>
<li>Be mindful of the data you save.</li>
<li>Make sure you don't add any unnecessary (or future) functionality.</li>
<li>Switch to more efficient services and libraries.</li>
<li>Ensure backward compatibility.</li>
<li>Identify and address bottlenecks with performance profiling.</li>
<li>Use devices to the fullest and make them last as long as possible.</li>
</ul>
<p>That's all for today! Thanks for reading until the end. Use the comment section below to add your thoughts, feedback...</p>
<p>See you in the next article!</p>

			]]></content>
		</entry>
	
		
		<entry>
			<title>What is a &#39;greener&#39; architecture - monoliths or microservices?</title>
			<link href="https://wonderingchimp.com/posts/what-is-a-greener-architecture-monoliths-or-microservices/"/>
			<updated>2025-01-28T00:00:00Z</updated>
			<id>https://wonderingchimp.com/posts/what-is-a-greener-architecture-monoliths-or-microservices/</id>
			<content type="html"><![CDATA[
				<p>Like most common rivalries - Liverpool FC and Manchester United, day and night, summer and winter, democracy and autocracy - there is an ever-growing software system architecture rivalry - monoliths and microservices.</p>
<p>One is good for one use case, while the other is better suited for another. But what does it mean when it comes to sustainability? Which architecture type is better for designing sustainable, low-footprint software? We'll use this article to find out. Sort of...</p>
<p>As in every article I've written so far, we'll tackle the topic with a rather selfish approach - me learning more about monoliths and microservices, and putting them in the context of sustainability.</p>
<p>We'll start by describing what those architecture types are and what their main characteristics are. After that, we'll explore how they perform when it comes to environmental footprint, and what that means for the software we're building. Last but not least, we'll try to give more insight into when to choose one or the other.</p>
<h2>What is monolithic architecture?</h2>
<p>This architecture type is a design that combines all application components - the user interface, business logic, and data access layers - into a single, non-separable unit.</p>
<h3>Key characteristics</h3>
<p>Even though this architecture type is not anything new and shiny, it brings some topics that are rather important to consider when designing your software system.</p>
<p><strong>Simplicity.</strong> It offers a straightforward development and deployment process, and the system is easier to understand when the components are packed together. The reason is that everything lives in a <strong>single codebase</strong>.</p>
<p><strong>Cost-effective.</strong> These architectures can be more economical for small to medium-sized projects, because they are easy to set up and don't require multiple separate components. Therefore, it can be cheaper to start with a monolithic architecture.</p>
<p><strong>Performance.</strong> The architecture components are closely linked (<strong>tight coupling</strong>) in a single process and <strong>share the same memory space</strong>. This often results in higher performance, because of little (or no) network overhead between the components.</p>
<p>Because of the tight coupling and shared memory space, monolithic applications have <strong>limited scalability</strong>. Ergo, they are not easy to scale, which can lead to performance issues rather than improvements as the application grows.</p>
<p>On the other hand, having the components packed together reduces the attack surface of the application, making monoliths potentially <strong>more secure</strong>.</p>
<p>The structure of monoliths is often <strong>layered</strong> - separate layers for data access, business logic, and presentation - which might result in <strong>dependencies across those layers</strong>.</p>
<p>The <strong>data storage is often centralised</strong> - using a single database instance for all data storage needs.</p>
<p><a href="https://www.geeksforgeeks.org/monolithic-architecture-system-design/"><img src="../images/posts/0066-greener-architecture-01.webp" alt="A monolithic architecture diagram showing three clients connecting through a Load Balancer to a single E-Commerce application block. The E-Commerce block contains five stacked services: Shop UI, Catalog Service, SC Service, Discount Service, and Order Service, all connecting to a single RDBMS database. The diagram is labeled 'Monolithic Architecture' at the bottom and includes a small logo in the corner."></a></p>
<h2>What are microservices?</h2>
<p>Microservices are an architectural style in which software is developed as a collection of small, independent services that communicate with each other over a network.</p>
<p>Compared to keeping everything in one service, as monolithic applications do, the microservice approach has a separate, loosely coupled service for each functionality. Each of these microservices can be developed, deployed, and scaled independently.</p>
<h3>Key characteristics</h3>
<p>Compared to monolithic applications, where you usually have a straightforward architecture design, microservices have different components based on the design patterns used. I won't go into the details of those patterns here; I'll only focus on the main characteristics of this architecture type.</p>
<p><strong>Loosely coupled services</strong> allow multiple teams to work on different microservices simultaneously, improving the development and deployment cycle. In addition, issues on one microservice usually don't impact others.</p>
<p>This loose coupling allows the system to be <strong>scaled more easily</strong>, independent of one another. The system can also <strong>quickly adapt</strong> to changing workloads.</p>
<p>On the other hand, managing <strong>service communication, network latency, and data consistency</strong> can be <strong>a bit difficult</strong>. More often than not, it can be <strong>complex to develop, test, and deploy</strong> the microservices. Network latency can impact services and add <strong>complexity to error handling and troubleshooting</strong>. Last but not least - <strong>maintaining consistent data</strong> across services can be challenging.</p>
<p><a href="https://www.geeksforgeeks.org/microservices/"><img src="../images/posts/0066-greener-architecture-02.webp" alt="A microservices architecture diagram showing a mobile app and browser as clients connecting to various backend services. The mobile app connects through a REST API and API Gateway, while the browser connects via WEB to a Storefront webapp. These connect to three microservices: Account Service, Inventory Service, and Shipping Service, each with their own REST API and dedicated database (Account DB, Inventory DB, and Shipping DB respectively). The diagram is labeled 'Microservices' at the bottom."></a></p>
<h2>What is a <em>greener</em> approach?</h2>
<p>Before we answer the above question, let's have a look at the different lenses through which we want to examine both types.</p>
<h3>Resource lens</h3>
<p>In order to build more <em>environmentally friendly</em> solutions, we need low resource consumption - that is, CPU and memory. If we have a small project, the resource consumption should be lower; therefore, the obvious choice is to start with a modular monolith.</p>
<p>However, if we want to introduce scaling to monoliths, then we might end up over-provisioning the application - giving more resources to all components, even though not all of them need more.</p>
<p>Here, we might want to reconsider microservices - resource usage can be controlled in a fine-grained manner. We can scale up only those services that need scaling. Nothing more, nothing less.</p>
<p>When architecting for greener systems, don't forget that idle applications also consume electricity. Having that in mind can also help us in determining the appropriate type.</p>
<p>With all that being said - start small, and build up on that. If your application doesn't require many resources, start with a modular monolith. If properly implemented, it will be easy to change course later and go for microservices if you actually need them. Don't over-engineer from the start.</p>
<h3>Network lens</h3>
<p>The second important component we need to take into consideration is network transmission - the amount of data and overall communication that happens over the network.</p>
<p>Being wrapped in a single binary, service, container, whatever your preference, monoliths have a smaller network footprint. The components within a monolith don't need to communicate with one another beyond localhost. However, reading from and writing data to the database happens over the network. We need to be mindful of that network transmission - how much data are we sending, and how often? If it makes sense, we can decrease the number of read/write operations to bring down the environmental footprint even more.</p>
<p>As for the overall network footprint of microservices - it is much bigger than that of a monolithic application. Each service communicates with the others, or with message queues and database(s), through various network protocols. This communication can be quite extensive. To reduce the footprint, we can do the same as mentioned above - decrease the number of read/write operations where possible.</p>
<h3>Storage lens</h3>
<p>More different storage solutions - more problems. Databases also consume energy. Having the right database solution for your application is often considered a crucial point, not only from the <em>environmentally friendly</em> perspective, but also for overall application performance. Design your database schema and choose your DB solution carefully.</p>
<h3>Cost lens</h3>
<p>Last but not least - how much does this cost? When you start small, taking the monolithic application road, costs should be small. With the increase in application usage and overall throughput, costs also increase. Having a big modular monolith can introduce more costs in the long run, e.g. when you try to scale it.</p>
<p>On the other hand, microservices will allow you more control over what is scaled and what is not. But starting with microservices can be more expensive than having a modular monolith.</p>
<h3>The verdict</h3>
<p>Before you read the next paragraph, please take a deep breath and try to be in a calm mental state.</p>
<p>There is no straight answer to this question. <strong>It really depends</strong>, mainly, on your use case and general scalability needs. However, there are some important points to mention.</p>
<p>A modular monolith is easier to handle than microservices - you have one application instead of many. This is a no-brainer.</p>
<p>With low or constant load, monoliths are more energy efficient. The key point here is - <strong>low and constant load</strong>. In case of a compute-intensive high load, microservices bring more advantages than monoliths.</p>
<p>The general recommendation is to <strong>start with a modular monolith.</strong> When your application slowly starts to show signs of needing <em>Netflix-style</em> scaling, you want to <strong>reconsider microservices</strong>.</p>
<p>Try not to over-engineer from the beginning or to introduce unused or unneeded functionality from the start. This will only increase the energy usage of your software, and with it the carbon emissions.</p>
<p><strong>But why do we need to think about energy efficiency of our software when we run the applications on <em>green</em> Cloud providers?</strong> Good question. What do you think, what would be the reason? I'll try to add my view of the answer in one of the following articles.</p>
<p>To sum up the discussion - like in all good rivalries, there is no exact winner. The truth is somewhere in the middle.</p>
<h2>Shut up, and give me the sources!</h2>
<p>This last bit is going to be the place where I give you the follow-up links, research, articles where I found the above-mentioned information.</p>
<h3>To learn more about monoliths and microservices</h3>
<ul>
<li><a href="https://www.geeksforgeeks.org/monolithic-architecture-system-design/">Monolithic Architecture System Design</a></li>
<li><a href="https://newsletter.techworld-with-milan.com/p/why-you-should-build-a-modular-monolith">Why you should build a modular monolith</a></li>
<li><a href="https://www.geeksforgeeks.org/microservices/">Microservices</a></li>
<li><a href="https://newsletter.techworld-with-milan.com/p/what-is-microservice-architecture">What is microservice architecture</a></li>
</ul>
<h3>Energy efficiency of monoliths and microservices</h3>
<p><a href="https://www.eco-compute.io/talk/2024/energieeffizienz-backend-architektur-monolith-vs-microservices/">This talk</a> was the basis for my research and general information about the energy efficiency of the architecture style. Note, the talk information is in German, but the slides are in English.</p>
<p>Congratulations, you've reached the end of this article! To add your thoughts, use the comment section below. It would be great if you share this, or any other article from my blog, with people that might find them interesting.</p>
<p>See you in the next article!</p>

			]]></content>
		</entry>
	
		
		<entry>
			<title>Turning the lights on/off ON Kubernetes cluster nodes</title>
			<link href="https://wonderingchimp.com/posts/turning-the-lights-on-off-on-kubernetes-cluster-nodes/"/>
			<updated>2024-12-23T00:00:00Z</updated>
			<id>https://wonderingchimp.com/posts/turning-the-lights-on-off-on-kubernetes-cluster-nodes/</id>
			<content type="html"><![CDATA[
<p>Since I'm the person going around the house turning the light switches off more often than not, I'll apply that same role to my blog today.</p>
<p>No, that doesn't mean I'll turn the lights off on this blog, shut it down and stop publishing. Even though I'm not sure how many of you read my ramblings, I'm not going to stop it, here, at least.</p>
<p>Now let's get back to the essentials - what is the purpose of this article?</p>
<p>Well, last time I wrote about <a href="https://wonderingchimp.com/posts/turning-the-lights-on-off-in-kubernetes-clusters/">LightSwitchOps</a> and how to apply the concept to Kubernetes cluster pods. In this article, we will change our perspective and move one level further - down or up, I never know, to be honest. But that's why we are here: to make mistakes and learn from them, no? In this article we will focus on Kubernetes nodes and look at approaches to scale them down when not used, or up when the usage increases.</p>
<p>The main focus of this article will be Cluster Autoscaler and Karpenter - what they are, how they work, and why you should use one or the other, or both. I don't know yet, but let's use this article to explore the topic together.</p>
<h2>Cluster Autoscaler</h2>
<p>Cluster Autoscaler is a tool that can automatically scale up or down your Kubernetes nodes. It changes the number of nodes in these two cases:</p>
<ol>
<li>There are pods that failed to run in the cluster due to insufficient resources - in this case, it will spin up a new node.</li>
<li>There are nodes in the cluster that have been underutilised for an extended period of time, and their pods can easily be moved elsewhere - in this case, the nodes are removed from the cluster and the machines are turned off.</li>
</ol>
<p>Now, all of this seems simple, and it sure is. But it requires some complex setup and a certain amount of manual intervention. You can run the Cluster Autoscaler on the cloud provider of your choice. I had the chance to run it on AWS some time ago, and it worked okay.</p>
<p>The whole setup runs as a <code>Deployment</code>, and to run it on AWS, in short, you need to do the following:</p>
<ul>
<li>Set up proper permissions.</li>
<li>Configure auto-discovery - configuration that tells Cluster Autoscaler where to look for nodes in the cluster, and which nodes to take into account for scaling. This is the preferred and recommended way to configure Cluster Autoscaler.</li>
<li>Manual configuration of nodes - also an option; it requires passing the <code>--nodes</code> argument at the startup of Cluster Autoscaler.</li>
<li>Decide on the instance size - this can include mixed instances or spot instances.</li>
<li>You can also use a static list of the instance types you want to include in your cluster setup.</li>
</ul>
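<p>To make the auto-discovery and manual options above a bit more concrete, here is a sketch of the relevant container arguments for an AWS setup. The cluster name and ASG name are placeholders, so double-check the flags against the Cluster Autoscaler documentation for your version:</p>

```yaml
# Excerpt of the Cluster Autoscaler container command (AWS) - values are placeholders.
command:
  - ./cluster-autoscaler
  - --cloud-provider=aws
  # Auto-discovery: pick up any Auto Scaling Group carrying these tags.
  - --node-group-auto-discovery=asg:tag=k8s.io/cluster-autoscaler/enabled,k8s.io/cluster-autoscaler/<CLUSTER_NAME>
  # Manual alternative: list node groups as min:max:asg-name instead.
  # - --nodes=1:5:<ASG_NAME>
```

<p>With auto-discovery, new node groups only need the right tags on the ASG - no autoscaler restart with new arguments.</p>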
<p>The setup is rather straight-forward, and you can find more information on <a href="https://github.com/kubernetes/autoscaler/tree/master/cluster-autoscaler">this link</a>.</p>
<p>And, the list of supported Cloud Providers where you can run all this is <a href="https://github.com/kubernetes/autoscaler/tree/master/cluster-autoscaler#faqdocumentation">quite long</a>.</p>
<p>In essence - if you would like to have a tool that simply turns off and deletes the cluster node when it's not used, and creates one when you need it - Cluster Autoscaler is the way to go.</p>
<p>However, when you have a new workload that is not supported by the current set of nodes, e.g. one that requires an ARM processor, you need to add a whole new node group in AWS and reconfigure Cluster Autoscaler to support it.</p>
<p>Sounds a bit tiring, doesn't it? It sure can be. Read along to find out how we can <em>mitigate</em> this.</p>
<h2>Karpenter</h2>
<p>If you run your workloads primarily on AWS and want a fine-grained scaling option, Karpenter is your weapon of choice! Although carpenters use hammers, and chisels, and whatnot, as their weapons of choice - but this is carpenter with a K... Sorry, I got carried away. Let's continue.</p>
<p>Karpenter is also a tool that automatically provisions new nodes when pods cannot be scheduled on the current ones. It is an open-source (same as Cluster Autoscaler), flexible, high-performance autoscaler.</p>
<p>Mainly developed for AWS, it also works with other Cloud Providers. But from the documentation, I somehow feel the focus is mainly on AWS, since it was developed by them. All the instructions on how to set it up, and how to migrate from Cluster Autoscaler, use AWS underneath.</p>
<p>The below diagram shows how Karpenter works.</p>
<p><a href="https://aws.amazon.com/blogs/aws/introducing-karpenter-an-open-source-high-performance-kubernetes-cluster-autoscaler/"><img src="../images/posts/0065-lights-on-off-k8s-cluster.png" alt="Diagram illustrating how Karpenter handles unschedulable Kubernetes pods. Pending pods are first processed by the Kubernetes scheduler (sched), which places them onto existing cluster nodes. Pods that cannot be scheduled due to insufficient capacity become &quot;unschedulable pods&quot; and are passed to Karpenter, which provisions just-in-time capacity by spinning up a new node. Both the existing capacity and the newly provisioned node are then consolidated by Karpenter into an optimally packed node with all pods running."></a></p>
<p>Karpenter observes <strong>the events within the Kubernetes cluster</strong>, and then sends commands to the underlying Cloud Provider to provision or deprovision nodes.</p>
<p><strong>If you wish for me to write a tutorial on how to set up Karpenter, let me know in the comments below, and I can set it up for the next article!</strong></p>
<p>Check out <a href="https://karpenter.sh/docs/getting-started/getting-started-with-karpenter/">this link</a> to get started with Karpenter.</p>
<h2>So, what does Karpenter do that Cluster Autoscaler doesn't?</h2>
<p>Well, at first I thought Karpenter was a more complex solution than Cluster Autoscaler, because I had already worked with Cluster Autoscaler. The fear of the unknown, I assume. But it turns out it's the other way around - Cluster Autoscaler is the more complex and strict tool to set up.</p>
<p>Following are some of the remarks I found during my research.</p>
<h3>Proactive vs Reactive</h3>
<p>Karpenter is more <em>proactive</em> in scaling up nodes - it looks at the actual Workloads, while Cluster Autoscaler is more <em>reactive</em> and does the readjustments of nodes when new, unscheduled pods are present.</p>
<p>Karpenter reviews the resource requirements of all unscheduled pods and then selects the instance type which fulfils the resource requirements. Cluster Autoscaler manages nodes based on resource demands, and it works with predefined Node Groups.</p>
<p>Let's say you have 100 pods you want to run on your cluster. Cluster Autoscaler will do some calculation and spin up 2-3 additional nodes, depending on the number of pods, to support the scheduling. Karpenter, however, can ask for a single, larger instance instead. It looks at the underlying workload (Pod) when it does the scaling.</p>
<h3>Fine-Grained vs Strict</h3>
<p>Karpenter has fine-grained control over the life cycle of nodes through Time-To-Live settings. Cluster Autoscaler focuses on scaling the number of nodes up or down within predefined Node Groups.</p>
<p>Let's say you want to optimise costs and use different types of instances in your Kubernetes cluster. Karpenter will let you configure a mix of dedicated and spot instances, dynamically choosing the most cost-effective options that meet the workloads' resource demands. Cluster Autoscaler does not do that automatically, and it doesn't directly manage spot instances - for each option, you will need to add a node group to the Cluster Autoscaler.</p>
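<p>As a sketch of that fine-grained control, here is roughly what a Karpenter <code>NodePool</code> mixing spot and on-demand capacity looks like. The field names follow the Karpenter documentation at the time of writing, and the referenced <code>EC2NodeClass</code> named <code>default</code> is an assumption - verify both against the version you install:</p>

```yaml
apiVersion: karpenter.sh/v1
kind: NodePool
metadata:
  name: mixed-capacity
spec:
  template:
    spec:
      nodeClassRef:
        group: karpenter.k8s.aws
        kind: EC2NodeClass
        name: default              # assumes an EC2NodeClass named "default" exists
      requirements:
        # Let Karpenter pick spot capacity when available, on-demand otherwise.
        - key: karpenter.sh/capacity-type
          operator: In
          values: ["spot", "on-demand"]
        - key: kubernetes.io/arch
          operator: In
          values: ["amd64", "arm64"]
      expireAfter: 720h            # Time-To-Live: recycle nodes after 30 days
  disruption:
    consolidationPolicy: WhenEmptyOrUnderutilized
```

<p>One NodePool like this covers what would otherwise be several separate node groups in a Cluster Autoscaler setup.</p>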
<h2>Summary</h2>
<p>To summarise the discussion, here are a couple of points to have in mind when considering Cluster Autoscaler or Karpenter.</p>
<ul>
<li>Cluster Autoscaler is supported by a longer list of Cloud Providers; Karpenter supports AWS, with some documentation on how to set it up on Azure.</li>
<li>Karpenter has more fine-grained control and lets you automate quite a bit.</li>
<li>On the other hand, with every new instance type introduced in the cluster, Cluster Autoscaler needs manual reconfiguration.</li>
<li>Karpenter has a more proactive approach, while Cluster Autoscaler has a reactive one.</li>
</ul>
<p>In the end, whichever tool you choose, the idea of not running over- or under-provisioned nodes is important. Having the ability to configure this with both of these tools is quite helpful - for us, and in the end, for our environment.</p>
<h2>Useful links</h2>
<ul>
<li><a href="https://kubernetes.io/docs/concepts/cluster-administration/cluster-autoscaling/">Cluster Autoscaling</a></li>
<li><a href="https://aws.amazon.com/blogs/aws/introducing-karpenter-an-open-source-high-performance-kubernetes-cluster-autoscaler/">Introducing Karpenter</a></li>
<li><a href="https://www.youtube.com/watch?v=3QsVRHVdOnM">Intro to Karpenter</a></li>
<li><a href="https://towardsdev.com/karpenter-vs-cluster-autoscaler-dd877b91629b">Karpenter vs Cluster Autoscaler</a></li>
</ul>
<p>Thanks for reading until the end! If you liked the article, please share it. If there is something that I failed to communicate, or you want to give your impressions, feel free to use the comments below!</p>
<p>Thank you for helping me grow!</p>

			]]></content>
		</entry>
	
		
		<entry>
			<title>Turning the lights on/off in Kubernetes clusters</title>
			<link href="https://wonderingchimp.com/posts/turning-the-lights-on-off-in-kubernetes-clusters/"/>
			<updated>2024-11-25T00:00:00Z</updated>
			<id>https://wonderingchimp.com/posts/turning-the-lights-on-off-in-kubernetes-clusters/</id>
			<content type="html"><![CDATA[
<p>Some time ago, while cruising through the <em>sustainability-related</em> parts of the Internet, I arrived upon the term <em>LightSwitchOps</em>. Since I'm a fool for all things <em>Ops</em>, I decided to have a look.</p>
<p>In this article, we will dip our fingers in the concept of <em>LightSwitchOps</em> and how to apply it in the Kubernetes environment, at the smallest possible level - pods. You can call this an intro to the <em>LSO</em> with a practical example, if you wish.</p>
<p>Just a small note on wording - I'll use the term <em>hardware</em> to describe underlying machines, servers, virtual machines, and other equipment.</p>
<p>Now, without further ado, let us start.</p>
<h2>What is Light switch ops?</h2>
<p><em>LightSwitchOps</em> is a sustainability concept of turning machines, servers, and VMs off when they are not used. Now, this concept, from a logical point of view, seems normal, right? I myself am the person who goes around the house and switches lights off whenever they're unnecessary. Why can't I (we) do the same with the servers?</p>
<p>Well, in the IT world, because of all those <em>ilities</em> (<em>availability</em> being one of them), we gravitate towards leaving the hardware on for as long as possible. Now, don't get me wrong, if the application, process, or whatever you have/use needs to be on 24/7, then go ahead, leave it on. But for most of the stuff, we don't need that availability.</p>
<p>And there is some research showing that a big percentage of running IT equipment - actually emitting CO2 - is not being used; it just sits there and consumes electricity. More information can be found <a href="https://jaychapel.medium.com/overprovisioning-always-on-resources-lead-to-26-6-billion-in-public-cloud-waste-expected-in-2021-da888ea68f74">here</a> and <a href="https://www.nrdc.org/sites/default/files/data-center-efficiency-assessment-IB.pdf">here</a>.</p>
<p>Okay, we can turn our hardware off, but <strong>what about the start-up time?</strong></p>
<p>What about it? If you don't need your hardware to be on all the time, you can spare some seconds to wait during startup. Or minutes, if we're talking in Windows terms.</p>
<h2>What is <code>kube-green</code>?</h2>
<p>Now, let's move the concept of <em>LightSwitchOps</em> into Kubernetes. We now know what the concept is, how can we apply it to our services running in Kubernetes?</p>
<p>We can use the tool called <code>kube-green</code>. The <code>kube-green</code> helps you turn the services off when not used, and back on when needed. For example - turning on during work hours, and turning off during non-working hours.</p>
<h2>How does it work?</h2>
<p>The tool itself is quite simple - when you install it on the cluster, you get a <code>CustomResourceDefinition</code>, the <code>SleepInfo</code>. This CRD basically takes the resources you specify via label selector(s) and scales them to 0 (<code>Deployment</code>), or suspends them (<code>CronJob</code>).</p>
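<p>For illustration, a <code>SleepInfo</code> resource looks roughly like this. The namespace and schedule values are made-up examples, and the field names follow the kube-green documentation at the time of writing, so verify them against your installed version:</p>

```yaml
apiVersion: kube-green.com/v1alpha1
kind: SleepInfo
metadata:
  name: working-hours
  namespace: my-app            # hypothetical namespace whose workloads should sleep
spec:
  weekdays: "1-5"              # Monday to Friday
  sleepAt: "20:00"             # scale Deployments to 0 at 20:00...
  wakeUpAt: "08:00"            # ...and back up at 08:00
  timeZone: "Europe/Belgrade"
```
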
<p>And that's it!</p>
<p>The setup is quite small, it creates the following resources:</p>
<ul>
<li>namespace - where to run <code>kube-green</code></li>
<li><code>SleepInfo</code> CRD mentioned above</li>
<li>service account for the service</li>
<li>role and cluster role for controller manager</li>
<li>cluster role for the metrics and for the proxy</li>
<li>some configuration for the controller</li>
<li>two services</li>
<li>one deployment</li>
<li>one certificate (<code>cert-manager</code> is a <a href="https://cert-manager.io/docs/installation/">prerequisite</a>)</li>
<li>and one validating web-hook configuration to validate the <code>SleepInfo</code> CRD application.</li>
</ul>
<p>For simple instructions on how to install it, check out <a href="https://kube-green.dev/docs/install/">this link</a>.</p>
<h2>Why not use the <code>HPA</code> for this?</h2>
<p>You may ask yourself, or me for that matter - what about Kubernetes-native mechanisms such as the <code>HorizontalPodAutoscaler</code>?</p>
<p>Good question. The simple answer is that <strong>the <code>HPA</code> cannot scale resources to 0</strong>. Yep.</p>
<p>The <code>HPA</code> works by monitoring the metrics of the resources we specify. If the resource usage goes above or below a defined threshold (e.g. memory or CPU), the <code>HPA</code> automatically scales those resources up or down. In that way, it enables the auto-scaling mechanism in a quite easy and stress-free way.</p>
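<p>As a reminder of what that looks like, here is a minimal <code>HPA</code> manifest - the target name and threshold are made-up examples:</p>

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: demo-app
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: demo-app             # hypothetical Deployment to scale
  minReplicas: 1               # the lowest the HPA can go
  maxReplicas: 5
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 80   # scale up when average CPU utilisation crosses 80%
```
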
<p>Because it relies on the metrics of running resources, once a workload is scaled to 0 there is nothing left to measure, so the <code>HPA</code> won't be able to <em>know</em> when to scale back up. Therefore, if you try to put <code>minReplicas: 0</code> in the <code>HPA</code>, Kubernetes will throw an error and refuse to apply the configuration (unless the alpha <code>HPAScaleToZero</code> feature gate is enabled).</p>
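<p>For reference, a typical HPA manifest looks something like this sketch (the names are illustrative) - note the <code>minReplicas</code> floor of 1:</p>

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-app            # placeholder name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app
  minReplicas: 1          # 0 is rejected by default
  maxReplicas: 5
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
```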
<p>This is where using <code>kube-green</code> can help us.</p>
<h2>Bonus points</h2>
<p>If you install <code>kube-green</code> to handle turning services off when they are not used, consider pairing it with the Kubernetes Cluster Autoscaler, which scales your nodes down when they are underused and back up when needed. The two make a good pair, from my point of view. For more info, check out <a href="https://github.com/kubernetes/autoscaler/tree/master/cluster-autoscaler">this link</a>.</p>
<h2>Summary</h2>
<p>In a nutshell, <code>kube-green</code> is a time-based scaling tool. It doesn't use any fancy metrics or approaches; it just scales resources to 0 and back up at the times you define. Here are a couple of impressions of the tool.</p>
<ul>
<li>Quite easy to install and set up, even with <code>cert-manager</code> as a prerequisite.</li>
<li>Simple to use, without any hassle and additional configuration.</li>
<li>It uses a <em>set it and forget it</em> approach - you install it, configure it, and leave it running.</li>
</ul>
<p>Things can be that simple! Even with Kubernetes.</p>
<h2>Further information</h2>
<p>To find out more about the <em>LightSwitchOps</em> concept, visit <a href="https://www.infoq.com/news/2023/03/stop-cloud-zombies-qcon/">this link</a>.</p>
<p>To find out more about <code>kube-green</code>, visit <a href="https://github.com/kube-green/kube-green">this link</a>.</p>
<p>Thanks for sticking with me for this long! In the next article, we will go one level up, and apply the concept of <em>LightSwitchOps</em> to Kubernetes nodes. See you in a couple of weeks!</p>
<p>If you liked the article, feel free to share it. If there is something wrong with the things I wrote, feel free to drop a comment below. Bonus - subscribe to the blog and receive these articles in your inbox!</p>
<p>Thank you for helping me grow!</p>

			]]></content>
		</entry>
	
		
		<entry>
			<title>Where to start with (digital) sustainability?</title>
			<link href="https://wonderingchimp.com/posts/where-to-start-with-digital-sustainability/"/>
			<updated>2024-10-28T00:00:00Z</updated>
			<id>https://wonderingchimp.com/posts/where-to-start-with-digital-sustainability/</id>
			<content type="html"><![CDATA[
				<p>My wife asked me - why haven't you posted anything recently on your blog? When I said I haven't had the time or will, she responded - oh, come on, the longer you don't write something, the longer will be the preface of why you haven't posted in a while.</p>
<p>So, I'm going to skip that preface, and just go to the main story.</p>
<p>You hear all the time - sustainability this, and sustainability that. What does all that mean? Or, on the other hand, you are interested in helping the Planet by doing something sustainability-wise, but you're not sure where to start. And you know that having yet another AI tool, prompt, and whatnot will not help the cause.</p>
<p>To get us started, I've compiled a (short) list of things that helped me get involved with the topic. There are three types of content I covered in the beginning that were quite helpful to me.</p>
<p>This is a story about an online course, a book, and a podcast.</p>
<h2>An Online Course</h2>
<p>Actually, it's not <em>a</em> course, it's more <em>the</em> course! The course is called <em>Green Software for Practitioners (LFC131)</em>.</p>
<p>Luckily for me, this is the <em>thing</em> I started with. It was (and still is) free to enrol, and it didn't take me too long to complete. If you're eager to learn, you can finish it quite fast.</p>
<p>I heard about it via the <em>Linux Foundation</em> newsletter. Yes, LF, somebody is actually reading them. More or less...</p>
<p>The course covers everything you need to know to get started. A lot of the articles on this blog are based on my learning from this course. It goes through the following:</p>
<ul>
<li>Carbon, Energy, and Hardware Efficiency</li>
<li>Carbon Awareness</li>
<li>Measurements</li>
<li>Climate Commitments.</li>
</ul>
<p>Each of the topics above is a separate section, covered to a good degree to get you started, help you learn the basics, and <em>get you in the know</em>.</p>
<p>It is <a href="https://training.linuxfoundation.org/training/green-software-for-practitioners-lfc131/">one of the best starting points</a> if you want to learn about the topic of digital sustainability and green software.</p>
<h2>A Book</h2>
<p>Truth be told, this book wasn't released yet when I first dipped my toes into this topic. I was so eager to read it that I followed the draft versions on the O'Reilly platform.</p>
<p>But then it was released, and a giant <em>Finally!</em> came from my side. It is the book <em>Building Green Software</em> by <em>Anne Currie, Sarah Hsu, and Sara Bergman</em>.</p>
<p>This book covers topics somewhat similar to the course above, but the authors go a leap further and explain a lot of concepts only mentioned in the course, and then some. The topics covered are:</p>
<ul>
<li>Building Blocks - things you should know to get started.</li>
<li>Code, Operational, and Hardware Efficiency.</li>
<li>Carbon Awareness.</li>
<li>Networking.</li>
<li>Greener Machine Learning, AI, and LLMs.</li>
<li>Measurements and Monitoring.</li>
<li>Benefits.</li>
<li>Green Software Maturity Matrix.</li>
</ul>
<p>So, I would say that <a href="https://www.oreilly.com/library/view/building-green-software/9781098150617/">this book</a> is the logical next step after you've completed the course from above.</p>
<h2>A Podcast</h2>
<p>This podcast helped me learn a lot, and it really got me started thinking and writing about sustainability. I was fortunate enough to discover it quite early in the journey, and from that point on, it was - <em>Full on!</em></p>
<p>It is a podcast called <em>Environment Variables</em>, and it's hosted by <em>Chris Adams</em>, the Executive Director of the <em>Green Web Foundation</em>, and an organiser of <em>ClimateAction.tech</em>. This podcast holds candid conversations about green software and digital sustainability.</p>
<p>It was, and still is, <a href="https://podcasts.castplus.fm/environment-variables">a great source of ideas and inspiration</a>. Here I learned about the book I mentioned above, and much more... Give it a listen!</p>
<h2>Further Inspiration(s)</h2>
<p>The list of things I would recommend is immense! But there are still a couple of places I tend to visit to get informed, get inspired, and learn new things. I will not go into the details of any of the links below; I'll leave you to explore on your own. And let me know in the comments below what you found the most interesting!</p>
<ul>
<li><a href="https://branch.climateaction.tech/">Branch Magazine</a></li>
<li><a href="https://fershad.com/writing/">Fershad's Blog</a></li>
<li><a href="https://www.thegreenwebfoundation.org/">The Green Web Foundation</a></li>
</ul>
<p>See you all in the next article - and this time I mean soon, not in three months!</p>

			]]></content>
		</entry>
	
		
		<entry>
			<title>Data centres and an increase in energy consumption</title>
			<link href="https://wonderingchimp.com/posts/data-centres-and-an-increase-in-energy-consumption/"/>
			<updated>2024-08-05T00:00:00Z</updated>
			<id>https://wonderingchimp.com/posts/data-centres-and-an-increase-in-energy-consumption/</id>
			<content type="html"><![CDATA[
				<p>Hi everyone!</p>
<p>In this week's article, we're going to talk about data centres. If you remember from before, I've already written about the energy efficiency of data centres. At the time of writing that article, the <em>AI hype</em> was going strong; however, reports on the energy efficiency of data centres were missing. We didn't know, or didn't want to know, how the (over)use of AI impacts energy consumption.</p>
<p>Well, we sort of have the numbers now. And they aren't good. Here, I want to go through those numbers, see what they mean for overall data centre energy usage, and show you some examples from the wild that could be good things to implement, work on, and have available.</p>
<p>Let's start from the top.</p>
<p>In their environmental/sustainability reports, both Microsoft and Google showed an increase in energy and water usage, as the charts below illustrate. I got the numbers from their official reports <a href="https://query.prod.cms.rt.microsoft.com/cms/api/am/binary/RW1lmju">here</a> and <a href="https://www.gstatic.com/gumdrop/sustainability/google-2024-environmental-report.pdf">here</a>.</p>
<p><img src="../images/posts/0062-real-time-dc-01.png" alt="A bar chart titled ‘Microsoft energy consumption in MWh’. The x-axis represents the years 2020 to 2023, and the y-axis shows energy consumption values ranging from 0 to 3,000,000 MWh. There are four bars corresponding to each year. The first bar for 2020 is approximately 1,250,000 MWh. The second bar for 2021 is slightly higher than the first, around 1,500,000 MWh. The third bar for 2022 shows a significant increase with about 2,250,000 MWh. The fourth bar for 2023 is the tallest at nearly 3,000,000 MWh." title="Microsoft energy consumption in megawatt hours. Source: Microsoft 2024 Environmental Sustainability Report"></p>
<p><img src="../images/posts/0062-real-time-dc-02.png" alt="A bar chart titled ‘Microsoft water consumption in megalitres.’ The chart has four vertical bars representing the years 2020 to 2023. The bar for 2020 shows a value just above 4000 megalitres, for 2021 it is just above 5000 megalitres, for 2022 it is around the 6000 megalitres mark, and for 2023 it is close to 9000 megalitres. There are no specific numerical values provided on the bars or the vertical axis, which makes it difficult to determine the exact values. However, there is a clear upward trend in water consumption over these four years." title="Microsoft water consumption in megalitres. Source: Microsoft 2024 Environmental Sustainability Report"></p>
<p><img src="../images/posts/0062-real-time-dc-03.png" alt="A bar chart titled ‘Google energy consumption in MWh’. It shows five vertical bars, each representing a year from 2019 to 2023. The height of each bar corresponds to the amount of energy consumed by Google in megawatt-hours (MWh) for that year. The y-axis is labeled with values ranging from 0 to 3,000,000 in increments of 500,000. The x-axis lists the years from left to right: 2019, 2020, 2021, 2022, and 2023. Each bar’s height increases progressively from left to right indicating an increase in energy consumption over the years. The bar for the year 2019 starts at approximately below the first increment (500,000 MWh), and the bar for the year 2023 reaches just below the topmost increment (3,000,000 MWh)." title="Google energy consumption in megawatt hours. Source: Google 2024 Environmental Report"></p>
<p><img src="../images/posts/0062-real-time-dc-04.png" alt="A bar graph titled ‘Google water consumption in million gallons.’ The horizontal axis lists the years 2019 through 2023, and the vertical axis is labeled with numbers ranging from 0 to 10000 in increments of 2000, representing million gallons. There are five bars corresponding to each year, showing an increasing trend in water consumption. The bar for 2019 starts just above 4000, with each subsequent year showing a higher consumption than the last, culminating in the bar for 2023 reaching close to the 10000 mark." title="Google water consumption in million gallons. Source: Google 2024 Environmental Report"></p>
<p>As you can see above, both energy and water consumption increased over the previous two years.</p>
<h2>How do data centres fit into the picture?</h2>
<p>If we take another look at the reports - Google's, to be exact - we can see the numbers for total water withdrawal (8653.3 million gallons) and the total water withdrawal of data centres (7657.2 million gallons). This means that <strong>88% of total water withdrawal is done by data centres.</strong></p>
<p>And what about the energy going into data centres? Well, neither Google nor Microsoft shows the exact consumption of their data centres in the reports. Google does, however, show the PUE of its data centres. And on average, it stayed the same - 1.10. This means that for every unit of energy the IT equipment uses, an additional 0.10 goes to cooling and supporting the equipment.</p>
<p>But even with a low PUE, rising energy and water usage is still not good. Water is used for cooling those servers, so if the energy consumption of data centres increases, water usage increases alongside it. A small PUE number alone will not help us cool down our planet. A decrease in energy and water consumption will.</p>
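<p>To make the numbers above concrete, here is a quick back-of-the-envelope check, using only the figures quoted from Google's 2024 report:</p>

```python
# Figures quoted above from Google's 2024 Environmental Report.
total_withdrawal_mgal = 8653.3  # total water withdrawal, million gallons
dc_withdrawal_mgal = 7657.2     # data centre water withdrawal, million gallons

dc_share = dc_withdrawal_mgal / total_withdrawal_mgal
print(f"Data centre share of water withdrawal: {dc_share:.0%}")  # -> 88%

# PUE = total facility energy / IT equipment energy.
# A PUE of 1.10 means 0.10 units of overhead per unit of IT energy,
# which works out to roughly 9% of the *total* facility energy.
pue = 1.10
overhead_share_of_total = 1 - 1 / pue
print(f"Overhead share of total energy: {overhead_share_of_total:.1%}")  # -> 9.1%
```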
<h2>What was the cause of this increase?</h2>
<p>Let's now go down memory lane and through the recent history of <em>AI getting into the spotlight</em>:</p>
<ul>
<li>November 2022 - ChatGPT public launch.</li>
<li>January 2023 - Microsoft invests US$10 billion in OpenAI.</li>
<li>February 2023 - Google announces Bard.</li>
<li>And so on and so forth...</li>
</ul>
<p>We can see that at the end of 2022 and the beginning of 2023, the <em>hype around AI</em> started to develop.</p>
<p>Now, I guess the answer to the above question is obvious, no?</p>
<h2>Where are we today?</h2>
<p>Now, let's fast-forward to today and see what we have now.</p>
<ul>
<li>(Still) No major business case for using AI.</li>
<li>An increase in AI-generated content online, making it harder to tell facts from fiction.</li>
<li>An increase in energy usage.</li>
<li>An increase in water usage.</li>
<li>An increase in Earth's temperature.</li>
</ul>
<p>And it seems that we're not stopping there. Nvidia unveiled a Blackwell cluster that almost <a href="https://www.techradar.com/computing/i-watched-nvidias-computex-2024-keynote-and-it-made-my-blood-run-cold">doubles the power consumption of chips</a>.</p>
<h2>What can we do about it?</h2>
<h3>Less AI training and usage</h3>
<p>We got used to being able to get, learn, see, discover, and <strong>understand</strong> things faster, right away. But if we want to do it properly, <strong>all these things take time</strong>. And <em>AI</em> will not help us there. It will give us an answer faster, true, but will that answer be correct? Oftentimes it would be faster to find the answer ourselves, rather than trying to figure out whether the AI was wrong.</p>
<p>Having the above in mind, the first and obvious answer is to <strong>cut back on putting AI-enabled features into every possible service or product</strong>. Training an AI model (like GPT-3) is estimated to take as much as <a href="https://www.theverge.com/24066646/ai-electricity-energy-watts-generative-consumption">1300 MWh of electricity</a> - the annual power consumption of 130 homes in the US. Using AI, on the other hand - to generate an image, for example - takes about as much energy as is needed to charge an average smartphone (~0.012 kWh).</p>
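<p>A rough comparison of those two estimates puts the scale in perspective (these are the quoted figures above, not measurements of mine):</p>

```python
# Back-of-the-envelope comparison using the estimates quoted above.
training_mwh = 1300      # estimated energy to train a GPT-3-class model
home_annual_mwh = 10     # 1300 MWh / 130 US homes = ~10 MWh per home per year
image_kwh = 0.012        # one generated image ~ one smartphone charge

homes_for_a_year = training_mwh / home_annual_mwh
images_equivalent = training_mwh * 1000 / image_kwh  # convert MWh to kWh first

print(f"One training run ~ a year of electricity for {homes_for_a_year:.0f} US homes")
print(f"...or about {images_equivalent:,.0f} generated images")
```

<p>In other words, a single training run is in the same ballpark as generating around a hundred million images.</p>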
<p>This reminds me of a post I've read.</p>
<p><a href="https://bsky.app/profile/paleofuture.bsky.social/post/3kyhb2fd2cd2u"><img src="../images/posts/0062-real-time-dc-05.png" alt="A screenshot of two tweets. The first tweet is by Matt Novak with the handle @paleofuture, which reads ‘AI folks have now discovered “thinking”.’ The second tweet is by Steph Smith with the handle @stephsmithio, which states ‘Sometimes in the process of writing a good enough prompt for ChatGPT, I end up solving my own problem, without even needing to submit it.’ The tweet from Steph Smith includes a timestamp of 2:16 PM on 7/29 and has garnered 1.7K views."></a></p>
<h3>Real-time DC metrics</h3>
<p>The second option, one that could help raise awareness, would be to have more real-time power and water consumption metrics of data centres publicly available.</p>
<p>For example, the French cloud computing and web hosting company <em>Scaleway</em> shows real-time data centre dashboards on their website. This is great!</p>
<p><a href="https://www.scaleway.com/en/environmental-leadership/"><img src="../images/posts/0062-real-time-dc-06.png" alt="Three circular graphs representing data from DCS PARIS - Solvay Bicarcenter parts #1 to #3 with various measurements such as humidity levels ranging from 89% to 100%, temperatures from -3 °C to +14 °C, conductivity from zero to several hundred µS/cm, total dissolved solids from zero to thousands of ppm, pressure in atmospheres, flow in cubic meters per second which is zero across all parts shown here; level in millimeters which also reads zero; turbidity measured in NTU only present in part #3 with a value of sixteen NTU."></a></p>
<p>Now imagine having a similar dashboard for the cloud regions where you run your workloads.</p>
<h3>Exploring other options?</h3>
<p>Water is mainly used for cooling the hardware in data centres - as far as I know. What if there were some way to re-use that water for heating nearby infrastructure? Or to use it to generate more electricity?</p>
<p>Or, is it possible to deploy data centres in colder climates, so that less energy and water is needed to cool down the hardware?</p>
<p>What do you think?</p>
<h2>Summary</h2>
<p>There are no easy solutions for stopping the overall rise in the Earth's temperature. However, starting with something - even just re-thinking how we can use resources in a better way - can be helpful.</p>
<p>Looking to history for examples could also help. Here, I can recommend the book I'm currently listening to - <a href="https://www.romankrznaric.com/history-for-tomorrow"><em>History for Tomorrow</em>, by <em>Roman Krznaric</em></a>. So far, it seems like a great book, and a good point of reference for applying practices from the past in today's world.</p>
<p>Last but not least, we need to remember: <strong>more is not always better!</strong></p>
<p>See you in the next article!</p>

			]]></content>
		</entry>
	
		
		<entry>
			<title>Six months of using a fair smartphone</title>
			<link href="https://wonderingchimp.com/posts/six-months-of-using-a-fair-smartphone/"/>
			<updated>2024-07-22T00:00:00Z</updated>
			<id>https://wonderingchimp.com/posts/six-months-of-using-a-fair-smartphone/</id>
			<content type="html"><![CDATA[
				<p>Hi everyone!</p>
<p>In this article, I want to talk about my experience with smartphones. Not from the technical side, no, but from the user side. Here, I want to revisit my current usage of smartphones, and the replace period - that's what I'll call the period after which we (or in this case, I) replace the current smartphone with a new one.</p>
<p>Now, before I start rambling about this and that, I want to put out a disclaimer first. This is not an advertisement for Fairphone smartphones, or any other phone for that matter. This is just my general impression and experience of using the devices we all have nowadays. From 7-year-olds to 107-year-olds...</p>
<h2>Smartphone usage cycle</h2>
<p>Nowadays, a regular smartphone is used for approximately 2.5 years and then replaced with a new one. The old one is usually discarded, not even recycled. Now, I don't want to write about the rights and wrongs of this cycle. I've already written about the bad consequences of this approach in some of my <a href="https://www.wonderingchimp.com/posts/six-months-of-using-a-fair-smartphone/">previous articles</a>.</p>
<h2>A rather subjective use-case</h2>
<p>Now, another disclaimer - I am a rather simple user. I need my phone to be able to:</p>
<ul>
<li>give/receive a call</li>
<li>send/receive messages (now IMs), and e-mails</li>
<li>basic research a.k.a. internet search</li>
<li>maps</li>
<li>some music or podcasts</li>
<li>and a long battery life.</li>
</ul>
<p>That is pretty much all. Yes, I don't mind a good camera or a faster phone, but none of these are dealbreakers for me. The most important thing in my phone is the battery. That was, and I still think is, my biggest focus when looking for a new phone.</p>
<h2>What is Fairphone?</h2>
<p><em>Fairphone</em> is the company behind Fairphone smartphones. These smartphones are built in a way that is fair to both the planet and the people. In other words - <em>Fairphone</em>.</p>
<p>They are built to last much longer than the normal usage cycle, are built from recycled materials, and are easily repairable.</p>
<p>For example, in case you drop your phone and the screen breaks, you can order a new screen and replace it by yourself. And for way less than you would spend doing the same on some other smartphones.</p>
<h2>Why I decided to switch?</h2>
<p>Before the switch, I was using Google Pixel 2. I bought it in a dealer store in 2017. And only because my phone at the time got stolen. The <em>one that got away</em> was a Xiaomi, but I don't remember the model.</p>
<p>What I liked about it (the Xiaomi) at the time was the good battery (of course), and the ability to customise the phone. I remember I rooted that phone a couple of times and used <a href="https://lineageos.org/">Lineage OS</a> on it. Now, <em>rooting</em> a phone means gaining full administrator access to it - which, among other things, lets you re-install the OS without all that (Google) bloatware on it.</p>
<p>It was a great learning experience, but with one small drawback - the whole application market was oriented towards the Play Store on Android and the App Store on iOS. Not every application I was using was available on F-Droid at the time. <a href="https://f-droid.org/">F-Droid</a> is something similar to the Google Play Store, but free and open source. So, the better of the two.</p>
<p>Having the possibility to play with the phone, tinker with the OS, <em>freeing</em> it in a sense, was great. I put a lot of time and effort into making that phone <em>Google-free</em>. But it got stolen, and I was a bit pissed. I thought about all those hours I spent re-installing the OS because I didn't like one thing or another, and I wasn't motivated to do it all over again. Back then, I wasn't that much into documenting my troubleshooting and tinkering with devices, so it might be because of that as well. So, I opted for a not-so-customisable Google Pixel 2.</p>
<p>I fell into the hole of conformism. On one hand, it felt so easy using the phone; on the other, all my <em>de-Google-isation</em> before felt pointless. But I continued to use it. With all that <em>Google</em> mumbo-jumbo disabled, though.</p>
<p>Time passed, and I continued to use it. And I was satisfied with it. It checked all the points that were important to me. Then, during the last year, the battery started to go from good to bad, and from bad to worse. It made no difference whether I used the phone or not, or cleaned out the apps. The battery wouldn't last more than 12 hours. If I had been able to change the battery for a new one, I would have continued to use the Google Pixel 2. I had received the last official update from Google on that phone in October 2020, but in a nutshell - I didn't care about that. I just didn't want to carry an external battery wherever I went.</p>
<p>So, I decided to buy a new phone. The main thing I wanted was a phone with a good battery. But this time, I also wanted to use the phone for a longer period. To be able to buy spare parts if something broke. To be able to buy a new battery when the old one doesn't work any more. I wanted things to be like they were before smartphones - when a phone battery died, you could easily change it for a new one...</p>
<h2>Why Fairphone?</h2>
<p>I first heard about them from a friend, a year ago. I wasn't actively looking for a phone; the one I had at the time was still functioning. Even though it spent a lot of time in Airplane mode. But I started reading about them, and started following them.</p>
<p>What I've discovered is that they don't just sell smartphones. No, they sell the possibility to repair stuff easily, on your own. To reuse and repair. And to use phones for longer. Because the current phone life cycle <strong>is not good for our environment</strong>. Period.</p>
<p>This, I liked the most! Since my goal was to buy a phone that is <em>good</em> for people and the environment, the Fairphone fit right into the picture. And so far, it's going great!</p>
<p>Besides the great hardware, almost all of it easily replaceable and repairable, they also have great Android support. They offer a 5-year warranty on the hardware, and software updates until 2031.</p>
<p>Now, if you remember from before, I don't require too much from my phone, other than that it lasts as long as possible. I didn't test or benchmark this phone. Feel free to check out the various videos about that if you're interested. This phone works just great for me!</p>
<p>Also, this being my personal blog and a kind of place I use to <em>vent</em>, I don't want to sell you anything. That is not my intention. If you'd like to find out more, be sure to visit <a href="https://shop.fairphone.com/about-us">their website</a>. They are quite transparent about what they do, from what I understand.</p>
<h2>How can they guarantee that life cycle?</h2>
<p>Well, for starters, the processor they use - the <em>Qualcomm® QCM6490 (Octa Core) extended life chipset</em> - is, from my understanding, an industrial-grade processor. This means it is designed for longer use, in factories. There, it's not that easy to just replace a device that reached <em>EOL</em> (End of Life) after only 2 years. Those environments require extended life. Why shouldn't this apply to regular users too?</p>
<p>In addition, they guarantee 5 additional Android versions after version 13 - around 8 years of continuous software updates! If I compare that with the old Google Pixel 2, I will probably use this phone for 3 more years after the last update. Not recommended if you want to be secure, but, hey, nobody hacked me. As far as I know...</p>
<h2>What I like the most about them?</h2>
<p>There are numerous things I like about this phone. Some of them are:</p>
<ul>
<li>built to last,</li>
<li>built from recycled materials,</li>
<li>in a fair way for the workers building it,</li>
<li>easily repairable,</li>
<li>a lot less impact on the environment.</li>
</ul>
<p>Last but not least, I would like to mention the <em>My Fairphone</em> application that comes with these phones. This is an app that shows you the overall status of your phone, your warranty, device info, and one of the really nice things - your phone's timeline. You can scroll and see the life cycle of your phone: when you need to check your battery, when you will receive a new Android update, and so on.</p>
<p>The image below shows just one small part of it.</p>
<p><img src="../images/posts/0061-fairphone.jpg" alt="The image is a three-panel graphic illustrating the evolution of a smartphone. The first panel, dated September 2023, displays disassembled smartphone components. The second panel, dated January 2024, shows a completed smartphone with the Android logo on its screen, alongside future years listed from 2024 to 2029. The bottom half of the image is divided into two sections; the left side says “Your phone is born” above an arrow pointing to the right, indicating the assembly process of a phone. The right side shows a close-up view of an Android smartphone’s interface with various app icons and is labeled “Summer 2024.”" title="Source: screenshot from My Fairphone app"></p>
<p>To find out more, check out <a href="https://www.fairphone.com/en/2023/09/14/my-fairphone-app/">this link</a>.</p>
<h2>Summary</h2>
<p>To finish all this rambling of mine with some (hopefully) sensible conclusion.</p>
<p>It is true, I am not your average <em>smartphone user</em> who needs all those new, flashy, or nowadays-popular AI features on their smartphone. I don't care about that.</p>
<p>What I don't want is to be limited by some <a href="https://en.wikipedia.org/wiki/Planned_obsolescence">built-in obsolescence</a>, where my device turns into a brick after just 2 years of regular usage.</p>
<p>What I want is to:</p>
<ul>
<li>be able to use my phone to reach others, find new things, on my own, without (AI) listening in on everything I say...</li>
<li>be able to open the phone case and see how the things are connected from the inside...</li>
<li>be able to easily find and repair things that get broken. Replace the battery, for example...</li>
<li>be able to make a positive impact on our environment...</li>
</ul>
<p>And you should too.</p>
<p>Now, <strong>I am not saying</strong> - hey, go buy a Fairphone, it's great! No, that is not what I want my message to be here.</p>
<p>My message here is - when buying a smartphone (or any other thing for that matter) make sure you are aware of the impact this has on our Planet. And make sure you actually need it!</p>
<p>Let me know what you think about this topic. What is your usual smartphone life cycle? What are the things you look for in a phone?</p>
<p>See you in the next article!</p>

			]]></content>
		</entry>
	
		
		<entry>
			<title>Greening the CI/CD Pipeline</title>
			<link href="https://wonderingchimp.com/posts/greening-the-ci-cd-pipeline/"/>
			<updated>2024-07-08T00:00:00Z</updated>
			<id>https://wonderingchimp.com/posts/greening-the-ci-cd-pipeline/</id>
			<content type="html"><![CDATA[
				<p>Working as a DevOps engineer (to be honest, I don't quite like that term) has taught me the principles and practices of Continuous Integration and Continuous Delivery / Deployment. Or, CI/CD for short.</p>
<p>Going through those processes - from making a change in the code to deploying that change to a different environment - caused me to try and experiment with various things and approaches. Spoiler alert - it was/is bash all the way!</p>
<p>Jokes aside, daily preoccupation with the pipelines got me thinking - is there a way to make them <em>greener</em>?</p>
<p>In this article, we will take a <em>high-level</em> approach and check out the things to consider when making the whole CI/CD process more sustainable. We will not focus on the environmental impact of the application itself and the infrastructure it runs on, but on the <strong>process happening before that</strong> - from the code commit until that commit is applied to a specific environment.</p>
<h2>How to measure the current impact?</h2>
<p>Okay, so we need to start somewhere. We can ask ourselves - what is the current impact of our CI/CD process?</p>
<p>Now, answering this can be a lot harder than you think. But it's not impossible.</p>
<p>Depending on where we run our CI/CD pipelines, we can use different ways to monitor the current setup.</p>
<ul>
<li>If running in the Cloud, you can opt for:
<ul>
<li>checking the cloud-provider-specific dashboards (not ideal)</li>
<li>using the <a href="https://www.wonderingchimp.com/demoing-the-cloud-carbon-footprint-tool/">Cloud Carbon Footprint tool</a></li>
</ul>
</li>
<li>If running on-prem, or to improve monitoring of your Cloud CI/CD agents, you can check out:
<ul>
<li><a href="https://www.wonderingchimp.com/demoing-kepler-exporter/">Kepler Exporter</a></li>
<li><a href="https://www.wonderingchimp.com/demoing-scaphandre/">Scaphandre</a></li>
</ul>
</li>
</ul>
<p>The above tools can give you an overview of the current status of your CI/CD process. They can be a good starting point.</p>
<h2>What gets neglected?</h2>
<p>The size of different artefacts matters. By artefacts in this context, I mean the following:</p>
<ul>
<li>single (or multiple) code repository</li>
<li>used libraries and packages</li>
<li>binaries</li>
<li>container images (if any)</li>
<li>all other artefacts not mentioned above.</li>
</ul>
<p>For example, it's not the same if your application binary is 10 MB or 1 GB, from both a performance AND an environmental impact perspective.</p>
<p>The environmental impact of the artefacts' size includes <em>downloading those artefacts and storing them</em>. This is what often gets neglected.</p>
<p>Measuring the impact of downloading and storing the artefacts can be quite tricky. We can leverage the tools above to help us with the measurements. But not just that.</p>
<h2>How to improve the impact?</h2>
<p>Okay, let's assume we are able to measure the impact of our CI/CD process, with some of the tooling from above. We now see the numbers, and we don't like them. How can we improve? There are a couple of things we can do.</p>
<h3>Avoid bloatware</h3>
<p>In our code, both application, and infrastructure, the hard truth is <strong>we have a lot of bloatware.</strong></p>
<blockquote>
<p>Well, bigger doesn’t imply better. Bigger means someone has lost control. Bigger means we don’t know what’s going on. Bigger means complexity tax, performance tax, reliability tax. <a href="https://tonsky.me/blog/disenchantment/">Source</a></p>
</blockquote>
<blockquote>
<p>You can deliver a lot of functionality even with a limited amount of code and dependencies. <a href="https://spectrum.ieee.org/lean-software-development">Source</a></p>
</blockquote>
<p>Instead of grabbing that cool library or tool that solves your problem, consider solving it by adding a couple more lines of code to your application. Or, if applicable, maybe a simple bash script in your container image... These are just some examples off the top of my head.</p>
<p>With this, we can definitely reduce the size of our artefacts, and thereby reduce the impact on the environment itself.</p>
<h3>Use cache where you can</h3>
<p>Caching of libraries, packages, or even container images can improve both the execution time and the overall environmental impact of the CI/CD pipelines.</p>
<p>There are numerous ways to do so - for example, caching locally on the CI/CD agent. Or, in the context of container images, remote caching of layers. The possibilities are there; we just need to look for them.</p>
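<p>As a toy illustration of agent-local caching (not any particular CI system's cache feature - the cache directory and archive name below are made-up example values), the pattern boils down to <em>download only on a cache miss</em>:</p>
<pre><code class="language-shell"># Minimal sketch: keep downloaded dependency archives in a directory that
# survives between pipeline runs, and download only when the file is missing.
CACHE_DIR="${CACHE_DIR:-/tmp/ci-cache}"

fetch_deps() {
  # $1 = archive name
  mkdir -p "$CACHE_DIR"
  if [ -f "$CACHE_DIR/$1" ]; then
    echo "cache hit: reusing $1"
  else
    echo "cache miss: downloading $1"
    # e.g. curl -Lo "$CACHE_DIR/$1" "https://example.com/$1" (placeholder URL)
    touch "$CACHE_DIR/$1"
  fi
}

fetch_deps deps.tar.gz   # first run: cache miss
fetch_deps deps.tar.gz   # subsequent runs: cache hit
</code></pre>
<p>Real pipelines should of course prefer the built-in cache mechanism of their CI system or container build tool; the point is that every avoided download saves both time and energy.</p>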
<h3>Use temporary (spot) agents</h3>
<p>If you are running the CI/CD pipeline in the Cloud, you can configure the CI/CD server to spin up temporary agents to run the job(s) on. After the pipeline has finished, the agent is shut down and <em>destroyed</em>.</p>
<p>For example, in GitLab, you can configure the <em>runners</em> (CI/CD agents) to run on AWS Spot instances. This allows you to <em>re-use</em> existing infrastructure instead of reserving new capacity.</p>
<p>This approach makes sense if you have a simple and small application, without a large amount of dependencies and binaries to download/store.</p>
<p>If you have application(s) whose size is measured in gigabytes, this approach might not be for you.</p>
<h3>Leverage running scheduled builds</h3>
<p>If your CI/CD pipeline doesn't need to run on every commit, or some part takes too long, maybe you can choose to run it once a day. Or, for example, when the energy is coming from renewable sources.</p>
<p>Using <a href="https://www.wonderingchimp.com/posts/exploring-the-green-apis/"><em>Green APIs</em></a> can help you there. You can check when you're getting the energy from renewables, and trigger the CI/CD process to run at that point in time.</p>
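<p>A minimal sketch of that idea, using the public UK Carbon Intensity API as one example source (the 150 gCO2/kWh threshold and the <code>trigger_pipeline</code> command are made-up placeholders):</p>
<pre><code class="language-shell"># Decide whether the grid is currently "green enough" to run the build.
should_run_pipeline() {
  # $1 = current grid carbon intensity in gCO2/kWh
  [ "$1" -le 150 ]
}

# Fetch the current intensity and trigger the pipeline only when it is low, e.g.:
# intensity=$(curl -s https://api.carbonintensity.org.uk/intensity | jq '.data[0].intensity.actual')
# if should_run_pipeline "$intensity"; then trigger_pipeline; fi
</code></pre>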
<h3>Leverage different regions</h3>
<p>If your CI/CD pipeline is running in the Cloud, you can use the above-mentioned <em>Green APIs</em> to check which regions are getting the energy from renewables and spin the agents there.</p>
<p>This, however, might not improve the environmental impact if your build process takes too long, and/or the artefacts you download/upload are big.</p>
<h3>Turn off agents when not used</h3>
<p>Machines consume power even when not used. Why not turn them off when idle? For example, on weekends, or outside working hours during the week. If turning off CI/CD agents is not an option, maybe you can decrease their number?</p>
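<p>A sketch of what that can look like with plain <code>cron</code> on the machine managing the agents (the <code>stop-agents.sh</code>/<code>start-agents.sh</code> scripts are hypothetical wrappers around your cloud provider's CLI, e.g. <code>aws ec2 stop-instances</code>):</p>
<pre><code class="language-shell"># m  h  dom mon dow  command
# Stop the agents every weekday evening...
0 20 *   *   1-5   /opt/ci/stop-agents.sh
# ...and start them again every weekday morning. The weekend is covered
# automatically: stopped Friday at 20:00, started Monday at 07:00.
0 7  *   *   1-5   /opt/ci/start-agents.sh
</code></pre>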
<h2>Summary</h2>
<p>Running CI/CD pipelines sustainably can have a big impact on the environment. Taking this into consideration, and not just focusing on the environmental impact of running the application, can be of great importance.</p>
<p>A couple of side effects of <em>greening</em> the pipelines could be:</p>
<ul>
<li>decreasing costs of
<ul>
<li>infrastructure</li>
<li>data transfer</li>
</ul>
</li>
<li>improving performance and execution time, which leads to less power consumption.</li>
</ul>
<p>How to start? As I've written above, you can:</p>
<ul>
<li>Measure your current state with the tools mentioned above.</li>
<li>Leverage different improvement approaches I've mentioned.</li>
</ul>
<p>I would love to hear from you on the topic! Have you found the things I've written about useful? Do you think there is something I'm missing? Add your thoughts in the comments below!</p>
<p>If you found this topic interesting, consider sharing it with a larger audience. It would mean a lot!</p>
<p>Thank you and see you in the next article!</p>

			]]></content>
		</entry>
	
		
		<entry>
			<title>Demoing Scaphandre</title>
			<link href="https://wonderingchimp.com/posts/demoing-scaphandre/"/>
			<updated>2024-06-24T00:00:00Z</updated>
			<id>https://wonderingchimp.com/posts/demoing-scaphandre/</id>
			<content type="html"><![CDATA[
				<p>Hi wondering people!</p>
<p>With this article, I again want to <em>walk the walk</em>. So, demo time!</p>
<p>In the last article I wrote about Kepler exporter, what it is, how to use it, and what all those numbers mean. If you missed it, check it out on <a href="https://www.wonderingchimp.com/posts/demoing-kepler-exporter/">this link</a>.</p>
<p>This time, I want to check out the somewhat similar tool called - <em>Scaphandre</em>.</p>
<p>We'll see what it is, how it works, and how you can install it on your machine (in my case, a Kubernetes cluster). Let's dive in!</p>
<h2>What is <em>Scaphandre</em>?</h2>
<p>The translation from French - <em>heavy diving suit</em>. In Serbian, it has a slightly different meaning. It is also a sort of one-piece suit that you wear when you go skiing, snowboarding, snowball fighting... If you do an internet search for it, you'll see a plethora of clothing stores offering scaphandres for children.</p>
<p>Sorry, I wandered a bit off-topic. In this context, <em>Scaphandre</em> is a monitoring agent (not just a Prometheus exporter) that tracks energy consumption metrics. It has the same purpose as <em>Kepler</em> - to help measure and understand energy consumption of the services and infrastructure we use.</p>
<p>On their introductory page, they ask - <em>Why bother with all of this?</em> - and give a great answer. You can find it on <a href="https://hubblo-org.github.io/scaphandre-documentation/why.html">this link</a>.</p>
<h2>How does <em>Scaphandre</em> work?</h2>
<p>Following a structure similar to the previous article's, we're now going to dive deeper into how <em>Scaphandre</em> works.</p>
<p>The design of <em>Scaphandre</em> consists of the following:</p>
<ol>
<li>Sensors</li>
<li>Exporters</li>
</ol>
<h3>Sensors</h3>
<p>Sensors are there to get the power consumption of the host, and expose it to the exporter part. Based on the documentation, there are two sensors that <em>Scaphandre</em> can use:</p>
<ul>
<li>PowercapRAPL sensor - GNU/Linux OS; with Intel or AMD x86 CPUs</li>
<li>MSRRAPL sensor - Windows 10, Windows Server 2016, and 2019; with Intel or AMD x86 CPUs</li>
</ul>
<p>These sensors use the <em>Running Average Power Limit</em> (RAPL) feature of Intel/AMD x86 processors. This feature enables setting limits on power usage by the CPU and other components. Additionally, it allows us to get measurements of those components' power consumption. Part of the RAPL measurements come from estimations and models.</p>
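<p>On Linux, you can peek at the same powercap interface yourself (assuming an Intel/AMD x86 CPU exposing RAPL; reading <code>energy_uj</code> typically requires root). The counter is cumulative, so power is the difference between two readings divided by the interval:</p>
<pre><code class="language-shell"># Compute average power from two consecutive readings of the cumulative
# microjoule counter at /sys/class/powercap/intel-rapl:0/energy_uj
rapl_power_uw() {
  # $1, $2 = first and second energy_uj readings, $3 = interval in seconds
  awk -v a="$1" -v b="$2" -v s="$3" 'BEGIN { printf "%.0f", (b - a) / s }'
}

# Example usage on a real machine:
# a=$(sudo cat /sys/class/powercap/intel-rapl:0/energy_uj); sleep 1
# b=$(sudo cat /sys/class/powercap/intel-rapl:0/energy_uj)
# rapl_power_uw "$a" "$b" 1
</code></pre>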
<p>We can specify the sensor by adding the <code>-s &lt;SENSOR_NAME&gt;</code> argument to the <code>scaphandre</code> command. For example:</p>
<pre><code class="language-shell">scaphandre -s powercap_rapl EXPORTER 
</code></pre>
<h3>Exporters</h3>
<p>On the other hand, we have exporters - the part that asks the sensor for new metrics, stores them for later usage, and exports them. Following is the list of exporters available in <em>Scaphandre</em>:</p>
<ul>
<li>JSON</li>
<li>Prometheus</li>
<li>Qemu</li>
<li>Stdout</li>
<li>Riemann</li>
<li>Warp10</li>
</ul>
<p>Each of the above exporters can be called by adding the exporter name to the <code>scaphandre</code> command, like below.</p>
<pre><code class="language-shell">scaphandre prometheus
</code></pre>
<p>If you don't specify the <code>-s</code> argument, the default <code>powercap_rapl</code> sensor will be used.</p>
<p>Each of these exporters sends metrics to a different destination - JSON to JSON output, Prometheus to an HTTP endpoint, Stdout to standard output.</p>
<p>The Qemu exporter is considered <em>special</em>, though. It computes the energy consumption of each Qemu/KVM virtual machine found on the host.</p>
<h2>What metrics are available?</h2>
<p>Before I start, I just want to quickly mention the metric types available. In <em>Prometheus</em>, we have the following:</p>
<ul>
<li>Counter - a cumulative metric that represents a single <em>monotonically</em> increasing counter. Its value can only increase, or be reset to zero on restart. Good for - number of requests served, tasks completed, errors...</li>
<li>Gauge - a metric that represents a single numerical value, that can go <em>up and down</em>. Good for - temperatures, current memory usage, number of concurrent requests...</li>
<li>Histogram - samples observations and counts them in configurable buckets. This type of metric exposes multiple time series during scrape.</li>
<li>Summary - similar to <em>histogram</em>, it samples observations. While it provides a total count and a sum of all observed values, it also calculates the configurable quantiles over a sliding time window.</li>
</ul>
<p>The most used metric types are <em>counter</em> and <em>gauge</em>.</p>
<p>Now that we've got this covered, let's move to the list of metrics available in <em>Scaphandre</em>. <em>TL;DR</em> - it's huge! Therefore, I'll focus on only a couple of the key metrics computed and available.</p>
<ul>
<li><code>scaph_host_power_microwatts</code> - Aggregation of several measurements showing the power usage of the whole host, in microwatts. It is a <em>gauge</em> metric type.</li>
<li><code>scaph_process_power_consumption_microwatts{}</code> - Power consumption of the process, measured at the topology level, in microwatts. Also a <em>gauge</em>.</li>
<li><code>scaph_socket_power_microwatts{}</code> - Power measurement relative to a CPU socket, in microwatts. Also a <em>gauge</em>.</li>
</ul>
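<p>Note the unit: to turn these readings into watts, you divide by one million, either in PromQL (<code>scaph_host_power_microwatts / 1e6</code>) or wherever you consume the value. A tiny sketch of the conversion (the Prometheus address in the comment is an assumed local setup):</p>
<pre><code class="language-shell"># Convert a microwatt reading into watts
microwatts_to_watts() {
  awk -v uw="$1" 'BEGIN { printf "%.2f", uw / 1000000 }'
}

microwatts_to_watts 17233882   # a host drawing roughly 17.2 W

# Or let Prometheus do it, e.g. against a local server:
# curl -s 'http://localhost:9090/api/v1/query' \
#   --data-urlencode 'query=scaph_host_power_microwatts / 1e6'
</code></pre>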
<p>Besides the above metrics, <em>Scaphandre</em> provides additional metrics related to:</p>
<ul>
<li>disk space</li>
<li>memory usage</li>
<li>CPU load and <em>frequency</em></li>
<li><em>Scaphandre-specific</em> metrics (to monitor and troubleshoot the tool).</li>
</ul>
<p>We've covered some basics. Now, let's dive into installing the tool and checking out all these metrics in some <em>Grafana</em> dashboard.</p>
<h2>How to install <em>Scaphandre</em>?</h2>
<p>There are a couple of ways to install <em>Scaphandre</em>, and I've tested two approaches:</p>
<ul>
<li>using a Debian package</li>
<li>using Helm and running it on a K3s cluster</li>
</ul>
<h3>Installing from <code>debian</code> package</h3>
<p>I wanted to test out how <em>Scaphandre</em> works from the command line. Therefore, I opted for installing it from the <code>debian</code> package. First, I needed to download the package from <a href="https://github.com/barnumbirr/scaphandre-debian/releases/tag/v1.0.0-1">this URL</a>.</p>
<p>These are the commands I executed to download and install the package.</p>
<pre><code class="language-shell">cd /tmp
# Download the package
curl -LO https://github.com/barnumbirr/scaphandre-debian/releases/download/v1.0.0-1/scaphandre_1.0.0-1_amd64_bookworm.deb

# Install package
sudo apt install ./scaphandre_1.0.0-1_amd64_bookworm.deb
</code></pre>
<p>After this completed, I ran <code>scaphandre</code> locally, with the following command.</p>
<p><em>Note:</em> I needed to run the command with <code>sudo</code> privileges.</p>
<pre><code class="language-shell">$ sudo scaphandre stdout -t 10

# Output
scaphandre::sensors: Sysinfo sees 16
Scaphandre stdout exporter
Sending ⚡ metrics
Measurement step is: 2s
scaphandre::sensors: Not enough records for socket
Host:	0 W from 
	package 	core		dram		uncore
Top 5 consumers:
Power		PID	Exe
No processes found yet or filter returns no value.
------------------------------------------------------------

Host:	17.233882 W from powercap_rapl_psys
	package 	core		dram		uncore
Socket0	8.890063 W |	4.670897 W	1.222094 W	0.086669 W	

Top 5 consumers:
Power		PID	Exe
0.009456176 W	284	&quot;&quot;
0.009456176 W	120792	&quot;/app/obsidian&quot;
0.004728088 W	125299	&quot;&quot;
0.004728088 W	5648	&quot;/usr/bin/gnome-shell&quot;
0.004728088 W	126604	&quot;/usr/lib/firefox/firefox-bin (deleted)&quot;
------------------------------------------------------------

Host:	16.354634 W from powercap_rapl_psys
	package 	core		dram		uncore
Socket0	7.961599 W |	3.708898 W	1.213868 W	0.07227 W	

Top 5 consumers:
Power		PID	Exe
0.02474975 W	5648	&quot;/usr/bin/gnome-shell&quot;
0.0049499497 W	284	&quot;&quot;
0.0049499497 W	130000	&quot;&quot;
0.0049499497 W	4175	&quot;/coredns&quot;
0.0049499497 W	120792	&quot;/app/obsidian&quot;
------------------------------------------------------------
</code></pre>
<p>As you can see in the output above, there are some numbers from my laptop. Now, let's see if we can show them on a <em>Grafana</em> dashboard.</p>
<h3>Installing from Helm chart</h3>
<p>To present the metrics on the <em>Grafana</em> dashboard, I've opted for the installation of <em>Scaphandre</em> via Helm, because I already had the monitoring stack running on my <em>K3s</em> cluster. I did the following.</p>
<pre><code class="language-shell">git clone https://github.com/hubblo-org/scaphandre
cd scaphandre
</code></pre>
<p>Before installing the chart, I needed to update two files. First, the <code>daemonset.yaml</code>, adding the below lines to the <code>container</code> section. This enables running the pod as <code>privileged</code>.</p>
<p><strong>Note: do not do this in production!</strong></p>
<pre><code class="language-yaml">         securityContext:
           privileged: true
</code></pre>
<p>Next, I needed to enable the <code>ServiceMonitor</code> creation in the <code>values.yaml</code> file, in order to make <em>Prometheus</em> aware of the <em>Scaphandre</em> instance.</p>
<pre><code class="language-yaml"> serviceMonitor:
   # Specifies whether ServiceMonitor for Prometheus operator should be created
   enabled: true
   interval: 1m
   # Specifies namespace, where ServiceMonitor should be installed
   namespace: monitoring
</code></pre>
<p>After making all the necessary changes, I went ahead and installed <em>Scaphandre</em>.</p>
<pre><code class="language-shell">helm upgrade -i scaphandre helm/scaphandre -n scaphandre --create-namespace
</code></pre>
<p>After this finished, I continued on to add the <em>Grafana</em> dashboard.</p>
<h2>Getting the <em>Grafana</em> dashboard</h2>
<p>Unlike the Kepler exporter dashboard, this one was rather simple, and <em>partially</em> working, at least on my setup.</p>
<p>I took the <code>json</code> file from <a href="https://github.com/hubblo-org/scaphandre/blob/main/docs_src/tutorials/grafana-kubernetes-dashboard.json">this link</a>, and imported it in <em>Grafana</em>. The end result is shown on the image below.</p>
<p><img src="../images/posts/0058-image-01.png" alt="This image displays a Grafana dashboard integrated with Scaphandre for energy usage tracking. The dashboard contains multiple panels with graphs and metrics. The top left panel shows a line graph titled ‘Processes cpu + sys’, indicating CPU usage over time. Below it, there’s another panel titled ‘Nodes’ with a line graph displaying some form of metric. On the top right, there is a heading ‘Kubernetes Energy Usage’ followed by a link to Scaphandre’s GitHub page. Below this, there are two panels labeled ‘Scaphandre’ and ‘Powerstat’, both currently without data. The bottom half of the screen shows two large empty panels titled ‘Kubernetes Context Page’ and ‘Pods’, intended to display more detailed information but currently without any data." title="Scaphandre Grafana dashboard"></p>
<p>As you can see, the dashboard worked partially, out of the box. But that is not a problem. As long as we have metrics, we can modify it or create a new, more representative one.</p>
<h2>Security concerns</h2>
<p>Looking from the security perspective - both <em>Scaphandre</em> and <em>Kepler</em> require root permissions to run. This isn't ideal, and should definitely be considered <strong>when running in production.</strong> If you (unlike myself) know what you're doing, test it out and see how it will run in your production environment.</p>
<p>Running it in <em>Kubernetes</em> will not fix this issue. Both tools run as <code>DaemonSets</code>, <code>privileged</code>, and map local system directories, such as <code>/proc</code> and <code>/sys</code>, into the pods, which is considered a <em>no-no</em> by the overall <em>Kubernetes</em> security recommendations.</p>
<h2>Summary</h2>
<p>Now, to reflect on the <em>Scaphandre</em> installation and setup process. The overall status is the following.</p>
<ul>
<li><em>Scaphandre</em> is not just a <em>Prometheus</em> exporter, it supports different exporters.</li>
<li>It is possible to run it on Windows, as it supports a <em>Windows-based</em> sensor (although, I didn't test that out).</li>
<li>Setup and installation are not as <em>straightforward</em> as with <em>Kepler</em>.</li>
<li><em>Scaphandre</em> requires a bit more tuning from the start, compared to <em>Kepler</em>.</li>
<li>Security concerns need to be taken into account and addressed accordingly, before moving to production.</li>
</ul>
<p>Congrats! You've reached the end of <em>yet another demo article</em>. If you liked what you see, or want to find out more, check out other <a href="https://www.wonderingchimp.com/tag/demo/"><em>Demo</em></a> articles from my blog.</p>
<p>It would mean a lot to me if you shared this article with people interested in the topic of <em>Sustainability in tech</em>!</p>
<p>See you in the next one!</p>

			]]></content>
		</entry>
	
		
		<entry>
			<title>Demoing Kepler Exporter</title>
			<link href="https://wonderingchimp.com/posts/demoing-kepler-exporter/"/>
			<updated>2024-06-10T00:00:00Z</updated>
			<id>https://wonderingchimp.com/posts/demoing-kepler-exporter/</id>
			<content type="html"><![CDATA[
				<p>Hi everyone!</p>
<p>In this week's article, we'll <em>walk the walk</em>, rather than just <em>talk the talk</em>. I'm going to show you how a tool called <em>Kepler</em> can provide you more insight into the power consumption of (not just) Kubernetes cluster nodes and machines. With that, we'll get predicted carbon emissions, so it's going to be interesting. At least to me.</p>
<p>We'll start from the beginning, explaining what Kepler is and how it works. Then, we'll dive into how to set it up, and in the end, we'll show what all those metrics mean in a <em>comprehensive</em> Grafana dashboard.</p>
<p>So, let's dive in!</p>
<h2>What is Kepler?</h2>
<p>First, I'll start with a note to myself. The tool <em>Kepler</em> is not similar to <a href="https://keda.sh/"><em>Keda</em></a>. <em>Kepler</em> is a Prometheus exporter, while <em>Keda</em> is <em>Kubernetes Event-driven Autoscaling</em> - a scheduler on steroids, basically. I'm going to check Keda out in one of my future articles.</p>
<p>The tool didn't get its name from <a href="https://en.wikipedia.org/wiki/Johannes_Kepler">Johannes Kepler</a>. Rather, <em>Kepler</em> in this context stands for <em>Kubernetes Efficient Power Level Exporter</em>. In a nutshell, it is a <em>Prometheus exporter</em> that uses <em>eBPF</em> to probe energy-related system stats and exports them as metrics.</p>
<p>To cover some basics - <em>Prometheus</em> is a monitoring tool that uses a pull-based method to gather metrics from various endpoints. An <em>exporter</em> is a tool that exports the underlying systems' metrics for <em>Prometheus</em> to <em>scrape</em> them.</p>
<p>And what is <em>eBPF</em>? In short, it is a technology with origins in the Linux kernel. Its main feature is that it can run sandboxed programs in a privileged context. It is used to extend the capabilities of the kernel, without the need to change the kernel source code or load kernel modules.</p>
<p>To fully explain <em>eBPF</em> and how it works, we would need a separate article. And I'd need to research the internals of the kernel. So, we're not going to spend more time on it, at least in this article. If you're interested in reading more, <a href="https://ebpf.io/what-is-ebpf/">check out the <em>eBPF</em> documentation</a>.</p>
<h2>How does Kepler work?</h2>
<p>Following is the architecture diagram of Kepler.</p>
<p><a href="https://sustainable-computing.io/design/architecture/#kepler-exporter"><img src="../images/posts/0057-using-kepler-01.png" alt="Flowchart diagram showing Kepler’s architecture as a Kubernetes-based Efficient Power Level Exporter, detailing components such as eBPF Program Generator, Kernel Transport, Preferences Configuration, Pod List, Container ID to Pod Name process mapping within Kepler core that includes Process stats and Energy Stats Reader functions; outputs connect to Prometheus for data exportation and Online Learning Model for queries."></a></p>
<p>Pretty self-explanatory, one would say. I'm not that person, though. If you are an embedded engineer and know your way around the kernel, eBPF, and the power consumption stats available through the kernel, you will be able to discern this. If you are like me, on the other hand, (almost) completely unaware of all the previously mentioned stuff, you might have some problems understanding it.</p>
<p>This made me dig a bit deeper into the specs. Here are my findings.</p>
<p>We can group the whole architecture into four parts.</p>
<p>The first part is <em>data collection</em>. This is the part that creates the eBPF program, attaches it to the Kernel and reads the energy stats.</p>
<p>The second part is <em>data aggregation</em>. This part queries the <code>kubelet</code> for pod/container information, and aggregates that data with the previously read performance counters and energy stats data.</p>
<p>The third part is <em>data modelling</em>. This part is an additional <em>feature</em> that you can enable by running <a href="https://sustainable-computing.io/kepler_model_server/get_started/"><em>Kepler Model Server</em></a>. This server enables tools for power model training, exporting, serving, and utilising, based on Kepler-gathered metrics.</p>
<p>The fourth part is <em>data presentation</em>. Data is exported as <em>Prometheus</em> metrics, while the data from the <em>Kepler Model Server</em> is available for querying and for taking actions (e.g. scheduling).</p>
<p>The graph on page 10 of the presentation linked below helped me understand the <em>Kepler</em> architecture diagram. <a href="https://github.com/sustainable-computing-io/kepler/blob/main/doc/OSS-NA22.pdf">Check it out</a> to learn more.</p>
<h2>How to get started with Kepler?</h2>
<p>To install <em>Kepler</em>, you'll need a couple of things:</p>
<ol>
<li>A working Kubernetes cluster.</li>
<li>A Prometheus (or some other monitoring) stack installed.</li>
</ol>
<p>Now, the <a href="https://sustainable-computing.io/installation/local-cluster/#install-kind">Kepler docs</a> provide instructions on getting started with <em>Kind</em>. I've written about <em>Kind</em> in an <a href="https://www.wonderingchimp.com/the-complexity-of-deploying-a-kubernetes-cluster/">article a year and a half ago</a>, and it's great to get you started, fast.</p>
<p>However, for this purpose I've chosen to run my Kubernetes cluster with <a href="https://docs.k3s.io/quick-start"><em>k3s</em></a>. I've decided to use this because it's lightweight, and it's a full-blown cluster, suitable for slower machines. And also, I've worked with <em>k3s</em>, so there's that.</p>
<p>Deploying the <em>k3s</em> cluster is a rather simple endeavour - unless you use <em>PopOS</em> Linux (an Ubuntu-based OS), which doesn't have the <em>vxlan</em> kernel module loaded by default.</p>
<p>So, I've followed the <em>k3s</em> quick-start guide, and after an hour or so, I had a working cluster! On <em>my machine</em>, I did the following:</p>
<pre><code class="language-shell">## Check if the vxlan module is loaded
lsmod | grep vxlan

## If not, configure the vxlan module to load at boot
echo "vxlan" | sudo tee /etc/modules-load.d/vxlan.conf

## Restart your machine to make sure the kernel module is loaded

## Run the k3s command to set up the cluster
curl -sfL https://get.k3s.io | sh -s - --write-kubeconfig-mode=644
</code></pre>
<p>The above <code>curl</code> command gets the <code>k3s</code> script and sets up your cluster. I've also modified <code>kubeconfig</code> file permissions, in order to be able to access the cluster.</p>
<p>Bonus: the troubleshooting command below will let you read logs from the <code>k3s</code> service, if you (hopefully not) need it.</p>
<pre><code class="language-shell">journalctl -u k3s -f
</code></pre>
<p>After all this has finished, I had a functioning Kubernetes cluster, running locally with <em>k3s</em>. To verify everything, you can just see if the pods are running with the following commands.</p>
<pre><code class="language-shell">## Set the KUBECONFIG location
export KUBECONFIG=/etc/rancher/k3s/k3s.yaml

## Get all the pods on the cluster
kubectl get pods -A
## Output
NAMESPACE     NAME                                                     READY   STATUS      RESTARTS   AGE
kube-system   local-path-provisioner-6c86858495-wgvs7                  1/1     Running     0          75m
kube-system   coredns-6799fbcd5-zrsl2                                  1/1     Running     0          75m
kube-system   helm-install-traefik-crd-mjf42                           0/1     Completed   0          75m
kube-system   helm-install-traefik-psb6k                               0/1     Completed   1          75m
kube-system   metrics-server-54fd9b65b-jcbvl                           1/1     Running     0          75m
kube-system   svclb-traefik-136ec67f-tlkg8                             2/2     Running     0          74m
kube-system   traefik-7d5f6474df-p4qd2                                 1/1     Running     0          74m
</code></pre>
<p>If you have a working Kubernetes cluster, you can skip the above step(s).</p>
<p>Now, on to installing the <em>Prometheus</em> and <em>Kepler</em>. For this, I've followed <a href="https://sustainable-computing.io/installation/kepler-helm/">a comprehensive guide in Kepler docs</a> and installed everything with <em>Helm</em>.</p>
<p>Here are the commands I've executed.</p>
<pre><code class="language-shell">## Install Prometheus on the cluster
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update

helm install prometheus prometheus-community/kube-prometheus-stack \
    --namespace monitoring \
    --create-namespace \
    --wait

## Install Kepler on the cluster
helm repo add kepler https://sustainable-computing-io.github.io/kepler-helm-chart
helm repo update

helm install kepler kepler/kepler \
    --namespace kepler \
    --create-namespace \
    --set serviceMonitor.enabled=true \
    --set serviceMonitor.labels.release=prometheus
</code></pre>
<p>This completed in a couple of minutes, and I had a working monitoring stack, and Kepler exporter in no time!</p>
<h2>Add Grafana Dashboard</h2>
<p>After you're done with the installation, you'll also need to add a <em>Grafana</em> dashboard for <em>Kepler</em>. To do this, you'll need to port forward your local <em>Grafana</em>, login, and import the <a href="https://github.com/sustainable-computing-io/kepler/blob/main/grafana-dashboards/Kepler-Exporter.json">Kepler exporter dashboard</a>.</p>
<pre><code class="language-shell">## Make Grafana locally available through the browser
## (your Grafana pod name will differ - list the pods in the monitoring namespace to find it)
kubectl port-forward --namespace monitoring prometheus-grafana-d5679d5d7-sc4d7 3000:3000
</code></pre>
<p>Open <code>localhost:3000</code> in your browser and log in to Grafana. Default credentials are <code>admin/prom-operator</code>.</p>
<p>After importing the dashboard linked above, you will see something similar to the below.</p>
<p><img src="../images/posts/0057-using-kepler-02.png" alt="Grafana dashboard for Kepler Exporter displaying multiple panels. The top section shows three gauge panels for ‘CO2 Coal’, ‘CO2 Petroleum’, and ‘CO2 Natural Gas’, indicating real-time carbon dioxide emissions. Below is a bar graph titled ‘Power Consumption in KW over 24h per Source’, showing power consumption data over a 24-hour period segmented by different energy sources. The bottom section contains two horizontal bar graphs titled ‘Total Power Consumption in KW: Non-Renewable’ and ‘Total Power Consumption in KW: Renewable A-K’, displaying cumulative power consumption data categorized into non-renewable and renewable energy sources respectively." title="Source: Local Grafana dashboard of Kepler Exporter"></p>
<p>And that's that! You now have an operating <em>Kepler</em> exporter in your cluster. Let's now see what metrics are available, and what all these numbers in the dashboard actually mean.</p>
<h2>What is the meaning of this?</h2>
<p>To fully answer this question, we would need to really dig deep into our own lives and reflect on our own purpose.</p>
<p>Fortunately, the context is a bit different here. We'll stick to answering this question in relation to the <em>Kepler</em> exporter and the metrics it enables.</p>
<p>Let's start with the basic metrics used in this dashboard.</p>
<ul>
<li><code>kepler_container_joules_total</code> - the aggregated package/socket energy consumption of the CPU, DRAM, GPUs, and other host components for a given container.</li>
<li><code>kepler_container_*_joules_total</code> - where <code>*</code> is one of the following:
<ul>
<li><code>core</code> - total energy consumption on CPU cores for a certain container;</li>
<li><code>dram</code> - total energy spent in DRAM by a container;</li>
<li><code>uncore</code> - the cumulative energy consumed by certain uncore components (last level cache, integrated GPU and memory controller); the number of components may vary depending on the system;</li>
<li><code>package</code> - the cumulative energy consumed by the CPU socket, including all cores and uncore components;</li>
<li><code>other</code> - energy consumption of other host components besides the CPU and DRAM;</li>
<li><code>gpu</code> - total energy consumption on the GPUs that a certain container has used.</li>
</ul>
</li>
</ul>
<p>Most of the metrics are available in joules - the amount of work done or energy transferred. We need to convert them to watts - the rate at which work is done or energy is transferred.</p>
<p>This is done through the <em>Prometheus</em> function <code>irate()</code>. This function calculates the per-second instant rate of increase of the time series in the range vector. In our case, joules per second, which is exactly watts.</p>
<p>One additional function used in the dashboards is <code>increase()</code>. This function calculates the increase in the time series in the range vector.</p>
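<p>Since these metrics are cumulative joule counters, the conversion that <code>irate()</code> performs is just the energy increase divided by the time window. A back-of-the-envelope sketch (the numbers are made-up examples, not real measurements):</p>
<pre><code class="language-shell"># Average power in watts = joule increase over a window / window length in seconds.
# This mirrors what irate(kepler_container_joules_total[1m]) yields per second.
joules_to_avg_watts() {
  # $1 = joule increase over the window, $2 = window length in seconds
  awk -v j="$1" -v s="$2" 'BEGIN { printf "%.2f", j / s }'
}

joules_to_avg_watts 120 60   # a container that used 120 J in 60 s averaged 2 W
</code></pre>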
<p>To find out more about <em>Prometheus</em> functions, check out the link below.</p>
<p><a href="https://prometheus.io/docs/prometheus/latest/querying/functions/">https://prometheus.io/docs/prometheus/latest/querying/functions/</a></p>
<p>The <em>Carbon Footprint</em> is calculated by using the coal, natural gas, and petroleum coefficients from <a href="https://www.eia.gov/tools/faqs/faq.php?id=74&amp;t=11">the US Energy Information Administration</a>. These coefficients are in pounds per kWh. To use metric coefficients, we can consult the <a href="https://commons.wikimedia.org/w/index.php?curid=115157229">2020 Lifecycle Emissions</a> graph or just convert pounds to grams.</p>
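<p>The conversion itself is simple arithmetic. Here is a minimal Python sketch; the coefficient value below is made up for illustration, so look up the current figures on the EIA page linked above.</p>

```python
# Convert energy (kWh) to grams of CO2, given an emissions
# coefficient in pounds of CO2 per kWh (the value below is made up).
POUNDS_TO_GRAMS = 453.592  # 1 lb = 453.592 g

def grams_co2(kwh, lbs_per_kwh):
    return kwh * lbs_per_kwh * POUNDS_TO_GRAMS

print(grams_co2(2.0, 1.0))  # 907.184 g for 2 kWh at 1 lb/kWh
```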
<h2>Summary</h2>
<p>Exploring and installing the <em>Kepler</em> exporter was quite fun and interesting! At least for me. It is a great tool that is easy to install, and it provides data about power consumption, which can be of great help!</p>
<p>You can use that data to take action - e.g. if a pod is consuming too much energy, investigate the reason and try to decrease the impact. Or run it in a different time frame, if possible. The possibilities are many.</p>
<p>The <em>Carbon footprint</em> part is an estimation, rather than a current state. And it also depends on the sources you get your electricity from. But <em>Grafana</em> allows you to easily adjust and change the dashboard to your own use-case.</p>
<p>In addition, I haven't explored the <em>Kepler Model Server</em> in this article. Based on the documentation, it is an interesting feature that provides tools for model training, exporting, serving, and utilising based on the metrics from the <em>Kepler</em> exporter.</p>
<p>And that is all for this article! I hope you liked it and found the information provided useful and interesting.</p>
<p>Leave your comments or thoughts in the comments below. I'll be sure to respond to every one of them. My rule of thumb for sharing this article is to send it to 5 people, so more will see it and learn new stuff.</p>
<p>See you in the next one!</p>

			]]></content>
		</entry>
	
		
		<entry>
			<title>A Greenwashing Detection Kit</title>
			<link href="https://wonderingchimp.com/posts/a-greenwashing-detection-kit/"/>
			<updated>2024-05-27T00:00:00Z</updated>
			<id>https://wonderingchimp.com/posts/a-greenwashing-detection-kit/</id>
			<content type="html"><![CDATA[
				<p>Hi there!</p>
<p>Another even week and another article on my blog! Random information, totally not needed - I've looked it up: this year, I've been publishing articles on even-numbered weeks.</p>
<p>Recently, I've read an article on <a href="https://www.themarginalian.org/2014/01/03/baloney-detection-kit-carl-sagan/">Carl Sagan's Baloney Detection Kit</a> by <a href="https://www.themarginalian.org/about/">Maria Popova</a>. This article discusses how to improve your critical thinking, and how to value and grow healthy scepticism towards the information we're surrounded by.</p>
<p>In other words - how to detect baloney or <em>BS</em>.</p>
<p>Then I thought - well, this <em>Baloney Detection Kit</em> would be quite useful if we were to apply it to the trending topic of <em>sustainability</em>. And the idea was born - a <em>Greenwashing Detection Kit</em>.</p>
<p>First, I'll start at the beginning - explaining to you, the reader, what it is. Then, we'll go a bit deeper into how to detect it. In the end, we'll cover some points that can help us <em>fight</em> it and hopefully stop it.</p>
<p>I hope you'll enjoy the read and not conclude that I have too many footnotes. While we're at it - what <em>is</em> the correct number of footnotes?</p>
<h2>What is it?</h2>
<p>Let's look at the Merriam-Webster dictionary for the definition of <em>Greenwashing</em>:</p>
<blockquote>
<p>the act or practice of making a product, policy, activity, etc. appear to be more environmentally friendly or less environmentally damaging than it really is <a href="https://www.merriam-webster.com/dictionary/greenwashing">^1</a></p>
</blockquote>
<p>So, if companies advertise their products as good for the environment when in fact they aren't, that is called <em>Greenwashing</em>.</p>
<blockquote>
<p>It's basically just a form of lying.<a href="https://www.nationalgeographic.com/environment/article/what-is-greenwashing-how-to-spot">^2</a></p>
</blockquote>
<p>Examples of <em>Greenwashing</em> are unfortunately present throughout various industries. They often happen in the oil and fashion industries. I'll show some of the examples below.</p>
<h2>How to detect it?</h2>
<p>In order to detect <em>Greenwashing</em>, as with the <em>Baloney Detection Kit</em>, we can employ a healthy dose of critical thinking.</p>
<p>Fact-checking is great, but do we have time to check every <em>green</em> product we consider buying? Well, yes and no. Following are some of the characteristics that can help you detect <em>Greenwashing</em>.</p>
<h3>Having a hidden trade-off</h3>
<p>The manufacturer claims that the product is <em>green</em> based on a narrow set of attributes, without actually addressing other critical environmental issues.</p>
<p>For example, using technology to promote energy efficiency without mentioning the hazardous materials that were used in manufacturing. Or - <em>AI can help us in fighting climate change</em> without first focusing on millions of kilowatts and litres of water spent in data centres for its regular operation.<a href="https://query.prod.cms.rt.microsoft.com/cms/api/am/binary/RW1lmju">^3</a></p>
<h3>Having no proof</h3>
<p>You see a bold claim by the manufacturer about a product, but there is no easily accessible information or reliable third-party certification to back it up.</p>
<p>For example, <em>Ryanair</em> claimed in their ads that it was <em>Europe's ... Lowest Emissions Airline</em> with <em>low CO2 emissions</em>. This was checked, and those ads ended up being banned in the UK. <a href="https://www.asa.org.uk/rulings/ryanair-ltd-cas-571089-p1w6b2.html">^4</a></p>
<h3>Being vague</h3>
<p>The claims are vague, not concrete and clearly defined. If something is <em>All-natural</em>, it is not necessarily <em>green</em>.</p>
<p>This characteristic can be applied to the above example from <em>Ryanair</em>. Having no proof and being vague in claims often overlap with one another.</p>
<h3>Showing off meaningless labels</h3>
<p>Companies create their own <em>sustainability</em> certifications and mark their products as <em>good</em> for the environment.</p>
<p>For example - introducing <em>scorecards</em> of how each product impacts the environment, and then falsely claiming that product A doesn't impact the environment, while it actually does. <a href="https://www.forbes.com/sites/retailwire/2022/07/13/hm-case-shows-how-greenwashing-breaks-brand-promise/">^5</a></p>
<h3>Being irrelevant</h3>
<p>Claims that are true, but not relevant. For example, many products, such as deodorants, proudly claim that they are CFC (Chlorofluorocarbon) free. This is true, but irrelevant, since CFCs have been banned for more than 30 years.<a href="https://www.unep.org/ozonaction/who-we-are/about-montreal-protocol">^6</a></p>
<h3>Being the <em>lesser of two evils</em></h3>
<p>Claiming that product A or B is in fact damaging to the environment, but not as much as products C and D.</p>
<p>For example, there was an ad about Land Rover Defender that suggested numerous environmental benefits of using the vehicle. But, the car in fact had an <em>internal combustion engine that burned the fossil fuels</em>.<a href="https://adstandards.ie/complaint/motor-vehicles-2/">^7</a></p>
<h3>Fibbing</h3>
<p>Claims that are simply false.</p>
<blockquote>
<p>fibbing: a trivial or childish lie.<a href="https://www.merriam-webster.com/dictionary/fibbing">^8</a></p>
</blockquote>
<p>For example, claims by the oil industry that the project(s) they are working on are good for the environment and <em>helping provide a sustainable future</em>.<a href="https://www.theguardian.com/environment/2008/aug/13/corporatesocialresponsibility.fossilfuels">^9</a></p>
<h2>How to stop it?</h2>
<p>Even though sometimes it is rather hard to detect, we can all <em>fight Greenwashing</em> with the following:</p>
<ul>
<li>Check if the claims made about a product are factual and true.</li>
<li>If you detect a company is making false claims - report them.</li>
<li>Speak up if you see/hear/read something that is considered <em>Greenwashing</em>.</li>
</ul>
<h2>Summary</h2>
<p>You've reached the end of this article! Before I finish, I want to mention two articles I found quite helpful while writing these lines. The first is from <a href="https://en.wikipedia.org/wiki/Greenwashing"><em>Wikipedia</em></a> and the second is from <a href="https://www.ucc.ie/en/eri/news/here-are-the-7-sins-of-greenwashing.html"><em>University College Cork in Ireland</em></a>. Make sure to check them out if you want to learn more.</p>
<p>I don't want to finish this article with the call to add your comments, notes, feedback below. Or to subscribe to my blog if you want to learn more about <em>Sustainability in Tech</em>.</p>
<p>I want to end it with an updated quote about publicity we all have heard somewhere or from someone.</p>
<p><em>There is no such thing as bad publicity. Unless it's obtained by Greenwashing.</em></p>

			]]></content>
		</entry>
	
		
		<entry>
			<title>Advocating for Sustainability in your company</title>
			<link href="https://wonderingchimp.com/posts/advocating-for-sustainability-in-your-company/"/>
			<updated>2024-05-13T00:00:00Z</updated>
			<id>https://wonderingchimp.com/posts/advocating-for-sustainability-in-your-company/</id>
			<content type="html"><![CDATA[
				<p>Hello there!</p>
<p>Some time ago, I was searching for new perspectives, ideas that could be interesting to write about. Then I asked the community on Mastodon, and as always, they responded with some interesting ideas.</p>
<p>The first I'm going to write about here is <em>How to Advertise Ecological Sustainability to Higher-Ups</em>. In other words - <em>how to sell</em> sustainability to your <em>for-profit</em> company. Thanks, <a href="https://fosstodon.org/@gohlisch">@gohlisch</a>!</p>
<p>We live in a capitalist society where profit is everything. If you can't grow your profit, you can't survive, most of the time. So, sometimes, although a bit illogical, you need to make a case for the logical stuff. And one of those topics is sustainability and our overall impact on the Planet.</p>
<p>How can we do that?</p>
<h2>Cost reduction</h2>
<p>When I first saw the question asked above, my immediate response was - <em>cost reduction</em>. I started with the engineering side of my brain, and focused on the infrastructure and amount of resources you're using. And yes, this could be one of the things we can reduce costs of. But, there are a plethora of other things we can take into account when trying to reduce costs.  Some of them include:</p>
<ul>
<li>use electricity from renewable energy sources in your offices and/or stores,</li>
<li>if your business allows it, switch to a hybrid or remote way of working,</li>
<li>examine your supply chain for a possible switch to cheaper and greener alternatives,</li>
<li>check where and how your IT infrastructure is running.</li>
</ul>
<p>From my perspective, this could be one of the major selling points for sustainability to the <em>higher ups</em>.</p>
<p>Now, there are a couple more things you can focus on besides cost reduction.</p>
<h2>Improve the unknowns</h2>
<p>Environmental and supply risks can be reduced by moving to renewable energy sources. By moving to wind and solar power, companies can have greater security over their energy resources. If the price of coal or oil skyrockets, it will not be a problem for your company.</p>
<p>In one sentence - you can control where you get the energy from.</p>
<p>Aaaand, moving to renewables is <a href="https://ourworldindata.org/cheap-renewables-growth">becoming cheaper and cheaper each year</a>.</p>
<h2>Positive publicity</h2>
<p>With sustainability being a trend, <em>going green</em> can improve the publicity of your company. And publicity is (almost) everything these days.</p>
<p>If your company focuses on sustainability, it will have a positive impact on the brand, the company's image, and overall marketing.</p>
<h2>People will buy your product(s) more</h2>
<p>Some <a href="https://hbr.org/2019/06/research-actually-consumers-do-buy-sustainable-products">research shows</a> that people will buy products more if they are good for the environment. If your company focuses on building quality products without a built-in end of life, and focuses on repair rather than selling a new one instead, it can significantly reduce the overall environmental footprint.</p>
<p>Here, I'm not referring to <a href="https://www.apple.com/newsroom/2023/09/apple-unveils-its-first-carbon-neutral-products/">Apple quality</a> of the product, I'm referring to the <a href="https://www.fairphone.com/en/impact/">Fairphone quality</a>.</p>
<h2>Retaining and attracting employees</h2>
<p>When people see that the company they're working for is putting its environmental footprint at the helm of how it does business, people will feel better. They will think of their work as meaningful.</p>
<p>And Viktor E. Frankl said:</p>
<blockquote>
<p>Life is not primarily a quest for pleasure, as Freud believed, or a quest for power, as Alfred Adler taught, but a quest for meaning. The greatest task for any person is to find meaning in his or her own life.</p>
</blockquote>
<p>This will also work in attracting new talent. New talent that will bring new perspectives, thought processes, ideas. Possibilities are endless.</p>
<h2>Summary</h2>
<p>Now, as with everything in life, you can do things the right way, or you can <em>half-ass</em> them. In order to be good for the environment as a company, we actually need to:</p>
<ul>
<li>Reduce the energy and resources we're using in our daily operations (e.g. the number of servers an application is running on if there isn't that much traffic to it).</li>
<li>Switch to renewable sources of energy.</li>
<li>Focus on the quality of our products rather than the quantity.</li>
<li>Focus on repairing instead of buying (or selling) a new product.</li>
</ul>
<p>The amount of profit we can all have is finite. And as profit is finite, so is growth itself. By focusing on and investing in more sustainable ways of working, we make sure that we are aligned with the Planet and its resources.</p>
<h2>More information</h2>
<p>Researching this topic, I found a rather <a href="https://online.hbs.edu/blog/post/business-case-for-sustainability">comprehensive article</a> from <em>Harvard Business School</em>. I used it as an inspiration, for guidance, and for overall research.</p>
<p>Additionally, if you'd like an optimistic view of the climate crisis, check out the <a href="https://app.thestorygraph.com/books/bf866382-2077-46a6-8944-e3ad7058c62a"><em>Not the End of the World: How We Can Be the First Generation to Build a Sustainable Planet</em>, by <em>Hannah Ritchie</em></a>. I just read it recently, and the optimism in it is quite helpful. Even though, for this kind of problem, <a href="https://www.theguardian.com/books/2024/jan/04/not-the-end-of-the-world-by-hannah-ritchie-review-an-optimists-guide-to-the-climate-crisis">we need pessimists as well</a>.</p>
<p>Congratulations, you've reached the end of this article! If you found this topic useful, feel free to share it, <em>to whom it may concern</em>. If you think I said or wrote something wrong, or you want to provide an overall feedback, comment below.</p>
<p>In any case, see you in the next article!</p>

			]]></content>
		</entry>
	
		
		<entry>
			<title>Why you don&#39;t need that new and cool device everyone is talking about?</title>
			<link href="https://wonderingchimp.com/posts/why-you-dont-need-that-new-and-cool-device-everyone-is-talking-about/"/>
			<updated>2024-04-29T00:00:00Z</updated>
			<id>https://wonderingchimp.com/posts/why-you-dont-need-that-new-and-cool-device-everyone-is-talking-about/</id>
			<content type="html"><![CDATA[
<p>Hi there, it's been a while since you've seen me in your inbox, or on the website.</p>
<p>To be honest, I didn't feel like writing. To be able to answer why, I would need to know the reason, so we will not do that here.</p>
<p>However, I do want to change that! And improve on matters at my blog. My plan was to publish an article every two weeks, but plans are there to be modified, or scrapped if we're not up for them, right? Well, the plan still remains, and even though I've missed a couple of the two-week articles (four, to be exact), I'll try to stick to it. How? I don't know. But you'll find out.</p>
<p>Now, to address the title of the article. The short answer is rather obvious - it is not good for our environment. We'll go through the longer answer in the paragraphs below.</p>
<h2>What is <em>embodied carbon</em>?</h2>
<p>So, let's start at the beginning - why is buying new device(s) not good for our Planet? The reason is <em>embodied or embedded carbon</em>. This is <em>the amount of CO2eq that is emitted during the production of the device(s) that we use.</em> Be it a smartphone, a laptop, a smartwatch, headphones, servers, TVs, and so on, and so forth. Each of these devices contains embedded carbon.</p>
<p>This is because the production process is quite complex and requires quite a lot of materials. Each of those materials is extracted from different parts of the world. The extraction process is a problem of its own, with a lot of carbon emissions. Additionally, the amount of electricity and water spent in these processes is quite extensive. And the electricity used in production is not always clean, adding to the amount of carbon emissions.</p>
<h2>How to measure it?</h2>
<p>This is not as easy as it sounds. We have a plethora of devices, and each group of those devices has different production processes, amounts of material used, and device lifespans.</p>
<p>We can divide carbon emissions from devices into two categories. The emissions happen during:</p>
<ol>
<li>manufacturing and</li>
<li>using the devices.</li>
</ol>
<p>Here, we're focusing on the emissions during the manufacturing process.</p>
<p>The table below shows the Information and Communication Technology (ICT) and Entertainment and Media (E&amp;M) sector quantities, carbon footprint during manufacturing and use, weight of shipments, and overall value of shipments.</p>
<p><img src="../images/posts/0054-carbon-footprint.png" alt="The quantity (volume), carbon footprint, weight, and value for the Information and Communication Technology and Entertainment and Media user devices with the largest carbon footprints. Paper and hardcopy devices are also included. The estimated total weight for each device type is also shown (note that the total weight of all paper used is out of scale)."></p>
<p>A quick note on the section below the <em>E&amp;M sector</em>. <em>Hardcopy devices</em> and <em>Paper &amp; Printing</em> are considered a sub-sector of <em>E&amp;M</em>. It includes traditional paper media as well as office and home printers and similar equipment and their energy consumption. For example, printers, copiers, faxes, and combo-devices. <a href="https://www.mdpi.com/2071-1050/10/9/3027">^1</a></p>
<p>Let's focus on the <em>Carbon footprint</em> part. As you can see from the table above, quite a lot of carbon emissions happen during manufacturing. Especially the manufacturing of smartphones and laptops.</p>
<p>In other words - a lot of carbon is emitted to produce and ship that device to you instead of you actually using it.</p>
<p>Let me quickly post a reminder graph below, referenced in my article on <a href="https://www.wonderingchimp.com/posts/why-computational-resources-are-not-infinite/">finite computing resources</a>.</p>
<p><img src="../images/posts/0051-1-life-expectancy-of-devices.png" alt="The image is a horizontal bar graph titled “HOW LONG SHOULD PRODUCTS LAST FROM A CLIMATE PERSPECTIVE?” with a subtitle “Average lifetime vs optimal lifetime to limit Global Warming Potential (years)”. It compares the average, minimum optimal, and maximum optimal lifetimes of three household products: a vacuum cleaner, a printer, and a washing machine. The vacuum cleaner has an average lifetime of 6.5 years, minimum optimal of 11 years, and maximum optimal of 18 years. The printer has an average lifetime of 4.5 years, minimum optimal of 20 years, and maximum optimal of 44 years. The washing machine has an average lifetime of 11.4 years, minimum optimal of 17 years and maximum optimal of 23 years. The graph is designed to illustrate the optimal product lifetimes from a climate perspective."></p>
<p>You can see here that the <em>minimum</em> optimal lifetime of a smartphone should be ~25 years, and for a laptop ~20 years. <a href="https://eeb.org/wp-content/uploads/2019/09/Coolproducts-report.pdf">^2</a></p>
<p>Is that even possible? Let's find out.</p>
<h2>How can we help?</h2>
<p>It can be quite overwhelming when you first hear, see, or read these numbers. You can even end up feeling depressed. At first.</p>
<p>Next stop is - what action can we take to improve this?</p>
<p>There are a couple of ways we can do this. Some of the steps we can take alone, but for others, we would need some help.</p>
<h3>Increasing the lifespan</h3>
<p>The first logical step is to keep using the devices you have for as long as possible. For example, I used my previous phone (a Pixel 2) for more than 6 years. I bought one of the two laptops I'm using 8 years ago. It is a <em>Lenovo ThinkPad T450</em> (not an IBM ThinkPad, unfortunately), which I bought used. And I'm not even sure how long it was used before I bought it. Maybe I can check that.</p>
<p>The second laptop I'm using more actively was also used (by me) before I bought it from my company when the warranty expired. After 4 years.</p>
<p>So, it's possible. Not as easy as it should be, but it is.</p>
<p>If you have been using your laptop or smartphone for quite some time, and really want a change, make sure to check more sustainable options like:</p>
<ul>
<li><a href="https://www.fairphone.com/">Fairphone</a></li>
<li><a href="https://frame.work/">Framework laptops</a></li>
</ul>
<p>The above two companies give the right to repair back to you, which I think is most important!</p>
<h3>Building more lean software</h3>
<p>This is more centred towards the software engineers out there. The problem with much of the software available today is that it is <em>bloated</em> and <em>over-complicated</em>.</p>
<p>Why does the Android system with no apps take up almost 6GB? Why do Docker images take up more than 350MB? Because of the complexity and dependency bloat we've introduced into our applications.</p>
<p>This increase in complexity and bloat requires devices to perform better, e.g. have more storage or CPU cores. Our existing devices cannot always handle that. And we end up in a vicious cycle of consumerism.</p>
<blockquote>
<p>Well, bigger doesn’t imply better. Bigger means someone has lost control. Bigger means we don’t know what’s going on. Bigger means complexity tax, performance tax, reliability tax. <a href="https://tonsky.me/blog/disenchantment/">^3</a></p>
</blockquote>
<blockquote>
<p>The incomprehensible should cause suspicion rather than admiration.<a href="https://spectrum.ieee.org/lean-software-development">^4</a></p>
</blockquote>
<p>We cannot do this alone. We need to make decision makers and stakeholders aware that over-complicating software, and bloating it with dependencies, doesn't just raise security concerns. It also raises environmental concerns.</p>
<h3>Regulate for <em>right to repair</em></h3>
<p>The <em>right to repair</em> is a user's right to repair their device(s) and in that way increase the lifespan of it. This also means that manufacturers need to:</p>
<ul>
<li>repair products for a reasonable price and within a reasonable time frame after the guarantee period has expired;</li>
<li>guarantee access to spare parts, tools, and repair information for a reasonable amount of time;</li>
<li>focus on repair first;</li>
<li>assist consumers in the repair process.</li>
</ul>
<p>As far as I know, this rule is adopted in the EU.<a href="https://www.europarl.europa.eu/news/en/press-room/20240419IPR20590/right-to-repair-making-repair-easier-and-more-appealing-to-consumers">^5</a> I'm not sure about the rest of the world, maybe some states in the US.</p>
<h2>Summary</h2>
<p>Finally, we've reached the end of the article. In it, we (hopefully) learned the following:</p>
<ul>
<li><em>Embedded carbon</em> is the carbon emitted during the manufacturing of the device.</li>
<li>Measuring the embedded carbon is hard, but a lot of the emissions go into <em>manufacturing</em>, rather than <em>using</em>, especially for smartphones.</li>
<li>We can impact it by using devices for a longer time; we (software engineers) should work on implementing leaner software; and (governments) should regulate the <em>right to repair</em>.</li>
</ul>
<p>If you found this article useful, don't hesitate to share it. If you got it via e-mail, feel free to forward it to a person that will find this helpful.</p>
<p>See you in the next article! 🤞</p>

			]]></content>
		</entry>
	
		
		<entry>
			<title>Demoing the Cloud Carbon Footprint</title>
			<link href="https://wonderingchimp.com/posts/demoing-the-cloud-carbon-footprint/"/>
			<updated>2024-02-19T00:00:00Z</updated>
			<id>https://wonderingchimp.com/posts/demoing-the-cloud-carbon-footprint/</id>
			<content type="html"><![CDATA[
				<p>Hello everyone!</p>
<p>In this article, I'm going to write about the Cloud Carbon Footprint tool. In short, it's an application that estimates energy usage and carbon emissions of various cloud providers.</p>
<p>We will explore the following:</p>
<ul>
<li>What is this tool?</li>
<li>How does it work?</li>
<li>What does the application look like?</li>
<li>How does it differ from tools provided by cloud platforms?</li>
<li>How to run it locally?</li>
</ul>
<h2>What is this tool?</h2>
<p>This application helps you see all your energy usage estimates and carbon emissions in one place. If you have resources running in multiple cloud providers, this tool can be of help - there is no need to jump from one cloud provider service to another. All in one place.</p>
<h2>How does it work?</h2>
<p>In a nutshell, it pulls usage data from cloud providers and calculates the estimated energy and GHG emissions. Estimated energy is expressed in Watt-Hours, and GHG emissions in metric tons CO2e. If you need a reminder on what CO2e is, check out my <a href="https://www.wonderingchimp.com/posts/how-much-carbon-does-my-server-emit/">earlier article</a>.</p>
<p>The estimations are calculated in the following way. (Copy from the documentation alert!)</p>
<pre><code>Total CO2e = operational emissions + embodied emissions
</code></pre>
<p>Where:</p>
<pre><code>Operational emissions = (Cloud provider service usage) x (Cloud energy conversion factors [kWh]) x (Cloud provider Power Usage Effectiveness (PUE)) x (grid emissions factors [metric tons CO2e])
</code></pre>
<p>And:</p>
<pre><code>Embodied Emissions = estimated metric tons CO2e emissions from the manufacturing of datacenter servers, for compute usage
</code></pre>
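<p>Put together, the estimation is straightforward to sketch in Python. All numbers below are made up for illustration; the real tool derives them from your usage data and published conversion factors.</p>

```python
# Sketch of the CCF estimation formula, with made-up inputs.
def operational_emissions(usage, kwh_per_unit, pue, grid_factor):
    """usage x energy conversion [kWh] x PUE x grid factor [t CO2e/kWh]."""
    return usage * kwh_per_unit * pue * grid_factor

# e.g. 720 instance-hours at 0.01 kWh/hour, PUE 1.2, grid 0.0004 t CO2e/kWh
operational = operational_emissions(720, 0.01, 1.2, 0.0004)
embodied = 0.002  # t CO2e amortised from server manufacturing
total_co2e = operational + embodied
print(round(total_co2e, 6))  # 0.005456 metric tons CO2e
```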
<p>The documentation of the application is great! Check out the longer version of the methodology on <a href="https://www.cloudcarbonfootprint.org/docs/methodology/#longer-version">this link</a>.</p>
<h2>What does the application look like?</h2>
<p>The application's Web UI is pretty neat! The image below shows an overview of it.</p>
<p><a href="https://demo.cloudcarbonfootprint.org/"><img src="../images/posts/0053-ccf-01.png" alt="The image is a screenshot of the “Cloud Carbon Footprint” dashboard. It displays data on cloud usage and emissions breakdown. The dashboard includes a line graph showing cloud usage over time, a metric indicating 14.1 metric tons CO2e of total emissions equivalent to 17 direct one-way flights from NYC to London, and a bar graph breaking down emissions by low carbon intensity, medium, and high. The interface also contains various tabs for different pages and options for user interaction."></a></p>
<p>To check it out yourself, and play around, <a href="https://demo.cloudcarbonfootprint.org/">visit the application demo</a>.</p>
<h2>How does it differ from tools provided by cloud platforms?</h2>
<p>The application currently supports AWS, Google Cloud, and Microsoft Azure. It provides estimated values, and those estimates are not meant as a replacement for data from cloud providers - more like a complement.</p>
<p>They also provide an integration with the Electricity Maps API, for real-time carbon intensity emissions factors instead of the default values. To find out more about the Electricity Maps API, <a href="https://www.wonderingchimp.com/posts/exploring-the-green-apis/">check out my article</a>.</p>
<h2>How to run it locally?</h2>
<p>Let's get our hands dirty now and run the application locally. You can run it by executing <code>yarn</code> scripts or using <code>docker compose</code>. I chose the latter approach, because I'm lazy. Also, running <code>yarn</code> would require me to install it, along with <code>node.js</code>. I opted for <em>an easier</em> approach. Yeah, right.</p>
<p>In this <em>demo</em>, I'm going to configure the CCF to run with the GCP platform. I already prepared some resources running there, so let's give it a try.</p>
<p>I first started with cloning the application repository, as recommended in the documentation.</p>
<pre><code class="language-shell">git clone --branch latest https://github.com/cloud-carbon-footprint/cloud-carbon-footprint.git
cd cloud-carbon-footprint
</code></pre>
<p>After that, I went through the guide on connecting the GCP data to the application. In a nutshell, I did the following:</p>
<ol>
<li>Created a GCP service account with <code>roles/bigquery.dataViewer</code> and <code>roles/bigquery.jobUser</code> permissions</li>
<li>Set up Google Cloud billing data export to BigQuery</li>
<li>Created an <code>.env</code> file based on the <code>env.template</code> in the repository</li>
<li>Created <code>docker</code> secrets</li>
<li>Updated the <code>docker-compose.yml</code> file</li>
<li>Ran <code>docker compose up</code> from the root of the repo.</li>
</ol>
<p>The <a href="https://www.cloudcarbonfootprint.org/docs/gcp">following link</a> provides thorough instructions on how to complete most of the steps above. Heads up: it also links to a lot of GCP documentation for setting up the service account and BigQuery.</p>
<p>Here is an overview of the <code>.env</code> file I've been using.</p>
<pre><code># GCP

# Variables needed for the Billing Data (Holistic) approach with GCP:
GCP_USE_BILLING_DATA=true
# Optionally set this variable if you want to include or not include from estimation request - defaults to true.
GCP_INCLUDE_ESTIMATES=true
GCP_USE_CARBON_FREE_ENERGY_PERCENTAGE=true
GOOGLE_APPLICATION_CREDENTIALS=/location/of/service-account-keys.json
GCP_BIG_QUERY_TABLE=your-project-id.data-set-name.table
GCP_BILLING_PROJECT_ID=your-project-id
GCP_BILLING_PROJECT_NAME=&quot;Your Project Name&quot;

# Variables to help configure average vCPUs to get more accurate data from GKE and Cloud Composer
GCP_VCPUS_PER_GKE_CLUSTER=10
GCP_VCPUS_PER_CLOUD_COMPOSER_ENVIRONMENT=10

# Variables needed for the Cloud Usage API (Higher Accuracy) approach with GCP:
GCP_PROJECTS=[{&quot;id&quot;:&quot;your-project-id&quot;,&quot;name&quot;:&quot;Your Project Name&quot;}]

GCP_RESOURCE_TAG_NAMES=[] # [&quot;tag:ise-api-enabler-access, label:goog-composer-location, project:twproject&quot;]


# Additional Configurations

# To enable the use of the Electricity Maps API for carbon intensity data, set the following variable to your token:
ELECTRICITY_MAPS_TOKEN=api-token
</code></pre>
<p>As you can see, I also tried to enable the integration with the Electricity Maps API by adding a token.</p>
<p><strong>Heads up!</strong> You will need to adjust the <code>.env</code> file to match your setup. Make sure you copy the full value for <code>GCP_BIG_QUERY_TABLE</code> from the BigQuery table in GCP. I ran into issues when the name wasn't exactly right, and the application failed to start.</p>
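<p>Since the table reference tripped me up, here is a quick local sanity check one could run before starting the containers. This is just my own ad-hoc check (the table name below is the placeholder from the <code>.env</code> above), not anything official:</p>

```shell
# Ad-hoc sanity check: a fully qualified BigQuery table reference has
# exactly three dot-separated parts: project-id.dataset.table
table="your-project-id.data-set-name.table"   # placeholder from the .env above
parts=$(echo "$table" | awk -F. '{ print NF }')
if [ "$parts" -eq 3 ]; then
  echo "GCP_BIG_QUERY_TABLE format looks OK"
else
  echo "unexpected format: $table"
fi
```

<p>It won't catch a typo in the name itself, but it does catch the common mistake of copying only the dataset and table parts without the project ID.</p>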
<p>After that, I wanted to generate the <code>docker</code> secrets. The documentation recommends running <code>yarn create-docker-secrets</code>. Since I don't have <code>yarn</code> installed, I checked the <code>package.json</code> file and saw that the script calls <code>create-docker-secrets.sh</code>. I found it in the <code>packages/api/</code> directory and ran it.</p>
<p><strong>Heads up!</strong> You will also need to put your <code>.env</code> file in this (<code>packages/api</code>) directory!</p>
<pre><code class="language-shell">./create-docker-secrets.sh
</code></pre>
<p>This script generated all necessary secrets in the <code>$HOME/.docker/secrets</code> directory. Here is the output of the directory.</p>
<pre><code class="language-shell">$ ls -lha ~/.docker/secrets/
total 56K
drwxrwxr-x 2 user group 4.0K Feb 16 09:23 .
drwxrwxr-x 3 user group 4.0K Feb 16 08:36 ..
-rw-rw-r-- 1 user group   14 Feb 16 08:48 ELECTRICITY_MAPS_TOKEN
-rw-rw-r-- 1 user group   89 Feb 16 09:23 GCP_BIG_QUERY_TABLE
-rw-rw-r-- 1 user group   20 Feb 16 08:48 GCP_BILLING_PROJECT_ID
-rw-rw-r-- 1 user group   19 Feb 16 08:48 GCP_BILLING_PROJECT_NAME
-rw-rw-r-- 1 user group    5 Feb 16 08:48 GCP_INCLUDE_ESTIMATES
-rw-rw-r-- 1 user group   57 Feb 16 08:48 GCP_PROJECTS
-rw-rw-r-- 1 user group   85 Feb 16 08:48 GCP_RESOURCE_TAG_NAMES
-rw-rw-r-- 1 user group    5 Feb 16 08:48 GCP_USE_BILLING_DATA
-rw-rw-r-- 1 user group    5 Feb 16 08:48 GCP_USE_CARBON_FREE_ENERGY_PERCENTAGE
-rw-rw-r-- 1 user group    3 Feb 16 08:48 GCP_VCPUS_PER_CLOUD_COMPOSER_ENVIRONMENT
-rw-rw-r-- 1 user group    3 Feb 16 08:48 GCP_VCPUS_PER_GKE_CLUSTER
-rw-rw-r-- 1 user group   54 Feb 16 08:48 GOOGLE_APPLICATION_CREDENTIALS
</code></pre>
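<p>For the curious, the effect of the script can be approximated like this: each <code>KEY=VALUE</code> line of the <code>.env</code> file becomes a file named <code>KEY</code> whose content is <code>VALUE</code>. Note this is my simplified sketch of the observed behaviour, not the actual <code>create-docker-secrets.sh</code>:</p>

```shell
# Simplified sketch (NOT the real create-docker-secrets.sh): turn each
# KEY=VALUE line of an .env file into a file named KEY containing VALUE.
workdir=$(mktemp -d) && cd "$workdir"
secrets_dir="$workdir/secrets"      # the real script targets ~/.docker/secrets
mkdir -p "$secrets_dir"

# A tiny sample .env for demonstration.
printf 'GCP_USE_BILLING_DATA=true\nGCP_BILLING_PROJECT_ID=my-project\n' > .env

# Skip comments and blank lines; split each remaining line on the first '='.
grep -E '^[A-Za-z_][A-Za-z0-9_]*=' .env | while IFS='=' read -r key value; do
  printf '%s' "$value" > "$secrets_dir/$key"
done

cat "$secrets_dir/GCP_BILLING_PROJECT_ID"   # prints: my-project
```

<p>Files named after environment variables are exactly what the <code>docker compose</code> file-based secrets in the next step expect.</p>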
<p>The final edit was to the <code>docker-compose.yml</code> file. I needed to remove all the values I was not using so that the <code>docker compose</code> setup could complete. Here is the file I used.</p>
<pre><code class="language-yaml">version: '3.9'

services:
  client:
    image: cloudcarbonfootprint/client:latest
    ports:
      - '80:80'
    volumes:
      - ./docker/nginx.conf:/etc/nginx/nginx.conf
    depends_on:
      - api
  api:
    image: cloudcarbonfootprint/api:latest
    ports:
      - '4000:4000'
    volumes:
      - $HOME/.config/gcloud/service-account-keys.json:/root/.config/gcloud/service-account-keys.json
    secrets:
      - GCP_BIG_QUERY_TABLE
      - GCP_BILLING_PROJECT_ID
      - GCP_BILLING_PROJECT_NAME
      - ELECTRICITY_MAPS_TOKEN
    environment:
      # set the CACHE_MODE to MONGODB to use MongoDB
      - CACHE_MODE=LOCAL
      - GCP_USE_BILLING_DATA=true
      - GCP_USE_CARBON_FREE_ENERGY_PERCENTAGE=true
      - GOOGLE_APPLICATION_CREDENTIALS=/root/.config/gcloud/service-account-keys.json
      - GCP_BIG_QUERY_TABLE=/run/secrets/GCP_BIG_QUERY_TABLE
      - GCP_BILLING_PROJECT_ID=/run/secrets/GCP_BILLING_PROJECT_ID
      - GCP_BILLING_PROJECT_NAME=/run/secrets/GCP_BILLING_PROJECT_NAME
secrets:
  GCP_BIG_QUERY_TABLE:
    file: ~/.docker/secrets/GCP_BIG_QUERY_TABLE
  GCP_BILLING_PROJECT_ID:
    file: ~/.docker/secrets/GCP_BILLING_PROJECT_ID
  GCP_BILLING_PROJECT_NAME:
    file: ~/.docker/secrets/GCP_BILLING_PROJECT_NAME
  ELECTRICITY_MAPS_TOKEN:
    file: ~/.docker/secrets/ELECTRICITY_MAPS_TOKEN
</code></pre>
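<p>One detail worth spelling out about the file-based secrets above: with <code>docker compose</code>, each secret is mounted into the container at <code>/run/secrets/&lt;NAME&gt;</code>, so the environment variable carries a <em>path</em>, and the application is expected to read the actual value from that file. Here is a minimal, illustrative sketch of that resolution step (using a temporary directory instead of <code>/run/secrets</code>; this is not CCF's actual code):</p>

```shell
# Illustrative sketch of docker's file-based secret convention:
# the variable holds a path; the real value lives in the file.
secrets_dir=$(mktemp -d)              # stands in for /run/secrets
printf 'my-project-id' > "$secrets_dir/GCP_BILLING_PROJECT_ID"
GCP_BILLING_PROJECT_ID="$secrets_dir/GCP_BILLING_PROJECT_ID"

# Resolution step: if the value points at a readable file,
# replace it with the file's contents.
if [ -f "$GCP_BILLING_PROJECT_ID" ]; then
  GCP_BILLING_PROJECT_ID=$(cat "$GCP_BILLING_PROJECT_ID")
fi
echo "$GCP_BILLING_PROJECT_ID"        # prints: my-project-id
```

<p>This is why the compose file above both declares the secrets and sets each environment variable to a <code>/run/secrets/...</code> path.</p>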
<p>With everything above set up and complete, I ran the following command.</p>
<pre><code class="language-shell">docker compose up
</code></pre>
<p>This created two containers: one for the client application and one for the API.</p>
<pre><code class="language-shell">$ docker ps
CONTAINER ID   IMAGE                                COMMAND                  CREATED       STATUS          PORTS                                       NAMES
22aca61d4487   cloudcarbonfootprint/client:latest   &quot;/docker-entrypoint.…&quot;   2 hours ago   Up 29 minutes   0.0.0.0:80-&gt;80/tcp, :::80-&gt;80/tcp           cloud-carbon-footprint-client-1
04624e02bab2   cloudcarbonfootprint/api:latest      &quot;docker-entrypoint.s…&quot;   2 hours ago   Up 29 minutes   0.0.0.0:4000-&gt;4000/tcp, :::4000-&gt;4000/tcp   cloud-carbon-footprint-api-1
</code></pre>
<p>Opening <code>localhost</code> in my browser, I got the following screen.</p>
<p><img src="../images/posts/0053-ccf-04.png" alt="The image is a screenshot of the “Cloud Carbon Footprint” application interface. It provides data visualization and information on CO2e emissions resulting from cloud usage. The interface includes a line graph labeled “Cloud Usage”, a section displaying “0.0031 metric tons CO2e” indicating the total emissions measured, and an “Emissions Breakdown” bar graph categorizing emissions into low, medium, and high severity. The interface also contains various tabs for user interaction." title="Source: http://localhost"></p>
<p>As you can see, the data is available! You (in this case, I) can go through the dashboards and check where the emissions are coming from. Pretty nice!</p>
<p><strong>Notes on the side!</strong> After I configured everything, I needed to wait a while for the data to appear. This might be due to the BigQuery export setup on GCP. Additionally, I used a trial token for the Electricity Maps API, which has only a couple of regions available. This made the application time out a couple of times. In the end, I had to remove the token so the application could start. I got plenty of warnings similar to the ones below.</p>
<pre><code class="language-shell">...
api-1     | 2024-02-17T08:03:47.476Z [BillingExportTable] warn: Electricity Maps zone data was not found for us-west1. Using default emissions factors. 
api-1     | 2024-02-17T08:03:47.567Z [BillingExportTable] warn: Electricity Maps zone data was not found for us-west1. Using default emissions factors. 
api-1     | 2024-02-17T08:03:47.802Z [BillingExportTable] warn: Electricity Maps zone data was not found for southamerica-west1. Using default emissions factors. 
api-1     | 2024-02-17T08:03:47.903Z [BillingExportTable] warn: Electricity Maps zone data was not found for southamerica-west1. Using default emissions factors. 
api-1     | 2024-02-17T08:03:48.059Z [BillingExportTable] warn: Electricity Maps zone data was not found for europe-west3. Using default emissions factors.
...
</code></pre>
<p>Let's now compare the values with the ones from the GCP Carbon Footprint tool. Check out the image below.</p>
<p><a href="https://console.cloud.google.com/carbon"><img src="../images/posts/0053-ccf-03.png" alt="The image is a screenshot of the Google Cloud interface, specifically the “Overview for billing account ‘My Billing Account’” page under the “Carbon Footprint” tab. It displays various graphs and data visualizations representing location-based monthly carbon footprint estimates. The interface includes a yearly carbon footprint section, a bar graph displaying monthly carbon footprint in kgCO2e units, and three separate bar graphs showing location-based monthly carbon footprint estimates by project, by product, and by region for January 2024. The interface also contains various tabs for user interaction."></a></p>
<p>At first glance, the values from CCF seem more fine-grained than those from GCP. CCF can also connect to the Electricity Maps API, which provides live emissions factors. I'm not sure whether GCP does that. Probably not.</p>
<p>Anyhow, the CCF is not meant to replace the tools from cloud providers, it complements them.</p>
<h2>Summary</h2>
<p>Even though I had some challenges with the setup, I find it quite nice and useful. I really like the <em>Emissions Equivalencies</em> panel (I named it). The one on the bottom left. It puts emissions from your resources into different contexts - flights, phones, and how many trees could sequester the carbon emissions from resources. It might need some improvements (talking from the deployment side). Nevertheless, I am looking forward to future releases.</p>
<p>The application itself is small, not requiring a lot of resources, sort of easy to set up and run... And the most important thing - it shows you necessary data in one place! It's definitely a good starting point.</p>
<p>Check out <a href="https://github.com/cloud-carbon-footprint/cloud-carbon-footprint">this GitHub repository</a> for more information.</p>
<p>Now, in an ideal world, cloud providers would expose more comprehensive data about resource energy usage and GHG emissions. Not just scope 1 and 2, but also scope 3. Better yet, we wouldn't be in this place to start with.</p>
<p>However, since we're not living in an ideal world, this tool can help us improve the reporting and knowledge about our resource emissions.</p>
<p>Let me know in the comments below what you think about the tool and the article itself!</p>

			]]></content>
		</entry>
	
		
		<entry>
			<title>Sustainability Fatigue - the Two Hows</title>
			<link href="https://wonderingchimp.com/posts/sustainability-fatigue-the-two-hows/"/>
			<updated>2024-02-05T00:00:00Z</updated>
			<id>https://wonderingchimp.com/posts/sustainability-fatigue-the-two-hows/</id>
			<content type="html"><![CDATA[
				<p>Looking at the Merriam-Webster dictionary on the definition of <a href="https://www.merriam-webster.com/dictionary/fatigue"><em>Fatigue</em></a>, I found the following.</p>
<blockquote>
<p>Fatigue, noun
2a: weariness or exhaustion from labor, exertion, or stress.</p>
</blockquote>
<p>Hm, interesting. Yes, it's somewhat like that. But maybe not quite the same. So I read on.</p>
<blockquote>
<p>2c: a state or attitude of indifference or apathy brought on by overexposure (as to a repeated series of similar events or appeals).</p>
</blockquote>
<p>There we are. This is how I would describe the feeling I experienced from time to time. Especially at the end of last year. Even starting to write this article felt a bit apathetic... I wonder what Merriam-Webster has to say about the word <em>apathetic</em>?</p>
<p>Now, this is not an English vocabulary lesson, and I am, by no means, a person to give you one. This is a story of the <em>state of indifference or apathy brought on by overexposure</em>. I couldn't help myself, sorry. Overexposure to sustainability-related content. The state I fell into a couple of times in the past. The state I'll probably fall into again in the future.</p>
<p>This is a story for all of you who feel, or have felt, the same. Or at least somewhat similar. It also has a selfish goal - it is a lesson for my future self on how not to fall into that <em>trap</em> again.</p>
<h2>How did it all start?</h2>
<p>Imagine you're in nature, surrounded by greenery. Trees, plants, meadows... Everything is perfect! In the background you can hear the soft sound of birds chirping, wind blowing. It's idyllic. You enjoy it...</p>
<p>This is how I felt in the beginning, with all my brain juices flowing. Exploring the possibilities on how to make our planet better, simpler, more sustainable.</p>
<p>And I read along. Days on end, even weeks. Not all the time, but whenever I had a moment to spare. All that news, those e-mails, how-to guides, videos, posts here and there...</p>
<p>In my head, it all started to change. Trees were cut down, plants were drying, meadows covered with concrete buildings... All you can hear in the background is the endless hum of the traffic with the sound of an emergency vehicle from time to time...</p>
<p>This was the feeling I ended up with. By continuously immersing myself in the topic. Reading about the solutions, but somehow getting more and more aware of the problems. News weren't great, either. Both on a local and global level.</p>
<p>It started as an optimistic quest wrapped in green. And it ended as a pessimistic <em>reality</em> wrapped in dark gray.</p>
<p>The feeling itself wasn't immediate. It was the result of an ongoing content consumption. The process of getting into fatigue wrapped in dark gray was slow. I wasn't aware of it for quite a while. Up until I saw how I felt reading about the consequences of global warming some time ago. It was the feeling you can explain with the simple word - <em>meh!</em> Which was, if we consult <a href="https://www.merriam-webster.com/dictionary/meh">Merriam-Webster dictionary</a>, again, to the point.</p>
<blockquote>
<p>apathetic, indifferent.</p>
</blockquote>
<h2>How to cope?</h2>
<p>When something is wrong in my life, when I feel different or somehow strange, I turn to books. They're my first <em>line of defence</em>. I tried reading a couple of them, but nothing felt right. I felt disconnected from each one I picked up.</p>
<p>After that, I turned to my second <em>line of defence</em> - something to listen to or watch. Podcasts, conference talks, documentaries... The result was the same - nada.</p>
<p>All those things like: <em>Five easy steps to recover from fatigue</em>; <em>Do this, and you'll be better</em>; <em>Check this great advice on how to X and Y</em>... It all reminded me of a book called - <a href="https://ronpurser.com/book"><em>McMindfulness: How Mindfulness Became the New Capitalist Spirituality</em></a>, by Ronald Purser. Just a note, I didn't read it, yet, but I found the title somehow convenient.</p>
<p>Then, I took a step back. I was trying to <em>fix</em> the feeling of <em>overexposure</em> with more of the same. The same <em>overexposure</em>, just to another type of content. Like fighting fire with fire. Which in some cases makes sense. However, in this case, it didn't. My focus and attention needed a rest. My brain needed a rest.</p>
<p>So, I just stopped. Stopped checking all the news. Stopped reading all those newsletters, blogs, articles... Stopped listening to all those podcasts. Yes, even <em>The Huberman Lab</em>.</p>
<p>Instead of a <em>push-based</em> approach, I took a <em>pull-based</em> one. I felt that information was getting <em>pushed</em> to me all the time - via e-mails, notifications, pop-ups, and what-nots... In a <em>pull-based</em> approach, you control <em>when</em> and <em>how</em> you get the information.</p>
<p>Now, is there an app for that? Well, yes, sort of. It's actually functionality that's been there all along.</p>
<p>All apps are designed to shoot dopamine into your brain with all those pop-ups and notifications. Most of them are not even relevant or necessary. Yes, I'm looking at you, <em>LinkedIn</em>! Now, what you can do is simply <em>turn off all those notifications</em>. Every application allows that, or the system you're using does. That way, <em>you control how you get the information</em>. Simple as that.</p>
<p>If you think you need to be online all the time, that is also okay. I hope you can manage it. I can't. Let me tell you this, and I'm aware it's a <em>boomer-alert</em>, I think if something is dead-urgent, <em>people will always call.</em> Either by phone or some application of choice. I tend to leave the call notifications on. For the apps I use the most frequently at least.</p>
<p>And that's it. That is how I manage. <a href="https://www.goodreads.com/book/show/39003820-somehow-i-manage"><em>Somehow I manage</em></a>.</p>
<p>So, what are the results? Well, at first, it was a bit strange. A couple of times I caught myself reaching for the phone, looking for some new thing, notification, e-mail, anything... Just to feel the excitement. Then, that desire slowly started to fade away. I often find myself leaving the phone unchecked for quite some time. And yes, when it was urgent, people called.</p>
<p>This approach helps me sort through the plethora of information I'm bombarded with every day. And to see that not everything is so dark gray and pessimistic...</p>
<p>Now I need to go, I heard my phone buzzing.</p>

			]]></content>
		</entry>
	
		
		<entry>
			<title>How learning one word can shift perspective?</title>
			<link href="https://wonderingchimp.com/posts/how-learning-one-word-can-shift-perspective/"/>
			<updated>2024-01-22T00:00:00Z</updated>
			<id>https://wonderingchimp.com/posts/how-learning-one-word-can-shift-perspective/</id>
			<content type="html"><![CDATA[
<p>A while ago, I stumbled upon an interesting <em>toot</em> on Mastodon:</p>
<p><a href="https://werd.social/@ben/111228643892082291"><img src="../images/posts/0051-2-toot.png" alt="The image is a screenshot of a Mastodon post by Ben Werdmuller (@ben@werd.social) posted on October 13, 2023, at 18:26 via Zapier app. The toot discusses the importance of treating computational resources as finite and precious, emphasizing the need for “frugal computing” to achieve goals with less energy and material. The toot includes the hashtag “#Climate” and a link to an article on arXiv.org titled “Frugal Computing – On the need for low-car…”. The toot has been retooted 32 times and liked 34 times. The visible snippet of the linked article mentions that current emissions from computing are almost 4% of something, but the text is cut off."></a></p>
<p>It linked to the research paper titled <em>Frugal Computing - On the need for low-carbon and sustainable computing and the path towards zero-carbon computing</em>. My thought process was the following - Hm, this seems interesting, but what the hell is <em>frugal</em>?</p>
<p>After a bit of research (one search query), I found out that <em>frugal</em> means economical: not wasteful, wise in the expenditure of resources. Since I'm not a native English speaker, I felt a small sense of accomplishment after learning a new word.</p>
<h2>The <em>Frugal Computing</em></h2>
<p>All jokes aside, the title of the paper now made much more sense to me than at first glance. So I decided to take a look and find out more. I opened the paper and read the abstract.</p>
<blockquote>
<p>The current emissions from computing are almost 4% of the world total. This is already more than emissions from the airline industry and are projected to rise steeply over the next two decades. By 2040 emissions from computing alone will account for more than half of the emissions budget to keep global warming below 1.5°C. Consequently, this growth in computing emissions is unsustainable. The emissions from production of computing devices exceed the emissions from operating them, so even if devices are more energy efficient producing more of them will make the emissions problem worse. Therefore we must extend the useful life of our computing devices.
As a society we need to start treating computational resources as finite and precious, to be utilised only when necessary, and as effectively as possible. We need frugal computing: achieving our aims with less energy and material.</p>
</blockquote>
<p>Great! - I thought. There is a paper I can turn to when I need to convince myself and others of the importance of the economical use of computational resources. And yes, I'm aware this is coming from a person who has continued to think in the well-known sysadmin way - Sure, we can add more CPU or memory to your server, machine, VM, pod, container...</p>
<p>But the important question here is - do we actually need that much CPU or memory? Probably not.</p>
<h2>About the author</h2>
<p>The author of the paper is Wim Vanderbauwhede, a professor from the University of Glasgow. He's the lead of the Low Carbon and Sustainable Computing activity at the School of Computing Science at the University. You can find more about him on <a href="https://www.gla.ac.uk/schools/computing/staff/wimvanderbauwhede/#">this link</a>.</p>
<p>If you are on Mastodon, you can also check <a href="https://merveilles.town/@wim_v12e">his profile</a>.</p>
<p>Aaaand, he also has a <a href="https://wimvanderbauwhede.codeberg.page/">great blog</a>!</p>
<h2>Into the paper</h2>
<p>This person knows what he's writing about, I thought. Unlike some of us (I mean me, yes). So, I continued reading the paper. In the following text, I'll go through each section and write down what I learned and how I understand it. I hope it helps!</p>
<h3>What are computational resources?</h3>
<p>In the first section, the author gives an explanation of what computational resources are. He uses a simple example.</p>
<blockquote>
<p>... when you perform a web search on your phone or participate in a video conference on your laptop, the computational resources involved are those for production and running of your phone or laptop, the mobile network or WiFi you are connected to, the fixed network it connects to, the data centres that perform the search or video delivery operations.</p>
</blockquote>
<p>There is nothing more to add here. It's all summed up in a nice and comprehensive way. So let's move on to the next part.</p>
<h3>Are they infinite?</h3>
<p>The second section is about the fact that computational resources are <strong>finite.</strong></p>
<p>We tend to see computational resources as infinite. <em>We can always have more servers, storage, VMs... A new and more powerful laptop, smartphone, this new smartwatch...</em> The last one was personal. When will it end?</p>
<p>Well, it turns out that, first - the energy usage of these devices will grow. And second - the resources used to create these devices and infrastructure are finite. So, it should end sooner rather than later.</p>
<p>The author sums up the section with the following.</p>
<blockquote>
<p>.. as a society we need to start treating computational resources as finite and precious, to be utilised only when necessary, and as frugally as possible. And as computing scientists, we need to ensure that computing has the lowest possible energy consumption. And we should achieve this with the currently available technologies because the lifetimes of compute devices needs to be extended dramatically.</p>
</blockquote>
<blockquote>
<p>I would like to call this “frugal computing”: achieving the same results for less energy by being more frugal with our computing resources.</p>
</blockquote>
<p>So, the term <em>frugal computing</em> was born.</p>
<h3>Why so serious?</h3>
<p>The third section goes into detail about why this is a problem and why we need <em>frugal computing</em>.</p>
<p><strong>Meeting the climate targets</strong>. As stated in the paper - <em>we cannot count on renewables to eliminate CO2 emissions from electricity in time to meet the climate targets.</em> We also need to reduce our energy consumption.</p>
<p><strong>Emissions from consumption of computational resources</strong>. Research shows that emissions from merely using devices will grow from 3%-3.5% in 2020 to 10%-14% by 2040. In other words, usage emissions will increase three- to fourfold. This is a problem because by 2040 these emissions will amount to 5 GtCO2e, while the target for the world's total emissions from all sources by 2040 is 13 GtCO2e.</p>
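<p>That growth factor is easy to verify with a quick back-of-the-envelope calculation - my own check, pairing the low estimates (3% to 10%) and the high estimates (3.5% to 14%) from the paper:</p>

```shell
# Growth of computing's usage-emissions share, 2020 -> 2040:
# low estimates 3% -> 10%, high estimates 3.5% -> 14%.
awk 'BEGIN { printf "low: %.1fx, high: %.1fx\n", 10 / 3, 14 / 3.5 }'
# prints: low: 3.3x, high: 4.0x
```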
<p><strong>Emissions from production of computational resources</strong>. Yet another thing we don't think about.</p>
<p><em>Okay, new model of the phone is out, I need to have it! (Even though I used this model for only a couple of months...)</em></p>
<p>Well, emissions from producing devices are actually much bigger than those from using them. For laptops and similar devices, production, distribution, and disposal account for 52% of the total (the total being emissions from both usage and production). For smartphones, that number is 72%, which is even bigger! And for servers, it is 20%.</p>
<p>We can <em>fix</em> all this by increasing the life span of the devices we use. Simple as that, isn't it?</p>
<p><a href="https://eeb.org/wp-content/uploads/2019/09/Coolproducts-report.pdf"><img src="../images/posts/0051-1-life-expectancy-of-devices.png" alt="The image is a horizontal bar graph titled “HOW LONG SHOULD PRODUCTS LAST FROM A CLIMATE PERSPECTIVE?” with a subtitle “Average lifetime vs optimal lifetime to limit Global Warming Potential (years)”. It compares the average, minimum optimal, and maximum optimal lifetimes of three household products: a vacuum cleaner, a printer, and a washing machine. The vacuum cleaner has an average lifetime of 6.5 years, minimum optimal of 11 years, and maximum optimal of 18 years. The printer has an average lifetime of 4.5 years, minimum optimal of 20 years, and maximum optimal of 44 years. The washing machine has an average lifetime of 11.4 years, minimum optimal of 17 years and maximum optimal of 23 years. The graph is designed to illustrate the optimal product lifetimes from a climate perspective."></a></p>
<p><strong>Total emissions cost from computing</strong>. When we add things together, they don't look good. Based on the above, emissions from both consumption and production by 2040 will be 10 GtCO2e. And remember that the target is still 13 GtCO2e.</p>
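<p>To relate those two totals (again, my own back-of-the-envelope check using the paper's figures): computing alone would then take up more than three quarters of the entire 2040 budget.</p>

```shell
# Projected computing emissions (10 GtCO2e) as a share of the
# 13 GtCO2e world-total target for 2040.
awk 'BEGIN { printf "%.0f%% of the 2040 budget\n", 10 / 13 * 100 }'
# prints: 77% of the 2040 budget
```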
<p>The main carbon cost of the resources is their production and the use of the mobile network. We <em>must extend their useful life very considerably and reduce network utilisation.</em></p>
<h3>How can we achieve computational frugality?</h3>
<p>The fourth section presents a vision of how we can achieve that. To put it simply:</p>
<ul>
<li>use devices for a longer time,</li>
<li>support those devices for a longer time (maintenance, spare parts...),</li>
<li>incentivise long-term support and usage (with taxes and policies).</li>
</ul>
<h3>Challenges ahead</h3>
<p>The fifth section is about research challenges to the vision presented earlier.</p>
<p>There are numerous challenges in various areas: cloud computing, ultra-HD video &amp; VR/AR, IoT, mobile devices, AI... How we approach each of them will determine the end result.</p>
<h3>Where to next?</h3>
<p>This section is about research directions - where we can go to contribute to the vision stated above.</p>
<ul>
<li>Systems must be as energy-efficient and long-lived as possible.</li>
<li>Sustainable systems need to be data-driven.</li>
<li>Human-computer interfaces should bring awareness to users and nudge them towards more sustainable practices.</li>
<li>Programming languages need to be aware of the full-system energy usage.</li>
<li>Algorithms should focus on minimising energy consumption.</li>
<li>Compilers need to compile for minimal energy consumption.</li>
</ul>
<h2>Conclusion</h2>
<p>Whenever I hear something like <em>less is more</em>, I remind myself of a joke - <em>Minimalism is just something made up by Big Small™ to sell more less</em>. It always brings a smile to my face.</p>
<p>Joke aside, the way forward to reducing emissions from computing is by <strong>using less energy</strong> and <strong>less materials</strong>. And this is a fact!</p>
<p><a href="https://arxiv.org/abs/2303.06642">Here</a> is the link to the research paper. I hope you will find it as useful as I did.</p>
<p>Let me know in the comments below what you think about the article. How do you find my comments on the research paper? Is my analysis/explanation of the sections understandable?</p>
<p>Thank you and see you in the next article!</p>

			]]></content>
		</entry>
	
		
		<entry>
			<title>The 2023 in review</title>
			<link href="https://wonderingchimp.com/posts/the-2023-in-review/"/>
			<updated>2023-12-25T00:00:00Z</updated>
			<id>https://wonderingchimp.com/posts/the-2023-in-review/</id>
			<content type="html"><![CDATA[
				<p>Hi there!</p>
<p>It's been a full calendar year of writing! That's almost two years in total! I started writing at the end of January last year. And, as you may already know, it's cool to have a review of the year, so that is my plan for the last post of the year.</p>
<p>Disclaimer: it's not going to be like Spotify Wrapped or whatnot, so don't expect much. I will reflect on the things I've written this year - what, how, and why. I'll also mention the things that inspired me, gave me ideas, and were the wind at my back, so to say.</p>
<p>So, let's get started!</p>
<h2>Writings</h2>
<p>Throughout the year, I've written about various topics. I was still in my exploratory phase, so I started with a couple of articles about <a href="https://www.wonderingchimp.com/tag/kubernetes/">Kubernetes</a>. I wrote a guide on how to get started with Kubernetes, ways to deploy it, and Kubernetes Ingress. How did that get on the list? Well, looking back now, I didn't understand how Kubernetes Ingress works, so I wanted to write about it. You might've discovered this already, but <em>the things I don't know are usually what I write about here.</em> The one to point out from this list is the guide on <a href="https://www.wonderingchimp.com/posts/k8s-where-to-start/">where to start with Kubernetes</a>.</p>
<p>Next in line were articles about <a href="https://www.wonderingchimp.com/tag/learning/">learning</a> and <a href="https://www.wonderingchimp.com/tag/growth/">growth</a>. These are the topics I go back to quite often. I like learning new stuff, and exploring ways to grow myself. I believe that <em>writing helps me grow</em>, hence the articles on various topics. Two that I would like to point out are about <a href="https://www.wonderingchimp.com/posts/everyday-epiphanies-the-benefits-of-keeping-a-daily-journal/">journaling</a> and <a href="https://www.wonderingchimp.com/posts/learning-to-learn/">life-long learning</a>.</p>
<p>I also wrote about <a href="https://www.wonderingchimp.com/tag/climbing/">climbing</a>, so I (and you) won't forget my <em>hobby that is more than a hobby</em>. There, I touched on one of the lessons I learned (and am still learning) from climbing - <a href="https://www.wonderingchimp.com/posts/lessons-from-climbing-i-m-applying-in-life-patience/">patience</a>!</p>
<p>Now that I look back, the middle of the year was my patience-awareness period. I saw how impatient I was, and wanted to do something about it. It's an everyday battle!</p>
<p>The topic I found the most interesting this year, and the one I will explore further is sustainability. I feel that by writing about this topic, I can do something that will be good for our planet. To learn, teach, and act.</p>
<p>As you may or may not know, climate change is not just today's problem, but a problem that will impact our future. COP28 didn't go well, but that should not stop us from learning about new ways to help, and act. The articles I want to mention here are, well, all of them! If you have any time to spare during the holidays (and we all know we do), <a href="https://www.wonderingchimp.com/tag/sustainability/">check them out</a>!</p>
<h2>Inspirations</h2>
<p>This year, one of my articles was mentioned in the <a href="https://greensoftware.foundation/">Green Software Foundation</a> newsletter! The article shared is about <a href="https://www.wonderingchimp.com/posts/exploring-carbon-awareness-no-its-not-a-trendy-mindfulness-practice/">Carbon Awareness</a>.</p>
<p>And here is the screenshot where they mention my article under <em>Latest Resources and Perspectives</em>. I was THRILLED!</p>
<p><img src="../images/posts/0050-wrap-up-01.png" alt="The image shows a newsletter section titled “Latest Resources and Perspectives” with five different article summaries listed below it. The articles cover a range of environmental and technological topics, including sci-fi fantasies, a sustainable internet, carbon awareness, cloud computing, and decarbonizing technology supply chains. Each summary includes a title in bold font followed by a brief description. The articles are intended to provide readers with the latest resources and perspectives on these topics." title="Source: GSF Newsletter #53"></p>
<p>During this year, I started exploring <a href="https://joinmastodon.org/">Mastodon</a>. I liked the idea of it, and got active there. I would highly recommend you check it out, explore, and see for yourself! I think of this network as a <em>social network for people</em>. There are no ads, you follow what you want, see what you follow... <a href="https://fosstodon.org/@wonderingchimp">Here is the link</a> to my profile there.</p>
<p>One of my <em>toots</em> (a post on Mastodon) was an inspiration for an article! It was about AI prompts and how we can start using them in a sensible way.</p>
<p><a href="https://fosstodon.org/@wonderingchimp/111533766647110789"><img src="../images/posts/0050-wrap-up-02.png" alt="The image shows a social media post by a user named Marjan, who is pondering the environmental impact of AI queries. The post suggests that if AI engines informed users of the environmental impact before executing each query, people might be more environmentally conscious. The hypothetical prompt informs users about the water consumption and CO2 equivalent emissions of their query, and asks if they still wish to proceed. Marjan ends the post by asking for others’ opinions and includes hashtags related to sustainability and responsible AI use. The post is dated Dec 06, 2023, and has been made via Mastodon for Android. It has received 3 re-shares and no comments or likes yet. The user’s profile picture, an illustration of a chimp, is visible on the left."></a></p>
<p>This <em>toot</em> was mentioned in an article from <a href="https://fosstodon.org/@frebelt@mastodon.online">Friedemann Ebelt</a> about AI and the new AI legislation in the EU. I'm sharing <a href="https://blog.campact.de/2023/12/ki-und-klima-was-kann-der-ai-act-vielleicht/">here</a> the original article in German.</p>
<p>Again, I was THRILLED!</p>
<h2>Ideas</h2>
<p>Here, I want to mention the things that impacted me the most - the books and articles I've read, the podcasts I've listened to... A sort of link dump.</p>
<h3>Podcasts</h3>
<p>The list of podcast episodes I've listened to is quite extensive, so I won't go into too much detail. Here are the three episodes that I've listened to, re-listened to, and re-re-listened to.</p>
<ul>
<li><a href="https://www.hubermanlab.com/episode/how-to-enhance-performance-and-learning-by-applying-a-growth-mindset">Huberman Lab Podcast - Learning</a>
<ul>
<li>A great episode on the science behind the growth mindset, and action points we all can take towards applying it!</li>
</ul>
</li>
<li><a href="https://podcast.greensoftware.foundation/e/68rz0318-we-answer-your-questions">GSF Q&amp;A</a>
<ul>
<li>One of the many episodes of <em>Green Variables</em> podcast. This one is a session where hosts answer questions and doubts about sustainability.</li>
</ul>
</li>
<li><a href="https://www.powercompanyclimbing.com/blog/remix-effort">Effort</a>
<ul>
<li>Last but not least is the one about applying effort. It is about applying effort in climbing, but I use it as a firecracker when I feel down about everything. It helps a lot!</li>
</ul>
</li>
</ul>
<h3>Books</h3>
<p>Somewhat like the podcast episodes, this year I've read quite a few books. I want to mention only three (two plus a book series) that affected me the most.</p>
<ul>
<li><a href="https://app.thestorygraph.com/books/9b1e31ce-978b-46f4-bf60-2eef833caf15">Greenlights by Matthew McConaughey</a>
<ul>
<li>Full of simple, hard, and honest truths about people, the world, and life in general. I recommend it.</li>
</ul>
</li>
<li><a href="https://app.thestorygraph.com/books/7f86f212-027b-46b3-a3ba-d964aa046f21">Life and Death in Shanghai by Nien Cheng</a>
<ul>
<li>A great book about how to approach life when everything and everyone is against you. How to live through it and persist!</li>
</ul>
</li>
<li><a href="https://en.wikipedia.org/wiki/Mistborn">Mistborn Series by Brandon Sanderson</a>
<ul>
<li>This one I read almost in one breath - <em>Era One</em>, at least. Great books full of ups and downs, tragic deaths, and so forth. Even though the ending was a bit abstract for my taste, I find <em>Era One</em> of the series great! We'll see how I feel about <em>Era Two</em> in the years to come.</li>
</ul>
</li>
</ul>
<h3>Articles</h3>
<p>We live in the information age, thus, content is abundant. Keeping the list as short as possible, below are the three articles that affected me the most.</p>
<ul>
<li><a href="https://gurwinder.substack.com/p/overchoice-and-how-to-avoid-it">Overchoice</a> - A practical guide on what to do when you have to make a choice. It's full of great advice!</li>
<li><a href="https://fs.blog/remember-books/">How to remember books</a> - How to actually read books and remember the most from them. Spoiler alert - note-taking!</li>
<li><a href="https://www.allthingsdistributed.com/2023/06/a-few-words-on-taking-notes.html">Note taking</a> - This article motivated me to review my note-taking techniques. In the end, got me back to the handwritten notes.</li>
</ul>
<h2>Year in numbers</h2>
<p>Now, to wrap up the story with some numbers:</p>
<ul>
<li>22 articles written (23, if we include this one).</li>
<li>Somewhat similar number of posts on LinkedIn, where I shared the articles.</li>
<li>More than 200 hours spent exploring, reading, preparing, and writing. Then editing and rewriting these articles.[^1]</li>
<li>More than 68000 post impressions just on LinkedIn!</li>
</ul>
<p>My goals when I started all this were:</p>
<ol>
<li>Learn (/)</li>
<li>Write (/)</li>
<li>Stay curious (/)</li>
<li>Publish every two weeks (/x)</li>
</ol>
<p>Reflecting on the goals, I've learned a lot! And I think the articles I've written so far can be a sort of measure for that. And for the second goal as well, now that I mention it. The range of topics I've covered this year can easily show I've been curious. The fourth one is partially complete. Throughout the year, I wasn't always able to post once a fortnight. I missed the schedule on a couple of occasions.</p>
<p>Continuing on, I'm keeping the above goals the same. The only thing I want to change is my approach to them. Write and learn in a more active way, and stay curious along the way.</p>
<p>Thank you for staying this long with me and see you next year!</p>
<p>[^1]: 1 article x min 5h to write.</p>

			]]></content>
		</entry>
	
		
		<entry>
			<title>A Lesson in Carbon Awareness</title>
			<link href="https://wonderingchimp.com/posts/a-lesson-in-carbon-awareness/"/>
			<updated>2023-12-11T00:00:00Z</updated>
			<id>https://wonderingchimp.com/posts/a-lesson-in-carbon-awareness/</id>
			<content type="html"><![CDATA[
				<p>Hi there!</p>
<p>It's been a while. A full month since my last e-mail (article) that, hopefully, didn't end up in your spam folder.</p>
<p>For quite some time, I was toying with the idea of <em>doing what I preach</em> and making my website carbon-aware. In this article I want to dive into what I've learned so far and the progress of making this website carbon aware.</p>
<h2>A bit on Carbon Awareness</h2>
<p>Let us remind ourselves first. To put it simply, carbon awareness means <em>do more when the energy is coming from renewable sources. Do less when it comes from non-renewable sources</em>. I first wrote about the basics some time ago. To check them out, <a href="https://www.wonderingchimp.com/posts/exploring-carbon-awareness-no-its-not-a-trendy-mindfulness-practice/">follow the link</a>.</p>
<p>Making a website carbon aware means that it can serve more or less carbon-intensive content based on the carbon intensity at the user's location. If the carbon intensity at the user's location is high, don't load images and videos, for example. Let the users decide.</p>
<p>This practice is called demand shaping. You shape your demand based on the carbon intensity.</p>
<p>Another example could be that different AI engines prompt users with the following before running each query:</p>
<blockquote>
<p>Hey, do you know that this query will consume THIS MANY litres of water and produce THIS amount of CO2eq? Do you still want to run it?</p>
</blockquote>
<p>Could this be a possible feature request for all those <em>GPTs</em> developers? What do you think?</p>
<p>Sorry, I went a bit off topic. The initial inspiration for making the website carbon-aware came from <a href="https://branch.climateaction.tech/"><em>Branch Magazine</em></a>.</p>
<h2>First steps</h2>
<p>As in almost everything in life, I first searched the Internet. Since I'm hosting my website on <a href="https://ghost.org">Ghost</a>, my use-case was quite specific.</p>
<p>After a couple of dead-ends, I stumbled upon an awesome blog from Fershad Irani, a web sustainability consultant working with the Green Web Foundation. <a href="https://fershad.com/">This is the link</a> to his website.</p>
<p>Two of his articles provided more info and inspired action. The <a href="https://fershad.com/carbon-aware-site/">first</a> gives general info about his website and the fact that it is carbon aware.</p>
<p><a href="https://fershad.com/writing/making-this-website-carbon-aware/">The second</a> is a step-by-step guide to how the site was made carbon aware, with examples, links to libraries, and different configuration options. Excellent!</p>
<p>Okay, I thought, this is great. It will be easy!</p>
<p>Consider here the fact that I don't know much about web development. But, I was eager to learn.</p>
<h2>Contacting the support</h2>
<p>Then I decided to contact the support to see their opinion on this. I wrote an e-mail about the idea, where I got it from, and so on. And sent it.</p>
<p>Not long after came the response. It was quite nice and comprehensive. They explained why that wouldn't be so easy, or even possible, with examples of why it could be challenging.</p>
<p>I was a bit disappointed. Not by their response, but the fact that it wouldn't be an easy job. I'm so satisfied with the Ghost provider, and I didn't want to move from there. Still don't.</p>
<p>Then I stopped looking. It was like somebody <em>burst my idea bubble.</em></p>
<h2>Personal doubts</h2>
<p>I'm not sure why, but when I'm hyped about an idea, I tend to lose the sense of time and place. And sometimes, I become impatient. Like in that song from Queen - <em>I want it all, and I want it now!</em> And when I see it's not so easy, or even possible, well, then I lose that initial hype.</p>
<p>This doesn't happen often, but it happened here. And it lasted for quite some time. Not sure why. Who knows, maybe it's because the end of year is upon us?</p>
<h2>Doing what I can</h2>
<p>Finally, I came to my senses and decided to take some action. I cannot change the website easily; okay, let's go with what I can do. At the moment, at least.</p>
<p>First, I've decided to change the theme of the website to be more text-based, rather than full of images.</p>
<p>This was easy, but creating a landing page is still a challenge. I hope to finish it before publishing this article.</p>
<p>Next, I went through all my articles so far, and removed unnecessary feature images. I was never a fan of them, even though they added a certain professionalism and <em>eye-catchiness</em> to my website. Now, my website looks rather plain, and I don't mind it at all. I hope you don't mind either.</p>
<p>I'm not stopping here, but I'm slowing down. First I'll try to create a theme in Ghost with a minimalistic, text-based, approach. Then, I'll see if something else/new comes up.</p>
<p>Let me know in the comments what you think about this. Also, do let me know what you think about my text-based approach.</p>
<p>See you in the next, once-a-fortnight, article!</p>

			]]></content>
		</entry>
	
		
		<entry>
			<title>Exploring the Green APIs</title>
			<link href="https://wonderingchimp.com/posts/exploring-the-green-apis/"/>
			<updated>2023-11-13T00:00:00Z</updated>
			<id>https://wonderingchimp.com/posts/exploring-the-green-apis/</id>
			<content type="html"><![CDATA[
				<p>Hi there!</p>
<p>This week, we're exploring Green APIs - APIs that give you carbon intensity data you can use in your code, and based on that data, create some logic. For example:</p>
<ul>
<li>If the user's location has high carbon intensity, don't load carbon-intensive content (videos or images, for example) on a webpage.</li>
<li>If the carbon intensity in a region is high, you can automatically shift your workloads to a greener one.</li>
<li>Schedule your application's batch jobs for when the carbon intensity is lower.</li>
</ul>
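<p>To make the examples above concrete, here is a minimal, illustrative Go sketch of the decision logic. The thresholds and variant names are made up for illustration; a real application would tune them to the typical intensity range of its region.</p>

```go
package main

import "fmt"

// chooseContent picks a content variant based on the current grid carbon
// intensity in gCO2eq/kWh. The 200 and 500 thresholds are illustrative,
// not a standard - tune them to your region's typical range.
func chooseContent(intensity float64) string {
	switch {
	case intensity < 200:
		return "full" // low intensity: serve images and video
	case intensity < 500:
		return "reduced" // medium intensity: serve images, skip video
	default:
		return "text" // high intensity: text only, let users opt in to media
	}
}

func main() {
	for _, ci := range []float64{120, 350, 510} {
		fmt.Printf("%.0f gCO2eq/kWh -> %s content\n", ci, chooseContent(ci))
	}
}
```

<p>The same threshold idea carries over to shifting workloads or delaying batch jobs: measure, compare against a threshold, act.</p>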
<p>In this article, we'll go through two of them. First, I'll write a short overview of each. Then, I'll create a small Go application we can use to call these APIs. And last, we'll look at the methodologies used by both.</p>
<p>That's the plan. Will I be able to stick to it? Let's dive and see.</p>
<h2>Electricity Maps</h2>
<p>First in line is Electricity Maps. They provide electricity data for more than 160 regions. Founded in 2016 with the goal of getting us to a decarbonized electricity system. You can find more about them on <a href="https://www.electricitymaps.com/">their website</a>.</p>
<p>They are the people behind the electricity map I have mentioned in the previous posts. And they are doing a great job!</p>
<p><a href="https://app.electricitymaps.com/map"><img src="../images/posts/0048-electricity-maps.png" alt="This is an interactive world map on a dark background, displaying electricity consumption by country. The countries are color-coded from green to red, with green indicating low electricity consumption and red indicating high electricity consumption. The map uses a Mercator projection and includes a legend on the left side and various options on the right side."></a></p>
<p>Let's see what they offer. Checking their website, they have free and paid plans. Based on the option you choose, different features are available. They offer free-tier API calls (up to 100,000), and paid options. I am going to look into the free tier.</p>
<p>On the type of data they are offering, you have two options:</p>
<ol>
<li>Use APIs to make applications carbon-aware</li>
<li>Granular carbon accounting</li>
</ol>
<p>With the first option, they provide data based on which you can create logic in your app. You call the API, you get a JSON response. The second option focuses on carbon accounting and creating reports on scope 2 emissions. The data includes consumption-based emission factors from both direct operations and life-cycle analysis (LCA) for the years 2021-2022.</p>
<p>Here, we'll use the first option, and the <em>Free</em> subscription for that option. I want to play around and test the APIs available in the free tier. Maybe I'll use it in future to make my website carbon-aware. Who knows...</p>
<h3>Testing out the APIs</h3>
<p>After selecting the <em>Free</em> option, I easily signed up by providing e-mail, first name, last name, password... You know the drill. When I filled this information in, I was redirected to their <em>API Portal</em>. A screenshot of the portal is below.</p>
<p><a href="https://api-portal.electricitymaps.com/home"><img src="../images/posts/0048-em-api-portal.png" alt="This is a screenshot of the welcome page for the Electricity Maps API. The page has a white background with black text, a header that reads ‘Welcome to Electricity Maps API!’, and a paragraph explaining the API’s functionality. There’s a sidebar on the left with links to different sections of the API, and a world map in the bottom right corner. The page provides data on electricity demand across continents and more than 160 regions, and offers ways to assess individual electricity footprints and increase renewable energy use."></a></p>
<p>And this portal has everything you need! It has extensive documentation and a list of available APIs with example code snippets. You can use those snippets in your app rather easily. Which, as it turns out, was quite useful for this article!</p>
<p>Below is an example application written in Go. It shows you the latest carbon intensity for a specified zone. I've specified the <code>RS</code> zone for the country I live in. I've also omitted the <code>auth-token</code>, which is the way to authenticate with the API.</p>
<pre><code class="language-go">package main

import (
  &quot;bytes&quot;
  &quot;encoding/json&quot;
  &quot;fmt&quot;
  &quot;io&quot;
  &quot;log&quot;
  &quot;net/http&quot;
)

func main() {

  url := &quot;https://api-access.electricitymaps.com/free-tier/carbon-intensity/latest?zone=RS&quot;

  req, err := http.NewRequest(&quot;GET&quot;, url, nil)
  if err != nil {
    log.Fatal(err)
  }

  // here you need to change YOUR_AUTH_TOKEN with the token from your registered profile
  req.Header.Add(&quot;auth-token&quot;, &quot;YOUR_AUTH_TOKEN&quot;)

  res, err := http.DefaultClient.Do(req)
  if err != nil {
    log.Fatal(err)
  }
  defer res.Body.Close()

  body, err := io.ReadAll(res.Body)
  if err != nil {
    log.Fatal(err)
  }

  fmt.Println(jsonPrettyPrint(string(body)))

}

// jsonPrettyPrint takes a JSON string and formats it into a readable, indented form
func jsonPrettyPrint(in string) string {
  var out bytes.Buffer
  if err := json.Indent(&amp;out, []byte(in), &quot;&quot;, &quot;\t&quot;); err != nil {
    return in
  }
  return out.String()
}
</code></pre>
<p>The output data is shown below.</p>
<pre><code class="language-json">{
	&quot;zone&quot;: &quot;RS&quot;,
	&quot;carbonIntensity&quot;: 510,
	&quot;datetime&quot;: &quot;2023-11-10T07:00:00.000Z&quot;,
	&quot;updatedAt&quot;: &quot;2023-11-10T06:45:48.660Z&quot;,
	&quot;createdAt&quot;: &quot;2023-11-07T07:47:23.027Z&quot;,
	&quot;emissionFactorType&quot;: &quot;lifecycle&quot;,
	&quot;isEstimated&quot;: true,
	&quot;estimationMethod&quot;: &quot;TIME_SLICER_AVERAGE&quot;
}
</code></pre>
<p>Here you can see the time when I queried the API, and the carbon intensity. The value is in grams of CO2 equivalent per kWh, or <strong>gCO2eq/kWh</strong>. You also get the emission factor type and the estimation method, both of which we'll explain in the next section.</p>
<p>I was able to create this example in a couple of minutes after logging in. Without using any AI prompts. Just by following the examples and instructions clearly stated on the portal. That is how it should be!</p>
<h3>Methodology</h3>
<p>Now, what about the data provided by the <em>Electricity Maps</em>, where is it from? How do they calculate the carbon intensity?</p>
<p>The data comes from a variety of public data sources. Those sources can be transmission system operators, balancing entities, or market operators. The <a href="https://github.com/electricitymaps/electricitymaps-contrib/blob/master/DATA_SOURCES.md">complete list of data sources</a> shows where the data is sourced from.</p>
<p>If I go a step further and check out the data source for Serbia, it shows the <em>ENTSOE</em>. I don't know what this means, so I'll dig deeper.</p>
<p>The <em>ENTSOE</em> stands for <em>European Network of Transmission System Operators for Electricity</em>. It represents 39 electricity TSOs (Transmission System Operators) from 35 countries across Europe. It is established to promote closer cooperation of the TSOs across Europe. To support the implementation of EU energy policy, and achieve Europe's energy and climate policy objectives.</p>
<p>This organization publishes an <a href="https://transparency.entsoe.eu/content/static_content/download?path=/Static%20content/web%20api/RestfulAPI_IG.pdf">Implementation Guide</a> for the transparent data extraction process.</p>
<p>Thanks, <a href="https://en.wikipedia.org/wiki/European_Network_of_Transmission_System_Operators_for_Electricity">Wikipedia</a>!</p>
<p>The data <em>Electricity Maps</em> gets is then processed and formatted by different parsers in a uniform way. The formatted data is then saved in the database and processed using a flow-tracing algorithm.</p>
<p>The flow-tracing algorithm follows the <em>flow</em> (whether it’s power, data, or something else) through a system to understand how it operates or to identify specific characteristics. It’s like a roadmap that shows you how to get from point A to point B, and all the stops you make along the way.</p>
<p>The <a href="https://github.com/electricitymaps/electricitymaps-contrib/blob/master/parsers/ENTSOE.py"><code>ENTSOE.py</code> parser</a>, used for Serbia and other <em>ENTSOE</em> members, is open-source like all the parsers, and contributions are more than welcome! Pretty neat, isn't it?</p>
<p>Carbon intensity is calculated by multiplying the power production from each source by the corresponding emission factor. The emission factors depend on different parameters, such as the production source, the region, and many others. The data is calculated with a number of regional and global emission factors. The table below shows the default life-cycle emission factors.</p>
<p><a href="https://github.com/electricitymaps/electricitymaps-contrib/wiki/Default-emission-factors"><img src="../images/posts/0048-em-emission-factors.png" alt="This is a table displaying the emission factors for different types of energy sources. The table has three columns: ‘Mode’, ‘Emission factor (gCO2eq/kWh)’, and ‘Category’. It lists various modes such as biomass, battery discharge, coal, and gas, along with their corresponding emission factors and categories. The emission factors range from 11 (for wind) to 820 (for coal), and the categories include ‘Renewable’, ‘Fossil’, ‘Low-carbon’, ‘UK Parliamentary Office of Science and Technology’, and ‘Assumes (coal, gas, oil)’."></a></p>
<p>Some part of the methodology used is published on the <em>Electricity Maps</em> <a href="https://github.com/electricityMaps/electricitymaps-contrib/wiki">GitHub Wiki</a>, and their <a href="https://www.electricitymaps.com/blog">Blog</a>. Feel free to check them out for more information and reference.</p>
<h2>WattTime API</h2>
<p>The second API that I'm going to explore is WattTime. It is a nonprofit that offers technology solutions that help achieve emissions reductions. The nonprofit was founded in 2014, and first tried out at a hackathon in 2013! Similar to <em>Electricity Maps</em>, <em>WattTime</em> provides electricity data for many regions across the world. To find out more, check out their website linked <a href="https://www.watttime.org/">here</a>.</p>
<p>The <em>WattTime</em> has, as well, an excellent documentation of the API.</p>
<p>By default, they offer a free plan, which is nice. But it's a rather limited one. You can access and query only one region/zone; all others are forbidden. Like <em>Electricity Maps</em>, they offer paid plans, but the prices are not published. You need to e-mail them for the paid plans.</p>
<p>Anyhow, the registration and login process is done via the API. They offer comprehensive Python scripts/snippets showing how to register and log in. I'm going to use Go in my use-case. And I'll register for the free plan.</p>
<h3>Testing out the API</h3>
<p>First, let's register by running the below code.</p>
<pre><code class="language-golang">package main

import (
    &quot;bytes&quot;
    &quot;encoding/json&quot;
    &quot;fmt&quot;
    &quot;log&quot;
    &quot;net/http&quot;
)

func main() {

    // add your USERNAME and PASSWORD
    values := map[string]string{
      &quot;username&quot;: &quot;USERNAME&quot;,
      &quot;password&quot;: &quot;PASSWORD&quot;,
      &quot;email&quot;: &quot;wondering.chimp@tuta.io&quot;,
      &quot;org&quot;: &quot;Wondering Chimp&quot;,
    }

    json_data, err := json.Marshal(values)

    if err != nil {
        log.Fatal(err)
    }

    url := &quot;https://api2.watttime.org/v2/register&quot;
    resp, err := http.Post(url, &quot;application/json&quot;,
        bytes.NewBuffer(json_data))

    if err != nil {
        log.Fatal(err)
    }

    defer resp.Body.Close()

    var res map[string]interface{}

    json.NewDecoder(resp.Body).Decode(&amp;res)

    fmt.Println(res)
}

</code></pre>
<p>After that, you should get a response somewhat similar to the one below.</p>
<pre><code class="language-json">{
  &quot;user&quot;: &quot;USERNAME&quot;,
  &quot;ok&quot;: &quot;User created&quot;
}
</code></pre>
<p>Next, let's get a token and get the grid emission data for the only one available region (CAISO_NORTH).</p>
<pre><code class="language-golang">package main

import (
  &quot;bytes&quot;
  &quot;encoding/json&quot;
  &quot;fmt&quot;
  &quot;io&quot;
  &quot;log&quot;
  &quot;net/http&quot;
  &quot;net/url&quot;
)

func main() {
  // log in with HTTP basic auth to obtain a token
  loginURL := &quot;https://api2.watttime.org/v2/login&quot;
  req, _ := http.NewRequest(&quot;GET&quot;, loginURL, nil)
  // you will need to change this line with your USERNAME and PASSWORD
  req.SetBasicAuth(&quot;USERNAME&quot;, &quot;PASSWORD&quot;)

  resp, err := http.DefaultClient.Do(req)
  if err != nil {
    log.Fatal(err)
  }
  defer resp.Body.Close()

  // decode the {&quot;token&quot;: &quot;...&quot;} response instead of slicing strings
  var login struct {
    Token string `json:&quot;token&quot;`
  }
  if err := json.NewDecoder(resp.Body).Decode(&amp;login); err != nil {
    log.Fatal(err)
  }

  // query the grid emissions data for the CAISO_NORTH region
  dataURL := &quot;https://api2.watttime.org/v2/data&quot;
  req, _ = http.NewRequest(&quot;GET&quot;, dataURL, nil)
  req.Header.Add(&quot;Authorization&quot;, &quot;Bearer &quot;+login.Token)

  params := url.Values{}
  params.Add(&quot;ba&quot;, &quot;CAISO_NORTH&quot;)
  params.Add(&quot;starttime&quot;, &quot;2023-11-05T20:30:00-0800&quot;)
  params.Add(&quot;endtime&quot;, &quot;2023-11-05T22:30:00-0800&quot;)
  req.URL.RawQuery = params.Encode()

  resp, err = http.DefaultClient.Do(req)
  if err != nil {
    log.Fatal(err)
  }
  defer resp.Body.Close()

  body, err := io.ReadAll(resp.Body)
  if err != nil {
    log.Fatal(err)
  }

  fmt.Println(jsonPrettyPrint(string(body)))
}

func jsonPrettyPrint(in string) string {
  var out bytes.Buffer
  err := json.Indent(&amp;out, []byte(in), &quot;&quot;, &quot;\t&quot;)
  if err != nil {
      return in
  }
  return out.String()
}
</code></pre>
<p>You should get the response similar to the below one.</p>
<pre><code class="language-json">[
...
        {
                &quot;point_time&quot;: &quot;2023-11-06T04:50:00.000Z&quot;,
                &quot;value&quot;: 950.0,
                &quot;frequency&quot;: 300,
                &quot;market&quot;: &quot;RTM&quot;,
                &quot;ba&quot;: &quot;CAISO_NORTH&quot;,
                &quot;datatype&quot;: &quot;MOER&quot;,
                &quot;version&quot;: &quot;3.2&quot;
        },
        {
                &quot;point_time&quot;: &quot;2023-11-06T04:45:00.000Z&quot;,
                &quot;value&quot;: 950.0,
                &quot;frequency&quot;: 300,
                &quot;market&quot;: &quot;RTM&quot;,
                &quot;ba&quot;: &quot;CAISO_NORTH&quot;,
                &quot;datatype&quot;: &quot;MOER&quot;,
                &quot;version&quot;: &quot;3.2&quot;
        },
        {
                &quot;point_time&quot;: &quot;2023-11-06T04:40:00.000Z&quot;,
                &quot;value&quot;: 954.0,
                &quot;frequency&quot;: 300,
                &quot;market&quot;: &quot;RTM&quot;,
                &quot;ba&quot;: &quot;CAISO_NORTH&quot;,
                &quot;datatype&quot;: &quot;MOER&quot;,
                &quot;version&quot;: &quot;3.2&quot;
        },
        {
                &quot;point_time&quot;: &quot;2023-11-06T04:35:00.000Z&quot;,
                &quot;value&quot;: 955.0,
                &quot;frequency&quot;: 300,
                &quot;market&quot;: &quot;RTM&quot;,
                &quot;ba&quot;: &quot;CAISO_NORTH&quot;,
                &quot;datatype&quot;: &quot;MOER&quot;,
                &quot;version&quot;: &quot;3.2&quot;
        },
        {
                &quot;point_time&quot;: &quot;2023-11-06T04:30:00.000Z&quot;,
                &quot;value&quot;: 963.0,
                &quot;frequency&quot;: 300,
                &quot;market&quot;: &quot;RTM&quot;,
                &quot;ba&quot;: &quot;CAISO_NORTH&quot;,
                &quot;datatype&quot;: &quot;MOER&quot;,
                &quot;version&quot;: &quot;3.2&quot;
        }
]
</code></pre>
<p>Same as above, we get JSON data that we can play around with further.</p>
<h3>Methodology</h3>
<p><em>WattTime</em> uses the grid's marginal emissions rate, and the API provides access to real-time, forecast, and historical marginal emissions data. The rate provided is the Marginal Operating Emissions Rate (<strong>MOER</strong>). The unit is pounds of emissions per megawatt-hour (e.g. <strong>CO2 lbs/MWh</strong>). So to use it in the metric-system part of the world, we would need to convert it to <strong>gCO2eq/kWh</strong>.</p>
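<p>The conversion itself is simple arithmetic: one pound is 453.59237 grams, and one MWh is 1,000 kWh. A minimal helper could look like this.</p>

```go
package main

import "fmt"

const gramsPerPound = 453.59237 // definition of the avoirdupois pound

// lbsPerMWhToGramsPerKWh converts a MOER value in lbs CO2/MWh to the
// metric gCO2eq/kWh unit used earlier in this article.
func lbsPerMWhToGramsPerKWh(lbsPerMWh float64) float64 {
	return lbsPerMWh * gramsPerPound / 1000 // 1 MWh = 1000 kWh
}

func main() {
	// the 950 lbs/MWh value from the sample response above
	fmt.Printf("%.1f gCO2eq/kWh\n", lbsPerMWhToGramsPerKWh(950))
	// prints "430.9 gCO2eq/kWh"
}
```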
<p><em>WattTime</em> has built a marginal emissions model based on the empirical technique its founder Gavin McCormick published. The fundamental approach of all those models is somewhat similar.</p>
<ol>
<li>Data is reported by emissions monitoring systems at power plants within the US through the <em>EPA CAMPD</em> program, the US <em>Environmental Protection Agency Clean Air Markets Program Data</em>. And now I finally understand why they abbreviate everything there! I assume this is something similar to <em>ENTSOE</em>.</li>
<li>Regression-based modeling is then applied to ask, every time a rise or fall in electricity demand occurs, which power plants increase or decrease their output in response.</li>
</ol>
<p>This allows for comparing marginal emissions by time and place.</p>
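<p>WattTime's actual models are far more sophisticated, but the core regression idea can be illustrated with a toy sketch: fit a line through paired observations of demand change and emissions change, and read the slope as the marginal rate. All the numbers below are made up for illustration.</p>

```go
package main

import "fmt"

// marginalRate fits a least-squares line through the origin to paired
// observations of demand change (MWh) and emissions change (lbs CO2).
// The slope is a toy stand-in for a marginal operating emissions rate.
func marginalRate(deltaDemand, deltaEmissions []float64) float64 {
	var sxy, sxx float64
	for i := range deltaDemand {
		sxy += deltaDemand[i] * deltaEmissions[i]
		sxx += deltaDemand[i] * deltaDemand[i]
	}
	return sxy / sxx
}

func main() {
	// made-up observations: demand rose 10 MWh, emissions rose 9500 lbs, etc.
	dd := []float64{10, -5, 20, 8}
	de := []float64{9500, -4800, 19200, 7600}
	fmt.Printf("~%.0f lbs/MWh\n", marginalRate(dd, de))
	// prints "~957 lbs/MWh"
}
```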
<p>To discover more about their methodology, check out <a href="https://www.watttime.org/marginal-emissions-methodology/">this link</a>.</p>
<h2>Key Takeaways</h2>
<p><strong>The purpose of this article is not a comparison of the two.</strong> My goal was to explore the options and write down the things I found. Following are some key points I would like you to take from this article.</p>
<ul>
<li>The APIs that give us carbon emissions data are there and waiting to be used!</li>
<li>Both of the APIs I've written about offer free and paid options, which is good. If you are not paying for a service, that often means that you are the product.</li>
<li>The first step would be to use any of them.</li>
<li>To achieve the best results, we can test and use both, and compare them. Maybe one works better for us.</li>
</ul>
<p>Thanks for staying with the article until the end! If you liked it, feel free to share it with your friends, colleagues, peers, and on your social media. Also, feel free to use the comment section below to add your comments, overview, experience.</p>
<p>See you in the next article!</p>

			]]></content>
		</entry>
	
		
		<entry>
			<title>Deep Dive into Scope 3 Emissions</title>
			<link href="https://wonderingchimp.com/posts/deep-dive-into-scope-3-emissions/"/>
			<updated>2023-10-30T00:00:00Z</updated>
			<id>https://wonderingchimp.com/posts/deep-dive-into-scope-3-emissions/</id>
			<content type="html"><![CDATA[
				<p>Hi everyone!</p>
<p>The idea for this article came up as a result of a presentation I held within my company. There, I talked about the basics of Green Software and how to put it into practice. One of the topics we covered was emission scopes, and I got an interesting question to which I couldn't respond at the moment. The question was (and I'm paraphrasing) - isn't scope 3 a bit of a utopia? How can you differentiate between different companies? What all goes into scope 3?</p>
<p>All these are valid questions, and I'll try to respond to them in this article.</p>
<h2>Which emission scopes are there?</h2>
<p>Scopes were briefly mentioned in <a href="https://www.wonderingchimp.com/posts/what-are-the-greenest-regions-in-azure/">The greenest regions in Azure</a> article. But, let's revisit them once more.</p>
<p>Scopes were defined by the Greenhouse Gas Protocol. This is the most widely used greenhouse gas accounting standard. Almost all Fortune 500 companies use it when calculating and disclosing emissions.
There are three scopes of emissions:</p>
<ul>
<li><strong>Scope 1</strong> - direct emissions (on-site fuel combustion, vehicles emission).</li>
<li><strong>Scope 2</strong> - indirect emissions, by purchased energy (heat or electricity).</li>
<li><strong>Scope 3</strong> - all other indirect emissions in which we engage. For example, all emissions from organization's supply chain, and so on.</li>
</ul>
<p>Scope 3 is the most significant and the hardest one to calculate. Why? Because it includes all other activities the organization engages in - which can be many, and hard to track.</p>
<h2>A simple example of scopes</h2>
<p>For this, let's use a <em>coffee brewing analogy</em>.</p>
<p>In the image below you can see how you can think of emission scopes in the example of brewing your coffee or tea.</p>
<p><a href="https://docs.google.com/presentation/d/1CuRqj6bF3-VtD82_oRK6K1Jnmw1YVYn1fXdiHc-0iXg/edit#slide=id.g27992814723_0_64"><img src="../images/posts/0048-scopes-01.png" alt="An infographic explaining how to measure carbon emissions communicated through the medium of hot beverages. It’s divided into three sections labeled ‘Scope 1’, ‘Scope 2’, and ‘Scope 3’, each with an icon and a description related to making coffee. The footer reads ‘Measuring carbon emissions - Green Web Foundation’." title="Using coffee to talk about carbon emissions"></a></p>
<h2>What goes into scope 3?</h2>
<p>Here, I want to focus on what's most important - scope 3. What goes into it? This is a summary of my understanding. I'll ask the questions first, and then do the follow-up research.</p>
<p>If we look at the example above, all activities in our supply chain that let us have a coffee go into scope 3. This means emissions from the whole chain of production of coffee beans, including packaging the beans and transporting them to various locations, including ours.</p>
<p>Next, there are emissions from producing the coffee pots and all the material that goes into them, right? Plus their packaging and transportation to our location... Is all of this part of scope 3?</p>
<p>And what if the coffee pot producer and the coffee producer already calculate and disclose their emissions? Do we include those emissions in our calculation as well? Wouldn't adding them to our calculation be a duplication of emissions?</p>
<p>Am I understanding this right?</p>
<p>Here, a deeper look into what scope 3 is, is needed. And then based on that we'll see what goes where. To help us with that, we'll use the GHG protocol standards.</p>
<h3>GHG Protocol Standards</h3>
<p>Let's start with the <a href="https://ghgprotocol.org/corporate-value-chain-scope-3-standard"><em>Corporate Value Chain (Scope 3) Standard</em></a>. This standard provides methods that companies of all sectors, globally, can use to account for and report emissions. Its goal is to make calculating and reporting scope 3 easy.</p>
<p>Complementing the Scope 3 Standard is the <em>Scope 3 Calculation Guidance</em>. It's designed to reduce the complexity barrier of scope 3, and provides detailed, technical guidance on all the relevant calculation methods - information not contained in the Scope 3 Standard itself:</p>
<ul>
<li>Methods for calculating GHG emissions.</li>
<li>Guidance on selecting the appropriate calculation methods.</li>
<li>Examples to show each calculation method.</li>
</ul>
<p>To answer the questions from above, we'll use <a href="https://ghgprotocol.org/scope-3-calculation-guidance-2">these two documents</a> for guidance.</p>
<h3>Types of Scope 3 emissions</h3>
<p>There are two types of Scope 3 emissions:</p>
<ul>
<li><strong>Upstream activities</strong> - everything that goes into the stream (purchased goods and services, capital goods...). Indirect GHG emissions related to purchased or acquired goods and services.</li>
<li><strong>Downstream activities</strong> - everything that goes out of the stream. Processing of sold products, transportation and distribution. Indirect GHG emissions related to sold goods and services.</li>
</ul>
<p>In my questions above, I treated these two types interchangeably, without even knowing it.</p>
<blockquote>
<p>In the case of goods purchased or sold by the reporting company, upstream emissions occur up to the point of receipt by the reporting company, while downstream emissions occur subsequent to their sale by the reporting company and transfer of control from the reporting company to another entity (e.g., a customer).[^1]</p>
</blockquote>
<p>In the coffee pot example - yes, we use emissions from the coffee pot producer and the coffee producer. We take their scope 1 and scope 2 emissions and include them in our scope 3 emissions.</p>
<p>The following image from the Scope 3 Standard shows an overview of the upstream and downstream activities.</p>
<p><img src="../images/posts/0048-scopes-02.png" alt="An infographic showing the different scopes and emissions across the value chain for GHG Protocol. It’s divided into three sections: ‘Upstream activities’, ‘Reporting company’, and ‘Downstream activities’, each with a description related to the company’s activities. The infographic also shows the different types of emissions: CO2, CH4, N2O, HFCs, PFCs, SF6, and NF3. The infographic is sourced from Figure 1.1 of Scope 3 Standard." title="Source: Figure 1.1 of Scope 3 Standard"></p>
<h3>Categories of Scope 3 emissions</h3>
<p>These two types of Scope 3 emissions are then divided into different categories - all 15 of them! The first eight are upstream, the last seven are downstream Scope 3 emissions.</p>
<ol>
<li><strong>Purchased goods and services.</strong> All upstream emissions from the production of products purchased by the reporting company in the reporting year. Products include goods and services.</li>
<li><strong>Capital goods.</strong> All upstream emissions from the production of capital goods purchased by the reporting company. Capital goods are final products that have extended life. They are used by the company to manufacture a product, provide a service (e.g. run an application).</li>
<li><strong>Fuel- and energy-related emissions not included in scope 1 or scope 2.</strong> Includes emissions related to the production of fuels and energy purchased and consumed by the reporting company, not included in the scope 1 and 2. For example - mining of coal, refining of gasoline, electricity consumed in a transport and distribution system.</li>
<li><strong>Upstream transportation and distribution.</strong> It includes transportation and distribution of products purchased by the reporting company from suppliers to operations. Also third-party transportation and distribution services. In essence - scope 1 and scope 2 emissions of third-party transportation companies.</li>
<li><strong>Waste generated in operations.</strong> Emissions from disposing waste and waste water. They include scope 1 and 2 emissions from solid waste and wastewater management companies.</li>
<li><strong>Business travel.</strong> Emissions from employees' business-related travel in vehicles owned or operated by third parties. They include scope 1 and 2 emissions of transportation companies (e.g. airlines).</li>
<li><strong>Employee commuting.</strong> Emissions from employee commuting between homes and worksites. Remote work can be included in this category.</li>
<li><strong>Upstream leased assets.</strong> Emissions from the assets that company leases.</li>
<li><strong>Downstream transportation and distribution.</strong> Emissions from transporting and distributing sold products.</li>
<li><strong>Processing of sold products.</strong> Includes scope 1 and 2 emissions of downstream value chain partners (e.g. manufacturers).</li>
<li><strong>Use of sold products.</strong> Emissions from the use of goods and services sold by the reporting company.</li>
<li><strong>End-of-life treatment of sold products.</strong> Waste disposal and treatment of products sold by the reporting company at the end of their life.</li>
<li><strong>Downstream leased assets.</strong> Emissions from assets owned by the reporting company but leased to other entities, not already included in scope 1 or 2.</li>
<li><strong>Franchises.</strong> Emissions from operating franchises, not included in scope 1 or scope 2. This category applies to companies selling franchises.</li>
<li><strong>Investments.</strong> Emissions from investments of the reporting company, not included in scope 1 or 2.</li>
</ol>
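<p>To make the upstream/downstream split concrete, here is a minimal Python sketch of organizing scope 3 emissions by category. The category numbering follows the standard, but all tonnage figures below are entirely made up for illustration:</p>

```python
# Toy scope 3 bookkeeping. Hypothetical tonnes of CO2e per category for one year.
scope3_by_category = {
    1: 120.0,   # purchased goods and services
    2: 45.0,    # capital goods (e.g. our coffee pots)
    4: 12.5,    # upstream transportation and distribution
    9: 8.0,     # downstream transportation and distribution
    11: 300.0,  # use of sold products
}

# Per the GHG Protocol, categories 1-8 are upstream, 9-15 are downstream.
UPSTREAM = set(range(1, 9))
DOWNSTREAM = set(range(9, 16))

upstream_total = sum(v for k, v in scope3_by_category.items() if k in UPSTREAM)
downstream_total = sum(v for k, v in scope3_by_category.items() if k in DOWNSTREAM)

# Categories are mutually exclusive, so the total is a plain sum - no double counting.
total_scope3 = upstream_total + downstream_total
print(upstream_total, downstream_total, total_scope3)
```

<p>The mutual exclusivity of categories is what makes the plain sum valid - each tonne of CO2e lands in exactly one bucket.</p>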
<p>These categories are intended to provide companies with a systematic way to organize, understand, and report the scope 3 activities. They are mutually exclusive - there is no double counting of emissions between categories.</p>
<h3>Putting it all together</h3>
<p>Here, I want to answer my questions from above.</p>
<ul>
<li>This means that the whole chain of production of coffee beans... goes into our scope 3? Yes. The whole value chain goes into scope 3, within a certain category, as part of the upstream emissions.</li>
<li>Emissions from producing coffee pots and all the material that goes into them is part of scope 3? Yes, as a category 2, capital goods.</li>
<li>Transport and packaging to our location? Yes, in the fourth category - upstream transport and distribution.</li>
<li>Do we duplicate emissions from the coffee bean producer and the coffee pot producer? No. We include their own scope 1 and scope 2 emissions in our scope 3, and the mutually exclusive categories prevent double counting.</li>
</ul>
<h2>Key takeaways</h2>
<p>This was a mouthful of an article. I hope I didn't confuse you, and didn't go on to confuse myself even more, when talking about scope 3. The main things to take away from this article are the following.</p>
<ul>
<li>Scope 3 represents all other indirect emissions in which we are engaged.</li>
<li>Scope 3 emissions can be upstream or downstream.</li>
<li>Upstream emissions - emissions from what goes into the value stream.</li>
<li>Downstream emissions - emissions from what goes out of the value stream as a result.</li>
<li>There are 15 categories (8 upstream, 7 downstream) that help in calculating the scope 3 emissions.</li>
</ul>
<p>If you found this article confusing, or think I missed the point, do let me know. I'm eager to hear your opinion on the topic.</p>
<p>Thank you and see you in the next article(s)!</p>
<p>[^1]: Section 5.3, page 29 of the Corporate Value Chain (Scope 3) Accounting and Reporting Standard.</p>

			]]></content>
		</entry>
	
		
		<entry>
			<title>A deeper look into Data Centres</title>
			<link href="https://wonderingchimp.com/posts/a-deeper-look-into-data-centres/"/>
			<updated>2023-10-16T00:00:00Z</updated>
			<id>https://wonderingchimp.com/posts/a-deeper-look-into-data-centres/</id>
			<content type="html"><![CDATA[
<p>Hello everyone! To start on the same note as before - it's been a while! I wasn't able to keep up with my initial plan of publishing an article every second Monday; the last one was 4 weeks ago. Sorry for that! Anyhow, the overplanned weeks are hopefully over, and I'll continue to post on a more regular basis.</p>
<p>In this week's article, we'll cover one of the most important parts of the IT industry, one of its cores - data centres. What is their impact on the environment, how do we measure that impact, and what other resources are consumed within a data centre? We'll also examine a study about the importance of eco-friendly and sustainable computing.</p>
<h2>What are Data centres and how to measure their effectiveness?</h2>
<p>Since I am fond of starting from scratch, let's see what data centres are. You've heard about the term Cloud, right? Well, data centres are mostly the places where that cloud is actually running. These are the buildings where servers, storage, and networking machines run - the location of the actual physical resources. This is a rather simplified explanation.</p>
<p>To determine the effectiveness of data centres, we use Power Usage Effectiveness. I've already written about it in the article about <a href="https://www.wonderingchimp.com/posts/what-are-the-greenest-regions-in-gcp/"><em>The greenest regions in GCP</em></a>. If you haven't had a chance to check it out, now's the time. The following image is a simplified example of PUE.</p>
<p><a href="https://www.google.com/about/datacenters/efficiency/"><img src="../images/posts/0046-deeper-look-into-dcs-01.png" alt="https://learn.greensoftware.foundation/energy-efficiency#power-usage-effectiveness"></a></p>
<h2>What other resources, besides electricity, do DCs use?</h2>
<p>Water. Besides electricity, the equipment uses a lot of water for cooling. As with PUE, we also have a measurement called WUE - water usage effectiveness. This is the litres of water consumed per kilowatt-hour - L/kWh. Now, the numbers from Google, Microsoft, and Amazon are big.</p>
<p>In 2022, Google used about 21 billion litres of water in their data centres and offices[^1]. Most of it went to the data centres. Microsoft consumed ~6.3 billion litres of water in their data centres in 2022.[^2]</p>
<p>And Amazon, well, they only provided the WUE. Their data centres consume 0.19 L/kWh on average.[^3] I tried finding the exact number of litres consumed, but I wasn't able to. I also tried to find the exact amount of electricity consumed in kWh, to multiply the two, but with no luck. Amazon reports everything in a slightly different way.</p>
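<p>The multiplication I had in mind is simple, since WUE is litres per kilowatt-hour. A small sketch, using Amazon's published average WUE but a purely hypothetical energy figure (Amazon doesn't disclose its total kWh this way):</p>

```python
# Water consumed = WUE (L/kWh) * energy consumed (kWh).
wue_l_per_kwh = 0.19       # Amazon's published average WUE
energy_kwh = 1_000_000     # hypothetical: 1 GWh of data centre energy

water_litres = wue_l_per_kwh * energy_kwh
print(water_litres)
```

<p>With a real energy figure in place of the placeholder, this would give the total litres I was looking for.</p>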
<h2>A bit about cooling in Data centres</h2>
<p>Now, cooling takes a big percentage of the electricity (and water) consumed in data centres. If the PUE is 1.5, as in the image above, roughly one third of the energy consumed goes into cooling. That's a lot!</p>
<p>What can we do about that? Well, for starters, we can increase the temperature in the DCs. The ASHRAE envelope gives 18-27°C as the guideline for optimal performance. In practice, temperatures stay between 18-21°C.</p>
<p>Now, what they did in Singapore is quite interesting. They launched a standard for optimizing energy consumption in data centres. The end goal is to raise the temperature in DCs to 26°C and above, since every 1°C increase could save 2-5% of the energy used for cooling. They tested it at two DCs, raising the temperature by 2°C, which reduced energy consumption by 2-3% during the trial. To find out more, visit <a href="https://datastorageasean.com/news-press-releases/imda-launches-sustainability-standard-tropical-climate-data-centres">this link</a>.</p>
<h2>Efficient computing</h2>
<p>Recently, I read an article by Wim Vanderbauwhede called <em>Frugal Computing</em>. The article discusses the importance of low-carbon and sustainable computing and the journey towards zero-carbon computing. A rather interesting read, with a notable conclusion: to prevent IT from worsening global warming and climate change, we must take these steps.</p>
<ul>
<li>We cannot count on only using renewables, we must reduce energy consumption.</li>
<li>We must increase the life-span of hardware resources (phones, laptops, servers).</li>
</ul>
<p>Now, this article considers all IT devices - laptops, smartphones, servers, IoT devices, and so on. I wanted to focus on the servers, most of which are in data centres.</p>
<p>Manufacturing, distribution, and disposal of servers account for 20% of their total emissions. This percentage is even higher for other types of devices (e.g. smartphones and laptops). The current life-span of servers in DCs is between 3-5 years - some last longer, but the general rule of thumb is to replace servers every 5 years. We would definitely need to increase that!</p>
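<p>One way to see why a longer life-span matters: the embodied emissions (manufacturing, distribution, disposal) are fixed once the server is built, so they get amortised over the years of use. A toy Python calculation with a made-up embodied-carbon figure:</p>

```python
# A longer service life spreads the same embodied carbon over more years.
# The 1000 kg CO2e figure is hypothetical, chosen only for round numbers.
embodied_kg = 1000.0
annualised = {years: embodied_kg / years for years in (3, 5, 8)}
for years, per_year in annualised.items():
    print(f"{years} years -> {per_year:.1f} kg CO2e/year")
```

<p>Going from a 5-year to an 8-year replacement cycle cuts the annualised embodied emissions by over a third, before counting anything else.</p>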
<blockquote>
<p>As a society, we need to start treating computational resources as finite and precious, to be used only when necessary, and as effectively as possible.</p>
</blockquote>
<p>To check out the complete research article, follow <a href="https://arxiv.org/abs/2303.06642">this link</a>.</p>
<h2>Summary</h2>
<p>With this article, I only scratched the surface of the energy consumption of DCs - I'm aware of that. To give you some more info: Ireland released a report on electricity consumption in 2022. They found that data centres use 18% of all electricity - the same amount as urban dwellings, and 8% more than rural dwellings. That is a lot!</p>
<p><a href="https://cloud.google.com/about/locations#europe"><img src="../images/posts/0046-deeper-look-into-dcs-02.png" alt="https://www.cso.ie/en/releasesandpublications/ep/p-dcmec/datacentresmeteredelectricityconsumption2022/"></a></p>
<p>So, Data centres are a big part of the IT infrastructure, and how we treat them has and will have an impact on our planet. That impact is not immediate, but rather long-term. In the next 10, or 20 years. But it is important to act now.</p>
<ul>
<li>We need to reduce the energy consumption of DCs; increasing cooling temperatures is one of the ways.</li>
<li>We need to increase the life-span of servers and DC equipment.</li>
<li><strong>We need to consider computing as finite resource.</strong></li>
</ul>
<p>See you in the next article!</p>
<p>[^1]: page 50 of <a href="https://www.gstatic.com/gumdrop/sustainability/google-2023-environmental-report.pdf">the Google Environmental Report</a>
[^2]: page 6 of <a href="https://query.prod.cms.rt.microsoft.com/cms/api/am/binary/RW13PLE">the Microsoft Environmental Fact Sheet</a>
[^3]: page 35 of <a href="https://sustainability.aboutamazon.com/2022-sustainability-report.pdf">the Amazon Sustainability Report</a></p>

			]]></content>
		</entry>
	
		
		<entry>
			<title>Exploring Carbon Awareness: No, It&#39;s Not a Trendy Mindfulness Practice!</title>
			<link href="https://wonderingchimp.com/posts/exploring-carbon-awareness-no-its-not-a-trendy-mindfulness-practice/"/>
			<updated>2023-09-18T00:00:00Z</updated>
			<id>https://wonderingchimp.com/posts/exploring-carbon-awareness-no-its-not-a-trendy-mindfulness-practice/</id>
			<content type="html"><![CDATA[
				<p>Hi everyone, it's been a while! I haven't had time to fully commit to writing these past weeks. That's why I missed the regular posting schedule last Monday. But now I'm back, and eager to share with you more of the topics related to sustainability.</p>
<p>This week, we'll cover carbon awareness. We'll start from the beginning - what it is, and why it's important. Then we'll check out how you and your team can be more carbon aware, and I'll mention some examples of carbon awareness. In the end, we'll cover whether there is a negative side to it.</p>
<p>I hope you'll find this topic interesting and engaging. And I hope it will help us better understand how we can all help in fighting climate change. One server at a time.</p>
<h2>What is it?</h2>
<p>To explain this, we need to first address <em>what is awareness?</em> Just kidding. The paragraphs below explain carbon awareness in somewhat simple terms.</p>
<p>What we all know is that not all electricity production is equal. There are renewable sources, with little to no carbon emissions. Then there are non-renewable sources which, well, brought us here.</p>
<p>What I wasn't aware of before is that these sources vary by time and location. On sunny or windy days you can get electricity from solar or wind power. In the opposite cases, you'll usually get it from a regular coal-burning power plant.</p>
<p>This might not apply as much to the state of things in Serbia. As you can see in the image below, we don't consume that much electricity from renewable sources. Well, there is hydroelectric energy, sure. But then again, we also have those <em>mini hydroelectric power plants</em> that are quite a touchy topic.</p>
<p>As a country, we might be good at sports. But when it comes to energy production and consumption, well... We'll not cover that here.</p>
<p><a href="https://app.electricitymaps.com/zone/RS"><img src="../images/posts/0045-carbon-awareness-01.png" alt="Image shows the map of Europe with Serbia marked and on the left side are the yearly electricity consumption of the country."></a></p>
<p>What is carbon awareness? <em>Do more when the energy is coming from renewable sources. Do less when it comes from non-renewable sources.</em></p>
<p>That is neat. But, it's easier said than done.</p>
<h2>Why is this important?</h2>
<p>The answer is quite simple - it is important because it can help save our planet. One helpful thing is to stop using electricity from carbon-intensive sources. That is not an easy thing to do at once, so we need to be patient; step by step, our consumption from those sources needs to decrease.</p>
<p>An advantage of renewable sources is that they're cheaper than coal. And studies show that carbon-aware actions can result in 45% to 99% carbon reductions, depending on the number of renewables powering the grid. More information at <a href="https://ieeexplore.ieee.org/document/6128960">this link</a>.</p>
<h2>How to be more carbon aware?</h2>
<p>There are two ways to be more carbon aware - <em>demand shifting</em> and <em>demand shaping</em>.</p>
<h3>Demand shifting</h3>
<p>If workloads are flexible about when and where they run, we can shift our demand accordingly. We can run our workloads more when the energy is cleaner, and less or not at all when it is dirtier. For example, we can train a machine learning model at a different time or in a different region with cleaner energy.</p>
<p>We can break down demand shifting to the following.</p>
<ul>
<li>Spatial shifting. Moving our workloads to regions where the energy is cleaner.</li>
<li>Temporal shifting. If we cannot move our workloads to different regions, it's possible to run them at different times. During a sunny or windy day, we can opt to run our workloads more.</li>
</ul>
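<p>As a sketch of temporal shifting: given a carbon-intensity forecast, a flexible job simply waits for the cleanest slot. The hours and gCO2eq/kWh values below are invented for illustration:</p>

```python
# Hypothetical forecast carbon intensity (gCO2eq/kWh) for a few time slots.
forecast = {
    "09:00": 450,  # morning peak, mostly fossil
    "12:00": 210,  # solar ramping up
    "14:00": 180,  # sunny early afternoon
    "19:00": 520,  # evening peak
}

# Temporal shifting in one line: run the flexible job at the cleanest hour.
best_hour = min(forecast, key=forecast.get)
print(best_hour)
```

<p>Real schedulers use live intensity data (e.g. from Electricity Maps or WattTime) instead of a fixed dictionary, but the decision rule is the same.</p>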
<h3>Demand shaping</h3>
<p>Similar to the above, but instead of moving demand to another time or place, demand shaping means that we shape our demand to match the existing supply. Demand shaping for carbon-aware applications is all about the supply of carbon: when the carbon cost becomes high, we shape the demand to match it. This can happen automatically, or the user can make a choice.</p>
<p>One example of demand shaping is eco mode found in cars or washing machines. When activated, some amount of performance is sacrificed to consume fewer resources.</p>
<p>Applications can also have eco modes that make decisions to reduce emissions, automatically or with the user's consent. A couple of examples:</p>
<ul>
<li>Video conferencing software that adjusts streaming quality automatically. This means reducing the video quality to focus on audio when the bandwidth is low. Instead of streaming at high quality all the time.</li>
<li>TCP/IP congestion control. The transfer speed adjusts in response to how much data the network can carry.</li>
<li>Progressive enhancement with the web. The user experience improves based on the resources and bandwidth available on the end devices.</li>
</ul>
<p>Demand shaping relates to a broader concept in sustainability - reducing consumption. We can achieve a lot by becoming more efficient with resources, <strong>but we also need to consume less at some point.</strong></p>
<h2>Carbon awareness in the wild</h2>
<h3>Example of demand shaping</h3>
<p>A good example of demand shaping is the website of <a href="https://branch.climateaction.tech/">Branch Magazine</a>. It is an online magazine written by and for people who dream of a sustainable and just internet for all. It is published by the Green Web Foundation.</p>
<p>Image below shows how it looks when you visit their site.</p>
<p><a href="https://branch.climateaction.tech/"><img src="../images/posts/0045-carbon-awareness-02.png" alt="Image showing a homepage of the Branch magazine with a dropdown menu on the right showing the Grid intensity view: live, low, moderate, and high. High is marked as the current setting."></a></p>
<p>You have a way to choose your experience on their site. If the grid intensity is low, the web page will load full-blown content - images, videos, gifs, and so on (low grid carbon intensity = energy coming from renewables). If the grid intensity is high, it will not load the full content; instead, the page will show ALT text for images, videos, and gifs, and the website will consume less energy.</p>
<p>If you want to find out more about the way they design their website and why, visit <a href="https://branch.climateaction.tech/issues/issue-1/designing-branch-sustainable-interaction-design-principles/">this link</a>.</p>
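<p>The Branch-style approach can be sketched in a few lines. The thresholds and mode names here are my own invention for illustration, not Branch's actual implementation:</p>

```python
# Demand shaping: serve lighter content as the grid gets dirtier.
# Thresholds (gCO2eq/kWh) and mode names are hypothetical.
def content_mode(grid_intensity_g_per_kwh: float) -> str:
    if grid_intensity_g_per_kwh < 150:
        return "full"       # images, videos, gifs
    if grid_intensity_g_per_kwh < 400:
        return "moderate"   # compressed images only
    return "low"            # ALT text instead of media

print(content_mode(100))
print(content_mode(300))
print(content_mode(500))
```

<p>A real site would fetch the live grid intensity for the visitor's region and pick the mode per request, optionally letting the user override it, as Branch does.</p>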
<h3>Example of demand shifting</h3>
<p>A good example of demand shifting is the Carbon Aware KEDA Operator. KEDA stands for <em>Kubernetes-based Event Driven Autoscaler</em>. This application can scale workloads in Kubernetes clusters based on certain events - one of them being the carbon intensity of the grid.</p>
<p>Based on the documentation, you can use this operator to schedule workloads by looking at the data from <a href="https://www.watttime.org/">WattTime</a> or <a href="https://www.electricitymaps.com/">Electricity Maps</a>. It then dynamically adjusts the behaviour of the KEDA scaler.</p>
<p>To find out more about it, visit their <a href="https://github.com/Azure/carbon-aware-keda-operator">GitHub repository</a>.</p>
<p>Now, I haven't played around with this operator yet, but I'll do a demo of it in a future article. It would be nice to provide a technical setup for it.</p>
<h2>Is there a negative side to carbon awareness?</h2>
<p>Before concluding the topic, I want to mention the following question.</p>
<blockquote>
<p>Is there a concern that when everyone time shifts to the same location or the same greener grids, that can increase the demand of those grid's energy, which could increase fossil fuel burning to meet that new demand?</p>
</blockquote>
<p>There are two parts to this question. The first is - why would we burn more fossil fuels when we move computing to the greener parts of the world? The second is - should we be concerned with that?</p>
<p>To answer the first part - in theory, this can happen. If the grid becomes overloaded, power grid companies will end up burning fossil fuels to match the increase. The energy coming from burning coal is more dispatchable, meaning it's easy to predict how much energy you will get from it. Unlike solar and wind energy, where you cannot easily predict if there will be wind or sun, and so on.</p>
<p>Can this happen? If we had only one location providing renewable energy - it might. Given that plenty of regions in the world provide renewables, it is unlikely.</p>
<p>To answer the second part, I will quote Asim Hussain. He is the executive director and chairperson of the Green Software Foundation and the director of Green Software at Intel.</p>
<blockquote>
<p>... that's one of them good problems. And that's what I think about this thing. So someone's telling me a problem and I'm like, this is a good problem to have. If we are ever even remotely getting to the point where demand shifting is affecting a grid, that is a level of achievement, which is excellent.
...
Yes, there are negative consequences to that approach, but we are not even remotely there right now. So worrying about that is I think, a little bit too hyperbolic at the moment. You shouldn't do something because if you take that thing to the absolute extreme, it will be negative
...
I would say demand shifting is never going to be the one solution you have in your pocket to reduce your emissions of your application, your architecture. I always describe it as one of the things that you can do. It's one of the easier things to do. It gets you started on the much more challenging journey of energy efficiency, hardware efficiency, reducing the amount of energy you use, reduce the amount of compute you use.</p>
</blockquote>
<p>This question was raised on the Environment Variables podcast, one of my favourites. The podcast provides the latest news on how to reduce the emissions of software and how the industry is dealing with its own environmental impact. You can find the full episode and the transcript <a href="https://podcast.greensoftware.foundation/e/68rz0318-we-answer-your-questions">here</a>.</p>
<h2>Further exploration</h2>
<p>Now, I mentioned a lot of things here. Most of it comes from the excellent material by the <em>Green Software Foundation</em>. At <a href="https://learn.greensoftware.foundation/carbon-awareness">this link</a> you can find more about carbon awareness. The lesson is part of the material for the <em>Green Software Practitioner</em> certification. The whole certification process is free, and it can help you understand sustainability in IT and the principles of green software.</p>
<p>The <a href="https://mediaspace.ucsd.edu/media/HotCarbon%E2%80%9923%3A%20Bringing%20Carbon%20Awareness%20to%20Multi-cloud%20Application%20Delivery%20(Maji%20et%20al.)/1_xeq5wjfj/307441832">second link</a> is an interesting talk from this year's <em>Hot Carbon</em> conference, titled <em>Bringing Carbon Awareness to Multi-cloud Application Delivery (Maji et al.)</em>. They discuss how making a load balancer on VMware carbon aware decreased overall carbon intensity. The data came from research, not live environments, and all applications were stateless.</p>
<p>Congrats! You've reached the end of this carbon awareness article. Thank you for the focus and attention throughout the article.</p>
<p>Let me know in the comments below your opinion on the topic. I'm interested in your feedback and if you found something interesting or not. Feel free to share this article as well, so it can reach more people interested in the topic. Your feedback helps me stay motivated for the future!</p>
<p>Thank you!</p>

			]]></content>
		</entry>
	
		
		<entry>
			<title>What are the greenest regions in GCP?</title>
			<link href="https://wonderingchimp.com/posts/what-are-the-greenest-regions-in-gcp/"/>
			<updated>2023-08-28T00:00:00Z</updated>
			<id>https://wonderingchimp.com/posts/what-are-the-greenest-regions-in-gcp/</id>
			<content type="html"><![CDATA[
<p>The story of the greenest regions on the most used public cloud providers continues. Today, I'm writing about the state of GCP, or Google Cloud Platform. How does Google do it?</p>
<p>First, we'll answer some of the questions related to the numbers that we're going to see later in the article.</p>
<p>The thing that we'll cover first is the <em>Power Usage Effectiveness</em>, or <em>PUE</em> for short. What does it mean? How is it measured? After that, we'll go on to analyse the data available from the GCP. What do they publish, and what are the greenest regions available there? How long did it take me to find this information?</p>
<h2>What is <em>Power Usage Effectiveness</em>?</h2>
<p><em>Power usage effectiveness</em> or <em>PUE</em> is a standard efficiency metric for power consumption in data centres. A simple definition of PUE is the ratio of total facility energy to IT equipment energy used in a data centre.</p>
<pre><code class="language-shell">PUE  = Total facility energy usage / IT equipment energy usage
</code></pre>
<p>Total facility energy includes all power dedicated to the data centre, measured at the meter - all loads, including IT equipment, cooling systems, lighting systems, and power delivery components.</p>
<p>Total IT equipment energy includes all energy fed to compute, storage, and networking equipment, as well as other control equipment like KVM switches, workstations, monitors, and laptops.</p>
<p>Calculating PUE is not as straightforward as the formula suggests, despite the simple ratio and its acceptance as a standard performance metric.</p>
<p><em>PUE</em> is not a one-time metric. It changes over time, depending on the time of day, the load on the servers, where the energy is coming from, the location, and so on.</p>
<p>PUE values closer to 1 are better. That means the data centre is using most of its energy (if not all) for IT equipment operation. I wonder if PUE can actually be 1? Maybe not, but one can hope...</p>
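<p>A quick worked example with made-up numbers, tying the formula to the "closer to 1 is better" rule:</p>

```python
# Hypothetical facility: 1500 kWh total draw, of which 1000 kWh goes to IT.
total_facility_kwh = 1500.0
it_equipment_kwh = 1000.0

# PUE = total facility energy / IT equipment energy.
pue = total_facility_kwh / it_equipment_kwh
print(pue)

# At this PUE, the overhead share (cooling, lighting, power delivery) is:
overhead_share = (pue - 1) / pue
print(round(overhead_share, 3))
```

<p>So at a PUE of 1.5, a third of the facility's energy is overhead rather than useful IT work - which is exactly why driving PUE towards 1 matters.</p>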
<p>To find out more about <em>Power usage effectiveness</em> in general, follow <a href="https://www.vertiv.com/en-asia/about/news-and-insights/articles/educational-articles/what-is-pue-power-usage-effectiveness-and-what-does-it-measure/">this link</a>. The blog post explains the basics of PUE and what we need to understand about the numbers.</p>
<p>On the graph below, you can see the average PUE in Google data centres from 2008 to the present. The trend is decreasing, which means the efficiency of Google's data centres keeps improving.</p>
<p><a href="https://www.google.com/about/datacenters/efficiency/"><img src="../images/posts/0044-greenest-regions-gcp-01.png" alt="Image showing graph of the PUE values throughout the years, starting from 2008 until 2023. It shows two lines, one red for trailing twelve-month PUE, and other blue for quarterly PUE. Values are below 1.25 and going down, reaching 1.10 in the 2023."></a></p>
<p>To find more information on the topic of PUE in Google Data Centres, follow <a href="https://www.google.com/about/datacenters/efficiency/">this link</a>. There you can find how PUE values are calculated and PUE throughout the year. Reports for each quarter for each data centre across the globe are also available. Pretty cool!</p>
<h2>Is PUE relevant to the greenness of a region?</h2>
<p>Not exactly. PUE shows how efficient data centres are in their energy consumption; it doesn't show which sources that energy comes from. Still, it's better for data centres to be as efficient as possible.</p>
<p>Regardless of the energy sources, efficient usage of energy is vital. We should strive to use the least amount of energy possible. That's why I wanted to give a brief intro to PUE.</p>
<h2>What are the greenest regions in GCP?</h2>
<p>Google, being all Google (information at your fingertips), made the search for this info quite easy. There is info about regions being <em>Low CO2</em> all over the place. Well, not all over the place, only in two places. But never mind that.</p>
<p>What does it mean for a region to be <em>Low CO2</em>? In Google's terms, they take three metrics into account:</p>
<ul>
<li><em>Google CFE%</em> - an interesting metric about which I'll write below.</li>
<li>Grid carbon intensity in <em>gCO2eq/kWh</em> - I've covered this in <a href="https://www.wonderingchimp.com/posts/how-much-carbon-does-my-server-emit/"><em>How much carbon does my server emit?</em></a> article.</li>
<li>Google Cloud GHG emissions - in short - Scope 2 market-based emissions. Not perfect, but at least it's something. We also covered this in <a href="https://www.wonderingchimp.com/posts/what-are-the-greenest-regions-in-azure/"><em>the greenest regions in Azure</em></a> article.</li>
</ul>
<h3>Google CFE%</h3>
<p>This is the percentage of the carbon free energy consumed in a particular region, on an hourly basis. Plus the investments Google has made in carbon-free energy in that region. Besides renewable energy coming from the grid, Google also includes carbon-free energy it produces in that region.</p>
<p>In other words: the average percentage of time your application will run on carbon-free energy. In this case, more is better.</p>
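<p>To make the hourly idea concrete, here is a toy sketch. The hourly shares are invented, and the real Google methodology is more involved (it also accounts for Google's own generation and investments):</p>
<pre><code class="language-python">def average_cfe_percent(hourly_cfe_fractions: list) -> float:
    """Average of hourly carbon-free-energy shares, expressed as a percentage."""
    return sum(hourly_cfe_fractions) / len(hourly_cfe_fractions) * 100

# Hypothetical day: low-carbon night hours, solar-heavy midday, mixed evening
hourly = [0.4] * 8 + [0.9] * 8 + [0.5] * 8
print(f"CFE: {average_cfe_percent(hourly):.0f}%")  # CFE: 60%
</code></pre>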
<p>To find out more about the Google CFE% and carbon-free energy, follow <a href="https://cloud.google.com/sustainability/region-carbon">this link</a>. This article explains how the Google CFE% is calculated. Besides that, it also provides some considerations for choosing the right Google Cloud region.</p>
<p>Finally, the greenest regions in GCP are:</p>
<ul>
<li>Europe:
<ul>
<li>europe-north1 in Finland,</li>
<li>europe-west1 in Belgium,</li>
<li>europe-west2 in London,</li>
<li>europe-west3 in Frankfurt,</li>
<li>europe-west6 in Zurich,</li>
<li>europe-west9 in Paris</li>
</ul>
</li>
<li>North America:
<ul>
<li>northamerica-northeast1 in Montréal</li>
<li>northamerica-northeast2 in Toronto</li>
<li>us-central1 in Iowa</li>
<li>us-west1 in Oregon</li>
</ul>
</li>
<li>South America:
<ul>
<li>southamerica-east1 in São Paulo</li>
<li>southamerica-west1 in Santiago</li>
</ul>
</li>
</ul>
<p>The table below shows a preview of which services are available in which region. The greenest regions have a <em>Low CO2</em> mark below their names.</p>
<p><a href="https://cloud.google.com/about/locations#europe"><img src="../images/posts/0044-greenest-regions-gcp-02.png" alt="Image showing a table of regions at the top and services available in that region on the left. Available services are marked with a green dot. Unavailable services are marked with white dot."></a></p>
<p>For the complete list of services and regions, follow <a href="https://cloud.google.com/about/locations#europe">this link</a>.</p>
<h2>Useful Information</h2>
<p>There is a plethora of information I found during my quest for the greenest regions in the GCP. Some of it is boilerplate, but some of it is rather interesting. Below, I'm going to share the links about the latter.</p>
<p>Google published their <em>2023 Environmental Report</em>. It's quite an interesting read, and you can find all the numbers in the accompanying <a href="https://www.gstatic.com/gumdrop/sustainability/google-2023-environmental-report.pdf">PDF</a>. To find out more, follow <a href="https://sustainability.google/reports/google-2023-environmental-report/">this link</a>.</p>
<p>To find out interesting facts about Google's take on sustainability, go to <a href="https://cloud.google.com/sustainability/">this link</a>. It's also a starting point to other Google sustainability-relevant sources.</p>
<p>Last, but not least, a map showing climate impact by area! This I found the most interesting. It is a map showing the amount of electricity produced and consumed. And it's open source! Below is a preview of it.</p>
<p><a href="https://app.electricitymaps.com/map"><img src="../images/posts/0044-greenest-regions-gcp-03.png" alt="Image showing a map of Europe on the right with the different countries in marked in different colours showing the amount of carbon intensity. The countries with less carbon intensity are marked with green, countries with more in orange, and brown. Countries with no data are marked in gray. On the left is the legend showing a list of countries."></a></p>
<p>To find out more, and explore by yourself, check out <a href="https://app.electricitymaps.com/map">this link</a>.</p>
<h2>Key Takeaways</h2>
<p>Google publishes the data in an open manner, and it's quite easy to find which regions are the greenest, if that is what you are looking for. I like that part. I also like their drive to be <a href="https://cloud.google.com/blog/topics/inside-google-cloud/announcing-round-the-clock-clean-energy-for-cloud">carbon-free 24/7 by 2030</a>. Following are some of the key takeaways from this article:</p>
<ul>
<li><em>Power Usage Effectiveness</em> - a measure of how energy efficient data centres are.</li>
<li><em>PUE</em> isn't directly related to a region being green, but the less energy we use, the better.</li>
<li><em>Google CFE%</em> - the percentage of carbon-free energy used in a region, calculated by Google.</li>
</ul>
<p>When deciding where to host your application, regardless of the provider, consider the following:</p>
<ul>
<li>Latency to end users can be different from region to region.</li>
<li>Prices differ from region to region.</li>
<li>Some regions can have higher carbon intensity than others, so choose wisely.</li>
</ul>
<p>This is it! I hope you found the information above interesting and useful. In the following weeks, I'll wrap everything up in a short(er) summary. I will go through and compare the information from the major cloud providers I covered here.</p>
<p>My plan is to continue to publish sustainability-related articles. To learn and raise awareness. Stay tuned and see you in the next article!</p>

			]]></content>
		</entry>
	
		
		<entry>
			<title>What are the greenest regions in Azure?</title>
			<link href="https://wonderingchimp.com/posts/what-are-the-greenest-regions-in-azure/"/>
			<updated>2023-08-14T00:00:00Z</updated>
			<id>https://wonderingchimp.com/posts/what-are-the-greenest-regions-in-azure/</id>
			<content type="html"><![CDATA[
				<p>A couple of weeks ago, I published an article about <a href="https://www.wonderingchimp.com/posts/what-are-the-greenest-regions-in-the-aws/">the greenest regions in AWS</a>. Check it out if you haven't! Today, I'm going to write about the greenest regions in Azure. As in the previous one, here are the questions I'm going to cover in this article.</p>
<p>First, we'll have a look at the emission scopes and their importance. Then we'll talk about methods used in calculating emissions on Azure (and AWS for that matter). Last but not least, we'll see the greenest regions in Azure and available reports.</p>
<h2>What are emission scopes?</h2>
<p>There is something called the <em>Greenhouse Gas Protocol</em>. This protocol is the most widely used <em>standard</em> companies follow when publishing their carbon emissions. It divides emissions into 3 scopes.</p>
<ul>
<li><em>Scope 1</em> - Direct emissions from operations owned or controlled by the organization. For example, on-site fuel combustion or fleet vehicles.</li>
<li><em>Scope 2</em> - Indirect emissions related to emission generation of purchased energy. For example, heat and electricity.</li>
<li><em>Scope 3</em> - Other indirect emissions from all the other activities companies engage in. For example, emissions from an organization's supply chain. Business travel for employees. The electricity customers may consume when using your product. And so on.</li>
</ul>
<p>The one that is the most significant and the most difficult to calculate is, you guessed it - scope 3. It is often referred to as <em>value chain emissions</em>. It represents a full range of activities needed to create a product or a service. From the initial idea to the end distribution.</p>
<p>For example, every raw material used in the production of your laptop emits carbon. Emission resulting from material extraction and processing is part of Scope 3. This scope also includes emissions from the use of the laptop after you buy it.</p>
<p>You can find more information about emission scopes on <a href="https://learn.greensoftware.foundation/measurement#the-ghg-protocol">this link</a>.</p>
<p>Azure publishes Scope 3 emissions, unlike AWS, which includes only <em>Scope 2</em> emissions. This is the reason why I saw 0 emissions in the Customer Carbon Footprint Tool on AWS.</p>
<blockquote>
<p>Is this what they call greenwashing? #askingforafriend</p>
</blockquote>
<h2>Methods to calculate emissions</h2>
<p>It is a complex process. Adding to that complexity, we need to take into account two methods, per the <a href="https://ghgprotocol.org/sites/default/files/Scope2_ExecSum_Final.pdf">GHG Protocol Scope 2 reporting guidance</a>. These two methods are:</p>
<ol>
<li>The location-based method. It reflects the emission intensity of the power grids from which companies consume electricity. The energy you use.</li>
<li>The market-based method. It reflects emissions from the electricity that companies have bought. The energy you pay for.</li>
</ol>
<p>Both methods are ways of calculating <em>Scope 2</em> carbon emissions. The report above recommends using both of them when calculating emissions - this is <em>dual reporting</em>. According to the reports, AWS uses only the market-based calculation method, while Azure (Microsoft) uses both the location-based and market-based methods.</p>
<p>To be honest, I don't quite understand the need for a market-based method, other than the following.</p>
<blockquote>
<p>We bought electricity from renewable sources, thus our emissions are 0. We don't care from which sources we actually get it.</p>
</blockquote>
<p>To me, the more logical approach is the location-based method, or a combination of the two.</p>
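<p>The difference between the two methods can be sketched in a few lines. The workload size and carbon intensities below are hypothetical:</p>
<pre><code class="language-python">def scope2_emissions_g(consumed_kwh: float, intensity_g_per_kwh: float) -> float:
    """Scope 2 emissions in grams of CO2eq: energy consumed times carbon intensity."""
    return consumed_kwh * intensity_g_per_kwh

consumed = 10_000  # kWh, a hypothetical workload

# Location-based: average intensity of the local grid actually supplying the power
location_based = scope2_emissions_g(consumed, 450)  # 450 gCO2eq/kWh, hypothetical grid
# Market-based: intensity of the electricity contractually purchased, e.g.
# renewable certificates with a claimed intensity of 0
market_based = scope2_emissions_g(consumed, 0)

print(location_based, market_based)  # 4500000 0
</code></pre>
<p>The market-based figure reads zero regardless of what the local grid actually burns - which is exactly why dual reporting is recommended.</p>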
<h2>What are the greenest regions in Azure?</h2>
<p>Now to the exact numbers. For starters, Azure wasn't as straightforward as AWS. You need to dig deep to see what percentage of renewables powers their regions. Or at least, I wasn't able to find that out the way I did for AWS.</p>
<p>Microsoft pledges to shift to a 100 percent renewable energy supply by 2025. That probably means they are close to it, so let's check.</p>
<p><img src="../images/posts/0043-greenest-regions-azure-01.png" alt="Table listing the figures for energy consumption within Microsoft in MWh. Total energy consumption and non-renewable fuel consumed numbers for 2022 are circled in red." title="2022 Environment Sustainability Report - Data Fact Sheet, Page 5, Table 6"></p>
<p>The table above is from the <a href="https://query.prod.cms.rt.microsoft.com/cms/api/am/binary/RW13PLE">2022 Environment Sustainability Report - Data Fact Sheet</a>.</p>
<p>To calculate the percentage of renewable energy used, we will use the numbers marked above. If we divide the two and multiply by 100, we can see that ~2.5% of consumed energy comes from non-renewables. That means the other 97.5% comes from renewables. Which is great! That is, if the values they report are correct.</p>
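<p>The arithmetic itself is simple. Here is a sketch with illustrative figures only (not the exact report numbers), chosen to land on the same ~2.5%:</p>
<pre><code class="language-python">def non_renewable_percent(non_renewable_mwh: float, total_mwh: float) -> float:
    """Share of consumed energy coming from non-renewable fuel, as a percentage."""
    return non_renewable_mwh / total_mwh * 100

# Illustrative figures only: 500,000 MWh non-renewable out of 20,000,000 MWh total
share = non_renewable_percent(500_000, 20_000_000)
print(f"{share:.1f}% non-renewable, {100 - share:.1f}% renewable")  # 2.5% non-renewable, 97.5% renewable
</code></pre>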
<p>Continuing on my quest to find the greenest regions, I found this.</p>
<p><a href="https://datacenters.microsoft.com/globe/explore"><img src="../images/posts/0043-greenest-regions-azure-02.png" alt="Image showing map of the world with yellow symbols across several continents showing sustainable projects Microsoft has implemented."></a></p>
<p>This screenshot shows the locations of Microsoft's sustainability projects. They are spread across the US and Northern Europe, with a couple of projects in the rest of the world. Then I went and applied some extra filters, shown in the image below.</p>
<p><a href="https://datacenters.microsoft.com/globe/explore"><img src="../images/posts/0043-greenest-regions-azure-03.png" alt="On the left, a Legend and Region filters on the map with Sustainability features opened, and Microsoft Circular Center marked in the drop-down. On the right, a result table is shown with regions of Asia Pacific, Europe, and United States in it."></a></p>
<p>I selected the <em>Microsoft Circular Center</em> sustainability feature. To translate: the equipment from those data centres is going to be re-used when it reaches end of life. Or EOL, as one might call it. In <a href="https://learn.microsoft.com/en-us/shows/azure-videos/microsoft-circular-centers-overview">this video</a>, you can find out more about this <em>feature</em>.</p>
<p>Selecting this <em>feature</em> produced the output below. Blue dots are regions, and yellow symbols are sustainable projects.</p>
<p><a href="https://datacenters.microsoft.com/globe/explore"><img src="../images/posts/0043-greenest-regions-azure-04.png" alt="Image showing map of the world with yellow symbols across several continents showing sustainable projects Microsoft has implemented and blue dots near those yellow symbols that mark the Azure datacenter regions."></a></p>
<p>This leads me to assume that the greenest regions on Azure are:</p>
<ul>
<li>North Central US</li>
<li>East US 2</li>
<li>North Europe</li>
<li>West Europe</li>
<li>Southeast Asia</li>
</ul>
<p>If you want to check this out on your own, go ahead and visit <a href="https://datacenters.microsoft.com/globe/explore">this page</a>.</p>
<p>The above is my assumption. And knowing that I'm not a fan of assumptions, I tried to find a bit more. And I found the quote below.</p>
<blockquote>
<p>We’ve also built one of our most sustainable cloud regions in Sweden that launched in November 2021. This will enable us to use 100 percent renewable energy for each hour of consumption. Sweden will be the first Microsoft region to use lower-carbon renewable fuel for backup power. <a href="https://datacenters.microsoft.com/globe/powering-sustainable-transformation">source</a></p>
</blockquote>
<p>Finally! We have a winner - it's the region in Sweden!</p>
<p>The following link leads you to the <a href="https://query.prod.cms.rt.microsoft.com/cms/api/am/binary/RW15mgm">2022 Environmental Sustainability Report</a>, where everything is described in more detail.</p>
<h2>How to calculate your emissions?</h2>
<p>There is a tool for that on Azure, and it's called - <a href="https://www.microsoft.com/en-us/sustainability/emissions-impact-dashboard">Emissions Impact Dashboard</a>. It is a Power BI template that helps you calculate the emissions of your workloads running on Azure.</p>
<p>There is a catch, however. This tool is only available to Power BI Pro users, so if you are one of them, congrats! If not, the demo is nice to have a look at and play around with, like I did. And at least this one doesn't show 0 MTCO2e emissions.</p>
<p>To find out how to use and configure the tool, check out <a href="https://learn.microsoft.com/en-us/power-bi/connect-data/service-connect-to-emissions-impact-dashboard">this link</a>.</p>
<p>Since I don't have any workloads running on Azure, I haven't had a chance to check out this tool properly. Have you had a chance to do so? What are your findings, learnings, and experience from using it? You can write your impressions in the comments below.</p>
<h2>Key Takeaways</h2>
<p>Researching this article took me down the rabbit hole. I doubted whether I would be able to find the exact numbers. I still doubt I found them. On the other hand, it helped me understand things. I saw how big sustainability is for Microsoft. And that gives me hope.</p>
<p>Below are some of the key takeaways throughout my journey in writing this article.</p>
<ul>
<li>There are 3 scopes of emissions. Direct, indirect via power consumption, and indirect emissions from all other activities.</li>
<li>When looking at the reported emissions, we need to have in mind the methods used in calculating them. Those methods are: market-based, location-based, or both.</li>
<li>Emissions calculated only by the market-based method could lead in the wrong direction. For example, reported emission numbers can be smaller than they actually are.</li>
<li>I got the impression that Microsoft cares about sustainability and the way they report it. They are also the ones behind the <a href="https://greensoftware.foundation/">Green Software Foundation</a>. Kudos! Even though the information is a bit harder to find, it seemed more concrete to me when I did the research.</li>
<li>You can use the Emissions Impact Dashboard to calculate the emissions of Azure workloads.</li>
</ul>
<p>To go down the rabbit hole yourself, visit <a href="https://azure.microsoft.com/en-us/explore/global-infrastructure/sustainability/">this page</a>.</p>
<p>See you in the next article!</p>

			]]></content>
		</entry>
	
		
		<entry>
			<title>What are the greenest regions in the AWS?</title>
			<link href="https://wonderingchimp.com/posts/what-are-the-greenest-regions-in-the-aws/"/>
			<updated>2023-07-31T00:00:00Z</updated>
			<id>https://wonderingchimp.com/posts/what-are-the-greenest-regions-in-the-aws/</id>
			<content type="html"><![CDATA[
				<p>A couple of days ago, I wondered what the greenest regions on the Cloud are. Then I got an idea: try to find out whether there is an easy way to see this. First, I thought I would have a look into AWS and see what information is available there. This article is the result of that quest. In future articles, I plan to concentrate on the two other main cloud providers - Azure and GCP. If you find this list incomplete, let me know, so I can have a look into others as well.</p>
<p>In this article, I'll answer the question above - what are the greenest regions in AWS? We will dig a bit deeper into the information AWS makes available and look at where you can find useful sustainability-related material.</p>
<h2>What do I mean by <em>green region</em>?</h2>
<p>A green region would be a region powered by renewable energy - wind, solar, geothermal... Below is the official definition of <em>renewable energy</em> from the <a href="https://www.epa.gov/green-power-markets/what-green-power">US Environmental Protection Agency</a>.</p>
<blockquote>
<p>Renewable energy includes resources that rely on fuel sources that restore themselves over short periods of time and do not diminish. Such fuel sources include the sun, wind, moving water, organic plant and waste material (eligible biomass), and the earth’s heat (geothermal).</p>
</blockquote>
<h2>What is the carbon footprint?</h2>
<p>To answer this, here is a quote from <a href="https://www.britannica.com/science/carbon-footprint">Britannica</a>.</p>
<blockquote>
<p>The carbon footprint, amount of carbon dioxide (CO2) emissions associated with all the activities of a person or other entity (e.g., building, corporation, country, etc.). It includes direct emissions, such as those that result from fossil-fuel combustion in manufacturing, heating, and transportation, as well as emissions required to produce the electricity associated with goods and services consumed. In addition, the carbon footprint concept also often includes the emissions of other greenhouse gases, such as methane, nitrous oxide, or chlorofluorocarbons (CFCs).</p>
</blockquote>
<p>In short - it is your, or any other entity's, carbon emissions. And it's not easy to measure.</p>
<h2>How to measure carbon footprint on AWS?</h2>
<p>AWS has a tool that helps you track the carbon emissions generated by your AWS usage. It presents the data in MTCO2e - metric tons of CO2 equivalent.</p>
<p>If you need a reminder on what CO2 equivalent is, check out my previous <a href="https://www.wonderingchimp.com/posts/how-much-carbon-does-my-server-emit/">article</a>.</p>
<p>On <a href="https://aws.amazon.com/aws-cost-management/aws-customer-carbon-footprint-tool/">this link</a>, you can have a look at the Carbon Footprint tool from AWS.</p>
<p>This is neat! But is it working? I wasn't able to confirm that on my account(s). Even though I had some machines running for more than a few weeks, my carbon footprint was still zero. I'm curious whether this is working for you, or whether you also see the same numbers.</p>
<p>There might be a reason behind the value being zero. Spoiler alert - all my workloads were running in Europe. Is it region-related?</p>
<h2>What are the greenest regions?</h2>
<p>Well, I thought it was going to be a hard thing to find out, but actually it wasn't. It was quite easy, which bummed me out a bit, to be honest. I was looking forward to checking things out and trying to calculate the numbers myself.</p>
<p>But, the information about the greenest regions is available on <a href="https://sustainability.aboutamazon.com/environment/the-cloud?energyType=true#renewable-energy-map">this page</a>.</p>
<p>Below, you'll see a map from that page, with the number of solar and wind farms owned by Amazon. The numbers are quite amazing! I was quite surprised by the amount of data available.</p>
<p><img src="../images/posts/0042-greenest-regions-aws-01.png" alt="Map of the world with number of Renewable Energy sources that Amazon has. Those numbers are rounded in orange and violet circle, each representing different type of renewable energy source." title="Amazon Renewable Energy Map"></p>
<p>Based on that information, the greenest regions in 2022 are the following:</p>
<ul>
<li>U.S. East (Northern Virginia)</li>
<li>GovCloud (U.S. East)</li>
<li>U.S. East (Ohio)</li>
<li>U.S. West (Oregon)</li>
<li>GovCloud (U.S. West)</li>
<li>U.S. West (Northern California)</li>
<li>Canada (Central)</li>
<li>Europe (Ireland)</li>
<li>Europe (Frankfurt)</li>
<li>Europe (London)</li>
<li>Europe (Milan)</li>
<li>Europe (Paris)</li>
<li>Europe (Stockholm)</li>
<li>Europe (Spain)</li>
<li>Europe (Zurich)</li>
<li>Asia-Pacific (Mumbai)</li>
<li>Asia-Pacific (Hyderabad)</li>
<li>China (Beijing)</li>
<li>China (Ningxia)</li>
</ul>
<p>This is awesome. Nineteen of the AWS regions are 100% powered by renewables.</p>
<h2>Extra reports</h2>
<p>The reporting on consumed energy is quite good as well, based on my limited knowledge and more than a couple of hours of research. Now, to the specific reports.</p>
<p>The <em>Renewable Energy Methodology report</em> summarizes a plan to reach net-zero carbon by 2040. Some of the commitments from the report:</p>
<ol>
<li>Powering their entire operations with renewables by 2025.</li>
<li>Producing enough renewable energy to match the electricity used by all active <em>Echo</em> devices.</li>
</ol>
<p>This report also mentions the strategies for achieving those commitments, how they measure the amount of renewable energy, and so on. It is a 3-page report, <a href="https://sustainability.aboutamazon.com/renewable-energy-methodology.pdf">available to the public</a>.</p>
<p>Next is the <em>Carbon Methodology report</em>. This paper summarizes what goes into Amazon's carbon footprint. It is a bit longer, a 7-page read, also <a href="https://sustainability.aboutamazon.com/carbon-methodology.pdf">available to the public</a>.</p>
<p>Last, but not least, there is the <em>Sustainability Report from 2022</em>. This is a complete report of all things related to sustainability that Amazon is doing. It shows the numbers for carbon emissions, renewable energy, packaging emissions, water, diversity, equity, and inclusion, training, and community impact.</p>
<p>Below is a screenshot of Amazon's 2022 Year in Review.</p>
<p><img src="../images/posts/0042-greenest-regions-aws-02.png" alt="An overview of the sustainability-related topics provided by Amazon, and grouped in boxes showing different topics." title="An overview of the sustainability-related topics provided by Amazon, and grouped in boxes showing different topics."></p>
<p>You can find all sustainability-relevant reports <a href="https://sustainability.aboutamazon.com/reporting">here</a>.</p>
<h2>How can we use this?</h2>
<p>Well, for starters, we can check in which region(s) we run our AWS workloads. If possible, consider migrating to the green ones mentioned above. One small step for us, but kind of impactful for the Planet.</p>
<p>Consult the Carbon Footprint tool and see how many MTCO2e your workloads emit. It's not working for me, but then again, I may be doing something wrong.</p>
<p>Check out the <a href="https://docs.aws.amazon.com/wellarchitected/latest/sustainability-pillar/sustainability-pillar.html">Sustainability Pillar of the AWS Well-Architected Framework</a>. It contains design principles, operational guidance, best practices, potential trade-offs, and improvement plans. We can use all that to meet sustainability targets for our AWS workloads. Prerequisite for this - <strong>have sustainability targets in the first place!</strong></p>
<p>Another thing we can do, and this applies to all Cloud providers, is to check for <em>Cloud Zombies</em> - all those things we don't use on the Cloud but that cost us money, energy, time... Check out <a href="https://www.infoq.com/news/2023/03/stop-cloud-zombies-qcon/">this link</a> to find out how to do this.</p>
<p>Let me know if the information shared above is helpful or if I missed something. Write your take on this in the comments below.</p>
<p>In the following weeks, I'll do the same coverage for the Azure and Google Cloud providers. See you in the next article!</p>

			]]></content>
		</entry>
	
		
		<entry>
			<title>How much Carbon does my Server emit?</title>
			<link href="https://wonderingchimp.com/posts/how-much-carbon-does-my-server-emit/"/>
			<updated>2023-07-17T00:00:00Z</updated>
			<id>https://wonderingchimp.com/posts/how-much-carbon-does-my-server-emit/</id>
			<content type="html"><![CDATA[
				<p>Hello!</p>
<p>It's been a while since I last wrote. I took a short vacation, and to be honest, I didn't feel like writing. That's why I missed the schedule. The latest article should have been in your inbox last week. I apologize for that. I'll make an effort to stick to the current schedule of once every two weeks. Although, I'm considering posting articles in a more regular manner. We'll see how things unfold. For now, I'm returning to the once-every-two-weeks newsletter. It might be too much to change it.</p>
<p>With this and future articles, I want to scratch the surface of a topic that I find very interesting. It's a topic that is of great importance to us and our planet. I'm referring to the topic of green (sustainable) software. Some of the questions I am going to answer are - How can we contribute to reducing global warming? How can software engineers help the Earth? What do certain terms mean, and why are they important? These are some of the things and questions I want to explore through this and future articles.</p>
<p>I'll start with small things. We are going to answer a simple question: <em>How much carbon does my server emit?</em></p>
<h2>Why is this important?</h2>
<p>By now, you should know a thing or two about global warming. If not, it would be a good idea to educate yourself further.</p>
<p>Here's a summary (a TL;DR, one might call it). Earth is getting warmer and warmer due to the emission of certain gases. This is the greenhouse effect. And those gases are, you guessed it, greenhouse gases (or GHGs for short). Now, the warming by itself shouldn't be a problem, right?</p>
<p>Nope, it is a problem. A big problem. The increase in Earth's temperature has a negative impact on us all, both flora and fauna. And while there are many debates about who caused it, the main contributors to global warming are us - humans. I have written about global warming before; check out <a href="https://www.wonderingchimp.com/posts/is-global-warming-a-known-system-behavior/">this link</a> to find out more.</p>
<p>Some studies show that the IT sector's contribution to global warming is between 1.8% and 2.8%. The belief is that the real figure is much higher, because the entire life cycle and supply chain of the equipment were not taken into account - things like infrastructure and energy consumption during production. One of the sources for these figures is <a href="https://www.sciencedaily.com/releases/2021/09/210910121715.htm">this link</a>.</p>
<p>Although the above percentage seems insignificant, it is definitely much higher. That's one of the reasons for this article. I want to focus on carbon emissions into the atmosphere from the equipment we use. I want to educate myself and you, the reader, about something new. With the hope of contributing to sustainability and a greener future.</p>
<h2>Why carbon?</h2>
<p>Carbon is often used as an umbrella term for the impact of all types of emissions and activities on global warming. Other types of emissions include, for example, methane. The term carbon often refers to all greenhouse gases (GHGs).</p>
<p>To determine the degree of impact a gas has on global warming, we use the carbon equivalent, or CO2eq/CO2-eq/CO2e for short. In simple terms - one ton of methane has the same warming effect as roughly 80 tons of carbon dioxide, so we normalize that value to 80 tons of CO2eq.</p>
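<p>That normalization is just a multiplication by a Global Warming Potential (GWP) factor. A minimal sketch, using the ~80x methane figure from above (actual GWP values vary by time horizon and report):</p>
<pre><code class="language-python"># Approximate GWP factors relative to CO2 (methane at ~80x over a 20-year horizon)
GWP = {"CO2": 1, "CH4": 80}

def to_co2eq_tons(gas: str, tons: float) -> float:
    """Normalize an emission to tons of CO2 equivalent via its GWP factor."""
    return tons * GWP[gas]

print(to_co2eq_tons("CH4", 1))  # 80 - one ton of methane counts as 80 tons CO2eq
</code></pre>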
<h2>How do we measure carbon emissions?</h2>
<p>With the help of carbon intensity. This metric shows the emission of carbon per kilowatt-hour (kWh) of electricity consumed. The standard unit of measurement is grams of carbon per kilowatt-hour - <em>gCO2eq/kWh</em>.</p>
<p>To give you an example: imagine you live near a wind farm, and your power grid connects directly to it. If you plug your laptop into the outlet, the electricity it uses would have a carbon intensity of 0 gCO2eq/kWh, because the wind farm does not emit carbon to generate electricity. Again, this is a simplified example.</p>
<p>Of course, in real life, this is not the case. Often, we do not have direct control over the grid and the sources from which we get electricity. This means that our carbon intensity is a mix of all the current sources of electricity on the grid. These sources can be higher-carbon or lower-carbon. The latter is what we want - lower carbon. In Serbia, unfortunately, the higher-carbon sources are dominant.</p>
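<p>Such a mix boils down to a weighted average. The shares and per-source intensities below are rough, for illustration only:</p>
<pre><code class="language-python">def grid_intensity(mix: dict, intensities: dict) -> float:
    """Weighted-average carbon intensity of a grid, in gCO2eq/kWh.

    mix maps a source to its share of generation (shares sum to 1);
    intensities maps a source to its gCO2eq/kWh.
    """
    return sum(share * intensities[source] for source, share in mix.items())

# Rough per-source intensities and a hypothetical coal-heavy mix
intensities = {"coal": 820, "gas": 490, "hydro": 24, "wind": 11}
mix = {"coal": 0.6, "gas": 0.2, "hydro": 0.15, "wind": 0.05}
print(f"{grid_intensity(mix, intensities):.0f} gCO2eq/kWh")  # 594 gCO2eq/kWh
</code></pre>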
<h2>How to calculate the carbon intensity?</h2>
<p>In short, this process is not that simple. There are many calculation methods mentioned on Wikipedia. Not wanting to recycle the content from other sources, I won't go into detail about those methods here. You can learn about them on <a href="https://en.wikipedia.org/wiki/Emission_intensity">this link</a>.</p>
<p>For our example, let's consider the carbon emissions during actual equipment usage. To be exact, the <em>well-to-wheels</em> (WTW) method. I need to emphasize that this is an <em>estimate</em>. We cannot determine the exact carbon intensity of our machine in an easy way. The value depends on many variables.</p>
<p>For this example, I will use a simple <em>Raspberry Pi 3 Model B</em> as the server. The first reason: it has been lying around my apartment for some time. The second: I want to save you some time. I don't want to go into the details of the many instance types on different cloud providers. Whether they are Version A, Generation X, or whatever comes to mind is not important for this article; they exist only to confuse you.</p>
<p>Below are the most important specs of the Raspberry Pi I'm using.</p>
<ul>
<li>Quad Core 1.2GHz Broadcom BCM2837 64bit CPU</li>
<li>1 GB RAM</li>
<li>BCM43438 wireless LAN and Bluetooth Low Energy (BLE) on board</li>
<li>Micro SD port for loading your operating system and storing data</li>
<li>Upgraded switched Micro USB power source up to 2.5A</li>
</ul>
<p>So, we have 1.2GHz quad-core CPU, and 1 GB of RAM. That should be enough for this example. For full specs, check out <a href="https://www.raspberrypi.com/products/raspberry-pi-3-model-b/">this link</a>.</p>
<p>I admit, the term <em>server</em> is rather loose in this article. But, I do consider Raspberry Pi a type of server. It is compact, and enough for some operations. It often comes without a graphical interface (as it should). Last, but not least - it's running the Linux operating system.</p>
<p>Now, how much power does the Raspberry Pi consume? Searching the internet, I came across a figure of 3.6 Watts when the Pi is active rather than idling. At that draw, the energy consumption for a whole day is approximately 3.6 W × 24 h = 86.4 watt-hours (Wh). In kilowatt-hours, that is 0.0864 kWh. <a href="https://raspberrypi.stackexchange.com/a/5034">This link</a> provides more information.</p>
<p>Next, a bit more complex part of the calculation - the source of electricity. Each source of electricity carries a certain gCO2eq emission into the atmosphere. We need to make sure we take all into account. The image below shows the estimated emissions for different energy sources.</p>
<p><a href="https://commons.wikimedia.org/w/index.php?curid=115157229"><img src="../images/posts/0041-server-carbon-emissions-01.png" alt="A graph that contains the amount of carbon intensity of different energy sources. The values are provided in gCO2-eq/kWh. The biggest sources are Hard Coal, pulverized, with 1023 gCO2eq/kWh and Natural gas, Combined Cycle with 434 gCO2eq/kWh." title="By Gordonmcdowell - Own work, CC BY-SA 4.0"></a></p>
<p>Based on the report from 2022, the electricity production in Serbia varies a bit. The most dominant is the production from thermal power stations. The second is from hydroelectric power stations.</p>
<p>You can find the report linked <a href="https://ems.rs/wp-content/uploads/2023/05/GTI-o-radu-EMS-AD-u-2022.-godini-Correct.pdf">here</a>. Sorry for the Serbian version, I wasn't able to find the English one.</p>
<p>The graph below shows the electricity production in 2022 per month. The blue line shows the hydroelectric and the brown is from thermal power stations.</p>
<p><img src="../images/posts/0041-server-carbon-emissions-02.png" alt="A graph showing the by-month electricity production in Serbia for the year 2022. The two most dominant sources are thermal, shown in brown colour, and hydroelectric, shown in blue." title="EMS Technical Report for 2022, page 14."></p>
<p>So, roughly 2/3 of the electricity comes from sources emitting around 1000 gCO2eq/kWh, and 1/3 from sources emitting around 11 gCO2eq/kWh.</p>
<p>The calculation would look like this:</p>
<pre><code>(0.0864 kWh x 2/3) * 1000 gCO2eq/kWh + (0.0864 kWh x 1/3) * 11 gCO2eq/kWh = 57.9168 gCO2eq
</code></pre>
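<p>For the curious, the same estimate can be reproduced in a few lines of Python. This is only a sketch of the back-of-the-envelope math above; the power draw, grid-mix shares, and carbon intensities are the rough figures assumed in this article, not measured values.</p>

```python
# Rough estimate of daily CO2eq emissions for a small always-on server.
# All figures are the assumptions from the article: a Raspberry Pi 3B
# drawing ~3.6 W around the clock, powered by a grid mix of roughly
# 2/3 thermal (~1000 gCO2eq/kWh) and 1/3 hydro (~11 gCO2eq/kWh).

power_w = 3.6                        # assumed average draw, watts
hours = 24
energy_kwh = power_w * hours / 1000  # 86.4 Wh -> 0.0864 kWh

# (share of the grid mix, carbon intensity in gCO2eq/kWh)
mix = [(2 / 3, 1000), (1 / 3, 11)]

# Weighted sum over the grid mix gives the day's emissions
emissions_g = sum(share * intensity * energy_kwh for share, intensity in mix)
print(round(emissions_g, 4))  # -> 57.9168 gCO2eq per day
```

Changing the `mix` list is enough to see how strongly the result depends on the energy sources: with the same Pi on a mostly-hydro grid, the daily emissions would drop by two orders of magnitude.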
<p>If I let the Raspberry Pi run for the whole day, it would have emissions of around 58 grams of CO2 equivalent. Not bad, you might think?</p>
<p>Well, I wouldn't exactly say so. Consider the size and power of the device itself: the Raspberry Pi weighs ~300 g and draws 3.6 Watts. Even though the consumption is low, the amount of carbon it produces is not negligible. In other words, during a full day of operation it emits almost 1/5 of its own weight in CO2eq, thanks to the electricity sources in Serbia.</p>
<p>A simple calculation, right? Well, sort of; some of the information wasn't easy to find. These values are not exact and will certainly vary. The number could also be higher in my case, since I live in Belgrade, where the main source of electricity is a thermal power station. But I wanted to stay as unbiased as I could, based on the values I found.</p>
<h2>Key takeaways</h2>
<p>As I mentioned above, these calculations are approximate estimates based on the information I gathered from the internet. They are not exact; the real numbers might be worse, but one can hope.</p>
<p>Some of the key takeaways from this article, for both you - the reader and me - the author, are the following:</p>
<ul>
<li><em>Carbon</em> - a term often used to encompass all greenhouse gases (GHGs).</li>
<li><em>Carbon equivalent</em> - the warming effect of a GHG expressed as the warming effect of an equivalent amount of carbon dioxide.</li>
<li><em>Carbon intensity</em> - a metric that shows the emission of carbon per kilowatt-hour (kWh) of power consumed. The measurement unit is <em>gCO2eq/kWh</em>.</li>
<li><em>Carbon intensity depends on energy sources</em>.</li>
<li>Transparent data about energy production is necessary to calculate carbon intensity.</li>
</ul>
<p>That's it for now. I hope you found this article informative.</p>
<p>Let me know in the comments below what you found the most interesting. Is there any information you'd want to share? Do you find the numbers and calculations I presented above wrong?</p>
<p>It would mean a lot if you could share this article. Or if you received it by e-mail, forward it to somebody who will find this topic interesting.</p>
<p>See you in the next issue!</p>

			]]></content>
		</entry>
	
		
		<entry>
			<title>Why and how to track progress in training (and anything else for that matter)?</title>
			<link href="https://wonderingchimp.com/posts/why-and-how-to-track-progress-in-training-and-anything-else-for-that-matter/"/>
			<updated>2023-06-26T00:00:00Z</updated>
			<id>https://wonderingchimp.com/posts/why-and-how-to-track-progress-in-training-and-anything-else-for-that-matter/</id>
			<content type="html"><![CDATA[
				<p>Progress is of great importance to all of us. In our work, private life, and the activities we engage in during our free time... Overall, in numerous spheres of our lives. And that is completely normal. I believe that the desire for progress is encoded in the genetic code of each of us.</p>
<p>However, when progress is lacking, the majority of us begin to doubt our abilities. We think that everything is pointless and that every attempt we make is futile. Often, the lack of progress leads to a lack of motivation for that activity. Ultimately, the same lack of progress can also lead to quitting that activity altogether.</p>
<p>What we need to realize deep within ourselves is that the main reason for the absence of progress is the absence of activity. As long as we are working on something, even if there is no obvious progress to us, it is still there. It may be invisible, but that does not mean it doesn't exist.</p>
<p>In this article, I will focus on progress - why it is important and how to track it. I will look at it through the lens of the sport I engage in, sport climbing. However, you don't have to stop there; you can take the practices I mention here and apply them to other activities and aspects of whatever interests you.</p>
<h2>Why is progress important?</h2>
<p>As I mentioned earlier, the desire for progress is what fundamentally drives us. To run faster, jump higher, be better than our opponents, know more about a specific topic... The examples are countless. When that desire for progress is fulfilled, and we are aware of the progress we have made, an even greater desire for improvement and advancement emerges. This leads us into an endless loop of desiring progress, which leads to progress, which further fuels the desire for that activity. And so the cycle continues.</p>
<p>Until that endless loop is interrupted by the absence of progress. Or at least the absence from our perspective.</p>
<p>Let me give you an example. Let's say we want to start climbing (whether it's from personal experience or not is not that important 😅). We tried it and enjoyed it, but we don't know how to proceed. What is the first step? First, we might feel despair because we are so weak while everyone around us is incredibly strong. Just kidding, of course. We look at things from a different perspective, maybe start training more actively with a coach or on our own. And so, after some time, we realize that what seemed difficult at the beginning is now too easy.</p>
<p>However, as you can imagine, we don't want to stop there. We continue climbing and getting stronger. This time, the progress is even better, and we feel even more powerful. That's it, let's keep going! Determined, we set ourselves a goal to climb something that we didn't even consider when we started. We try the route day after day, but without success. We think that we are not strong enough, so we increase our training. Over time, we realize that we are nowhere near the top of the route. That might shake us a little, or maybe not, but we don't give up. Now, as time goes on, we notice that the progress is becoming weaker and weaker... Instead of completing the route we set out to conquer in a very short time, like we did at the beginning - none of that happens.</p>
<p>At that moment, as the well-known Serbian actor Petar Božović would say - things start to fall apart. We begin to doubt our strength and dedication, unaware that before this route we can't climb now, we had already climbed a vast number of routes that are very difficult for beginners! All we have in our minds is the thought that there has been no progress in the past few weeks. We slowly start losing motivation to continue. We stop setting aside time for climbing. Not only that, but we gradually weaken and eventually completely stop training and climbing.</p>
<p>Whether this example is trivial or not, it shows us that the desire for progress is what drives us forward. And progress, as a result, is merely a trigger for further advancement and improvement. What is important to remember is that regardless of what climbing represents in your context, progress is always there. When we stop completely, progress also ceases.</p>
<h2>How to prevent activity from stopping?</h2>
<p>The answer to this question can be quite simple - by tracking that activity. When we start tracking the activity we're engaged in, we create a mechanism that reminds us of our progress. We create a place where we can see it in black and white, whether we have or haven't made progress.</p>
<p>The progress curve is always ascending, regardless of the current feeling. As long as the activity is present, the trend is always upward. When we look at things from the very beginning up to this moment, we have made a lot of progress. Despite feeling like we're not progressing at the moment, that feeling is incorrect. Although physically immeasurable and invisible, progress can manifest as improved concentration, thinking patterns, new perspectives... Choose for yourself...</p>
<h2>How do I track my progress?</h2>
<p>I have been involved in the story of sport climbing for a long time. I remember my beginnings and the days when I progressed rapidly in a short period of time. Then the progress became smaller and smaller. It became less visible. My desire for climbing remained, and fortunately, despite various events, it is still there.</p>
<p>What keeps me engaged in the whole story is the fact that I have learned to see my progress even when I haven't tracked it in black and white. However, as the years go by, I increasingly have to remind myself how I felt during training sessions last week or a month ago.</p>
<p>That's why I turned to tracking my training sessions and performance.</p>
<p>There are many ways to do it, and the simplest one is to use a notebook or a computer program (such as LibreOffice Calc or Excel) where we record our plans and results. That's the simplest way. However, me being a geek who often spends more time following and researching ways to do something instead of actually doing it, I also wasted several hours looking for the best way to track training sessions. The way that suits me.</p>
<p>After watching numerous videos by various gym bros on the topic of how to best manage training sessions, I came across the simplest and cheapest method - the Notion application!</p>
<p>A small note - this is not an advertisement for Notion, claiming it's an ultra, mega, super cool application that will solve all your problems. It's not that kind of advertisement, and I don't think Notion is such an application. This is just one way of using the Notion that I found suitable for myself. I still think that the application itself is too complex, for example, if you want to create a simple note on your phone. For now, I only use it as a tool to help me track my training sessions.</p>
<p>It looks simple, consisting of two tabs. One is a calendar, and the other is a table. In the calendar, I enter my training plan for the upcoming week. Those same training sessions are automatically projected into the table in the next tab. When I complete or don't complete a training session, I simply enter the results (or reasons why I didn't finish), how I felt, etc. And that's it!</p>
<p>Below are the screenshots of my Notion document: the first page (the calendar) and the second page (the table) of the Training Tracking document.</p>
<p>Now, I'm still not proficient in the Notion ecosystem, so I'm not sure how to share a page template. If you want to learn more about it, feel free to reach out to me in the comments, and I can send you the document template. You can then use it for your own needs and modify it, whatever works for you.</p>
<h2>Conclusions</h2>
<p>To keep it short, I believe I've talked more than enough about progress. To give you an idea of how much I emphasized it, just think that I've mentioned the word &quot;progress&quot; or its root a total of 38 times in this text! Now, let me highlight only the most important points. Or, as they say, TL;DR (too long; didn't read).</p>
<ul>
<li>The only reason for the absence of progress is the absence of activity.</li>
<li>Progress is present even when we can't see it with the naked eye, in the form of thoughts, understanding, comprehension...</li>
<li>Understanding that progress always exists makes things easier and keeps us more motivated.</li>
<li>Tracking progress is a mechanism that can further motivate us and provide a clear insight that we are actually progressing.</li>
</ul>
<p>That's all for now. If you have any further questions or if I've said something incorrectly, please leave a comment below or contact me directly.</p>

			]]></content>
		</entry>
	
		
		<entry>
			<title>How Go Captivated my Mind?</title>
			<link href="https://wonderingchimp.com/posts/how-go-captivated-my-mind/"/>
			<updated>2023-06-12T00:00:00Z</updated>
			<id>https://wonderingchimp.com/posts/how-go-captivated-my-mind/</id>
			<content type="html"><![CDATA[
				<p>When you search on the internet for <em>How to learn Go?</em>, nine out of ten times, information related to GoLang will come up. That one time will be related to learning GoLang through some new, ultra super-duper intelligent artificial intelligence system that was just waiting for you.</p>
<p>Jokes aside, this article will not be about the Go programming language. This article will be about the game of Go, one of the most popular strategic games in Asia, a game that few people here in the West know anything about.</p>
<p>Just like in everything I do, I'm not an expert in this either. This is the story of a beginner who became fascinated by the patterns, connections, and symbolism of the game, and the process of overcoming oneself through the game. This is the story of how I accidentally discovered and became interested in a game called Go (Igo).</p>
<h2>The Beginning</h2>
<p>The homepage of my YouTube profile is filled with climbing videos. Thanks to some strange setting in some algorithm, one of the recommendations was a documentary film from 2016 - <em>AlphaGo</em>. I didn't know what it was about, and since I was relatively idle at that moment and craving the dopamine that the information age provides us with - I clicked on the <a href="https://www.youtube.com/watch?v=WXuK6gekU1Y">link</a>.</p>
<p>After about ten minutes, I understood what it was about. It was a documentary about how a computer succeeded in defeating a human in Go for the first time. Hmm, didn't a computer beat the world's best chess player a long time ago? What does Go have to do with it now? These were some of the questions that, truth be told, I didn't ask myself during the film. Instead, I kept watching, completely unaware of the story behind it all. As the story unfolded, I learned more.</p>
<p>I found out that the computer defeated a human in chess thanks to brute-force search<a href="https://en.wikipedia.org/wiki/Brute-force_search">^1</a> - a technique in which a computer solves a specific problem by trying every possible alternative before providing its final answer. But, as I further learned, that's not possible in Go! Why? Because every move in Go opens up 200+ possible follow-up moves, each of which opens up another 200+, eventually leading to an inconceivable number of possibilities. The computer cannot find a solution using this method; a different approach must be applied - the approach of artificial intelligence.</p>
<p>Without going into too much detail about the film, which I highly recommend watching, I just want to briefly mention the feeling that the film evoked in me. I was fascinated. The world that was revealed to me, the culture of the game that the Far East nurtures and develops. And since I had previously been influenced by the Far East in my interests (origami, manga, anime movies, Korean cinema), it was a natural course of events for me to become interested in the game of Go.</p>
<h2>Short History</h2>
<p>Now a little about the history of the game (or how I copied and paraphrased everything from Wikipedia).<a href="https://en.wikipedia.org/wiki/Go_(game)#History">^2</a></p>
<p>In brief, Go is an abstract strategic board game in which two players compete for territory. The oldest written record of the game dates back to the fourth century BCE in the Chronicles of the Zuo. Later, this game was also described in Confucius's books. It was called <em>Yi</em> in China, and today it is known as <em>Weiqi</em>, which roughly translates to &quot;encirclement board game.&quot;</p>
<p>According to legend, the mythical Chinese Emperor Yao (2337-2258 BCE) told his advisor Shun to design a game that would teach his son Danzhu wisdom and good behavior. Some other theories suggest that this game actually originated from Chinese tribal rulers and generals who marked positions on a map with stones to plan their attacks.</p>
<p>Later, this game spread to Korea (where it is called <em>Baduk</em>) and Japan (<em>Igo</em> or <em>Go</em>). Today, these three countries are referred to as the Three Kingdoms of Go, because in those cultures, Go is nurtured like football in England (justifiably) and football in Serbia (unjustifiably). In the Far East, it was considered one of the Four Noble Arts, alongside calligraphy, playing the lute, and painting.</p>
<h2>&quot;Basic&quot; Rules</h2>
<p>Let's compare Go with the popular game in the West - chess. Unlike chess, where each player has 16 pieces of six different types, each with its own way of moving, Go has just one type of piece, called a stone, in two colours: black and white. All stones have the same shape, but there are many more of them - 181 black and 180 white. A stone is placed on an intersection, not in a square like a chess piece, and once placed it cannot be moved; it simply remains there. What is more complex than chess is how these stones can be combined to achieve the goal.</p>
<p>The goal in chess is to defeat the opponent, while in Go, the goal is to capture more territory. And that's what attracted me the most to the game - that pacifistic aspect. It's not necessarily about fighting against others (although that is also an option), but simply playing. As it often turns out, this is a game against oneself. Knowing oneself and one's only opponent, I realized that this is a good game for me.</p>
<p>In summary, apart from the turn order where moves are made alternately and the black player goes first, and the scoring rules, there are only two rules in Go:</p>
<ol>
<li>Rule of liberty - This rule states that every stone remaining on the board must have at least one open point (liberty) directly orthogonally adjacent (above, below, left, or right) or must be part of a group that has at least one liberty beside it. Stones or groups of stones that lose their last liberty are removed from the board.</li>
<li>The Ko rule - it prohibits a move that would recreate a previous board position. Such a move is forbidden for a simple reason: it would result in an endless loop.</li>
</ol>
<p>In the GIF (or /dʒɪf/, as some would say) below, there is a graphical representation of how the board and the course of the game look like.</p>
<p><img src="../images/posts/040-go-game-01.gif" alt="Go table gif showing some basic moves in the Go game."></p>
<p>If you want to learn more about the game and its rules, visit <a href="https://www.youtube.com/@GoMagic">this</a> YouTube channel; it's full of excellent videos and tutorials for beginners.</p>
<h2>What next?</h2>
<p>That's how I, thanks to the YouTube algorithm, delved into the world of Go. Whether it was accidental or not, I don't want to get into that, but I'm grateful for it. Besides the reasons I mentioned earlier, the beauty of the game itself is what kept me interested. To be honest, I still don't fully understand that beauty, and I don't know if I ever will. But I enjoy what it evokes in me and how it makes me see things from a different perspective.</p>
<p>If you watched and enjoyed the documentary above, here are a few more links to interesting content about the game of Go:</p>
<ul>
<li>An interesting and useful video about the history of the game, available on this <a href="https://www.youtube.com/watch?v=THXS9tFN8Gk">link</a>.</li>
<li>Another <a href="https://www.imdb.com/title/tt3973724/?ref_=plg_rt_1">documentary</a> about the game of Go and human dedication to it.</li>
<li>Lastly, but not least, there are <a href="https://www.imdb.com/title/tt0426711/?ref_=plg_rt_1">anime</a> and <a href="https://www.viz.com/shonenjump/chapters/hikaru-no-go">manga</a> with Go as their main theme - <em>Hikaru no Go</em>.</li>
</ul>
<p>Enjoy!</p>

			]]></content>
		</entry>
	
		
		<entry>
			<title>Lessons from climbing I&#39;m applying in life - patience</title>
			<link href="https://wonderingchimp.com/posts/lessons-from-climbing-i-m-applying-in-life-patience/"/>
			<updated>2023-05-29T00:00:00Z</updated>
			<id>https://wonderingchimp.com/posts/lessons-from-climbing-i-m-applying-in-life-patience/</id>
			<content type="html"><![CDATA[
				<p>How to be patient? How to manage not to have everything, immediately? These are just some of the questions I'm interested in finding answers to every day. Will I be able to answer them in this article? I don't know. Probably. And maybe not.</p>
<p>What I will try to do here is to share with you my process of learning patience. Yes, you heard it right, learning patience. Sometimes it can be very difficult, and other times it's easy, like breathing. I hope to present things from a different perspective. Something that might make you question your approach.</p>
<p>How does climbing fit into all of this? Well, I think this activity has taught me a lot about it. How to overcome the obstacle of youthful impatience. An obstacle that seemed insurmountable to me before. All that is true, but how, someone might wonder at this moment. Well, I'll start from the very beginning and give you an example of my impatience and carelessness. I'm not saying I've completely eliminated them. Far from it. But now, thanks to climbing, I have managed to recognize those situations. To see them from a different angle. And <em>take a step back.</em></p>
<p>An interesting fact before I continue - this is the first time I am writing an article in Serbian during preparation and then translating it into English. Until now, everything I wrote was in English, from the preparation stage to the final publication. I'm very curious to see how this article will turn out. Will it be different? Maybe it will make more sense. We'll see.</p>
<h2>Why is patience important to me?</h2>
<p>Isn't it important to each of us?</p>
<p>Like most of us in our youth, I had that thread of impatience. The carelessness and the belief that I can do, read, figure out, listen to... things somehow faster. Time is limited, and it needs to be used to the fullest. It's necessary to do more and constantly be doing something. That's how I used to think when I was younger.</p>
<p>As I grew up and started learning new things, carelessness and impatience were always there. For example, while running, I had to listen to a podcast episode. It went so far that I spent a lot of time searching for the &quot;perfect episode&quot; while running and even missed going for a run a few times. When taking notes, I often had the urge to write down everything I read, heard, or saw, often losing sight of the meaning behind it. There are many more examples, but I don't want to bother you with them...</p>
<p>Somewhere along the way, I expected that as I matured, these traits of impatience and carelessness would change, and be eradicated, but that was not the case. I continued going through life with episodes of impatience. They were everywhere, every day.</p>
<h2>Climbing = Patience?</h2>
<p>Then climbing came into the picture. I started training about ten years ago. Nothing serious, I was finishing my studies, and climbing provided a good continuation of everything I had been doing until then. I wasn't particularly good at the beginning. Now, when I try to recall things from that time, I can't get rid of the smell of the climbing gym when I entered. It was a strange mix of air freshener and other unfamiliar scents to me at the time... That and the sneakers I wore - they were gray running shoes that I wore everywhere - one pair of shoes for all occasions, so to speak.[^1]</p>
<p>Of course, when I started climbing, did I automatically become more patient? Hell no! It took time for that. Hours and hours of scraping my fingers on plastic in various gyms and on rocks in Serbia and around.</p>
<h2>Eureka?</h2>
<p>The thing that pushed me in the direction of patience, no less, no more, was the desire to improve. I was bad at the beginning. There was progress, but nothing significant, nothing impressive! Nothing that would show me that I was made for this, for climbing. Nothing that would indicate that maybe I had some hidden talent, just waiting for me to start climbing. Nope!</p>
<p>I trained like that for a long time. Aimlessly and impatiently, moving from one training session to another, with one single desire - to progress as much as possible. And as I mentioned above, there was progress, I was just blind to it. I wanted more.</p>
<p>Then came the end of my studies and the return to my hometown. Throughout this process of returning and reintegrating into a small community, climbing was on my radar. Even though I didn't have a place to climb (how wrong I was!), every training session I had was tailored towards that activity. Everything I did was to become a stronger climber.</p>
<p>Thanks to my father, I got in touch with people who were involved in climbing there, and I slowly got to know them, socialized, and eventually continued climbing. That's when I began to understand many things. The first was that I had a magnificent climbing oasis near my home - the Stol mountain and I didn't even know it. The second - hey, I have actually been improving this whole time!</p>
<p>I realized I was making progress when I first went to that climbing gym. It was an office of one of the city's local communities, where we had a wall about 5 or 6 meters long and around 3-4 meters high. I remember that before every training session, we had to move the tables on which people had meetings that day. It was all so great!</p>
<p>It was there, in that space and with those people, that I began to familiarize myself with the benefits of patience. I learned not to rush into the wall. Not to act like a headless fly. I realized that the wall would always be there. Even if I was in the gym and a route was taken down, I knew that a new one would come that would challenge me. Before that, I thought I was a patient person. Now, when I look back, I realize that I was quite mistaken. I only thought I was patient.</p>
<p>How did I learn that? By pausing. I fell off the route and didn't immediately jump back in to try again, but decided to take a break. When I thought I was ready, I rested a little more. It was hard to force myself to do that, at first.</p>
<p>And not only that. I started to notice daily how much progress I was making. Of course, the progress was small, but it was there. Yesterday, I was on the verge of reaching the last hold on the route. Tomorrow, I will try again and see where I am. If I don't succeed, well, okay, I'll succeed another day. This way of thinking slowly spread like water to other aspects of my life. And it pulled me further. And it kept me in climbing. And it taught me patience more and more.</p>
<h2>Why now?</h2>
<p>What prompted me to write about patience from this perspective (<a href="https://www.wonderingchimp.com/slow-and-steady-wins-the-race/">again</a>) is something I experienced in the past week or two. I had a minor setback that required me to take a break from climbing and training for a few days. Maybe even a week. Can you imagine?!</p>
<p>It might sound funny, and it is a bit funny, but initially, it was difficult for me. I was getting out of the routine I had settled into so well, I felt great, and everything in my life seemed to fall into place. I impatiently wanted to go back, to return to training and that feeling before and after each session. I wanted everything to be as it was before! That impatience turned into mild frustration with the unplanned rest days.</p>
<p>Then I remembered my experience with how I improved in climbing. To make progress (in any field), it requires patience, nothing comes overnight. What I didn't realize is that the same goes for recovery. Recovery also requires patience. Nothing comes overnight.</p>
<p>Although, after ten years, I can say that I have made a lot of progress, both in climbing and in patience, I can't say that I have completely eradicated that thread within me. As can be seen from the paragraphs above.</p>
<p>Now, whether it's perhaps some <em>nerdy impatience of millennials influenced by technology</em>, as David Foster Wallace calls it in his work <em>Infinite Jest</em>, or something else - I don't know. I only know that I am aware of these shortcomings and I'm working on overcoming them. This is one way to do it - to actively think and write about it...</p>
<p>I hope you enjoyed the text. Write me a comment if you would like me to publish a version in Serbian as well. Maybe it will be easier for some of us to relate, just as it was easy for me to write this.</p>
<h2>Notes</h2>
<p>[^1]: Perhaps I have learned patience over time, but when it comes to having more pairs of shoes for various occasions, it's difficult. Of course, I don't include climbing shoes in that, as the number of pairs I own would make shoe stores envious at one point.</p>

			]]></content>
		</entry>
	
		
		<entry>
			<title>Learning in a foreign language</title>
			<link href="https://wonderingchimp.com/posts/learning-in-a-foreign-language/"/>
			<updated>2023-05-15T00:00:00Z</updated>
			<id>https://wonderingchimp.com/posts/learning-in-a-foreign-language/</id>
			<content type="html"><![CDATA[
				<p>Each of us is a learner. We can be divided into those who are learning intentionally, and those who do it unintentionally. And that is the only valid polarization there should be. The world would be much easier with it.</p>
<p>Anyhow, as we are all learners (I strongly believe that), with the vastness of content available in only one or a couple of languages, most of us are learning in a non-native language. This is especially true when working in IT, but it holds in other branches of industry as well.</p>
<p>With this article, I want to address the topic of learning in a non-native language. What is the impact on us? How does it look? What do I do when I want to get the gist of something I'm learning? I'm also super curious to find out about your approach. Stick around until the end, and then maybe add a comment on your ways of learning in a non-native language.</p>
<h2>What do the scientists say?</h2>
<p>So, after writing the introduction to this article, I've looked at what the researchers say about this topic. And to my amazement, I've found a lot of research. I've discovered that there is a thing called <em>Foreign Language Effect</em>. This is connected with thinking in a foreign language. What does it do to our critical thinking and biases that we have? How do we perceive certain words in a foreign language - do they have the same effect on us or not?</p>
<p>All in all, the whole topic of learning and thinking in a foreign language has plenty of research papers and books, and I wouldn't want to bother you with all of it. For that matter, I wouldn't even have the time for that.</p>
<p>However, I want to point out one research paper that I've found quite interesting. It is called <em>Effects of Learning and Teaching in a Foreign Language</em>, by <em>Wim M.G. Jochems</em> from the <em>Eindhoven University of Technology</em>. It's from January 1991, and it was published in the <em>European Journal of Engineering Education</em>.<a href="https://www.researchgate.net/publication/233073286_Effects_of_Learning_and_Teaching_in_a_Foreign_Language">^1</a></p>
<p>This paper investigates the possible effects of learning and teaching <em>engineering programs</em> in a foreign language. That is what I needed in the first place. Someone to tell me what the effect of learning about engineering topics in a foreign language is. If all of my previous and future articles' research could be this easy...</p>
<p>Anyhow, I've gone through the research findings and the conclusion is - teaching and learning in a foreign language has numerous negative effects. Some of them that are mentioned include:</p>
<ul>
<li>Students have lower scores on tests.</li>
<li>The passing percentage decreases.</li>
<li>Both teachers and students need to prepare more to teach/learn in a foreign language, resulting in more effort.</li>
</ul>
<p>Those negative effects can and will decrease with time, sure, but in the beginning, it will be tough for both sides.</p>
<p>There is also another side to this. Other research says that thinking in a foreign language can be beneficial to us, as it brings a new perspective, a new way of thinking and looking at things.<a href="https://www.frontiersin.org/articles/10.3389/fpsyg.2020.549083/full">^2</a></p>
<p>Yes, I completely agree. But that depends on various things, one of them being - what is the degree of proficiency in a foreign language? If it's high - I assume both understanding and thinking in a foreign language will be better for us and the overall understanding of the topic. But what happens if the proficiency is low? That person spends more time learning and understanding the topic.</p>
<h2>What is my approach?</h2>
<p>Now, I consider myself proficient in English. I've read numerous books in English: non-fiction, technical, and fiction. For some time I've even preferred reading in English over Serbian. I thought I read faster in English. Even my blog is in English due to <em>better visibility</em>. And the fact is that by being able to read and think in English, the material available to you grows exponentially.</p>
<p>Just recently, however, I started to challenge myself differently. I'm sure I'll see the effects of it sooner rather than later, but until then, I'm going to stick with it.</p>
<p>This is the approach. When reading some technical, non-fiction book, or watching/listening to some material in a foreign language, if there is something I want to remember, I try to explain it to myself in my native language. Try to note down what I need to remember in plain old Serbian.</p>
<p>My previous modus operandi was - I took notes and did the learning in the language of the material. If I read something in English, I took notes in English, and so on.</p>
<p>Results so far - fun, tough, and interesting. Fun, because there are numerous things I now translate to Serbian, including terms like deployment, layered architecture, two-tier... Tough, because sometimes I have to go check the dictionary and look for synonyms when the translation is not good enough. And interesting, because I'm able to better understand and explain the things I've learned, both in English and Serbian. Previously, I had problems when trying to explain things in Serbian.</p>
<h2>Conclusion</h2>
<p>Thinking about this topic always reminds me of a scene from the TV show <em>Modern Family</em> where <em>Gloria</em> (played by Sofia Vergara) during some argument with her husband <em>Jay</em> (Ed O'Neill) asks him - <em>Do you even know how smart I am in Spanish?</em></p>
<p>Sometimes, I felt the same. And sometimes I even noticed that I'm more biased about somebody's expertise based on their proficiency in a foreign language. And that is plain wrong. We shouldn't judge a person's knowledge and expertise based on how well they speak a language that is not native to them.</p>
<p>Now, on the topic of learning more effectively - I found that my understanding and learning are better when, after reading something in English, I try to explain it in Serbian. Also, trying to recall things in Serbian rather than English helps me explain them better later in the process, in both Serbian and English. Maybe it's just a whim and it's temporary, or maybe it really is better. Time will tell.</p>
<p>What about you? What is your preferred way of learning in a foreign language?</p>

			]]></content>
		</entry>
	
		
		<entry>
			<title>Will AI kill my hobby?</title>
			<link href="https://wonderingchimp.com/posts/will-ai-kill-my-hobby/"/>
			<updated>2023-05-01T00:00:00Z</updated>
			<id>https://wonderingchimp.com/posts/will-ai-kill-my-hobby/</id>
			<content type="html"><![CDATA[
<p>With the emergence of ChatGPT and Large Language Models, I want to give my two cents on the topic of AI (or Machine Learning). I want to address how it is changing the playing field, and the assumptions of others that it will make some jobs obsolete. Writing, which I consider one of my many hobbies, is among the things supposedly becoming obsolete.</p>
<p>That's at least how some people think.</p>
<p>Now, if I would put my writing into a specific category, I would say it's a mix of creative and technical writing. But I don't want to do that. And it's not the point of this article.</p>
<p>The point is - will AI (ML) make obsolete the thing I do because it makes me feel good, helps my mental health, and helps me learn and share what I've learned and what I think about certain stuff, among so many other things? Short answer - <strong>No.</strong></p>
<p>Now let's see why.</p>
<h2>AI vs ML and how does LLM fit here?</h2>
<p>I read somewhere that the proper way to refer to ChatGPT and everything around it is as Machine Learning model(s), not Artificial Intelligence. ML instead of AI.</p>
<blockquote>
<p>AI actually refers to the general concept of creating human-like cognition using computer software and systems, while ML refers to only one method of doing so.<a href="https://www.coursera.org/articles/machine-learning-vs-ai">^1</a></p>
</blockquote>
<p>So, from now on I'm going to use AI/ML instead of just AI. Even though AI sounds way cooler.</p>
<p>Now, to be honest, I'm not sure what a Large Language Model, or LLM, is either. So, let's quickly explore that. I checked both with the so-popular ChatGPT and with the good ol' Wikipedia.</p>
<p>To paraphrase both sources in my own words, for better understanding and learning - a Large Language Model is a type of <em>Machine Learning</em> model that uses deep learning algorithms to process and generate human-like language.</p>
<p>And deep learning is a type of machine learning that uses artificial neural networks to process and analyze large amounts of data. These neural networks are designed to mimic the way the human brain processes information.</p>
<p>So, in a nutshell, they process a large amount of information and return it to you in the desired way.</p>
<h2>Why do some people think AI/ML will kill writing?</h2>
<p>The ease of generating text. And not just any text - text that sounds sane and to the point. Text that sounds as if it was written by someone who knows their stuff. Or, in this case, some<em>thing</em>. In any style we want - witty, serious, simple, funny... Imagine having that at your fingertips.</p>
<p>It's pretty powerful, for sure. And with us humans, being lazy, and wanting to have everything and to have it now, the whole thing around AI/ML blew up. Hopefully, it won't go out of control. 🤞</p>
<p>That easiness of getting content quickly, content that most of us couldn't think of, made some people assume that AI/ML will make many things obsolete. And we all know by now that to assume makes an <a href="https://www.wonderingchimp.com/posts/how-to-dev-oops-more/#you-are-prone-to-blame"><em>ass</em> from <em>u</em> and <em>me</em></a>. So, I'm not going to assume anything.</p>
<h2>Why don't I think the same?</h2>
<p>I consider myself a person who focuses on quality rather than quantity. That's how I try and organize my life. It's also my approach to writing. I get overwhelmed by the sheer quantity of everything now and then, but I've learned to filter it, take a deep breath, and continue.</p>
<p>The quality of information is one of the reasons why I think AI/ML will not make writing obsolete. It will only push writers to use it in ways that improve their writing. Improve the quality of the writing.</p>
<p>If you consider using it to write a school essay, or maybe a quick and easy way to write an article - sure, it can be used for that, and be quite good and undetectable. But, what about the quality of that information? What about the human touch and feel? What about you learning from it, and improving yourself?</p>
<p>If your main objective is quantity, go ahead and generate those &quot;useful&quot; articles and stay in the hustle. Machines can take it, but what about you and the people around you?</p>
<p>Another point that made me think AI/ML won't make writing obsolete is effort and learning from failures. It takes a lot of effort, with lots of tries and fails, to create something of high quality. If that is missing from the equation, what is the point? Where is the value in that?</p>
<p>Don't get me wrong, I'm totally for using AI/ML in your daily work as a way to improve yourself, and everyone and everything that surrounds you. But use it wisely.</p>
<p>With great power comes great responsibility.<a href="https://en.wikipedia.org/wiki/With_great_power_comes_great_responsibility">^2</a></p>
<h2>What could AI/ML kill?</h2>
<p>However, there is one thing that concerns me. If we're not careful enough, the use of AI/ML could destroy critical thinking.</p>
<p>Not long before AI/ML became a hype, critical thinking in us humans was in a bad place. Now, it's hanging by a thread. And we should all grasp it before we lose it.</p>
<p>There is a phrase, <a href="https://en.m.wikipedia.org/wiki/The_medium_is_the_message"><em>The medium is the message</em></a>, coined by the Canadian communication theorist Marshall McLuhan, which is quite fitting here.</p>
<p>We tend to focus on the message itself, instead of the medium that gave us that message. The &quot;content&quot; of a medium is a juicy piece of meat carried by the burglar to distract the watchdog of the mind.[^3]</p>
<p>We tend to focus on the obvious, the message.</p>
<p>What we should focus on instead is the medium itself. How can we improve ourselves through that medium? How can we regulate that medium so it fosters (critical) thinking? How can we improve that medium? How can we use that medium to make the lives of everyone on this planet better?</p>
<p>Read along to find out some of the ways to improve critical thinking.</p>
<h2>Summary</h2>
<p>As I noted above, I don't think AI/ML will make writing obsolete. The ease of the mind, clarity of thinking, and the possibility for learning that writing brings cannot be discarded and changed by AI/ML.</p>
<p>Will AI/ML change writing? Yes, definitely, but it <strong>will not replace it.</strong></p>
<p>The key takeaways from this article you should consider every time you interact with AI/ML are listed below.</p>
<ul>
<li><strong>Never assume.</strong> Check the information you get, and then check it again. In that way, you will not just check for the correctness of it, but foster your critical thinking.</li>
<li><strong>Be creative.</strong> Check different perspectives and see how they interact with each other.</li>
<li><strong>Don't rely only on AI/ML.</strong> Check by yourself, go through the content that the tool(s) generated, and see if it's any good. Check with others as well. <a href="https://www.wonderingchimp.com/posts/slow-and-steady-wins-the-race/">Slow and steady wins the race</a>.</li>
<li><strong>Question everything.</strong> This is the most difficult, as it takes time. But everything of value takes time, so get to it.</li>
</ul>
<p>[^3]: McLuhan, Marshall (1964) Understanding Media, Routledge, London</p>

			]]></content>
		</entry>
	
		
		<entry>
			<title>Git rid of it: The case for removing sensitive data from Git</title>
			<link href="https://wonderingchimp.com/posts/git-rid-of-it-the-case-for-removing-sensitive-data-from-git/"/>
			<updated>2023-04-17T00:00:00Z</updated>
			<id>https://wonderingchimp.com/posts/git-rid-of-it-the-case-for-removing-sensitive-data-from-git/</id>
			<content type="html"><![CDATA[
				<p>It's a normal afternoon. You start to slowly wrap up your things at work. You decide to go through the changes made to the code base you're working on by checking out its history. Suddenly, lo and behold, you find a file in the repository that shouldn't be there! You find a password present!</p>
<p>Now, before going any further, let me answer the following - who the hell would decide to spend their afternoon at work by going through Git history? Or any afternoon? Who the hell looks at Git history? Well, you might not, but other people do. If not now, then, for sure later, when debugging a problem, exploring, researching... There are numerous occasions when one might go through Git history. Trust me.</p>
<h2>What shouldn't you do?</h2>
<p>Let's continue with the situation. You found the committed password - what do you do? First, don't panic.</p>
<p>Next, you should just go and revert the commit, correct? Well, you might, but that <strong>will not remove the password</strong> completely. The revert will only convert all your <code>+</code> to <code>-</code> and vice-versa. For every file or line of code committed, <code>git revert</code> will do the opposite.</p>
<p>But that is what we wanted, right? Yes, we wanted to remove the file from the repository, sure. But if we return to the beginning of the post - what about the Git history?</p>
<p>To completely remove the file, you will need to <strong>delete the file from history.</strong></p>
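<p>To see this in action, here is a minimal, self-contained sketch (the repository, file name, and password below are made up for the demo). It commits a secret, reverts the commit, and then shows that the secret is still reachable through history:</p>
<pre><code class="language-shell"># Set up a throwaway repository
cd &quot;$(mktemp -d)&quot;
git init -q demo &amp;&amp; cd demo
git config user.email demo@example.com
git config user.name Demo

# Accidentally commit a password, then revert the commit
echo &quot;password=hunter2&quot; &gt; secrets.txt
git add secrets.txt &amp;&amp; git commit -qm &quot;Add config&quot;
git revert --no-edit HEAD

# The file is gone from the working tree,
# but the password still shows up in the history
git log --all -p -- secrets.txt | grep hunter2
</code></pre>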
<h2>What should you do?</h2>
<p>Deleting a file from Git history might sound complex and scary - to delete it, you will need to rewrite history. How the hell does one do that without breaking anything?</p>
<p>There are three ways to do that. Spoiler alert - two of them are safer approaches.</p>
<ol>
<li>By using <a href="https://github.com/newren/git-filter-repo">git-filter-repo</a>.</li>
<li>By using <a href="https://github.com/rtyley/bfg-repo-cleaner">BFG Repo-Cleaner</a>.</li>
<li>By using native <a href="https://git-scm.com/docs/git-filter-branch">git filter-branch</a>.</li>
</ol>
<p>Now, before diving more deeply into how to use each approach, let's first discuss the safety part. I've written above that two of the approaches to do this are safer. But which two? Surely the third option is good because it's native, right? Well, no, not actually.</p>
<p>If you follow the link in the third option, you will see a warning in the official Git documentation that this is not a recommended way to rewrite history. The recommended ways are the two tools mentioned before, the first and second options. These are the approaches we are going to further dissect, for brevity and safety purposes.</p>
<h3>Using git-filter-repo</h3>
<ol>
<li>
<p>To start using it, we would need to install it first. Now, I tried following the <a href="https://github.com/newren/git-filter-repo/blob/main/INSTALL.md">installation guide</a>. I opted first for the installation through the package manager. But, on my RHEL9-based system, I wasn't able to find it in the package repository, so I turned to the simple installation. The steps I used to install it are described below.</p>
<pre><code class="language-shell"># Download the raw file to the /usr/local/bin/ directory
# (writing to /usr/local/bin/ usually requires root privileges, so prepend sudo if needed)
$ curl -o /usr/local/bin/git-filter-repo https://raw.githubusercontent.com/newren/git-filter-repo/main/git-filter-repo

# Add executable rights to the file
$ chmod +x /usr/local/bin/git-filter-repo

# Test out the installation
$ git-filter-repo --version
ae71fad1d03f
</code></pre>
</li>
<li>
<p>Go into your repository working directory and run the following command.</p>
<pre><code class="language-shell"># Change to your repository path
$ cd $YOUR_REPOSITORY
$ git filter-repo --invert-paths --path COMPLETE_PATH_TO_YOUR_FILE
...
Completely finished after 0.83 seconds.
</code></pre>
<p>The above command does the following:</p>
<ul>
<li>force Git to process, but not check out, the entire history of every branch and tag;</li>
<li>remove the specified file, as well as any empty commits generated as a result;</li>
<li>remove some configurations, such as the remote URL, stored in the .git/config file;
<ul>
<li>you may want to back up this file in advance for restoration later!</li>
</ul>
</li>
<li>overwrite your existing tags.</li>
</ul>
</li>
<li>
<p>Make sure to ignore the file in <code>.gitignore</code> to prevent accidental commits.</p>
<pre><code class="language-shell">$ echo &quot;FILE-WITH-SENSITIVE-DATA&quot; &gt;&gt; .gitignore
$ git add .gitignore
$ git commit -m &quot;Add FILE-WITH-SENSITIVE-DATA to .gitignore&quot;
</code></pre>
</li>
<li>
<p>Check if sensitive data is present in some other file. If yes, repeat steps 2 and 3. Make sure the history is updated correctly (without the sensitive data in it).</p>
</li>
<li>
<p>When we finish with all of this, we'll need to force-push our changes to the remote, so we remove all the sensitive data from the remote Git history.</p>
<pre><code class="language-shell">$ git push origin --force --all
...
</code></pre>
</li>
</ol>
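<p>Before moving on, it's worth double-checking that the rewrite actually worked. Git's <em>pickaxe</em> search (<code>git log -S</code>) lists every commit that ever added or removed a given string. Here is a minimal, self-contained sketch (the file name and secret are made up for the demo) showing what an <em>unsuccessful</em> cleanup looks like - a plain <code>git rm</code> leaves both the file and the secret findable in history:</p>
<pre><code class="language-shell"># Set up a throwaway repository with a leaked, then deleted, file
cd &quot;$(mktemp -d)&quot;
git init -q demo &amp;&amp; cd demo
git config user.email demo@example.com
git config user.name Demo

echo &quot;apikey=abc123&quot; &gt; leaked.env
git add leaked.env &amp;&amp; git commit -qm &quot;Add env file&quot;
git rm -q leaked.env &amp;&amp; git commit -qm &quot;Remove env file&quot;

# The file still shows up in history...
git log --all --full-history --oneline -- leaked.env

# ...and the pickaxe search finds every commit that touched the secret
git log --all -S &quot;abc123&quot; --oneline
</code></pre>
<p>Run the same two commands against your own repository (with the real path and secret) after the rewrite - if both print nothing, the data is gone.</p>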
<h3>Using BFG Repo-Cleaner</h3>
<ol>
<li>
<p>To use the BFG Repo-Cleaner, we need to install it. We need to download the <code>JAR</code> file from <a href="https://rtyley.github.io/bfg-repo-cleaner/">this link</a>.</p>
</li>
<li>
<p>Next, for simplicity, we would want to add an alias in our <code>~/.bashrc</code>: <code>alias bfg='java -jar /location/of/bfg.jar'</code>. Note the quotes - without them, the alias definition stops at the first space.</p>
</li>
<li>
<p>Next, if we want to remove the file with sensitive data while leaving the latest commit untouched (by default, BFG protects the commit at <code>HEAD</code>), we would need to run the following command.</p>
<pre><code class="language-shell">$ bfg --delete-files FILE-WITH-SENSITIVE-DATA
</code></pre>
</li>
<li>
<p>If you want to replace all text listed in <code>passwords.txt</code> wherever it can be found in your repository's history, run the below command.</p>
<pre><code class="language-shell">$ bfg --replace-text passwords.txt
</code></pre>
</li>
<li>
<p>After we have removed all the sensitive data, the BFG documentation recommends stripping the now-unreferenced data, and then we once again need to perform a force push to rewrite the remote history.</p>
<pre><code class="language-shell"># Expire old references and prune the unreachable objects
$ git reflog expire --expire=now --all &amp;&amp; git gc --prune=now --aggressive

# Rewrite the remote history
$ git push --force
</code></pre>
</li>
</ol>
<h3>Why is git filter-branch not recommended?</h3>
<p>Long story short - <code>git filter-branch</code> is quite complex to use. You will need to know what you are doing to use it properly, and not mess something up along the way.</p>
<p>Another thing to take into account is its performance. <code>git filter-branch</code> is a lot slower compared to the solutions described above.</p>
<h2>Takeaways</h2>
<p>Removal of passwords, API keys, or any kind of sensitive data from the Git history is possible. Some time ago, this was quite complex, by using <code>git filter-branch</code>. It was a bit sluggish and you needed to really know your stuff.</p>
<p>Now, it's made easier with the tooling at hand. Following are some things to have in mind when doing this.</p>
<ul>
<li>Don't beat yourself up if you committed sensitive files in the first place; it happens to all of us. To err is human.</li>
<li>Analyze the approach that is most suitable for you (by using either the <code>git-filter-repo</code> or <code>bfg</code>).</li>
<li>Go through the tool's documentation and carefully follow the instructions mentioned there.</li>
<li>Inform your team about the changes you are about to make.</li>
<li>Go ahead and remove the sensitive files from the repository.</li>
</ul>
<h2>More Information</h2>
<ul>
<li>An answer from <a href="https://stackoverflow.com/questions/8394442/how-do-i-edit-past-git-commits-to-remove-my-password-from-the-commit-logs">Stack Overflow</a></li>
<li><a href="https://docs.github.com/en/authentication/keeping-your-account-and-data-secure/removing-sensitive-data-from-a-repository">GitHub instructions</a></li>
<li><code>git-filter-branch</code> usage <a href="https://git-scm.com/docs/git-filter-branch#_warning">warning</a></li>
</ul>

			]]></content>
		</entry>
	
		
		<entry>
			<title>Learning to Learn</title>
			<link href="https://wonderingchimp.com/posts/learning-to-learn/"/>
			<updated>2023-04-03T00:00:00Z</updated>
			<id>https://wonderingchimp.com/posts/learning-to-learn/</id>
			<content type="html"><![CDATA[
<p>We just try, and try, until we succeed. Or not. Some of us are quite lucky, with supportive teachers, parents, and mentors. But most of us lack that support.</p>
<p>Now, one (among many) of the problems with the school system in Serbia, besides teachers and professors not getting nearly as much credit as they should, is <em>learning how to learn.</em> In school, I never attended a class that teaches you how to learn. It was just assumed that you know how, and can, for that matter.</p>
<p>This article, even though I already started a rant on the school system in Serbia, won't be about that! Here I want to touch on the skill of learning: how I see learning, and what my approaches to it are. And hopefully, you will find some of the information useful.</p>
<h2>Are you a life-long learner?</h2>
<p>Bluntly - yes. I consider myself a life-long learner! I finished my formal studies a long time ago, but the love of learning stayed. I wasn't like that during primary or high school, to be honest. I was a good student, but somehow I didn't enjoy learning. Something changed when I finished high school, and at that point, I started seeing myself as a life-long learner. I started to really enjoy learning. And I haven't stopped since.</p>
<p>Throughout my life I continued to immerse myself in different things, learning about new stuff I didn't know, and exploring new things. It was great! And still is! This can be seen from the variety of topics I write about on this blog - <em>Wondering Chimp.</em></p>
<p>Besides learning itself, I like to learn about techniques that will help me learn better. Oftentimes, instead of actually going and learning the material at hand, I research different ways to learn better, remember better, and be a better student.</p>
<p>Humans are learning new things every day, sometimes intentionally, sometimes not. But we are learning. Our brains need it. Crave it. Some studies have shown that we can fight Alzheimer's disease just by learning a new language.<a href="https://www.nationalgeographic.com/culture/article/100218-bilingual-brains-alzheimers-dementia-science-aging">^1</a></p>
<h2>Food for thought?</h2>
<p>Then, why don't we just satiate that craving our brain has? Well, we do, kind of. But in the wrong way. Endless scrolling, I'm looking at you.</p>
<p>By being so dependent on scrolling through pages and pages of social media, chats, inboxes, and whatnot, we deprive ourselves, our brains, of the joy of learning. The joy of figuring out how something works, why something happened, how to do some calculations, etc.</p>
<p>The first thing we can do to feed our brains in the right way is to <em>dedicate time for learning.</em> Start small, maybe half an hour each day (bonus points if more), where you will explore some topic you were always curious about, but haven't found the time to do so. The important thing here is that it needs to be in a distraction-free zone. In other words - no smartphone!</p>
<h2>Learning approach</h2>
<p>Throughout the years, and endless scrolling of different material on how to be a better learner, I found some things that work for me the most. As with most things, there is no <em>one-size-fits-all</em> here. Use the following as a guideline and experiment on your own.</p>
<p>When I want to learn about a topic either professionally, be it some new technology, approach, or tool; or privately - how to optimize and train better, learn a new language, a skill, etc.; the first thing I do is to <em>dedicate time</em> to search for <em>a good source on that topic.</em></p>
<p>The sources I prefer are those that I can <em>read</em> - books, articles, blog posts, e-mails. I don't mind watching a video or listening to a podcast. However, in that case, I need to fully focus on them. I cannot go about my day, trying to learn something from a podcast while riding a bicycle. Or maybe I'm just not that good at it?</p>
<p>I need to <em>dedicate time for it</em>. And with reading this comes naturally. I cannot do anything in parallel while reading.</p>
<p>When I've finally found a good source, let's say a book - I start reading one chapter, or a couple of pages, just to get the feel of it. During the reading, I <em>take notes.</em> With <em>pen and paper.</em> Most often by using the <a href="https://www.wonderingchimp.com/about-note-taking/#the-cornell-method">Cornell method</a> for note-taking.</p>
<p>The things I note are usually in the form of <em>questions and answers</em>. On the left side of the page, I write down the questions, and on the right I try to answer them, in <em>my own words</em> - not by copying them word for word from the book. Instead, I focus on <em>how I understand the material</em>. Then I check to make sure I haven't made a mistake. Usually, I redo the answer if I totally missed the point.</p>
<p>The next stage is to <em>repeat</em> - answer the questions I've written down. I also like to add a <em>review of notes</em> I've taken down on that page or for the book chapter I'm currently reading.</p>
<p>After that, I go once again through the questions and answers. Then I <em>create flashcards</em> based on those questions and answers. There is a great tool that I use for this - <a href="https://apps.ankiweb.net/">Anki</a>. It has an application for almost every platform or OS, and it's quite versatile. You can add different types of flashcards, use images, graphs, and so on. Check out <a href="https://www.youtube.com/watch?v=WmPx333n5UQ">this tutorial</a> for more on how to use Anki. Small disclaimer - this person is a pro!</p>
<p>The last step in my learning process is to <em>regularly repeat or answer the questions</em>. For this, I basically schedule a short task in my todo application to go and check the Anki flashcards on my phone or laptop, and that's it. Anki itself will prompt you with the things you need to review, and skip the things you don't. This step, however, is <em>the most important one.</em> It shouldn't be missed if you want to learn something in the long run.</p>
<h2>TL;DR</h2>
<p>So, now, to summarize the things that help me learn better and more effectively. For all of you who reached this point. And for myself. Also, this part makes the article more readable.</p>
<ul>
<li>Take time and find the good source(s) on the topic you want to learn about.</li>
<li>Start learning by reading, listening, watching.</li>
<li>Take down notes, preferably by hand.</li>
<li>Ask yourself questions, and answer them in your own words.</li>
<li>Review those questions and answers.</li>
<li>Create flashcards.</li>
<li>Review those flashcards.</li>
<li>Bonus tip: practice, practice, practice!</li>
</ul>
<h2>Where did I get all of this?</h2>
<p>The process I mention above didn't come overnight. It required a lot of trial and error. Then, I stumbled upon a great course on <a href="https://www.coursera.org/learn/learning-how-to-learn"><em>Learning how to learn</em></a>.</p>
<p>The course instructors are Barbara Oakley, a Professor of Engineering at Oakland University, and Dr. Terrence Sejnowski, Francis Crick Professor at the Salk Institute for Biological Studies. The course is hosted on Coursera, and it's free. I highly recommend you check it out! There you will find some of the things I mention above, and many more.</p>
<p>Professor Oakley also wrote a <a href="https://barbaraoakley.com/books/a-mind-for-numbers/">book</a> on the same topic, and this book is a base for the course above. Even though the subtitle of the book says - <em>How to excel at Math and Science</em>, the techniques described there can be used throughout various topics.</p>
<p>Enjoy and happy learning!</p>
<p>And also, let me know in the comments what your thoughts on learning are. How do you approach it?</p>

			]]></content>
		</entry>
	
		
		<entry>
			<title>Slow and Steady Wins the Race</title>
			<link href="https://wonderingchimp.com/posts/slow-and-steady-wins-the-race/"/>
			<updated>2023-03-20T00:00:00Z</updated>
			<id>https://wonderingchimp.com/posts/slow-and-steady-wins-the-race/</id>
			<content type="html"><![CDATA[
<p>It is said that humans living in today's world are exposed to as much new information as previous generations would encounter in years, decades, or even centuries. <a href="https://www.science.org/doi/10.1126/science.1200970">^1</a></p>
<p>The outcome of that velocity is the impatience each of us feels at some point or another. Maybe it's just me, but from time to time I feel so impatient. With everything. I feel that time is passing at the pace of a snail, and my brain is in a hurry. Sometimes, in conversations, I catch myself eager to respond instead of actually listening. Eager to do something, instead of being present.</p>
<p>Maybe there are better explanations for why I sometimes feel this way. But I would like to address the impatience each of us carries in today's world, be it in a regular conversation with a fellow human being, or in general - impatient living.</p>
<h2>Why this much impatience?</h2>
<p>As usually happens - if you are in a certain environment, that environment will start to impact your behaviour. <a href="https://www.fastcompany.com/90449165/our-environment-shapes-our-personality-much-more-than-we-think">^2</a> Therefore, we get impatient because we are in an environment where everything is fast.</p>
<p>For example, say you forget the date of an event from history. You know you knew it a lifetime ago, but cannot remember it at this moment. What do you do? You impatiently take your smartphone and search for that information. And voila - it's at your fingertips. But, let me ask you this - how long will you hold on to this information? Wouldn't it be better to try and recall that date by yourself instead of impatiently grabbing your smartphone?</p>
<h2>Causes of impatience</h2>
<p>From my point of view, the main cause of this impatience (in my life, at least) is the availability of information. That and maybe learned patterns of behaviour when I was younger, and wanted to have everything and to have it now.</p>
<p>Years ago, when I was starting college, I wrote to myself that boredom is bad. And I've been living by these words for a while. Every moment I had, I spent it reading, doing something useful, helping someone, learning, training, meeting friends, and so on. <em>Boredom is bad</em> was my motto. This was the root cause of my impatience.</p>
<p>Other sources of impatience can also be tied to the plethora of information - books, videos, stories, podcasts, etc. There is so much information that you cannot be sure whether it is a ton of <em>BS</em> or not. And, most importantly - discovering whether what you're reading is a ton of <em>BS</em> is a hard process. It involves much research and critical thinking. But you don't have time for that, do you?</p>
<p>To be frank, it's not just time that we lack (this may be a topic for another article?). There are more reasons why we don't research some topics as much as we would need, not want, but <em>need to</em>. One thing impacting it is confirmation bias - we don't want to look for the opposite aspect of the information we're taking in if we generally agree with the concept.<a href="https://en.wikipedia.org/wiki/Confirmation_bias">^3</a> But, this as well might be a good topic for <strong>YAA</strong>[^4].</p>
<h2>Consequences of impatience</h2>
<p>So, what are some consequences I (or you might) feel because of being impatient?</p>
<p><strong>Commence multitasking.</strong> We start listening to podcasts while doing something else. Watching a conference, while at the same time trying to stay concentrated on some important work. Being in an online meeting, while endlessly scrolling social media, or replying to e-mails.</p>
<p>I learned that multitasking is a myth. There is no multitasking, just fast context-switching. If you think otherwise, let me know in the comments below. I do want to hear the other side as well. (<strong>YAA</strong>)</p>
<p>Interestingly enough, some time ago, when I started experimenting with meditation, I remember stopping it because it was impacting my multitasking ability. In hindsight, I know I made a mistake.</p>
<p><a href="https://www.youtube.com/watch?v=uioB2XKDHg4"><strong>Half-assing</strong></a> things. That is the main product of impatience, and of multitasking, for that matter. I personally wasn't fully focused on the thing at hand - only partially, switching my focus from one thing to another, ending up <em>half-assing</em> both things.</p>
<p><strong>React instead of listen.</strong> When you are talking to someone, you are more focused on replying than on just listening. Try to notice that moment of eagerness to reply with your own experience, and channel it into something else. For example, ask a question to learn more.</p>
<p><strong>Focus on the goal.</strong> Instead of concentrating on the process and enjoying it, we're focused on just completing it, reaching the goal - while completely forgetting how we ended up where we are.</p>
<p><strong>Haste makes waste.</strong> Or as we would say in Serbia - <em>Što je brzo, to je i kuso.</em> Whatever you do in haste won't be done correctly.</p>
<h2>How to stop being impatient?</h2>
<p>Finally, after dropping all this jibber-jabber on your head, what can we do to fix this?</p>
<p>The list below can vary from person to person. However, I found the following things helpful in one way or another.</p>
<ul>
<li><strong>Meditation.</strong> It helps you be more present, in the moment. It increases your ability to focus. At least it did increase for me.</li>
<li>Remember that <strong>being informed doesn't mean being in the know.</strong> Take time to explore more of the information you consume (this article included!), and look at both sides of the coin, instead of just one.</li>
<li><strong>Breathe.</strong> Whenever you feel impatient, just breathe through the feeling.</li>
<li><strong>Prioritize.</strong> Use some prioritization mechanism, e.g. <a href="https://www.wonderingchimp.com/posts/things-i-do-to-be-more-focused-and-productive/#eisenhower-matrix">Eisenhower matrix</a>, or something similar. It will help you concentrate on the things that are most important to you.</li>
<li><strong>There is always time.</strong> If you organize well and prioritize, you will always find time for the things you want and love to do.</li>
<li>Allow yourself to experience <a href="https://www.youtube.com/watch?v=9qLHEaZNzbM"><strong>boredom</strong></a>.</li>
</ul>
<p>[^4]: Yet Another Article - a topic that will hopefully be covered someday, but definitely won't.</p>

			]]></content>
		</entry>
	
		
		<entry>
			<title>Unlocking Your Flow State: A Beginner&#39;s Guide to Finding the Zone</title>
			<link href="https://wonderingchimp.com/posts/unlocking-your-flow-state-a-beginners-guide-to-finding-the-zone/"/>
			<updated>2023-03-06T00:00:00Z</updated>
			<id>https://wonderingchimp.com/posts/unlocking-your-flow-state-a-beginners-guide-to-finding-the-zone/</id>
			<content type="html"><![CDATA[
				<p>The quote above is the opening line from a song by <em>Roxette</em>, a Swedish pop-rock duo, called <em>Spending my time</em>. Although it is a love song, I always end up finding the opening verse appropriate for the thing I'm going to write about in this article.</p>
<p>Have you ever been so immersed in something and completely forgot <em>what's the time</em>? Or you felt that the only thing important to you was the task itself?</p>
<p>Well, most of us did, at some point or another. That feeling of being in the zone and fully focused on one task is called <em>the flow state</em>.</p>
<p>Now, as I said, most of us experienced it. We might have been aware of it, or we weren't, but it was present. For me, one of the places where I experience flow state the most is rock climbing. You'll see how in the paragraphs below.</p>
<h2>What is a flow state?</h2>
<p>The term <em>flow state</em> was first coined by psychologist Mihály Csíkszentmihályi in 1975, but that doesn't mean that the sense of it hasn't existed until then. It was widely present in our history, under different names, but it wasn't scientifically researched until Csíkszentmihályi wrote about it.</p>
<p>A flow state, also known colloquially as being in the zone, is the mental state in which a person performing some activity is fully immersed in a feeling of energized focus, full involvement, and enjoyment in the process of the activity. In essence, flow is characterized by the complete absorption in what one does, and a resulting transformation in one's sense of time.<a href="https://www.tandfonline.com/doi/abs/10.1080/00222216.1994.11969966">^1</a></p>
<h2>What are the ingredients of flow?</h2>
<p>Let's dig a bit deeper into the things that create the flow state. According to Csíkszentmihályi, there are several components of it<a href="https://en.wikipedia.org/wiki/Flow_(psychology)#Components">^2</a>:</p>
<ul>
<li>Intense and focused concentration on the present moment.</li>
<li>Merging of action and awareness.</li>
<li>A loss of reflective self-consciousness.</li>
<li>A sense of personal control or agency over the situation or activity.</li>
<li>A distortion of temporal experience, as one's subjective experience of time is altered. (In other words - <em>What's the time?</em>)</li>
<li>Experience the activity as intrinsically rewarding.</li>
<li>Immediate feedback.</li>
<li>Feeling the potential to succeed.</li>
<li>Feeling so engrossed in the experience, other needs become negligible.</li>
</ul>
<p>All of these aspects can be independent of each other. But to be in the flow state, we need to experience all of them, more or less. This means that endlessly scrolling through social media and losing track of time cannot be considered a flow state.</p>
<h2>How do I experience it?</h2>
<p>The time when I'm in the flow state the most is when I'm rock climbing. Interestingly enough, in his book <em>Flow: The Psychology of Optimal Experience</em>, Csíkszentmihályi mentions rock climbers as an example of people in the state of flow. 🤭</p>
<p>This is how it all starts. I stand in front of the wall. Tie the rope to the harness. Check my helmet. Put on my climbing shoes. Put chalk on my hands. Ask my belayer (a friend who keeps me safe by taking the rope with a climbing safety device) if everything is okay. I wait for them to confirm. I take a deep breath and say - Climbing. They respond with - On belay (some usual climbing procedure). And I start climbing.</p>
<p>During the climbing process, I tend to breathe as loudly and as deeply as possible. It helps me focus on the climbing, and longer inhales and exhales help me stay calm and full of attention. I am aware of my body, where to put my right and left leg as I progress, and what to hold with my hands. All that while breathing as loudly and in as controlled a manner as I can.</p>
<p>As soon as I stop doing this, the insecurity increases. Then comes the doubt, fear, uncertainty, and swoosh - I'm out of the flow state. And usually, I end up falling.</p>
<p>However, I know I was in the zone when, after finishing the climb or falling from the route, I start to realize where I am, what is happening, <em>what's the time</em>...</p>
<h2>How can you experience it?</h2>
<p>The beauty of the flow state is that it differs from person to person. There is no exact recipe for it. I learned that the hard way.</p>
<p>However, some things can help you lay the foundation. I found the following points rather useful, either during climbing, during work, playing go, reading, writing...</p>
<ul>
<li><strong>Remove distractions.</strong> Usually, it's enough to put your smartphone in another room. However, notifications from your laptop and different sounds from your surroundings can also be distracting. Focus mode on your laptop (if you're trying to achieve the flow state while working on it), or noise-canceling headphones can be of help there.</li>
<li><strong>Control your breathing.</strong> A couple of deep inhales and exhales can help you focus better. Also, I learned that the boxed breathing technique helps - breathe in for 3-4 seconds, hold for the same amount, exhale for the same amount, and wait the same amount before inhaling.</li>
<li><strong>Use a timer.</strong> Use the Pomodoro technique and work for 25 mins, take a 5 min break, and so on. I usually do 30 min of work, then a 5 to 7 min break, and repeat it 3 or 4 times.</li>
<li><strong>Be persistent.</strong> Try to do focused work for one timed cycle (e.g. 25 mins), and make this your goal. Then, see how you feel about continuing.</li>
<li><strong>Create a routine out of it.</strong> Before starting, find something that will put you easily into the zone - a cup of coffee or tea, sitting in a certain position, standing, taking a walk. Whatever comes to mind. Do this every time you set yourself up for some focused task. It will help you immensely.</li>
</ul>
<h2>(Useful) Sources</h2>
<p><a href="https://www.youtube.com/watch?v=4jJuCi8EMh4">The song</a> that I mention above. It isn't tied in any way to the experience of the flow, except I somehow managed to connect the first verse with the flow. It's a nice listen, nevertheless. And I had it in my head the whole time I was writing this article.</p>
<p>In <a href="https://app.thestorygraph.com/books/a7817486-804b-4997-be5f-a816629a9875">this book</a>, Csíkszentmihályi writes about the flow state, the optimal experience. He goes into detail about his research and its results. I highly recommend it! The book I read was a Serbian translation, which helped me a lot in grasping the whole thing. <em>Hint:</em> if you are reading the <a href="https://www.knjizara.com/Tok-Mihalj-Ciksentmihalji-151257">Serbian translation</a>, the conclusion by Žarko Trebješanin, Ph.D., who reviewed the book, is excellent! When I read that part, I felt that I didn't need to read the whole book.</p>
<p>The <a href="https://www.youtube.com/watch?v=x4m_PdFbu-s">great episode</a> on breathing, from an even greater podcast by Andrew Huberman. If you don't have time to go into scientific explanations of how breathing works, check out the chapters of the podcast. The part about the boxed breathing technique is <a href="https://www.youtube.com/watch?v=x4m_PdFbu-s&amp;t=4184s">here</a>.</p>
<p>Since I'm touching on the topic of rock climbing, I want to mention <a href="https://warriorsway.com/the-rock-warriors-way-mental-training-for-climbers-2/">this book</a> as well. It's about the psychological aspect of climbing - how to conquer the fear of falling, focus more on the climbing process, and become not just a better climber, but a better person. It is inspired by various self-help books such as <em>Way of the Peaceful Warrior</em> by <em>Dan Millman</em>, <em>The Teachings of Don Juan</em> by <em>Carlos Castaneda</em>, and many more.</p>

			]]></content>
		</entry>
	
		
		<entry>
			<title>Everyday Epiphanies: The Benefits of Keeping a Daily Journal</title>
			<link href="https://wonderingchimp.com/posts/everyday-epiphanies-the-benefits-of-keeping-a-daily-journal/"/>
			<updated>2023-02-20T00:00:00Z</updated>
			<id>https://wonderingchimp.com/posts/everyday-epiphanies-the-benefits-of-keeping-a-daily-journal/</id>
			<content type="html"><![CDATA[
<p>So, that is exactly what I did. Not responding to each of the above questions every day, but to most of them, for sure.</p>
<p>In this article, I want to describe my process - how I have felt so far about journaling, what it looks like, what some plans ahead are...</p>
<h2>How did it all start?</h2>
<p>As mentioned above, I started journaling on a whim. That specific line in <a href="https://gurwinder.substack.com/p/stoicism-the-ancient-remedy-to-the?publication_id=589242&amp;post_id=68282554">this article</a> was my trigger.</p>
<p>And the exact line was this.</p>
<blockquote>
<p>Marcus Aurelius preferred to journal first thing in the morning so he could set his agenda for the day. (His journal, Meditations, is freely available to read.) He used journalling to get to know himself, and to set the day’s goals and the steps he’d take to meet them. Try this instead of doing what most people do when they awake—opening social media—which only allows strangers to set your agenda for you.</p>
</blockquote>
<p>Before reading this article, I didn't know much about stoicism. Unfortunately, that is still the case - I don't know much. But this specific line got me thinking - why don't I try this for a week or so and see how it goes? What can go wrong?</p>
<p>The week passed, and then another, and another. Time kept passing, but my will to keep daily journaling going remained.</p>
<h2>How does it look?</h2>
<p>The process in itself is quite simple. When I wake up, after brushing my teeth, instead of reaching for my phone to check messages, I take a pen and a notebook and start writing.</p>
<p>Often, I write about what I did yesterday, how I felt, how I feel today, what I have to do today, and tomorrow, some general comments and notes about everything that surrounds me, and so on. Whatever I'm in a mood for that morning, whatever I feel.</p>
<p>There are days I'm not in the mood for anything. On those days I just write plainly what are my plans for today, what I did yesterday, and nothing more. Some days are to remember, and some are to forget, but both are equally mentioned in the journal.</p>
<p>The most important thing for me is to do this early in the morning, upon waking up. If I don't have time for it in the morning, on some rare occasions, I make a mental note and write in the evening or the next day. I started journaling in mid-August, and up until now, I have filled one notebook. And, overall, I think I missed maybe 7 days in total when I didn't have time for writing.</p>
<p>For a week or so, I also tried to write in the evening. That didn't really work for me, however. I forgot some days and missed others. That's when I decided to focus only on morning journaling.</p>
<p>One more thing to mention - reading what I have written is also helpful. I don't do it daily but once a week - I go through what I have written down for the week and note some things that I want to return to at some point. This reading and rereading helped me learn a lot about myself and how I feel about my environment. In my opinion, this is the key part of journaling, besides writing.</p>
<p>When you read what you wrote it helps you put things into context. You can see better how some situations affected you, how you responded, what didn't work, and what could be changed. Reading what you wrote is one step toward understanding yourself.</p>
<h2>What kept me going?</h2>
<p>This wasn't the first time I tried journaling. I have been trying to start a journal for a long time in the past. I have written down something, then stopped, then started back again. It was an on-and-off relationship.</p>
<p>Oftentimes in the past, I told myself - I need this specific notebook or a pen, and I will start writing, it's that simple! No, it isn't. It is simple to start, but you don't need a specific notebook or a pen. The problem lies in how to keep at it every day.</p>
<p>The thing that kept me going was, in fact, quite simple. I had heard about it before, but I wasn't aware that I was doing just that. The process is called, I think, habit swapping - swapping one habit (often a bad one) with another (a good one).</p>
<p>Instead of grabbing my phone every time I wake up, I grab a pen and paper and start writing. When it's hard to write, I start with one, or two sentences, then I want to write the third, and fourth, and then I lose count.</p>
<h2>What have I learned?</h2>
<p>The first thing I learned about myself was - Oh my God, I'm a difficult person to maintain. I thought that I was rather simple and easygoing, and I am, most of the time. But the complexity of me was the thing that surprised me the most!</p>
<p>Besides the above, I learned other things about myself as well. How I feel about the world we live in, the humans that surround me, and all other aspects of my life. It helped me put things into contexts, contexts into settings, settings into doings, and so on.</p>
<p>The only plan I have is to keep going with morning journaling. Because I feel this is sort of like a wildcard for all other plans. One plan to rule them all kind of thing.</p>
<h2>Takeaways</h2>
<p>Another article finished. Congrats! 🎉</p>
<p>Following are some of the key takeaways from this article, and overall things I found helpful in regard to journaling.</p>
<ul>
<li>If you didn't think of journaling before, start thinking, and start writing.</li>
<li>If you thought of it, but somehow you didn't find the time, will, or something else, just stop all that, grab a pen and paper, and start writing!</li>
<li>Set a goal for yourself to write a couple of sentences each day. This will be enough to get you started.</li>
<li>Try swapping your morning habit of grabbing a phone with grabbing a journal.</li>
<li>Test if it's more convenient for you to write in the morning, evening, or some other part of the day.</li>
<li>Don't be mad at yourself if you miss one day. If you miss the next one as well, maybe you can address that by writing why you missed them in the first place.</li>
<li>You don't need to be a well-versed writer to write for yourself!</li>
<li>You will always know how to read your handwriting, so writing in a journal is better than writing on your laptop.</li>
<li>Read and re-read what you wrote.</li>
</ul>

			]]></content>
		</entry>
	
		
		<entry>
			<title>Kubernetes Ingress: The &#39;Fun&#39; Way to Control Traffic</title>
			<link href="https://wonderingchimp.com/posts/kubernetes-ingress-the-fun-way-to-control-traffic/"/>
			<updated>2023-02-06T00:00:00Z</updated>
			<id>https://wonderingchimp.com/posts/kubernetes-ingress-the-fun-way-to-control-traffic/</id>
			<content type="html"><![CDATA[
<p>Yes, I don't know that much about Kubernetes Ingress. I think I have the basics covered, but that is all. However, with this article, I want to change that! I want to tackle the topic of Kubernetes Ingress - what it is, how it can be enabled and used, what to use it for, and so on.</p>
<p>I want to move from &quot;Fun&quot; to Fun!</p>
<p>The approach I'm going to take is this - reading the official Kubernetes documentation about it and trying to digest it for myself, and hopefully for many of you readers as well. This article, however, will not be a deployment guide or a how-to; it will only explain, as simply as possible, what each part is. So, let's begin.</p>
<h2>What is Kubernetes Ingress?</h2>
<p>In plain words - a Kubernetes Ingress is just a group of rules that exposes routes from outside the cluster to services inside the cluster. These routes cover only HTTP and HTTPS. By adding an Ingress, you will not automatically have routing. The Ingress only defines the rules.</p>
<p><em>The actual routing is done by Ingress Controller.</em> The Ingress controller is the one responsible for fulfilling the Ingress.</p>
<p>And what is the Ingress controller? It is a specialized type of load balancer for Kubernetes. It is a type of controller, but the only difference between this and other controllers is that the Ingress controller is not run as a part of <code>kube-controller-manager</code>. That means that it is not started automatically with the cluster. You will need to choose a third-party provider for the Ingress controller.</p>
<p>The bottom line is this - by deploying Ingress into the Kubernetes cluster you will not get anything until you deploy the Ingress controller. One doesn't work without the other. The deployment itself will not fail, it just won't work, and that's it.</p>
<p>Below you can see an example diagram of what goes where.</p>
<p><a href="https://kubernetes.io/docs/concepts/services-networking/ingress/"><img src="../images/posts/0032-ingress-01.png" alt="A flowchart diagram illustrating Kubernetes Ingress traffic routing. A client node connects via a dashed arrow labeled &quot;Ingress-managed load balancer&quot; to an Ingress node. The Ingress connects via a solid arrow labeled &quot;routing rule&quot; to a Service node. The Service then splits into two solid arrows, each pointing to a separate Pod (Pod 1 and Pod 2). The Service, Pod 1, and Pod 2 are grouped inside a box labeled &quot;cluster&quot;." title="Ingress Diagram"></a></p>
<p>This part was so hard for me to understand in the beginning - I kept mixing up the Ingress with Ingress controllers, load balancers, services, and so on. However, I hope my explanation above is clear and that you can get a grasp of it.</p>
<h2>When to use it?</h2>
<p>Ingress and Ingress controllers can be used whenever you want to expose some HTTP and/or HTTPS route outside of the cluster. If you have many services you want to expose - going in the Ingress and Ingress controller direction is the way to go. Add to that the always-terrifying SSL configuration, and this option is a no-brainer.</p>
<p>There is another option, a simpler one. It is, however, a bit more specific - it applies only if you run a Kubernetes cluster on a cloud provider.</p>
<p>If you run a GKE, AKS, or EKS cluster, and you want to expose only one application or test exposing it - you can use a Kubernetes Service instead. You just define <code>type: LoadBalancer</code> within that Service, and voila. The cloud provider (if you set it up properly) will spin up the load balancer and tie it to your service. And you can access that service externally. Some cloud providers let you specify the IP address of the load balancer within the Service, but some of them don't offer that option. So, this option is simpler, but, as always, there is a trade-off.</p>
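<p>As a sketch of this simpler option - a hypothetical Service of type <code>LoadBalancer</code> (the name, selector, and ports below are illustrative, not taken from a real deployment) could look like this:</p>
<pre><code class="language-yaml">apiVersion: v1
kind: Service
metadata:
  name: test
spec:
  # Tells the cloud provider to provision an external load balancer
  type: LoadBalancer
  selector:
    app: test
  ports:
  - protocol: TCP
    port: 80
    targetPort: 8080
</code></pre>
<p>Once applied, the cloud provider assigns an external IP to the Service, which you can check with <code>kubectl get service test</code>.</p>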
<h2>What does the Ingress look like?</h2>
<p>Deploying an Ingress resource is quite easy - you just define it, apply it to the cluster, and that's it! But that is not enough! You will need an Ingress controller to make it work. If you hate yourself, you can even have multiple Ingress controllers, but we'll not do that here.</p>
<p>The following is just an example of what an Ingress looks like. I will go through and explain each specific part of the resource.</p>
<pre><code class="language-yaml">apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: minimal-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  ingressClassName: nginx-example
  rules:
  - host: &quot;foo.bar.com&quot;
    http:
      paths:
      - path: /testpath
        pathType: Prefix
        backend:
          service:
            name: test
            port:
              number: 80
</code></pre>
<p>One-paragraph explanation - each request that goes to the <code>foo.bar.com/testpath</code> URL will have its path rewritten to <code>/</code> and will be routed to the Kubernetes service <code>test</code>, on port <code>80</code>.</p>
<h3>Drilling down the specifics</h3>
<p>Now, to get into some of the details, what all this above actually means.</p>
<p>The first part of the above resource definition is the <code>metadata</code>. It contains general information about the resource itself. The important part is the <code>annotations</code> block. This specific annotation shows us the Ingress controller-specific configuration. In this case, it is an Nginx Ingress controller <a href="https://github.com/kubernetes/ingress-nginx/blob/main/docs/examples/rewrite/README.md">rewrite annotation</a>. It defines the target URI to which the traffic must be redirected. Whenever a request for <code>foo.bar.com/testpath</code> is received, its path will be rewritten to <code>/</code>.</p>
<p>Important to note is that these annotations vary from one Ingress controller to another, so you need to be aware of them.</p>
<p>The next part is the <code>spec</code> block. The <code>ingressClassName</code> option defines an Ingress class to be used. And an Ingress class is yet another resource, which defines the controller that should implement the class and a controller-specific configuration. This field can be omitted, but it is recommended to always specify the <code>ingressClassName</code> because, in a scenario where we have multiple Ingress controllers, we might not know which one is the default.</p>
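<p>For illustration, an IngressClass resource that the <code>ingressClassName</code> field above could point to might look like this (the names are assumptions, loosely following the Nginx Ingress controller's conventions):</p>
<pre><code class="language-yaml">apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  name: nginx-example
  annotations:
    # Marks this class as the cluster default, used by
    # Ingresses that omit ingressClassName
    ingressclass.kubernetes.io/is-default-class: &quot;true&quot;
spec:
  # The controller implementation responsible for this class
  controller: k8s.io/ingress-nginx
</code></pre>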
<p>The <code>spec.rules</code> block defines the rules where the traffic will go. This specific rule shows us that everything that goes to the host <code>foo.bar.com</code> with the URL path <code>/testpath</code> will be routed to the service called <code>test</code> and to port 80 of that service.</p>
<p>The <code>host</code> field above is optional. It tells us to which host this rule applies. When the <code>host</code> field is not specified, it applies to all inbound HTTP traffic through the IP address specified by the Ingress controller.</p>
<p>One thing to mention here - for this to work you will need to have a DNS record that matches the domain <code>foo.bar.com</code> to the specific IP. This part is covered with the Ingress controller configuration, but I want to note it nevertheless.</p>
<p>The <code>pathType: Prefix</code> tells us that every request with <code>/testpath</code> in the URL will go to the defined backend. For example - <code>http://foo.bar.com/testpath</code> will go to the <code>test</code> backend, as well as <code>http://foo.bar.com/testpath/hello</code>. So everything under the <code>/testpath</code> will be matched.</p>
<p>If we don't want to match everything under the <code>/testpath</code>, we can use <code>pathType: Exact</code> option. It will only match if the request has that exact path specified. For example - <code>http://foo.bar.com/testpath</code> will go to the <code>test</code> backend, however, <code>http://foo.bar.com/testpath/hello</code> will not.</p>
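<p>A hypothetical rule using <code>pathType: Exact</code> would differ from the example above only in the <code>pathType</code> field:</p>
<pre><code class="language-yaml">rules:
- host: &quot;foo.bar.com&quot;
  http:
    paths:
    - path: /testpath
      # Matches only /testpath, not /testpath/hello
      pathType: Exact
      backend:
        service:
          name: test
          port:
            number: 80
</code></pre>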
<p>Last but not least, the <code>backend</code> block of the rule defines where the request will go. So, as mentioned and shown, it will go to the <code>test</code> service, on port 80. Besides a service, it can be a specific resource backend.</p>
<p>And a resource backend is an ObjectRef to another Kubernetes resource within the same namespace as the Ingress object. For the Ingress to work, we need to specify either a resource or a service backend - the two are mutually exclusive, and validation will fail if both are specified. A common use for a resource backend is to ingress data to an object storage backend with static assets.</p>
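<p>As a sketch, a resource backend could look like the following - note that <code>StorageBucket</code> and its API group here are hypothetical custom resources, not something Kubernetes ships with:</p>
<pre><code class="language-yaml">spec:
  defaultBackend:
    # Instead of a service, reference another resource
    # in the same namespace as the Ingress
    resource:
      apiGroup: k8s.example.com
      kind: StorageBucket
      name: static-assets
</code></pre>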
<h2>Key Takeaways</h2>
<p>This is it, at least for this week's article! To wrap the story up, the following are some of the key takeaways.</p>
<ul>
<li>The Ingress resource only provides a set of rules, and it will not work without an Ingress controller.</li>
<li>With annotations, you can provide different configuration parameters.</li>
<li>Use Ingress/Ingress controller when you need to access multiple services externally.</li>
<li>A valid DNS record needs to match the IP address of a cluster where everything is running.</li>
<li>You can route traffic to a Service, or some other Kubernetes resource specified with the Resource backend.</li>
</ul>
<p>To find out more about Ingress, Ingress controllers, and everything else connecting them, check out the link to the official <a href="https://kubernetes.io/docs/concepts/services-networking/ingress/">Kubernetes documentation</a>.</p>

			]]></content>
		</entry>
	
		
		<entry>
			<title>The complexity of deploying a Kubernetes cluster</title>
			<link href="https://wonderingchimp.com/posts/the-complexity-of-deploying-a-kubernetes-cluster/"/>
			<updated>2023-01-23T00:00:00Z</updated>
			<id>https://wonderingchimp.com/posts/the-complexity-of-deploying-a-kubernetes-cluster/</id>
			<content type="html"><![CDATA[
				<p>It's safe to say that everything related to Kubernetes can be (quite) overwhelming. At least in the beginning, when you are learning about it. However, it can be overwhelming later in the process as well. With all the stuff you need to take care of, upgrades, deprecations, backward compatibility, etc.</p>
<p>The reason for this is simple - it is a technology that is evolving quickly, and everyone wants, and needs, to keep up. In other words, it became a trend. It seems that everyone is doing it, even though not everyone needs it.</p>
<p>Let's return now to the vastness of the Kubernetes ecosystem. This vastness is present in its learning material, as well as in its deployment options. In this article, we'll focus on the latter - all the places where you can run Kubernetes clusters. With it, I hope to present Kubernetes in a slightly more digestible way, and to answer some of the questions from above.</p>
<p>Just a note before we start - I am going to use the term <em>deployment</em> rather lightly in this article. It will encompass the installation, configuration, and operation of a Kubernetes cluster.</p>
<h2>Types of deployments</h2>
<p>Even though there are many ways to deploy it, we can group all of them into four types:</p>
<ul>
<li>On-premise deployment.</li>
<li>Cloud-managed deployment.</li>
<li>Hybrid deployment.</li>
<li>Platform deployment.</li>
</ul>
<h3>On-premise deployment</h3>
<p>At the risk of stating the obvious - with this option, you (and hopefully your team) install and manage the VMs on which you then install and manage the Kubernetes cluster. Due to the complexity of the installation, you will usually not install each component by hand; instead, you will use a deployment tool.</p>
<p>There are different tools for this (of course there are). For brevity, we will mention only some of those being used in the wild.</p>
<p><em>Kubeadm</em> is the first one. This is the officially supported tool for deploying Kubernetes. It provides a best-practice &quot;fast path&quot; to get you started with a Kubernetes cluster. It assumes that the VMs where you want to run the cluster are available and configured properly. The <a href="https://kubernetes.io/docs/setup/">official documentation</a> for installing a self-managed Kubernetes cluster revolves around this tool.</p>
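<p>As a small, hedged illustration, kubeadm can read its settings from a configuration file rather than flags. A minimal sketch could look like the following - the Kubernetes version and pod CIDR are illustrative, and the exact API version depends on your kubeadm release:</p>

```yaml
# Used as: kubeadm init --config kubeadm-config.yaml
# Values below are illustrative, not recommendations.
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: "v1.26.0"   # pick the version you actually target
networking:
  podSubnet: 10.244.0.0/16     # pod CIDR; must match your CNI plugin
```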
<p>Next up is <a href="https://docs.k3s.io/"><em>K3s</em></a>. It is a lightweight Kubernetes distribution suitable for edge devices and less powerful computers, servers, or VMs. Provided by Rancher, it is a great choice if you are opting for a simple, easy, and lightweight deployment.</p>
<p>Third up is <a href="https://kind.sigs.k8s.io/"><em>Kind</em></a>. It is a tool for running local Kubernetes clusters, using Docker containers as nodes. It was primarily developed to test Kubernetes itself, but it can also be used for development and CI purposes. <strong>It is not recommended for production use.</strong></p>
<p>Another great tool for development and/or CI is <a href="https://minikube.sigs.k8s.io/docs/"><em>Minikube</em></a>. It sets up a local cluster quickly, and it is easy to deploy and manage. As mentioned in the sentence before - <strong>it is not recommended for production use.</strong></p>
<p>Last but not least - <a href="https://k3d.io/v5.4.6/"><em>K3d</em></a>. Another tool great for development and CI purposes, <strong>but not for production</strong>. Provided by the community, it is a wrapper that runs <em>K3s</em> in Docker and helps you test your applications before deploying them to <em>K3s</em>.</p>
<p>Not complex enough? Wait for it.</p>
<h3>Cloud-managed Deployment</h3>
<p>If you do not want to go through different stages of headaches, depression, and self-doubt, you will opt for this type of deployment. Just kidding, the first option is good if you want to have everything your way, learn more, and set everything from scratch.</p>
<p>This option includes setting up and deploying a Kubernetes cluster as a service, provided by different Cloud Providers. This is good if you want to have everything already set and pre-configured.</p>
<p>It also has some downsides - you cannot interact with some or all control plane components (e.g. etcd, api-server), you depend on the service provider to support the latest Kubernetes version, and some options are not as simple as they would be on your own Kubernetes cluster.</p>
<p>Here, I'll mention only three services, that are somewhat the same but are offered by different Cloud Providers. Those services are, in alphabetical order, <em>Azure Kubernetes Service</em> or <em>AKS</em>, <em>Elastic Kubernetes Service</em> or <em>EKS</em>, and <em>Google Kubernetes Engine</em> or <em>GKE</em>.</p>
<p>Each of them has different deployment options and possible configurations, but in essence, they are the same - Kubernetes clusters are provided by the Cloud Provider, and you don't need to set up and configure those clusters or trouble yourself with managing control plane components. All that is done by the Cloud Provider. You just need to concentrate on running your applications there, and possibly add some other components to the cluster.</p>
<p>You get a fully functioning and operational, production-grade Kubernetes cluster for a price that varies by Cloud Provider. Which option to choose depends mainly on the desired Cloud Provider, whether you are already running something on a specific provider, and the price.</p>
<p>This is where the complexity begins.</p>
<h3>Hybrid deployment</h3>
<p>If you are more into hybrid setups - you already have your own Data Center and just want to install Kubernetes there and plug it into some Cloud Provider or Platform - this option is for you. It allows you to install and configure a Kubernetes cluster with ease on existing VMs or servers, and to connect it to your platform or to a Cloud Provider.</p>
<p><a href="https://www.rancher.com/products/rke"><em>RKE</em></a> is one of those tools. It stands for <em>Rancher Kubernetes Engine</em>, and it is suitable for hybrid environments - it is a CNCF-certified Kubernetes distribution that runs entirely within Docker containers. It solves the common frustration of installation complexity with Kubernetes by removing most host dependencies and presenting a stable path for deployment, upgrades, and rollbacks.</p>
<p>Next up is <a href="https://anywhere.eks.amazonaws.com/"><em>EKS Anywhere</em></a>. It is an open-source deployment option for Amazon EKS that allows you to create and operate Kubernetes clusters on-premises, with optional support offered by AWS. EKS Anywhere supports Bare Metal, CloudStack and VMware vSphere as deployment targets.</p>
<p>This is where we add more to the complexity.</p>
<h3>Platform deployment</h3>
<p>The option that offers the most &quot;bang for your buck&quot; in terms of &quot;ease&quot; of deployment, configuration, and management is the Kubernetes Management Platform. This solution is a good fit if you run everything on Kubernetes, have multiple clusters, both on-prem and in the cloud, and want a central place to manage, configure, access, provision, and deploy everything.</p>
<p>The biggest &quot;players&quot; in the Kubernetes Management Platform domain are - <em>Red Hat Openshift</em>, <em>SUSE Rancher</em>, <em>VMWare Tanzu</em>, and <em>Google Anthos</em>.</p>
<p>Not wishing to deep-dive into what each of these platforms offers, or the pros and cons of one against the others, I want to mention one thing. In a nutshell, they all offer the same - simplified cluster operations, consistent security policy and user management, and access to shared tools and services.</p>
<p>If you want all that, go ahead and evaluate one of the Platforms mentioned above, and good luck!</p>
<p>I hope this is where we are finished with all this complexity.</p>
<h2>Summary</h2>
<p>The decision to use or not to use Kubernetes can be hard in the beginning. Especially when we are not aware of other options and opportunities. Maybe we don't even need a Kubernetes cluster or multiple clusters, maybe there is a simpler and easier solution.</p>
<p>If in the end, you choose the path of the Kubernetes ecosystem, I hope this article will be there to give you a bigger picture and help you understand different ways to deploy the Kubernetes cluster in your environment.</p>
<p>Is there some other option I haven't taken into consideration? Do the types I'm mentioning make sense? Write down your take on this in the comments, I'm eager to know your view and learn more on the topic.</p>
<p>For more Kubernetes-related articles, subscribe to my newsletter.</p>

			]]></content>
		</entry>
	
		
		<entry>
			<title>K8s - where to start?</title>
			<link href="https://wonderingchimp.com/posts/k8s-where-to-start/"/>
			<updated>2023-01-09T00:00:00Z</updated>
			<id>https://wonderingchimp.com/posts/k8s-where-to-start/</id>
			<content type="html"><![CDATA[
<p>If you responded to at least one of these questions affirmatively, this article is for you. If something else about Kubernetes is troubling you that I didn't mention, well, stick around until the end - maybe you will find something useful that will prompt further research.</p>
<h2>Let's start from the beginning - what is Kubernetes?</h2>
<p>If you have come across the Kubernetes <a href="https://kubernetes.io/docs/concepts/overview/">official documentation</a> (which I find quite useful, to be honest), they define Kubernetes like this:</p>
<blockquote>
<p>Kubernetes is a portable, extensible, open source platform for managing containerized workloads and services, that facilitates both declarative configuration and automation. It has a large, rapidly growing ecosystem. Kubernetes services, support, and tools are widely available.</p>
</blockquote>
<p>I find this explanation good and quite encompassing. Especially if you have experience with Kubernetes and have worked with it. But if you haven't had time to check it out, or you are just starting to get into the overwhelming world of Kubernetes, this definition doesn't tell you a lot.</p>
<p>Now, I will not go into details and dissect this explanation from the documentation, rather, I will provide you with different sources where you can find the answers yourself.</p>
<h2>Different Perspectives</h2>
<p>As it happens with the world around us, we all see things from different perspectives. The same is true for Kubernetes - it being a small world of its own, we may approach it from different perspectives.</p>
<p>Those perspectives may vary from person to person, so I've decided to list the most common ones. We will then use these perspectives to list some sources where you can find the answers you are looking for.</p>
<ul>
<li>A Software Developer who is interested in developing and running their applications on Kubernetes, but doesn't know where to start.</li>
<li>An Operations Engineer (or DevOps Engineer, as some might say) interested in setting up a Kubernetes cluster, but not sure which option to choose, what structure it will have, and so on.</li>
<li>A Product Owner who heard about it and wants to know more.</li>
</ul>
<h2>Books</h2>
<p>The list of books related to Kubernetes is vast, and can surely be overwhelming. In the following paragraphs, I will recommend books to help you get started in this vast and really interesting world.</p>
<p>First up is the book I read when I first started with the subject. It was, and, in my opinion, still is, one of the best I've read on Kubernetes so far. <a href="https://www.amazon.com/Kubernetes-Action-Second-Marko-Luk-C5-A1a-dp-1617297615/dp/1617297615/ref=dp_ob_title_bk">This book</a> is more for people who would like some hands-on exercises and wish to immerse themselves in the ecosystem by typing commands and by doing some exercises.</p>
<p>Who should read it? Everyone, no matter the perspective, should at least read Part 1 - Overview, and Part 2 - Core Concepts. If you are a DevOps engineer, however, it will be helpful to read the whole book.</p>
<p><a href="https://www.amazon.com/gp/product/1492046531/ref=as_li_tl?ie=UTF8&amp;camp=1789&amp;creative=9325&amp;creativeASIN=1492046531&amp;linkCode=as2&amp;tag=booksoncode-20&amp;linkId=679aeb485ea22e9189f32572f343b554">This book</a> is excellent as well, and one of the authors is Kelsey Hightower - the author of the popular tutorial <a href="https://github.com/kelseyhightower/kubernetes-the-hard-way">Kubernetes the Hard Way</a>. Both the book and the tutorial are, however, more suitable for DevOps and Ops engineers. Others can find something useful in them as well, but the material leans more toward the infrastructure side of the coin.</p>
<p>On the other hand, if you are interested in developing applications for Kubernetes, <a href="https://www.amazon.com/Kubernetes-Developers-develop-applications-containers-ebook/dp/B07931YQK3">this book</a> would be the most suitable for you. It is a developer-focused approach to Kubernetes and its components, covering everything from the initial setup of the development environment, through various Kubernetes resource lifecycles, to the troubleshooting steps.</p>
<p>Last, but not least, is another one leaning more toward the Dev side of the coin. If you are interested in developing Cloud Native applications - <a href="https://www.amazon.com/gp/product/1492050288/ref=as_li_tl?ie=UTF8&amp;camp=1789&amp;creative=9325&amp;creativeASIN=1492050288&amp;linkCode=as2&amp;tag=booksoncode-20&amp;linkId=4737b874a9c9bf12bd91e0a3f96ce686">this book</a> is for you. It explains foundational, behavioral, structural, configuration, and advanced patterns, and how to apply them in the development of Cloud Native applications.</p>
<h2>Courses and Videos</h2>
<p>If books are not really your thing, and you feel confident learning from videos, don't worry - there are a lot of them out there as well.</p>
<p>First up is a great video from Nana Janashia of <a href="https://www.youtube.com/watch?v=X48VuDVv0do">Tech World with Nana</a>, which explains everything you need to know to get started. It is three and a half hours long and covers all the basics of Kubernetes components and resources. It is oriented more towards the DevOps and Ops side of the coin, but developers can get a good grasp from it as well.</p>
<p>If you don't have time, or don't want to spend too much of it, learning about Kubernetes, <a href="https://www.youtube.com/watch?v=s_o8dwzRlu4">this video</a>, again from Nana, will explain all you need to know about Kubernetes in an hour or so. It is good if you are a total beginner and just want to learn the basics.</p>
<p><a href="https://www.udemy.com/course/certified-kubernetes-application-developer/?matchtype=b">This course</a> is suitable for both developers and DevOps and ops people. It is leaning more toward the developer side of the coin, but it is a good course to follow, especially if you want to get certification in the process.</p>
<p><a href="https://www.udemy.com/course/certified-kubernetes-administrator-with-practice-tests/">Another course</a> from the same author - Mumshad Mannambeth, is a great one if you are more leaning toward the Ops and DevOps side of things. Especially if you are interested in getting certified as a Kubernetes administrator, this course is definitely for you. I completed this course before passing the CKA exam, and cannot recommend it more.</p>
<h2>Other Useful Material</h2>
<p>With this last section, I want to share other material I found useful when I started learning about the Kubernetes ecosystem.</p>
<p><a href="https://azure.microsoft.com/en-us/resources/kubernetes-learning-path/">This is</a> one of my favorites. Even though it's oriented more toward Azure Kubernetes Service, it is a great first step for all of you, no matter the perspective! If you just want to know the basics, days one to five will be enough. If you want to know more, go ahead and check out the whole learning path. Keep in mind, however, that at some point it will lean into AKS - the managed Kubernetes service from Azure. The service itself is good, but it may not be suitable for your use case.</p>
<p>Even though it may seem counterintuitive to mention it at the end, the <a href="https://kubernetes.io/docs/home/">Kubernetes official documentation</a> is high on my list of recommended material. It has everything you need to know about Kubernetes, and it is a great point of reference for everything related to it. Or at least it can be, if you know where to look.</p>
<p>Last on the list is the <a href="https://kubernetes.io/blog/">official blog from Kubernetes</a>, where you can find a plethora of interesting articles related to Kubernetes. There the Kubernetes maintainers and developers explain decisions they made in the Kubernetes design and implementation, releases, changelogs, etc.</p>
<h2>Summary</h2>
<p>The Kubernetes ecosystem sure can be overwhelming, and so can the material explaining it. The most important thing is to keep an open mind, start small, and build on top of that. Everything else will come with time. There is no need to hurry, or to worry that you don't know anything. Nobody does at the beginning.</p>
<p>Let me know in the comments below what you think about the article. Would you add something to the list? Do you have some feedback about the material I recommended?</p>
<p>And subscribe to the list to get more Kubernetes-related articles in your mailbox.</p>

			]]></content>
		</entry>
	
		
		<entry>
			<title>How to Dev*Oops* more?</title>
			<link href="https://wonderingchimp.com/posts/how-to-dev-oops-more/"/>
			<updated>2022-12-05T00:00:00Z</updated>
			<id>https://wonderingchimp.com/posts/how-to-dev-oops-more/</id>
			<content type="html"><![CDATA[
				<p>In the previous article - <em>Why DevOps might be the wrong term?</em> I argued that the fear of failure, and the blame that inevitably follows along, were the actual reasons for the silos between the teams in the first place.</p>
<p>This article dives deeper into the topic of <em>DevOops instead of DevOps</em>. I will concentrate on the blame culture that follows along. Why is it there in the first place? What are the steps to shake those bad foundations and eventually destroy them?</p>
<p>We all blame others when something goes wrong. This is in fact how our brain works, and we are not aware of it.</p>
<p>Sharing where you were wrong, being open about it, and focusing on a systems approach to blame and problem-solving will help us destroy the old foundations and build a culture where people are not afraid to fail.</p>
<h2>You are prone to blame</h2>
<p><a href="https://www.nature.com/articles/srep17390">This study</a> shows that you, and everyone else, are hard-wired to blame others or other circumstances when something goes wrong. We are never to blame.</p>
<p>Why is that? Well, one of the reasons may be the <a href="https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3115647/">fundamental attribution bias</a>. We think that everything people do is a reflection of who they are. We are not even considering other factors that can influence their behavior. At least in most cases.</p>
<p>We all feel at ease when it's not our fault. In that way we don't need to change, others need to change. With this, we add one, or more, bricks to the foundation of the walls of confusion.</p>
<p>Another brick in the foundation may be a biological one. The <a href="https://www.nature.com/articles/srep17390">above-mentioned research</a> shows that positive events are processed differently than negative ones. Positive stuff is processed by the prefrontal cortex, which takes a while and usually concludes that good things happen by accident. Negative events are processed by the amygdala, the one responsible for the <em>fight-or-flight</em> response. That leads our brain to think that bad things happen on purpose. We do this so fast, that we don't even notice that we made an assumption.<a href="https://hbr.org/2022/02/blame-culture-is-toxic-heres-how-to-stop-it">^1</a></p>
<p>That brings me back to the quote I heard some time ago - <em>Never assume anything. When you assume, that makes <strong>ass</strong> from <strong>u</strong> and <strong>me</strong>.</em></p>
<h2>How can you fix this?</h2>
<p>Fixing something we are all biologically or psychologically prone to do is not easy. However, there are some steps you can take to implement a blameless culture and tackle the foundations of the silos. No matter what you call it.</p>
<h3>Share your mistakes</h3>
<p>People at Google are really into a <a href="https://sre.google/sre-book/postmortem-culture/">Blameless Postmortem Culture</a>. The basics are - after each incident they encounter, provided it is serious enough, they write a postmortem - what happened, why, what were the consequences, and what can be learned from it.</p>
<p>All engineers contribute to writing postmortems, and drafts are reviewed by more senior members of the team. They even have a postmortem reading club, where they discuss various postmortems and how to write good ones, and a <em>Postmortem of the month</em> newsletter that highlights an interesting postmortem from that month.</p>
<p>Creating postmortems takes engineers' time - time they could spend on engineering rather than writing reports. That's why it's important to make postmortems part of the workflow. Have criteria for triggering them - you don't want to write a postmortem for every incident you encounter, only the ones that are serious enough.<a href="https://sre.google/workbook/postmortem-culture/">^2</a></p>
<h3>(Proactively) Blame the system</h3>
<p>Instead of focusing on finger-pointing and adding bricks and mortar to the walls of confusion, blame the system. See the failure, or incident, as a system error, not a human error, and work on improving the system. Working on improving the system is key - just blaming the system without any action behind it doesn't help much.</p>
<p>Improving humans (and their behavior) by blaming them gets us nowhere. Instead of asking - <em>Whose fault was this?</em>, try asking - <em>Where did the system break down, and why?</em>. This will move all of us forward, wanting to improve the system.</p>
<p>Seeing the system(s) as faulty and wanting to improve it (them), be it within a team, company, or even at a larger scale, all the while not blaming others, not only destroys the walls of confusion between all of us but helps us build better foundations for the future.</p>

			]]></content>
		</entry>
	
		
		<entry>
			<title>Why DevOps might be the wrong term?</title>
			<link href="https://wonderingchimp.com/posts/why-devops-might-be-the-wrong-term/"/>
			<updated>2022-11-10T00:00:00Z</updated>
			<id>https://wonderingchimp.com/posts/why-devops-might-be-the-wrong-term/</id>
			<content type="html"><![CDATA[
<p>The whole idea of DevOps was to demolish barriers between Developers (the Dev part) and Operations (the Ops part). It ended up mostly relabeling Ops people as DevOps. At least in most cases it did. And the barriers remained.</p>
<p>This article will not delve into the endless battle, what DevOps is, what it represents, or who should do it... We all should be aware of what it represents and &quot;do&quot; DevOps. But that's not the point, at least not in this post.</p>
<p>With this article, I want to concentrate on the reason for the barriers in the first place, and the fact that the ability to fail on all parts of the equation is important, if not the most important aspect of creating something. Whatever we decide to call that something.</p>
<p>I wrote about failure <a href="https://www.wonderingchimp.com/posts/lessons-from-climbing-im-applying-in-life-failure/">before</a>, but from the rock-climbing perspective, and how to apply it in life. This time I want to concentrate on the importance of failure on projects and how that fear might be connected to the barriers between the project teams.</p>
<p>If we don't try, we will not learn. If we fail during that trial, we mustn't beat ourselves up over it, but shift our perspective, learn from it, and try again. If we are afraid of failing, well, this is where the problem begins.</p>
<h2>The foundation of barriers</h2>
<p>These barriers go beyond the physical, real world. One side could say something like - We finished our part, tested it, and it works. Now, if it fails, it's no longer our fault. The other side could respond with - If it fails to run, we know it's not on our side, it must be on the other (their) side.</p>
<p>What is common to both of these sides, despite both of them being human and prone to mistakes? It is the fact that we shift the blame - if it fails, it's not us, it's them. And shifting the blame is only one manifestation of the fear of failure behind it all - if something fails, let's blame the other side; that way, we will not get punished.</p>
<p>This could be the initial spark that had built the walls between teams - the walls of confusion. The walls which we'll all use to throw the blame over. The same spark that remained to kindle, keeping those same walls alive.</p>
<p>And why are we afraid to fail? The initial fear of failure comes from the fear of getting punished. That we are somehow going to be punished if something doesn't work as expected. That the failure might end up in us losing our jobs.</p>
<h2>Shaking the foundation</h2>
<p>Instead of concentrating on tearing down the walls, let's attack the possible cause for them in the first place.</p>
<p>One way to tackle this fear, and start shaking the foundations of the barriers is to <em>accept failure as a normal part of the process.</em> We don't want to shift the blame to others and throw it over the wall. We need, as a team, to be aware of the failure, own it together, and work on the best way to learn from that failure.</p>
<p>With the risk of sounding corny, the catchphrase - There is no I in a team, should be expanded to the failure as well - when we fail, we fail together.</p>
<h2>How can we do that?</h2>
<p>Easy. If failure is something we don't want to see in a production environment, well, too bad, because, at some point, things will fail. Be it the application, the infrastructure behind it, or the whole system.</p>
<p>So, instead of thinking of failure as an exception, we should consider it a normal part of the process, and count on it from the start. We should build our systems with failure in mind. When they fail, <em>when</em>, not <em>if</em>, instead of seeking the blame, let's have mechanisms in place that will help us quickly recover from them, and learn from them for future iterations.</p>
<p>Yes, you might think - it's easier said than done, and what does this guy know about it in the first place? Well, to be honest, it is easier said than done, but that shouldn't be the reason not to do it. And what do I know about it? Not much, but whenever I or someone else tried shifting the blame, it didn't end well.</p>
<p>Thinking of failure as a normal part of the process can be one step toward tearing down the barriers. Barriers that are still there, although not (wanted to be) seen. Or even called differently.</p>
<p><strong>Instead of DevOps, we should have called it Dev<em>Oops!</em> With the accent on <em>oops!</em> And consider that <em>oops!</em> an expected part of the process.</strong></p>
<h3>And what if I'm wrong?</h3>
<p>I might be. But if you end up considering failure as a normal part of the process and you build around that, I would count that as a win for everyone nevertheless.</p>

			]]></content>
		</entry>
	
		
		<entry>
			<title>How Kubernetes determines which pods to terminate in case it is running out of resources?</title>
			<link href="https://wonderingchimp.com/posts/how-kubernetes-determines-which-pods-to-terminate-in-case-it-is-running-out-of-resources/"/>
			<updated>2022-10-27T00:00:00Z</updated>
			<id>https://wonderingchimp.com/posts/how-kubernetes-determines-which-pods-to-terminate-in-case-it-is-running-out-of-resources/</id>
			<content type="html"><![CDATA[
<p>There are two ways to set resource limits and requests on Kubernetes. The first is with the <em>LimitRange</em> resource at the namespace level, and the second is to set them directly on the resources you are running - pods, deployments, or stateful sets.</p>
<p>The first approach is better if you want to make sure that all resources within a specific namespace get default CPU and memory limits and requests when none are specified. Additionally, with this approach you can configure minimum and maximum resource constraints on a namespace - ensuring that resources running within it can only have limits and requests up to a certain value.</p>
<p>The second approach is to set them directly at the service level (pod, deployment, or stateful set) under the <code>spec.containers[].resources</code> path. This approach gives you more control over the resource, and it works well in combination with the first approach - set default, maximum, and minimum values for CPU and memory at the namespace level, and put application-specific resource configuration at the service level (pod, deployment, or stateful set).</p>
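<p>To make the two approaches concrete, here is a rough sketch - the namespace, names, images, and values are all illustrative, not recommendations:</p>

```yaml
# Approach one: namespace-level defaults and bounds via LimitRange.
apiVersion: v1
kind: LimitRange
metadata:
  name: default-resource-bounds
  namespace: demo            # hypothetical namespace
spec:
  limits:
    - type: Container
      defaultRequest:        # applied when a container sets no requests
        cpu: 250m
        memory: 128Mi
      default:               # applied when a container sets no limits
        cpu: 500m
        memory: 256Mi
      min:
        cpu: 100m
        memory: 64Mi
      max:
        cpu: "1"
        memory: 512Mi
---
# Approach two: configuration set per container on the workload itself.
apiVersion: v1
kind: Pod
metadata:
  name: demo-app             # hypothetical name
  namespace: demo
spec:
  containers:
    - name: app
      image: nginx:1.25      # illustrative image
      resources:
        requests:
          cpu: 250m
          memory: 128Mi
        limits:
          cpu: 500m
          memory: 256Mi
```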
<p>Now, the point of this article is not how to configure either of these approaches - the instructions for that are great and can be found in the official <a href="https://kubernetes.io/docs/tasks/configure-pod-container/assign-cpu-resource/#what-s-next">Kubernetes documentation</a>.</p>
<p>Here, I want to discuss why setting these requests and limits is important, and what Kubernetes does to rank your applications for termination in case it runs out of available memory and CPU.</p>
<h2>How does CPU limitation work?</h2>
<p>The CPU resource is measured in CPU units. One CPU, in Kubernetes, is equivalent to one hyperthread on a bare-metal processor with Hyperthreading, one AWS vCPU, one Azure vCore, and one GCP Core.<a href="https://kubernetes.io/docs/tasks/configure-pod-container/assign-cpu-resource/#cpu-units">^1</a> CPU limits and requests are there to help with the adequate usage of available CPU resources on a Kubernetes cluster.</p>
<p>When you set CPU limits and requests, you define a hard ceiling on how much CPU that container can use (the limit) and a guaranteed share, which also acts as a relative weight, of CPU for the container (the request).</p>
<p>And this is really interesting - if a container exceeds its CPU limit, it might or might not be allowed to do so for extended periods of time. However, container runtimes don't terminate Pods or containers for excessive CPU usage.<a href="https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#how-pods-with-resource-limits-are-run">^2</a></p>
<h2>How does memory limitation work?</h2>
<p>Memory limits and requests are measured in bytes. You can express memory as a plain integer or as a fixed-point number using one of these quantity suffixes: E, P, T, G, M, k. You can also use the power-of-two equivalents: Ei, Pi, Ti, Gi, Mi, Ki. The important thing to note here is the suffixes. If you request 400m of memory, that is a request for 0.4 bytes. Someone who types that probably meant to ask for 400 mebibytes (400Mi) or 400 megabytes (400M).<a href="https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#meaning-of-memory">^3</a></p>
<p>Why do we need these requests and limits? Because we want to ensure that the resources of our Kubernetes cluster are used in the most efficient way.</p>
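<p>A quick illustration of that pitfall in a container spec fragment (the surrounding pod fields are elided):</p>

```yaml
resources:
  requests:
    memory: 400Mi   # 400 mebibytes - almost certainly what you meant
    # memory: 400M  # 400 megabytes (decimal suffix)
    # memory: 400m  # 0.4 bytes - a classic typo, not a memory amount
```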
<p>If a container tries to allocate more memory than its limit, the Linux kernel out-of-memory subsystem activates and, typically, intervenes by stopping one of the processes in the container that tried to allocate memory. If that process is the container's PID 1, and the container is marked as &quot;restartable&quot;, Kubernetes restarts the container.<a href="https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#how-pods-with-resource-limits-are-run">^4</a></p>
<p>So, to summarize - if a container exceeds its CPU limit, it might or might not be allowed to do so, but in general, it will not be restarted by the container runtime. If it exceeds its memory limit, it will be restarted at some point by the container runtime.</p>
<h2>How do QoS classes fit the picture?</h2>
<p>Kubernetes Quality of Service (QoS) classes are used to control the scheduling and eviction of Pods, based on which class they belong to. And to determine which class a Pod belongs to, Kubernetes uses its CPU and memory requests and limits! In other words, Pods that belong to different classes are treated differently when Kubernetes worker nodes run out of resources. This is okay, because we are speaking here of lifeless applications.</p>
<p>So, those classes are the following:</p>
<ul>
<li><em>Guaranteed</em> - this class ensures that Pods get top priority, and they remain running until they exceed their limits. To be classified as <em>Guaranteed</em>, every container in a Pod needs to have both memory and CPU limits and requests, and those resource-specific limits and requests need to match each other. In other words - <code>memory.limit</code> == <code>memory.request</code> and <code>cpu.limit</code> == <code>cpu.request</code>.</li>
<li><em>Burstable</em> - Pods in this class have some minimal resource guarantee, but can use more resources when available. A class with middle priority. To be classified as <em>Burstable</em>, the Pod must not meet the criteria for <em>Guaranteed</em>, and at least one of its containers needs to have a CPU or memory limit or request.</li>
<li><em>BestEffort</em> - this class has the lowest priority, and Pods in this class will be the first ones to go. To be part of this class, the containers in the Pod mustn't have any CPU or memory requests or limits.</li>
</ul>
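<p>As an illustration (the Pod name and image are made up), a Pod lands in the <em>Guaranteed</em> class when every container sets requests equal to limits for both CPU and memory:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: guaranteed-demo
spec:
  containers:
  - name: app
    image: registry.example.com/app:1.0   # hypothetical image
    resources:
      requests:
        cpu: 500m
        memory: 256Mi
      limits:
        cpu: 500m        # equal to the request
        memory: 256Mi    # equal to the request
</code></pre>
<p>You can check the class Kubernetes assigned with <code>kubectl get pod guaranteed-demo -o jsonpath='{.status.qosClass}'</code>.</p>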
<h2>Why am I mentioning all of this?</h2>
<p>Well, to start with, it's good to know the Kubernetes-specific priority of your applications within a cluster. If some applications got restarted or evicted because worker nodes ran out of resources, and others didn't, it's good to know why.</p>
<p>The difference between CPU and memory limits is good to know - when a container exceeds its CPU limit, it might or might not be allowed to keep doing so, but it won't be restarted; however, if it exceeds its memory limit, it will get restarted by the runtime at some point.</p>
<p>Last, but not least, it's good to configure resource limits and requests on your Kubernetes cluster, so the resources are used in the most efficient way there is.</p>
<h2>More information about the Kubernetes QoS</h2>
<ul>
<li><a href="https://kubernetes.io/docs/tasks/configure-pod-container/quality-service-pod/">QoS</a></li>
<li><a href="https://kubernetes.io/blog/2021/11/26/qos-memory-resources/">QoS Memory</a></li>
</ul>

			]]></content>
		</entry>
	
		
		<entry>
			<title>Is global warming a known system behavior?</title>
			<link href="https://wonderingchimp.com/posts/is-global-warming-a-known-system-behavior/"/>
			<updated>2022-10-13T00:00:00Z</updated>
			<id>https://wonderingchimp.com/posts/is-global-warming-a-known-system-behavior/</id>
			<content type="html"><![CDATA[
<p>We all know, or at least should know, the scientific explanation of climate change - global warming - an increase in Earth's air and ocean temperature. And this is bad because global warming is causing ice sheets and glaciers to melt. The melting ice is causing sea levels to rise at a rate of 2 millimeters per year. The rising seas will eventually flood low-lying coastal regions. Entire nations, such as the Maldives, are threatened by this climate change. And that's not all. Global warming causes trouble for the oceans and their salinity, for nature and various habitats - for the whole Earth, to be exact. <a href="https://education.nationalgeographic.org/resource/pollution">^1</a></p>
<p>The main cause of global warming is the emission of greenhouse gases (GHG for short). The most notable of them are carbon dioxide (CO2), methane (CH4), and nitrous oxide (N2O). Mentioning these chemical compounds brings back some memories from school and chemistry classes. I didn't like chemistry then. I'm not sure how I feel about it today though.</p>
<p>These are all facts and proven reasons for climate change. There is one additional perspective to this, the perspective from the Earth as a system, and a couple of questions I want to answer - how did we get here, and why did we do it?</p>
<p>How did we get here? Okay, we, humans, have increased the emission of GHGs into the atmosphere - that is a fact, and that is how we got to this point.</p>
<p>Why did we do it? Well, so as not to dwell on some philosophical or moral answer: there is a trap into which working, functioning systems can fall, and since the Earth is a system of its own, it applies here too. The name of this trap is <em>the tragedy of the commons</em>.</p>
<h2>Earth as a System</h2>
<p>It can be said that the Earth acts as a natural thermostat. It receives heat from the Sun, some of it is radiated back into space by clouds and ice, and the other, bigger, part is absorbed by the land, ocean, and atmosphere. This absorbed energy heats our planet.</p>
<p>As the Earth heats, its energy is radiated into the atmosphere where much of it is absorbed by water vapor and long-lived GHGs. When this energy is absorbed, these water and GHG molecules turn into tiny heaters that radiate heat in all directions. The heat that is radiated back toward the Earth is increasing the temperature in the lower atmosphere and the surface, and in that way, it enhances the heating from the sunlight.</p>
<p>This is why the Earth's temperature is (still) at a comfortable level, and this effect is called the natural greenhouse effect. This is how the Earth acts as its own natural thermostat.<a href="https://earthobservatory.nasa.gov/features/GlobalWarming">^2</a></p>
<p>This natural thermostat can be seen as a system - a structure that contains a set of elements that are organized and interconnected with a specific purpose. Its purpose is to make life on Earth possible and comfortable for all species living there, both flora and fauna. Since (almost) everything that surrounds us can be seen as a system, the Earth and its temperature are not any different. This, however, is a greatly simplified overview of the system I had in mind.</p>
<p><img src="https://www.wonderingchimp.com/content/images/2022/10/earths-temperature-figure-1.jpg" alt="A simple flow diagram showing Earth's temperature as a central process. A cloud shape on the left represents the Sun, connected by an arrow labeled &quot;Heat from the Sun&quot; pointing into a rectangle labeled &quot;Earth's Temperature&quot;. From that rectangle, an arrow labeled &quot;Heat to the atmosphere&quot; points right into a second cloud shape, representing the atmosphere." title="Earths temperature - figure 1"></p>
<h2>Earth as a Common Resource?</h2>
<p>Now, why do I mention this, and why do I show my perspective on the Earth's temperature as a system? Because we are on our way to falling into a trap that is a result of a system's behavior.</p>
<p>Let's think of it this way - imagine a village where there is one common pasture for all the cattle in the village. This common pasture is enough to fulfill the necessities of all the cattle in the village for the whole period of grazing. It manages to grow during the off-grazing period (e.g. late autumn, winter, and early spring, my guess). And this common pasture is there at the service of villagers for years and years.</p>
<p>Then, one day, John decides to add ten more heads of cattle. Mary sees this and decides, oh well, since John is doing it, she might as well add five to her herd, why not? And this goes on, and on, until the common pasture that was once able to fulfill the needs of all the cattle in the village is devastated by their increased number. Because of this increase, the pasture is not able to grow back to the same extent, and it deteriorates.</p>
<p>The structure of this system makes selfish behavior much more convenient and profitable than the behavior of being responsible for the whole community and the future. This is why we fall into this trap - by being selfish and short-sighted!</p>
<p>This is what is happening to Earth and what has been happening to it for quite some time. We, humans, consider the Earth's climate, not just temperature, but everything that surrounds it, the whole Earth, as commons - a resource that is available to all of us, to use at our control and will, and it will be there forever.</p>
<p>This is where we are wrong, however - the Earth is not only available to us, rather, it belongs to all species on Earth, and all future generations of those species. And almost all resources on Earth are scarce.</p>
<h2>Is there a way out?</h2>
<p>Of course, as in most situations, there are certain ways we can get out of this trap that systems can fall into - the tragedy of the commons. Some apply to the commons I'm describing here, and some are less applicable. Let's see.</p>
<p><strong>Educate and encourage.</strong> We need to help people see and understand the consequences of their behavior. We are on a good track there, but more can be done, and needs to be done! We as a species need to understand that the Earth's resources are finite, not infinite.</p>
<p><strong>Privatize the commons.</strong> You might think of this as a bit of a controversial opinion, but let me help you understand better what is meant by it. The privatization of the commons is not applicable to the whole Earth(!), however, it is applicable on a smaller scale (e.g. the pasture from the example above, or some other common resource) - divide the common resource in a way that each person impacting it can see the consequences of that impact directly. Let's take the common pasture from above as an example - if we divide it equally among the villagers, they will be able to see and feel the direct consequences of adding more heads of cattle.</p>
<p><strong>Regulate the commons.</strong> We don't do enough of this! To escape this trap, regulations need to be there - meaningful regulations - but also a way of ensuring those regulations are being followed. For example - don't say - We pledge to reach net zero emissions by year XYZ (input desired year). Instead, show what you are doing to get there, how, which regulations you are planning to change or improve, and most importantly - what will be done if those goals are not achieved? The whole &quot;Net Zero By XYZ&quot; stories seem like false promises, without any legal consequences.</p>
<p>However, those consequences are more than real!</p>
<h3>Notes</h3>
<p>The inspiration for this article came from a book I read recently - <a href="https://app.thestorygraph.com/books/d8756cac-1bbb-4a43-b38f-65b044bb9a03">Thinking in Systems</a> by Donella Meadows. It is about systems thinking, from the basics to some more advanced aspects. Writing this is also a way for me to learn more about the topic and to review what I have read there.</p>
<p>The trap I'm writing about here - the tragedy of the commons is described in part two of the book, in chapter 5 - &quot;System Traps... and Opportunities&quot;.</p>
<p>Let me know in the comments below what you think about the article, does it make sense, and if not, why? I'm curious to see other people's perspectives on this and eager to learn more about the systems and their behavior.</p>

			]]></content>
		</entry>
	
		
		<entry>
			<title>The Reading Serendipity</title>
			<link href="https://wonderingchimp.com/posts/the-reading-serendipity/"/>
			<updated>2022-09-01T00:00:00Z</updated>
			<id>https://wonderingchimp.com/posts/the-reading-serendipity/</id>
			<content type="html"><![CDATA[
<p>After some research, I found a somewhat appropriate term for it - serendipity. The dictionary says this about serendipity - <em>the faculty or phenomenon of finding valuable or agreeable things not sought for</em><a href="https://www.merriam-webster.com/dictionary/serendipity">^1</a>. Since I've noticed this during reading, I will call it the reading serendipity.</p>
<p>Now, let me tell you about some of those serendipitous situations that happened to me while I was reading.</p>
<p>Some time ago I was feeling kind of lost. Both work and private life were going well, but I wasn't sure about the direction I was going in. I wasn't trying to do any research on how to explore and counter this feeling - I didn't have the strength for that. I felt like a deflated balloon. Then, an interesting thing happened. Just when I was feeling that way, I stumbled upon a chapter in a book I was reading at the time. It was about how to develop a high sense of purpose. I got so immersed in that chapter, I devoured it completely. And voila! Without even looking for it, I found the direction. Or at least the motivation to look for a direction.</p>
<p>It was an interesting feeling of reading about the exact thing I needed the most, even though I wasn't looking for it, in a sense.</p>
<p>For some time I had been thinking about starting to journal every day. I never got the proper push for it. I was often like - okay, I need a proper setting, I need something different, blah, blah, blah. Then, I received a newsletter with an article about stoicism and how it is an ancient remedy for the modern age. I started reading it, and after a few lines, I thought - this was just the thing I needed! The pattern was the same as above - I devoured the article, even re-read it a couple of times, to make sure I had got the point. And voila! Without me even looking for it, I started my mornings with journaling. It's been going on for some time now, and I enjoy it!</p>
<p>Quite a while ago, I wasn't able to structure my training. I was jumping from one exercise to another, looking for progress and structure. I read some articles and books that I thought could help me structure the training sessions properly, with an adequate amount of exercise and rest in between. I thought I had found it, so I put together some sort of structure and started following a program from an application - religiously. After some time I felt that this didn't work for me - I was going to sessions so focused on exercises and my phone, and timers and all that, that I lost that joyous feeling I had when climbing. I was often too tired to try climbing routes after that amount of training.</p>
<p>After some time, I stumbled upon an article explaining the importance of climbing exercises, keeping things simple, and rest in climbing. I am not sure if it was an article or a chapter from a book, but I am certain of the outcome of receiving that information. I completely restructured my training sessions, with a focus on climbing. I have certain exercises that I do besides just climbing, but I keep them as simple as I can. This helped me a lot! And, without me even looking for it. Or once I stopped looking for it.</p>
<p>Maybe this is reading serendipity, maybe it's not, however, I find it quite convenient and I like to notice it. The important thing is that sometimes, you need to stop searching for an answer to find the answer you were searching for all along. And never stop reading!</p>
<p>See you in the next post!</p>

			]]></content>
		</entry>
	
		
		<entry>
			<title>The Laws of Human Nature by Robert Greene - should you read it?</title>
			<link href="https://wonderingchimp.com/posts/the-laws-of-human-nature-by-robert-greene-should-you-read-it/"/>
			<updated>2022-08-25T00:00:00Z</updated>
			<id>https://wonderingchimp.com/posts/the-laws-of-human-nature-by-robert-greene-should-you-read-it/</id>
			<content type="html"><![CDATA[
				<p>As some of you might already know from the <a href="https://www.wonderingchimp.com/monthly-roundup-july-22/">July update</a>, I have been reading <a href="https://www.goodreads.com/book/show/39330937-the-laws-of-human-nature">The Laws of Human Nature</a>, and I finally finished it! And by finally, I don't mean it like that boring finally, when you sigh a bit - oh, finally. I mean - finally, but in an exciting sense! And in the following lines, I'll try to translate that excitement into words.</p>
<h2>About the author</h2>
<p>Wikipedia says the following about Robert Greene: an American author, born on May 14, 1959, who writes books on strategy, power, and seduction. He has written six international bestsellers: The 48 Laws of Power, The Art of Seduction, The 33 Strategies of War, The 50th Law (with rapper 50 Cent), Mastery, and The Laws of Human Nature.</p>
<p>The Laws of Human Nature is his sixth book, published in 2018. It's about people's conscious and unconscious drives, motivations, and cognitive biases.<a href="https://en.wikipedia.org/wiki/Robert_Greene_(American_author)">^1</a></p>
<h2>Book overview</h2>
<p>In short, this book is big. It is around 600 pages long. But don't let that demotivate you! I think that this book is awesome! One of the best I have read so far. It is also one of the rare books I have interacted with the most - I filled quite a few pages with notes while reading it. The important thing is that you don't need to read this book in one go. I did, because I was so curious about what the next chapter would bring. And you don't need to read the chapters in order - it would be good, but you don't have to - you can jump around.</p>
<p>This book contains 18 chapters. Each one of them follows this specific structure.</p>
<ul>
<li>The first part is a story about a certain human trait that some important person from history had. There are stories about Coco Chanel, Pericles, Howard Hughes, Queen Elizabeth (not the current one), John D. Rockefeller, and so on.</li>
<li>Next, the author goes on to explain the human trait described in the story at the start of the chapter, with key specifics, what to take into account, what to be aware of, and how to interact with certain personality types.</li>
<li>The third part is reserved for how to improve yourself, and how to be a better human. For example, if you notice some narcissistic traits in yourself, the author explains how to be aware of them and how to use them in the best way for you and your environment.</li>
</ul>
<h3>(Short) description of each chapter</h3>
<p>Now to the important part - what are the chapters about?</p>
<p>The first chapter is about irrationality - how we humans tend to be a bit irrational, and how to be a more rational self. How to cultivate your inner Athena. I'm not going to describe this term here; I think you'll find a much better description in the book itself.</p>
<p>The second part is about narcissism. This chapter describes, among many other things, the root causes of narcissistic behaviour, different narcissistic types, and how to cultivate more empathy.</p>
<p>The third chapter is the one about the role-playing aspect of human nature - what are the aspects of it, and some basics of how to be better in the art of role-playing.</p>
<p>The fourth one is about compulsive behaviour - what decides a person's character, how to see more of people's character, what some toxic types are, and how to cultivate a <em>superior character</em>.</p>
<p>The fifth part covers the law of covetousness, a synonym for envy - what its root cause is, some strategies for stimulating desire, and in the end the <em>supreme desire</em> - how to use it to your advantage and become a better human.</p>
<p>The sixth part is about shortsightedness - how to notice the signs, and how to overcome it.</p>
<p>The seventh chapter is about the defensiveness of human nature, how to make others less defensive during interactions with you, how to be more flexible in communication, and so on. Really interesting chapter.</p>
<p>The eighth part is about self-sabotage - describing both negative and positive attitudes, and how to cultivate the latter.</p>
<p>The ninth is the law of repression, the dark side in us all - what some of the dark traits we all have are, how to find yours, how to be aware of it, and how to express it only up to an adequate point.</p>
<p>The tenth part is about envy - how to test for it, what are envier types, envy triggers, and how to go <em>beyond envy</em>. This chapter is connected to the fifth one, but it explores envy from a different perspective.</p>
<p>Chapter eleven is about grandiosity - what it is, why people feel this way, and how to cultivate <em>practical grandiosity</em>.</p>
<p>Chapter twelve is about gender rigidity, anima and animus, gender projections, how to cultivate both masculine and feminine thinking, and styles of action.</p>
<p>The thirteenth chapter is about the law of aimlessness and strategies for developing a high sense of purpose - how to find the inner compass, and awareness of so-called false purposes.</p>
<p>Part fourteen is about conformity, and how to resist the downward pull of the group. What are some group types, and how to be part of the <em>reality group</em>.</p>
<p>Chapter fifteen is about the law of fickleness - how to be a better leader, and how to cultivate inner authority.</p>
<p>Part sixteen is about human aggression - what its source is, where aggressive energy goes, how to counter it, and how to develop controlled aggression.</p>
<p>Chapter seventeen is about generational myopia - how to be aware of the generation shifts and what each generation can bring to the world.</p>
<p>The last chapter is about the law of death denial - the paradoxical death effect - how we become more aware of it when we either brush past it or encounter it in people close to us. It contains how to form a philosophy of life through death.</p>
<h2>Summary</h2>
<p>As I mentioned, this book is full of different concepts - some you may know, some you may be hearing of for the first time. The chapters I enjoyed were the first, second, fifth, sixth, eighth, ninth, tenth, eleventh, thirteenth, sixteenth, seventeenth and the last one. Now that I see it, I should've gone in the other direction - noted the chapters I didn't enjoy as much.</p>
<p>If I could sum up in one sentence what I have learned from this book, it would be this - always look inward for answers, and don't be afraid of them.</p>
<p>Now, should you read it? Yes - it will help you learn a lot about us humans and show you some different perspectives you (maybe) weren't aware of.</p>

			]]></content>
		</entry>
	
		
		<entry>
			<title>Lessons from climbing I&#39;m applying in life - problem-solving</title>
			<link href="https://wonderingchimp.com/posts/lessons-from-climbing-i-m-applying-in-life-problem-solving/"/>
			<updated>2022-08-18T00:00:00Z</updated>
			<id>https://wonderingchimp.com/posts/lessons-from-climbing-i-m-applying-in-life-problem-solving/</id>
			<content type="html"><![CDATA[
<p>Problem-solving is the skill, and the eagerness, to solve the problems you encounter. You might hear others call these problems challenges, or chances. In the end, whatever your preference, the goal is the same - solve the problem in front of you. If you want to dive deeper into problem-solving, just have a look at its <a href="https://en.wikipedia.org/wiki/Problem_solving">Wikipedia page</a>.</p>
<p>This blog post will not, by any means, be as long as the Wikipedia page on problem-solving. It will probably contain some of the problem-solving strategies or methods described there, but written in some other context. Here, I plan to continue rambling about <a href="https://www.wonderingchimp.com/tag/climbing/">life lessons I've learned from climbing</a>.</p>
<h2>Problem-solving in practice</h2>
<p>In essence, besides being the sport of endless failures, climbing is also a sport of endless practice of problem-solving.</p>
<p>What do I mean by that? Well, every time you approach some route, be it a boulder, sport climbing, longer multi-pitch, or traditional route, you are tasked with a problem. This problem is - I want to reach the top of that route, and I have so many obstacles in front of me. Some of them I can see, and some of them not. How do I overcome these obstacles?</p>
<p>Here we can use different approaches. First, we can try to imagine hand and foot placements - where does the left hand go, where should I place my right hand, and so on. What happens if we cannot do a specific move? We look for alternatives, often the most creative ones.</p>
<p>If the top of the route is nowhere near, we might go and see how others did it. We can ask for advice, or let some person give us that advice without us even asking (a.k.a. a beta sprayer). If no one has reached the top, we might join some relaxed brainstorming session(s) and see what others think of it and what their approach would be.</p>
<p>We might try to split the route into multiple smaller parts, figure them out one by one, and in the end connect them all. And, if we are more advanced, we can visualize ourselves on the route, visualize the exact moves we are going to do, maybe even simulate the moves with our body, and then apply them to the route itself.</p>
<p>Possibilities are many.</p>
<h2>What can you learn from all of this?</h2>
<p>These possibilities of problem-solving we are continuously encountering in climbing can easily translate to life. Here are just some of the examples. Feel free to use them as you see fit, or just as examples to guide you.</p>
<p>Let's say we want to pass an exam. And we need to study, a lot! The first thing to do, after the episode of despair over having so much to learn and cover, is to step back and try to split the learning material into meaningful, smaller parts. Then, we can start completing the parts one by one. If there is some section we don't understand, we can go and ask somebody for help - don't be afraid to ask for support! If you find others having trouble understanding the same things as you - go ahead and discuss those concepts with them, and try to understand them together. Last but not least - you might find that the learning approach you initially thought of doesn't suit you, so you can spend some time finding new ways to go through the material and pass the exam.</p>
<p>The next example comes from work life. Imagine you have some report to complete, but you don't have the slightest idea of what the content of that report should be. Yeah, it happens to all of us! Here, you often try to find some similar approaches - in other words, how others did it. If you are not able to find any, then you ask around - what info should or shouldn't be part of it. I'm repeating myself but, don't be afraid to ask for help! If nobody can help you, then try to find some other, creative approach to complete the necessary report. Maybe you can include a relevant graph, or some science-backed facts with footnotes to sources. Possibilities are endless, we are just choosing how to use them.</p>
<h2>Summary</h2>
<p>Following is my list of the problem-solving approaches learned from climbing. I try to practice them daily.</p>
<ul>
<li>If something doesn't work - try a different thing. Fail often, and fail fast.</li>
<li>Visualize the solving process.</li>
<li>Don't be afraid to ask for help.</li>
<li>Discuss similar problems with others.</li>
<li>Split the bigger problems into smaller, chewable chunks.</li>
</ul>
<p>Thank you for reading the blog post. I am really curious to see what problem-solving skills you've learned and are applying in life. They don't have to be climbing-specific. Feel free to write them in the comments below, or send me an e-mail, I am more than happy to learn about other people's ideas and approaches to problem-solving.</p>
<p>Until the next post!</p>

			]]></content>
		</entry>
	
		
		<entry>
			<title>Lessons from climbing I&#39;m applying in life - failure</title>
			<link href="https://wonderingchimp.com/posts/lessons-from-climbing-i-m-applying-in-life-failure/"/>
			<updated>2022-08-04T00:00:00Z</updated>
			<id>https://wonderingchimp.com/posts/lessons-from-climbing-i-m-applying-in-life-failure/</id>
			<content type="html"><![CDATA[
				<p>As you may know, I'm very much into rock climbing. If you didn't know that, well, I guess I haven't bothered you enough with it, yet. I was first introduced to climbing almost ten years ago, and it is the sport, alongside occasional running streaks, that I kept practicing on, and on, and on. Almost daily.</p>
<p>When I think of the things that kept me attached to rock climbing for so many years, performance certainly isn't one of them. What kept me going back to it are the lessons I have learned during my process of climbing, lessons I learned while climbing, sitting in nature, chilling with friends, walking towards a climbing route... Lessons I try to apply in life, almost like training for climbing - daily.</p>
<p>The first of the many lessons I learned is that of failure.</p>
<p>All we do in climbing is fail, amongst many other things. We do it daily - not being able to hang from some edge while training, or not being able to climb a specific route or do specific moves. The possibilities for failing here are endless! And often with interesting results - if you fall from a route, you experience a direct rush of adrenaline that comes with that failure. Some say that climbing is about 99% failing. What impacts the other 1% is how we deal with that 99%.</p>
<p>What keeps people going through these failures? This I don't know, I guess it is different for each one of us. I'm amazed by all those world-class climbers trying unbelievable climbing routes and failing, continuously, just to succeed one time. And not just world-class climbers - I'm amazed by every one of us who persists (spoiler alert!).</p>
<p>What keeps me through all these failures?</p>
<p>Well, at first, it was the frustration - how can somebody do it and I can't?! This kept me going for a while, in the beginning, at least. I was trying the routes, doing the training, because somebody else did it.</p>
<p>Then came the acceptance - failure is a normal, even expected, part of it! So I failed, constantly, and every day. The thing that started happening was that I learned something from each failure. Either my foot was too high, or I squeezed my hands too much, or my balance was off, and so on. That helped me see every failure to do something - not just failure in climbing - as a learning opportunity. It kept me centered and prevented me from feeling down after a failure.</p>
<p>I learned to see failure as a building block for improvement.</p>
<p>P.S. Good climbing performance is still to come! 🤞</p>

			]]></content>
		</entry>
	
		
		<entry>
			<title>I am an intellectually obese person</title>
			<link href="https://wonderingchimp.com/posts/i-am-an-intellectually-obese-person/"/>
			<updated>2022-07-21T00:00:00Z</updated>
			<id>https://wonderingchimp.com/posts/i-am-an-intellectually-obese-person/</id>
			<content type="html"><![CDATA[
				<p>To be honest, I found myself in a really strange position the last couple of weeks - I was thinking of writing something about how we tend to overshare things on social media, and how we need to concentrate more on the quality rather than the quantity of information. I started writing about it, then I stopped and tried to re-write it, but it didn't seem good to me at the time.</p>
<p>Then, luckily, I stumbled upon a great article by Gurwinder <a href="https://twitter.com/G_S_Bhogal">@G_S_Bhogal</a> about intellectual obesity, which had a great impact on me. It got me thinking about how I process information online, and how I share it. It motivated me to wrap my head around everything I had been thinking of and to write this blog post as a way of providing my point of view, my experience so far, and some decisions I have made along the way.</p>
<h2>What is intellectual obesity?</h2>
<p>It is a really convenient term, if I might add, coined by the author, and it means that we are consuming a vast amount of information which, like junk food, makes us obese. But the part of our body that becomes obese is our brain.</p>
<p>As you all know, we live in an information age where information is around every corner, a click of a button away, and most of it is of low quality (read: junk). Our mind processes that information like it processes sugar - it gets a dopamine rush with each consumption, and it always wants more. This ends up flooding our brain with information, most of it unnecessary, making us unable to think things through, learn something new, reflect...</p>
<h2>Am I an intellectually obese person?</h2>
<p>As the title says - yes, I am. And let me just give a brief explanation of why.</p>
<p>First, my line of work expects you to always be in the loop, updated on the latest trends, technologies, and so on. And second, there is my private life and my endless curiosity about various things. If you read my blog, you've seen me jump from one topic to another.</p>
<p>This means that I'm overwhelmed with various information daily - how to do this, how to fix that, what is the best training plan, what book should I read next, oh wait, there is this podcast I should listen to... I'm not sure how you deal with it, but I think I need to find a new way of coping with the information overload!</p>
<p>I need time to think, reflect, and learn about the things I read, listen or watch.</p>
<p>The information overload often leaves me tired, not able to think things through, and missing some important stuff. I feel like a deflated balloon.</p>
<h2>Things we can do</h2>
<p>As described in the linked article below, there are a few things we can do to keep our minds healthy when it comes to junk information - the things I will incorporate into my own way of handling things.</p>
<ul>
<li>Use the 10-10-10 rule. Before clicking on something, ask yourself whether the information is useful to you, and how you will feel about it in 10 minutes, 10 months, and 10 years. If you don't get a straight answer to any of these questions, discard the information. In other words - when you search for some topic, or for how to solve a specific problem, type it into your web search and slowly go through the results; don't open each result in a separate tab and scroll through them like the house is on fire. Except if your house is on fire and you are searching for the firefighters' number.</li>
<li>Limit the use of junk information sources. Endless scrolling never did anyone any good in the end; it just leaves that feeling of emptiness and tiredness, with occasional outrage or anger. We don't need that.</li>
<li>Write. Write about the things you have read, listened to or watched. It will help you better understand them, find some other perspectives, and define your opinion on the matter.</li>
</ul>
<h2>How will my intellectual diet look?</h2>
<p>First, I will start by being aware of the information I'm consuming and its impact on my life, in both the short and the long term. This will include not clicking on content just for the sake of it without properly reading it, cleaning out my e-mail inboxes, and unsubscribing from all of those newsletters I &quot;accidentally&quot; subscribed to (heads up - this doesn't mean that you should unsubscribe from this blog).</p>
<p>Next, I'll limit my time on social media. I was never so keen on social media - if it weren't for this blog, I'd probably deactivate my accounts - but I will filter the information out, and apply the same thing to the blog as well.</p>
<p>Third, and last - I will for sure write more! This really got me motivated to write, and I will continue to do so, but probably not in the same manner as before - I'll limit my blog posts to once or twice a month.</p>
<p>Again, I encourage you to read the article shared below to get a deeper explanation of what intellectual obesity is and how it impacts our brain. I will finish with the quote from that article that propelled me into action.</p>
<blockquote>
<p>And when you notice the myriad holes that all this junk has left in your memory, then it’ll finally be clear that you weren’t consuming it as much as it was consuming you.</p>
</blockquote>
<p>You can find it in <a href="https://www.gurwinder.blog/p/the-intellectual-obesity-crisis">this article</a>.</p>

			]]></content>
		</entry>
	
		
		<entry>
			<title>Things I do to be more focused and productive</title>
			<link href="https://wonderingchimp.com/posts/things-i-do-to-be-more-focused-and-productive/"/>
			<updated>2022-06-25T00:00:00Z</updated>
			<id>https://wonderingchimp.com/posts/things-i-do-to-be-more-focused-and-productive/</id>
			<content type="html"><![CDATA[
<p>And who am I to tell you all this? Well, to be honest, nobody special, at least not yet. I consider myself a lifelong learner, I like my job, I like my personal life, and so far I'm in a good position.</p>
<p>When I first immersed myself in the world of productivity, the same as with <a href="https://www.wonderingchimp.com/posts/yes-there-are-more-ways-to-take-notes/">note-taking</a>, I found out there was a plethora of books, articles, tools, and techniques. I was so involved, that I often thought that I was spending more time learning and trying some new and hype productivity tool or technique, than getting things done. I am glad that phase has passed (I hope 🤞).</p>
<p>Going through all of this material, some (read: many) of it long forgotten, I somehow naturally started using some of the techniques for sorting out my priorities, organizing my time, and staying more productive and present.</p>
<p>I see productivity as an ability to achieve something effectively and efficiently, without that being harmful to the quality of the thing we do. How does focus fit into the picture? Well, quite naturally - to be more productive, you need to be more focused on the thing you do. More present. <em>Don't half-ass it</em> - as <a href="https://www.youtube.com/watch?v=yGU-vRWa5Zs">Matthew McConaughey’s dad would say</a>.</p>
<p>The best way for me to be or stay productive and focused is to organize my thoughts and my time as much as I can. I write down my thoughts, those thoughts often become some new things I want to try out, check, learn, and so on. And after that, fully focused, I go into doing those things.</p>
<p>Now, enough jibber-jabber, let me describe the things I do to be more productive, and the things that keep me focused on, well, other things. The list is as follows:</p>
<ul>
<li>Pomodoro technique</li>
<li>Eisenhower matrix</li>
<li>keeping a schedule</li>
<li>meditation</li>
</ul>
<h2>Pomodoro technique</h2>
<p>You've probably all heard of it. If you haven't, well, the Pomodoro technique is a time-management technique that breaks your work into intervals, usually 25 minutes long, each followed by a 5-minute break; after 4 such cycles, you take a 20- to 30-minute break. That is the basis of it, at least.</p>
<p><em>How do I use it?</em> When I work, read some non-fiction, or write these texts, I tend to take 45-minute-long work intervals, followed by 5-10 minutes of break. After 4 or 5 cycles of that, I take a longer break, for example, when I work, that happens usually around lunch. During these intervals, I don't look at my phone (that is the most important), don't check social networks, I just try and focus as much as I can. During breaks, I take a stroll around my apartment, do some small exercise, talk to my girlfriend, and so on.</p>
<h2>Eisenhower matrix</h2>
<p>This is a way of organizing your tasks by giving them a certain priority. It is said that this method was used by US president Dwight D. Eisenhower, hence the name. The matrix is divided into 4 quadrants, as in the image below.</p>
<p>[image]</p>
<p>The first is - important and urgent - we do those things first. The second is important, but not urgent - we schedule them for later. The third is the not-so-important, but urgent - we delegate those things. The fourth and the last is not important and not urgent - we discard those tasks.</p>
<p>Simple as that.</p>
<h2>Keeping a schedule</h2>
<p>The thing I do quite often is, well - keep my calendar up to date! It is related to the second quadrant of the Eisenhower matrix - the important, but not urgent. All of those tasks that I or others come up with, I keep in the appropriate calendar - either work or personal.</p>
<p>Whenever I have something to do, but it's not urgent, I tend to put it in the calendar and allow myself only one re-schedule when it comes up. If I allow myself to re-schedule it more than once, well, those things tend to stay in the calendar a bit longer.</p>
<h2>Meditation</h2>
<p>The thing that helps me the most in staying focused is meditation practice. I started with some occasional meditation here and there in college, and just recently (I think from the beginning of this year), I've made it a daily practice. Usually, in the morning, I use 20 or so minutes just to sit in silence, not think of anything other than my breath and meditate.</p>
<p>This keeps me centered, in the moment, and more often than not, focused on my breath instead of my wandering mind. And if you followed my blog so far, you know how I can wander from one topic to another...</p>
<h2>How to apply these things?</h2>
<p>Well, nothing complex there - you just go and try them. These are just some of the techniques I use, but there are many more out there which are not covered by this post and I leave you to discover them for yourself. The main thing to remember - if you want to be more productive, you need to be more present, more focused, and more in the now. <em>Don't half-ass it!</em></p>

			]]></content>
		</entry>
	
		
		<entry>
			<title>How I ran my first, and probably the last, ultramarathon</title>
			<link href="https://wonderingchimp.com/posts/how-i-run-my-first-and-probably-the-last-ultramarathon/"/>
			<updated>2022-06-18T00:00:00Z</updated>
			<id>https://wonderingchimp.com/posts/how-i-run-my-first-and-probably-the-last-ultramarathon/</id>
			<content type="html"><![CDATA[
				<p>Hello there!</p>
<p>In this blog post, I write about my experience of running an ultramarathon - a footrace longer than the traditional marathon length of 42.195 kilometers.<a href="https://en.wikipedia.org/wiki/Ultramarathon">^1</a> I write about how I started running, what made me sign up for the ultramarathon, the preparation process (nothing too deep or sport-specific), how I felt throughout the race, and in the end, what I learned from it.</p>
<p>Let me know in the comments what you think, or feel free to share this post if you found it interesting and maybe even inspiring!</p>
<p>Enjoy!</p>
<h2>How did it all start?</h2>
<p>Running... When I think of it, I always remember being little and seeing my dad go for a run on the hills above our house - I was so amazed by it. It was so cool, I thought. Then, when I got older, he and I started running together, once or twice per week. I enjoyed it! That is one of the things that got me into running.</p>
<p>Years went by, and I continued to run, both physically and metaphorically. In my twenties, I ran my first half-marathon. To be honest, I was running two or three times per week at the time, but I never actually prepared for it. I had entered a shorter race, but at some point I just took a turn onto a different route and managed to complete the half-marathon in under two hours. I was so proud of myself, but I also had cramps for days afterwards, which I still remember to this day. I was living in a dorm at the time and couldn't leave my room; my roommate was bringing me food.</p>
<p>Then I started climbing, and running fell into the background, as an addition to overall conditioning training. I continued to run, but a lot less than before. I was never so keen on competing and running races, so I ran only to satisfy my physical and mental health, even though I wasn't aware of doing the latter at all.</p>
<h2>What, or better - who made me decide?</h2>
<p>Two years ago, a good friend of mine said to me - I want to run a 100km race, wanna join? I was like - what, how, when? So many questions were asked, but not so many of them were answered. Not sure if it was smart or not, but soon after that I just went for it, and applied for the race. In the end, the distance was 64km - the 100km one being canceled due to the pandemic.</p>
<p>The race was on Stara Planina - the biggest mountain range in Serbia. And it was a trail run - the thing I never did before! Okay, I have less than a year to prepare for it - challenge accepted!</p>
<p>Just a side note before I go any further - trail running is a sports activity, run on any unpaved surface, that combines running with, on steeper terrain, hiking.<a href="https://en.wikipedia.org/wiki/Trail_running">^2</a> So basically you go into some hills or mountains, find an unpaved road, and start running. When it becomes too steep you switch to hiking, and vice versa.</p>
<h2>The preparation</h2>
<p>The preparations started. I slowly got back into running, increasing duration and distance each week. I was running solo, or with friends who were also preparing for the same race. The longest distance I ran during one week of preparation was, if I remember correctly, around 60km in total. I remember even going alone to a mountain near my hometown - Rtanj - and crossing it from the north entrance to the south exit in less than 3 hours. The distance was around 20km, but the altitude gain was around 800m in the first 8km. I felt so prepared, because it was only the beginning of March, and the race was in June.</p>
<p>The date was getting nearer, but also our preparation got a bit more serious. We went for a hike after some climbing outside, and at some point started running because we were bored of walking. It felt nice. Kilometers were passing so quickly. Everything was so unreal.</p>
<p>I was, and still am, a complete beginner when it comes to running trail races, and the thought of 64km being my first one didn't actually cross my mind. Which is maybe a lucky coincidence, or my modus operandi - when I have some tough challenge before me, I don't think about it much, I just start, and endure along the way.</p>
<p>Preparations continued, and both my friends and I were feeling ready. And to be honest, nothing dramatic happened before the race - I didn't get cold feet (yeah, right!) or get scared (well, maybe a little) - so I'll just go ahead to the race itself.</p>
<h2>The race</h2>
<p>It started really nicely - there was an announcer at the start of the race, great at bringing the atmosphere up, and everyone was so pumped and eager to start. The horn marked the beginning and we all started running, hyped and with smiles on our faces. I went slow for the first couple of hundred meters, then I started running properly. I ran whenever I could - I am a big fan of uphill running, so that part wasn't a challenge for me; the downhill, however, was a bit of a mess. Fortunately enough, I didn't slip or slide on the downhills, so that part went well.</p>
<p>The real struggle for me happened around the 20-kilometer mark. I thought I could endure more, but my legs started cramping. I drank water, lots of it, with electrolytes mixed in, and the pain started to die down, just a bit. During the whole race, I was alone with my thoughts and the sounds of nature around me. So no playlist to keep me busy, or occupied and distracted from the pain I felt. However, I was able to channel that pain into the next step, and the next, and the next. Then I started singing tunes of some random songs, continued running combined with hiking, and in no time (around 10 hours in total), I was at the end. So, in a sense, I can say that it was a 10-hour-long meditation.</p>
<p>I'm not sure what place I finished in, but my goal was to finish in less than 10 hours. I almost reached that goal, only 15-20 minutes over the mark. Nevertheless, I had completed the longest race of my life, at least among physical ones. I sat down on a nearby grass field, started stretching - also one of the longest stretching sessions of my life - and called my parents. They were so happy about the result, my dad especially. Then I went to my room, took a cold shower, and continued to drink as much water as was safe for me at that time. Then my friends came, and we were all so happy and full of various experiences from the race.</p>
<h2>What have I learned?</h2>
<p>After this one, I did another race, half the distance at around 30km, but a lot scarier and harder, with a lot of great vistas - on one of the most scenic mountains in Montenegro, Prokletije. I felt good throughout the race, but after completing it I decided it was enough for me, at least for now. Those long distances really affected my climbing - I was even worse than before I started with trail runs, and I hadn't climbed much in general, so the decision was obvious to me. Also, I'm not so keen on competing, as I mentioned in the beginning, so the choice was easy.</p>
<p>The thing I learned during these two long-distance runs was that I should concentrate on the process, rather than the goal itself. Some of the hills in these races were so hard for me, but going up, focusing on one step at a time, got me over them. And all the pain I felt through both of these runs was there, beside me, but I didn't let it stop me in one way or another. Maybe I sat somewhere a bit longer, but that was also okay; I challenged myself and was able to complete that challenge. And that was the whole point.</p>
<p>Anyhow, if you are into running, I would highly recommend that you try trail running - it was such an experience for me, and the best way of running in my opinion - both uphill and downhill, in nature, with magnificent vistas along the way. Everyone should at least try it! If you are not that into running, well, going into nature can help your physical and mental health, so use that opportunity wisely and think outside, no box required.</p>

			]]></content>
		</entry>
	
		
		<entry>
			<title>Clean working directory without getting your hands dirty - The story of Git, part six</title>
			<link href="https://wonderingchimp.com/posts/clean-working-directory-without-getting-your-hands-dirty-the-story-of-git-part-six/"/>
			<updated>2022-06-11T00:00:00Z</updated>
			<id>https://wonderingchimp.com/posts/clean-working-directory-without-getting-your-hands-dirty-the-story-of-git-part-six/</id>
			<content type="html"><![CDATA[
				<p>Hi there!</p>
<p>Welcome back to another blog post from my <a href="https://www.wonderingchimp.com/tag/git/">series about git</a>. Make sure to check them out if you're interested in Git, how it works, how to fix some mess if you end up in it, and so on. This one will be about the <code>stash</code>. Mind you, everything is legal here - we're talking about <code>git stash</code>.</p>
<p>Feel free to share this article if you liked it, forward it to friends if you received it via e-mail, and you can also add comments or feedback below.</p>
<p>Enjoy!</p>
<h2>What is it?</h2>
<p>As with everything, we'll start from the beginning - <em>What is <code>git stash</code>?</em> It is a way of cleaning your working directory without committing anything, but recording it nevertheless.</p>
<p>But what does that mean?</p>
<p>Imagine your working directory being your table. On that table, you have a big pile of books. To <code>commit</code> those books would be to put them on a shelf, however, to <code>stash</code> them would mean to put them beneath the table and return to them when you need them. They are still there and you can get them a lot easier.</p>
<h2>How does it work underneath?</h2>
<p>A stash entry in Git is represented as a commit that records the working directory. It has two parents: the first is the commit <code>HEAD</code> pointed to when the entry was created, and the second records the state of the index at that moment; that second parent is itself a child of the <code>HEAD</code> commit.</p>
<p>This is what the ancestry graph looks like.</p>
<pre><code class="language-shell">       .----W
      /    /
-----H----I
</code></pre>
<p>The <code>H</code> here is the <code>HEAD</code> commit, <code>I</code> is the commit recording the state of the index, and <code>W</code> is the commit that records the working directory.</p>
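<p>If you want to poke at this structure yourself, here is a small throwaway sketch (all names are made up for illustration, and it should be run in a scratch directory): it creates a stash and then resolves its two parents - <code>stash^1</code> should be the <code>H</code> commit, and <code>stash^2</code> the index commit <code>I</code>.</p>
<pre><code class="language-shell"># Throwaway demo: create a stash and inspect its parents
set -e
cd "$(mktemp -d)"
git init -q
git -c user.name=demo -c user.email=demo@example.com commit -q --allow-empty -m "H: initial commit"
echo "work in progress" > wip.txt
git add wip.txt
git stash -q
git rev-parse stash      # W - the stash commit itself
git rev-parse stash^1    # H - the commit HEAD pointed to
git rev-parse stash^2    # I - the commit recording the index
</code></pre>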
<p>The latest stash entry is stored in <code>refs/stash</code>, and the older ones are located in that ref's <code>reflog</code>. Make sure you keep this in mind when we get to the problem I describe at the end.</p>
<h2>When can you use it?</h2>
<p>Some of the use cases of <code>git stash</code> are as follows:</p>
<ul>
<li>Pulling into a dirty working directory - you are in the middle of something and you learn that the upstream changes impact yours; you want to pull, but Git doesn't allow you because of a conflict - now what? Well, you can use <code>git stash</code> to stash your working directory elsewhere (as we saw above), pull in the latest changes, and bring back the stashed ones.</li>
<li>Interrupted workflow - somebody asks you to do just this quick fix, even though you are in the middle of everything. Since you are a good person, you will accept, and before creating a quick fix - <code>git stash</code> to the rescue!</li>
<li>Testing partial commits - you want to test something out, but you don't want to include some change yet - no problem, stage those changes you need to, use <code>git stash</code> to put other (currently unwanted) changes away, and voila - you can continue on.</li>
<li>Saving unrelated changes for future use - wow, this code looks marvelous, you can definitely use it at some point. Not now, however - it still needs to mature. Well, <code>git stash</code>, and make sure you don't forget about it.</li>
</ul>
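<p>As a concrete illustration of the &quot;interrupted workflow&quot; case, here is a self-contained, throwaway sketch (all file names and messages are made up) that you can run in a scratch directory:</p>
<pre><code class="language-shell"># Throwaway sketch of the interrupted workflow
set -e
cd "$(mktemp -d)"
git init -q
git -c user.name=demo -c user.email=demo@example.com commit -q --allow-empty -m "init"

echo "half-finished feature" > feature.txt    # work in progress...
git add feature.txt
git stash -q                                  # ...put it aside; the tree is clean again

echo "the quick fix" > hotfix.txt             # do the urgent fix on a clean tree
git add hotfix.txt
git -c user.name=demo -c user.email=demo@example.com commit -q -m "hotfix"

git stash pop -q                              # bring the in-progress work back
</code></pre>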
<h2>How does it look in practice?</h2>
<p>The following code snippet shows the basic usage of <code>git stash</code> - creating an entry, listing entries, retrieving an entry, and deleting an entry.</p>
<pre><code class="language-shell">## Creating a stash
$ git stash
Saved working directory and index state WIP on main: d7435844 Fix: re-configure graphql endpoint
## Listing stashes
$ git stash list
stash@{0}: WIP on main: d7435844 Fix: re-configure graphql endpoint
## Retrieving an entry
$ git stash apply stash@{0}
### Or you can use
$ git stash pop stash@{0}
### The difference is that pop deletes the stash entry once it's applied
## Deleting a stash
$ git stash drop stash@{0}
</code></pre>
<h2>I accidentally dropped the stash, how to bring it back?</h2>
<p>This actually happened to me last month. I made some changes locally and didn't want to commit them yet, but there were some changes in the upstream that I wanted. So, I stashed everything to clean the working directory, pulled the upstream changes, and then, instead of popping the stash, I dropped it.</p>
<p>My first thought - Oh shit! My second thought - somebody already had this issue, why don't I look it up? And <a href="https://stackoverflow.com/a/91795/12112036">there it was</a>. In short - you search the objects in the Git database with the following command.</p>
<pre><code class="language-shell">git fsck --no-reflog | awk '/dangling commit/ {print $3}'
</code></pre>
<p>This will print out hashes that are dangling. Fortunately, this list wasn't so big for my repository, so I went through each commit one by one until I found the one I dropped by mistake. I applied it and everything seemed so right again...</p>
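<p>To make the recovery flow concrete, here is a self-contained, throwaway re-enactment (all names are made up, run it in a scratch directory): it drops a stash on purpose, finds the now-dangling stash commit, and applies it back by hash.</p>
<pre><code class="language-shell"># Throwaway re-enactment of the accidental drop and the recovery
set -e
cd "$(mktemp -d)"
git init -q
git -c user.name=demo -c user.email=demo@example.com commit -q --allow-empty -m "init"
echo "important work" > work.txt
git add work.txt
git stash -q
git stash drop -q                   # oops - dropped instead of popped!

# The stash commit is now dangling; find its hash and apply it directly.
hash=$(git fsck --no-reflogs | awk '/dangling commit/ {print $3}' | head -n1)
git stash apply "$hash"
</code></pre>
<p>In a real repository the dangling list can be longer, which is why going through the candidates one by one, as described above, is usually needed.</p>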
<h2>The Wrap-Up</h2>
<p>Congrats, you've reached the end of yet another blog post. I hope you liked it! Below you can find some links I found useful when researching the <code>git stash</code> topic.</p>
<p>Thanks and see you in the next blog post!</p>
<h2>Useful Links</h2>
<ul>
<li><a href="https://www.git-scm.com/docs/git-stash">Git stash documentation</a></li>
<li><a href="https://www.atlassian.com/git/tutorials/saving-changes/git-stash">Git stash tutorial</a></li>
<li><a href="https://git-scm.com/docs/git-fsck">Git fsck</a></li>
</ul>
<hr>

			]]></content>
		</entry>
	
		
		<entry>
			<title>Yes, there are more ways to take notes</title>
			<link href="https://wonderingchimp.com/posts/yes-there-are-more-ways-to-take-notes/"/>
			<updated>2022-06-04T00:00:00Z</updated>
			<id>https://wonderingchimp.com/posts/yes-there-are-more-ways-to-take-notes/</id>
			<content type="html"><![CDATA[
				<p>Hi there!</p>
<p>In this blog post I will write about the note-taking techniques I tried and the ones I ended up using. Stick with me until the end - you might find something useful, or something that will simply evoke your curiosity. That is one of the reasons why I do this...</p>
<p>When somebody asks me - if you could choose one superpower, what would it be? - my answer is always: the ability to take notes in the most efficient way! Just kidding, I always choose immortality, but efficient note-taking is a close second, for sure. I now imagine some superhero picking efficient note-taking over immortality. Well, that one can actually be achieved, and you don't have to be a superhero for it. And yes, I'm for sure still looking for that ability.</p>
<p>Jokes aside, the ability to take notes in the most efficient way is crucial when it comes to learning a topic, exploring new things, or trying to solve an issue. I tend to turn to paper quite often - though not as often as I would want, but that's a different story.</p>
<p>The important thing to have in mind when exploring the vast amount of note taking techniques is that, as it's usually the case, <em>there is no silver bullet</em>. You pick the one that suits you or your situation the most, and you work out the best way to use it.</p>
<h2>Common note taking techniques</h2>
<p>When you do a web search for the most effective note-taking techniques, you will find numerous articles and lists describing various techniques (this one among them, yaaay!). And don't get me wrong, every technique is good in one way or another, but there are a lot of them! In this article I will concentrate just on those I have tried. Those include:</p>
<ul>
<li>outline note taking,</li>
<li>the Cornell note taking technique,</li>
<li>mind maps,</li>
<li>zettelkasten.</li>
</ul>
<h3>The outline note taking</h3>
<p>This technique consists of bulleted or numbered points of text that you write while reading, attending a class, or in general. It looks somewhat like this:</p>
<pre><code class="language-markdown">- note
    - sub note
    - sub note
1. numbered topic
    a. some note about it here and there
2. second note
</code></pre>
<p>I sometimes tend to use this way of taking my notes, combined with other methods. On paper, instead of bullet points and numbers, I tend to use arrows (don't ask), and digitally, I'm more in favor of dashes (-). I used this note-taking technique a lot when I was in high school and college, but I always wondered whether it was good enough. It sometimes seemed that I wasn't returning to those notes enough, if at all.</p>
<h3>The Cornell method</h3>
<p>Next up in line is the Cornell method of note-taking. This method requires you to split your page into three sections - one for questions and keywords, a second for note-taking, and a third for a summary. You can see how it looks in practice in the image below.</p>
<p><img src="../images/posts/0013-notetaking-01.png" alt="A handwritten diagram of the Cornell Note-Taking Method, showing a page divided into three sections: a narrow left Cues column (2.5&quot;) for keywords and review notes, a wide right Notes column (6&quot;) for in-class notes using bullet points and abbreviations, and a bottom Summary strip (2&quot;) for a post-class recap." title="Cornell Note-taking template"></p>
<p>This is one of my preferred methods, and I tend to use it a lot when writing on paper. I often combine it with the outline method above, so my notes are often a not-so-structured pile of writing... which, I admit, is often not so good. This method helps me return to my notes more often than the previous one, thanks to the summary section, where I tend to write a summary of the page or section I've covered with notes.</p>
<h3>Mind maps</h3>
<p>The third one I've mentioned is the mind-mapping method. It involves drawing keywords around a central topic and connecting them with lines (branches). The central topic should be in the center of the paper (duh!), and everything else should go around it. You can find an example of a mind map in the picture below.</p>
<p><img src="../images/posts/0013-notetaking-02.png" alt="A colorful hand-drawn mind map centered on &quot;Mind Mapping&quot; with six branches: Benefits (overview, easy to memorize, simple/fast/fun), Planning (projects, goals, strategies), Creativity (ideas, innovation, thoughts), Productivity (more efficient, intuitive), Collaboration (teamwork, sharing, colleagues), each illustrated with small icons like a smiley face, lightbulb, lightning bolt, and stick figures." title="Mind Map template"></p>
<p>This is, I guess, the most popular one - everyone loves mind maps! I like them too, but in my opinion, this type of note-taking is only good when you are reviewing a topic, maybe brainstorming, or even presenting. It is not as good when you attend a class, lecture, or conference - it takes some time to draw all of those branches and topics...</p>
<h3>The Zettelkasten</h3>
<p>Last but not least is the zettelkasten, or slip-box, method. This method was used by the German sociologist Niklas Luhmann, and is described in Sönke Ahrens' book <a href="https://takesmartnotes.com/#book">How to take smart notes</a>. The main idea of this method is to keep three types of notes - temporary, permanent, and literature notes. Literature notes are the notes you take while reading a book, whenever you like some idea, concept, or technique. You note that idea down on a piece of paper - an index card, for example - and store it in a box with some specific identification, e.g. a number, a date, and so on. On the other side of the index card you write down the author and the book where you read it.</p>
<p>Temporary notes are the ones you take on a daily basis, and you choose to save only some of them by moving them into the slip-box as permanent notes. The point of both temporary and permanent notes is that each relates to only one concept or idea. The next big thing is indexing - you need to find the best way of indexing these notes, either by date or by some ID. Again, the key here is one idea - one note. And then you store those notes in the slip-box.</p>
<p>But how? That's where the index comes in handy - you put the note wherever in the slip-box you think its place is, behind some other note it relates to. Then, when you go through these notes, you might find various ideas and concepts tied to one another.</p>
<p>This method might be the most complex one to start with, but when you go a bit deeper into it you will find it really useful. Especially if you are a researcher, writer, thinker, or you like to learn.</p>
<h2>What techniques/methods do I use</h2>
<p>As I wrote above, I've tried many techniques, but chose to mention only these four. Sometimes I think I've spent more time trying out note-taking techniques than actually reading or learning. I have this feeling because I have several half-full notebooks lying around my home. I also went through a physical Zettelkasten phase, creating index cards and storing them in an improvised box...</p>
<p>Anyhow, for paper note-taking I use the Cornell method. I find it quite useful, with the column for the summary and the left-most one, which I use for questions, ideas, and whatever comes to mind. For digital note-taking I use the Zettelkasten method. How? Well, there are a <a href="https://takesmartnotes.com/tools/">lot of apps</a> you can use to write notes this way - some quite expensive, some not, but complex. I chose the one that is most friendly to my income - <a href="https://code.visualstudio.com/">Visual Studio Code</a> in combination with the <a href="https://foambubble.github.io/foam/">Foam plugin</a>. After a while of use, that combo can look like this:</p>
<p><img src="../images/posts/0013-notetaking-03.png" alt="A dark-themed network graph centered on a blue node labeled &quot;published&quot;, connected by lines to surrounding white nodes representing blog post titles, including topics like &quot;Kubernetes - what and wh…&quot;, &quot;Certified Kubernetes Adm…&quot;, &quot;Training for climbing -…&quot;, &quot;Climbers and the fear of…&quot;, &quot;Do emojis have an impact…&quot;, &quot;Never judge a book by it…&quot;, &quot;Have I been using grep w…&quot;, &quot;Am I fooling around?&quot;, &quot;One Branch to Rule them…&quot;, and others, suggesting a visualization of published blog content and their relationships." title="Foam plugin in VS Code"></p>
<p>This is a Foam map that shows the blog posts I've published so far. I think it looks nice.</p>
<p>I have one file called <code>Inbox</code> where I store all of my &quot;temporary&quot; notes. I put the quotation marks there because I don't make them permanent as often as I should. 🙈 All other files are automatically named with a date/time combination. The things I use to connect them to one another are <code>Tags</code> - a feature of the Foam plugin that you can use as a connection mechanism. Feel free to explore it and let me know in the comments what you think.</p>
<h2>Paper vs. digital</h2>
<p>To end this rambling of mine with an answer to the epic battle of <em>Paper vs. Digital</em> note-taking - both, based on my use case. I tend to write my notes on paper when reading a book, attending a conference talk or a presentation, or in a meeting where I know there will be something useful to note down. I use digital notes mostly for work, in those meetings where you need to gather as much info as you can (yes, I type fast), and for writing this blog.</p>
<p>Whatever your preferred way, the most useful thing when writing notes, I find, is to actually return to them often.</p>
<h2>To explore more</h2>
<ul>
<li><a href="https://www.mindmapping.com/mind-map">Mind maps</a></li>
<li><a href="https://www.uc.edu/campus-life/learning-commons/learning-resources/notetaking-resources/cornell-method-notes.html">The Cornell method</a></li>
<li><a href="https://takesmartnotes.com/">The Zettelkasten</a></li>
</ul>

			]]></content>
		</entry>
	
		
		<entry>
			<title>The 80/20 Principle by Richard Koch - should you read it?</title>
			<link href="https://wonderingchimp.com/posts/the-80-20-principle-by-richard-koch-should-you-read-it/"/>
			<updated>2022-05-28T00:00:00Z</updated>
			<id>https://wonderingchimp.com/posts/the-80-20-principle-by-richard-koch-should-you-read-it/</id>
			<content type="html"><![CDATA[
				<p>Hi there!</p>
<p>With this blog post, I would like to start yet another section on my blog - reviewing non-fiction books, or in other words - <a href="https://www.wonderingchimp.com/tag/growth/"><em>should you read it?</em></a>. This section will cover the non-fiction books I've read, with some of the key points I noted down while reading them, which you may find useful when deciding whether or not to read a certain book.</p>
<p>I also want to use this section to gather new insights from others, and as a future reference if I end up re-reading some of these books. I'll start with the one mentioned in the title.</p>
<p>This post is the first one to be sent in the <code>Wildcard</code> newsletter described in <a href="https://www.wonderingchimp.com/different-newsletters/">this post</a>. Grab yourself a cup of your preferred hot or cold beverage, and dive with me into this book.</p>
<h2>About the author</h2>
<p>First, a bit about the author and the book. Written in 1998 by Richard Koch, a British management consultant, venture capital investor, and book author,<a href="https://en.wikipedia.org/wiki/Richard_Koch">^1</a> the book thoroughly describes the Pareto principle, which states that for many outcomes, roughly 80% of consequences come from 20% of causes. Other names for this principle are the 80/20 rule, the law of the vital few, and the principle of factor sparsity.<a href="https://en.wikipedia.org/wiki/Pareto_principle">^2</a></p>
<p>As far as I know, this book has three editions. I've read the third one - <a href="https://www.goodreads.com/book/show/46046316-the-80-20-principle"><em>The New, Updated Edition of the Business Classic</em></a> - so in this blog post, I will give my general opinion of the book and of the 80/20 principle it describes.</p>
<h2>Book overview</h2>
<p>The book is written in four parts, each explaining a different view of the 80/20 principle.</p>
<p>The first part is an introduction to the principle: what it is, when it was first discovered, its similarity to other principles, and two ways of applying it. The key points from the first part are:</p>
<ul>
<li>what it represents - the 80/20 principle states that, more often than not, 20% of the effort gives 80% of the results</li>
<li>different application approaches - experience-driven and analysis-driven.</li>
</ul>
<p>The experience-driven approach to applying this principle is to think about the things that are important to you and your happiness, both personally and professionally, and concentrate on them the most. This doesn't mean that 20% of the effort will yield 80% of the results; it just means finding out what is important to us and concentrating on it.</p>
<p>The analysis-driven approach is to actually measure the effort and the results. This approach is a bit more complex and requires time and dedication, so the author doesn't generally recommend it.</p>
<p>The second part describes how to apply the 80/20 principle in your business. This section wasn't part of what I wanted to learn from this book, so I skipped it and concentrated on the remaining parts.</p>
<p>The third part describes how to apply the principle to life. This part was somewhat thought-provoking for me, although I found that some of the concepts repeat over and over. Some of the key points from this part were:</p>
<ul>
<li>we have time for everything that we choose as most important to us</li>
<li>time is not the enemy - it is how we use it that is the problem, so we need to be really selective and determined about our time</li>
<li>time does not flow left to right, but cyclically - it keeps coming around, with new opportunities to learn and evolve</li>
<li>we need high value/satisfaction in both work and play; we shouldn't exchange one for the other.</li>
</ul>
<p>I would recommend skimming this part of the book; you may end up finding some additional points that could become useful to you.</p>
<p>Last but not least is the fourth part. I think this one was added in the edition I read, and it is, in my opinion, the most interesting and best part of the book. In this section, the author lists some of the comments and criticism of the principle from the earlier editions and gives a new take on them. The main concerns and critiques of the principle are:</p>
<ul>
<li>the issue of cutting corners - e.g. getting 80% of the results with only 20% of the effort can be a bit simplistic and not an authentic way of approaching work and life</li>
<li>the possibility that applying this principle will not work in the future</li>
<li>the question of balance - this is more related to applying the principle in life: the 80% of effort that brings 20% of the results can be, to some extent, what makes us who we are.</li>
</ul>
<p>The author then goes on to explain the two dimensions of this principle:</p>
<ol>
<li>efficiency dimension - where we want to achieve things in the fastest possible way with the least possible effort</li>
<li>life-enhancing dimension - what is really important to us, work or life-wise.</li>
</ol>
<p>In the end, he offers solutions to the above-stated concerns:</p>
<ul>
<li>cutting corners - aim to cut corners (where possible) and do things efficiently - in the best possible way, saving time and effort - but don't cut corners in the life-enhancing dimension of your life.</li>
<li>use of this principle requires a long-term view - in work, we should be aware of potentially unintended consequences if we assume that the current relation between effort and reward will not change; in life, skills and relationships require investment, so be selective about which abilities and people really matter, and take the time and patience to build the foundation of a lifetime commitment.</li>
<li>should we be balanced or unbalanced - both. Work can fall into both the efficiency and the life-enhancing category; the trick is to do less of the former and more of the latter. The same goes for life - spend less and less time and vitality in your efficiency box, and more and more in the life-enhancing box.</li>
</ul>
<h2>Summary</h2>
<p>Questions I had before reading this book were:</p>
<ul>
<li>what is the 80/20 principle?</li>
<li>how to apply it in life?</li>
</ul>
<p>Was I able to answer those questions? Yes. Although it repeats the same concepts several times, I would recommend reading this book. Depending on the questions you have, I would for sure recommend reading the first and the fourth part. So, in a sense, you can apply the 80/20 principle to reading this book - 20% of the content will give you 80% of the value.</p>

			]]></content>
		</entry>
	
		
		<entry>
			<title>Climbers and the fear of falling</title>
			<link href="https://wonderingchimp.com/posts/climbers-and-the-fear-of-falling/"/>
			<updated>2022-05-14T00:00:00Z</updated>
			<id>https://wonderingchimp.com/posts/climbers-and-the-fear-of-falling/</id>
			<content type="html"><![CDATA[
				<p>Hi there! After some time I finally decided to post something about rock climbing, my hobby and some kind of a passion one would say. First up in line is a short story about the fear of falling and how you can practice overcoming it. Enjoy!</p>
<p>Fear of falling. According to Wikipedia, this is a natural fear typical of most humans and mammals.<a href="https://en.wikipedia.org/wiki/Fear_of_falling">^1</a> I guess most humans, because they didn't include those solo climbers we see from time to time who are fearless. Or maybe they aren't?</p>
<p>Jokes aside, we all fear falling. One way or another, as climbers we experience this fear often. Usually when we get to that crux part of the route and see that the last quickdraw we clipped is below our knee, or god forbid, below our feet! This fear can be really damaging, not just to our climbing, but to our whole psyche. For example - you can't do some easy move that you would achieve without a problem in &quot;normal circumstances&quot;, so you yell &quot;Take!&quot; to your belayer, or maybe even grab a quickdraw to save yourself from a complete fall, and whatnot. Then, when you get down, you feel bad and weak, start comparing yourself with others who you thought were weaker than you but did that move, then you think of the new training exercises you need to add so you can become stronger, and the spiraling continues. All that in less than 5 seconds! Our mind is awesome, isn't it?</p>
<p>Feeling this way is totally normal. You shouldn't be bummed about it. Everyone fears falling - everyone. The only thing that differentiates those seemingly fearless creatures from us is that they practice it! I fear falling, but I also like it, in some weird sense. When I'm in a pickle and my mind goes spiraling about the possible outcomes, I often fall and experience that rush of adrenaline - and I want to do it again, and again, and again. Then, when I try it again and don't let my mind slip, I usually succeed. Usually.</p>
<p>There are numerous examples of climbers talking about their fear of falling and how to overcome it. My favorite ones were from <a href="https://thenuggetclimbing.com/episodes/hazel-findlay">Hazel Findlay</a> in a podcast episode, and from Arno Ilgner's book <a href="https://warriorsway.com/">The Rock Warrior's Way</a>. Both of them are professional trad climbers with years, even decades, of experience, and the fear they (still) feel is more realistic than the fear of a sport-climbing gumby like me - falling on trad gear is far more serious than falling on sport climbing equipment. Nevertheless, the thing they all have in common in conquering that fear is practice!</p>
<h2>So, how to practice it?</h2>
<p>It can be as simple as going to the climbing wall and falling. There are several things to have in mind though:</p>
<ul>
<li>You need a belayer you can trust - you can have a hard time falling when belayed by somebody you don't trust.</li>
<li>You need to communicate your objective (falling practice) to the belayer, and make sure the belayer knows the art of <em>soft catching</em> (which I will discuss in some future blog post).</li>
<li>Make sure proper safety measures are in place - you are properly tied in, with the required equipment (yes, that means a helmet as well).</li>
<li>Start small - try falling when you are a few centimeters above the quickdraw, then go bigger.</li>
<li>Try to practice this once or more per week, ideally during your warm-up period.</li>
</ul>
<p>After you've done several sessions of practice falls, you can go even bigger and change belayers. That can be really essential to your fear of falling - a completely different dimension. When I started climbing, I never thought it would have any impact, until I went climbing with people other than my regular climbing partner(s).</p>
<p>We all have some fears, some are rational, some completely irrational. It is only us who decide how those fears will affect us.</p>
<p>See you in the next post!</p>

			]]></content>
		</entry>
	
		
		<entry>
			<title>Kubernetes - what and why?</title>
			<link href="https://wonderingchimp.com/posts/kubernetes-what-and-why/"/>
			<updated>2022-05-07T00:00:00Z</updated>
			<id>https://wonderingchimp.com/posts/kubernetes-what-and-why/</id>
			<content type="html"><![CDATA[
<p>Hi there! It's been a while since my last blog post, where I covered the <a href="https://www.wonderingchimp.com/the-cka-prep-review/">Certified Kubernetes Administrator certification</a> and my approach to getting certified. To continue in the same manner, this post will cover what Kubernetes is and why we (not always) need it.</p>
<h2>What it is</h2>
<p>Recently, I watched the Kubernetes documentary, a two-part story (<a href="https://www.youtube.com/watch?v=BE77h7dmoQU">first</a> and <a href="https://www.youtube.com/watch?v=318elIq37PE">second part</a>) about how Kubernetes was created, when and how it all started, and how it came to be one of the building blocks of many applications and services we use in our daily lives. I'm mentioning this because I really liked Kelsey Hightower's explanation of what Kubernetes is through the post office analogy, which I quote in full below.</p>
<blockquote>
<p>Yeah, so if you take some real world examples around us… Let’s say, the holidays are coming up and you want to ship something to a loved one as a present. So let’s invent the post office. And the post office says: ‘we can ship things, but we don’t want you to bring loose things to us. We don’t want loose books and jewelry and money. No no. You need to put in a box.’ So if we extend this analogy, let’s put an envelope. Now there’s going to be a cost for me to move this from one place to another, and depending on how far it is and how much it weighs, I’m gonna give you a price and you can think about that like your stamp. Now, whatever you put inside of it is up to you. So the container can support any programming language. Ruby, Python, Java, Golang, it doesn’t matter. To make Kubernetes efficient as the post office, we need to ask you to put it in the box. Now, the key is you now have to describe where it needs to go. We need to put an address on it. Where does it run, who’s it destined for and how long would you like to take, or are willing to wait for it to get there? And so, if you think about it, the post office abstracts all that away from you. You show up with your envelope and your stamp and the address, and they’ll tell you, ‘well, this will get there in two or three days’. Planes can break down. Cars can break down, but no one at the post office calls you when any of those things happen. They make a promise to you, right? They promise that this letter will get there in two days. How they do it is not a concern. So Kubernetes is built on this thing we call ‘promise theory’. Even though you have lots of machines in your Kubernetes cluster, any of them can break at any time, but Kubernetes’ job is to make sure that application is always running, just like the post office’s job is to make sure that letter keeps moving until it gets to its destination. Kubernetes does that for infrastructure.</p>
</blockquote>
<p>Let's dig deeper into this quote. First - the envelope. Your application needs to be in an envelope; that envelope, in this context, is a container. So you need to put your application within a container. Next up - the box. The box mentioned here, for the sake of simplicity, is the Pod. And what is a Pod? The Pod is the smallest unit you can deploy within Kubernetes. The important thing to note here is that a Pod can contain one or more containers - it is a box for containers. The address on the envelope/box is the definition of the application itself - should it have some port opened, should it have persistence, should it run in one or several instances (replicas), etc. The cars and planes are the actual infrastructure behind it.</p>
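<p>To make the analogy concrete, here is a minimal sketch of a Pod definition. The names and the image are purely illustrative, not from any real project:</p>

```shell
# Write a minimal Pod manifest - one "box" holding a single container.
# You could then hand it to the cluster with: kubectl apply -f pod.yaml
cat > pod.yaml <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: my-app              # part of the "address" on the box
spec:
  containers:               # a Pod can hold one or more containers
    - name: my-app
      image: nginx:1.25     # the "envelope" - your containerised application
      ports:
        - containerPort: 80 # the port the application should expose
EOF
```

<p>Everything below <code>spec</code> is the "address label": it tells Kubernetes what to run and how, while where it runs stays the cluster's problem.</p>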
<p>So in short, if you are a person developing an application, Kubernetes is there to help you concentrate more on the application itself, and not on the actual infrastructure where it runs. And if you are a person responsible for making that application operational and available to users, Kubernetes is there to make your life easier with various mechanisms, like automatically scaling the application up when usage increases, automatically provisioning a load balancer, etc. There are many options, which can be good - sometimes.</p>
<h2>Why we (not always) need it</h2>
<p>Okay, so we covered the &quot;what&quot; from the title; now we need to answer why we need it. We already have abstraction from the infrastructure in the form of VMs, various scaling mechanisms in the cloud such as auto-scaling groups in AWS, and other things - so what is the point?</p>
<p>From my point of view - and forgive me if it is a narrow one, I'm trying to be as objective as I can - it makes our lives easier when deploying applications: we can run our applications wherever we want, on an on-premise Kubernetes cluster or a cloud-provided one, and not worry about the underlying infrastructure, as long as we stick to the Kubernetes application definitions (for example, Pods). However, there is another side of that coin: the learning curve is on the steeper side, and it is a hyped technology that everybody wants in their setup, but - and I cannot stress this enough - <em>it is not always applicable</em>. There is also a layer of complexity introduced, which is not always a good thing; and if your application is more of a monolith than a microservice-based application, migrating to Kubernetes may not be sensible.</p>
<p>There are always pros and cons for each setup, whether it runs on Kubernetes or not. If the pros outweigh the cons when evaluating Kubernetes, go for it. If, on the other hand, the cons outweigh the pros - well, you might not like it, but your setup is not as fit for Kubernetes as you might think. Forcing it will result in a lot of headaches and workarounds while developing the application or migrating it to Kubernetes, so I wouldn't recommend it.</p>
<p>In some of the next blog posts, I will dive deeper into which architectures you can easily run on Kubernetes, which ones you will have trouble with, and how to evaluate whether an application can be deployed or migrated to Kubernetes.</p>
<p>Now, to summarize the current story - even though a technology is hyped, new, and flashy, it doesn't mean it is good for your use case. This applies to all technologies, not just Kubernetes. A good evaluation process and an understanding of the technology are key.</p>

			]]></content>
		</entry>
	
		
		<entry>
			<title>Certified Kubernetes Administrator - my approach</title>
			<link href="https://wonderingchimp.com/posts/certified-kubernetes-administrator-my-approach/"/>
			<updated>2022-04-23T00:00:00Z</updated>
			<id>https://wonderingchimp.com/posts/certified-kubernetes-administrator-my-approach/</id>
			<content type="html"><![CDATA[
<p>The importance of getting certified in today's world is big, not just in IT, but across the whole industry. I have always approached certifications with the idea of learning the thing being tested, not just passing the exam. Then again, I always opted for the certifications I was highly interested in at the time. Some of them helped me learn a lot; others, well, they were just there to test my knowledge. I will not address my passing rate here, that is not important. 😅</p>
<h2>How long will it take you to prepare?</h2>
<p>It depends on what your objective is. If your goal is to learn, I recommend that you approach it with patience; then passing the exam is only going to be an entry to many more great things. If, on the other hand, you just want to pass the exam quickly with no intention of getting into the topics a bit deeper - well, you might want to re-think your approach, because the knowledge you get out of it might end up being scarce.</p>
<p>When I first came in contact with Kubernetes, I was overwhelmed by the different topics and materials - pods this, and pods that, containers, always containers, and so on... Then I took a breather - or, as I like to call it, a piece of advice from my colleague and friend - and with it, I slowly started walking the Kubernetes path. I thought about the end goal, and certification didn't even come up in my web searches. My main goal was, and still is (although a bit revisited), to learn about Kubernetes in general: what it is, what it does, how it works, etc. And I think I did a good job on the learning part, but it is a never-ending journey, so I'm not even halfway done, and probably never will be. As you can see, I could try giving motivational talks; I would be great at them. Not!</p>
<p>To put it in a not-so-exact time perspective - if you work with Kubernetes daily and you want to learn, it will take up to a month to prepare, depending on your motives, the other things you want to achieve, and so on. In case you are a Kubernetes newbie, as I was a couple of years ago, prepare yourself for a longer period, and be patient.</p>
<h2>The start of my Kubernetes journey</h2>
<p>So, how did I start? Well, as most of us do when getting acquainted with something (or somebody) new - I searched the internet, a lot... This didn't help much, so as mentioned above, I did a bit of restructuring of my goals (in other words, I set them). The next step was to go a bit deeper and read about it, since I'm more of a book person than a video/audio one.</p>
<p>The book that helped me a lot in getting to know Kubernetes is <a href="https://www.manning.com/books/kubernetes-in-action">Kubernetes in Action</a>, by Marko Lukša. This is a well-written and well-structured book, with a lot of great examples and explanations of abstract and not so abstract concepts. I read it cover to cover, as actively as possible.</p>
<p>The reading itself wasn't the only thing I did. While I read it, I created a Kubernetes playground on some virtual machines and played along with the exercises from the book there. I tackled many things, from creating and configuring the cluster, to configuring pod security policies, and so on. The book and the hands-on experience I got from the playground helped me get a grasp of Kubernetes and its whole ecosystem.</p>
<p>This whole process took me months. I wasn't in a hurry, and I also had other obligations, both personal and work-related. In the end, it helped me understand the big picture and learn a lot about Kubernetes, both its internals and externals. And I didn't know a thing about it at the beginning!</p>
<h2>Decision to get certified</h2>
<p>Next up - the certification. I decided on it after a year or so of working with Kubernetes. As I already mentioned, it wasn't my first goal, hence the period. I quickly went through the topics that were going to be on the exam and applied for it. The exam date was in two weeks or so. Then I went through the topics more deeply - okay, I can handle it, I know this, and this, what the hell is this thing, and this?! I spiraled a bit, but in the end I created a plan and started learning the things I didn't know and reviewing the things I (thought I) knew. During that journey, the <a href="https://www.udemy.com/course/certified-kubernetes-administrator-with-practice-tests/">CKA course on Udemy</a> helped me immensely! My lab cluster was long gone (destroyed, but always remembered), so the virtual lab environment provided by this course was the best thing. I started going through the course and the practice exams, and soon enough, I was ready.</p>
<p>Then the exam day came, and with it - or rather a week before it - an e-mail with the things I needed to complete in order to take the exam remotely. I did all of them quite easily; since I'm not a regular Google Chrome user, I didn't have any personal info there, so I kept it vanilla. The thing with the exam is that you are allowed to use the official Kubernetes documentation, since there is a lot of material covered by the exam. I created a list of bookmarks from the Kubernetes documentation in Google Chrome and practiced searching through it. You are allowed to have two tabs open during the exam - one with the exam, the other with the Kubernetes documentation - so I planned to use the documentation as a reference whenever I could. I passed the exam in the end, and the whole experience was quite good; I didn't have any issues, challenges, or similar. Then the best thing about getting certified came - bragging about it on social media! Just kidding, but it was nice to share the achievement with people around me.</p>
<h2>The Wrap-Up</h2>
<p>So, to summarize, the steps I took that may be of help to you were the following:</p>
<ol>
<li>take your time and be patient</li>
<li>read the relevant book(s), the one which helped me is <a href="https://www.manning.com/books/kubernetes-in-action">Kubernetes in Action</a></li>
<li>apply for the <a href="https://training.linuxfoundation.org/certification/certified-kubernetes-administrator-cka/">CKA exam</a></li>
<li>review the topics alongside a video or book course, my recommendation - <a href="https://www.udemy.com/course/certified-kubernetes-administrator-with-practice-tests/">CKA course on Udemy</a>.</li>
</ol>
<p>A neat &quot;trick&quot; about step 3 is that you can get a substantial discount if you attend some of the conferences related to Kubernetes, such as <a href="https://events.linuxfoundation.org/kubecon-cloudnativecon-europe/">KubeConEU</a> or <a href="https://events.linuxfoundation.org/kubecon-cloudnativecon-north-america/">KubeConNA</a> - and also learn a lot. I applied for and attended both of them remotely, and the discount on the exam was not the only thing I got: I was able to listen to many great talks, got some new ideas along the way, and also received some swag in the mail. In essence - a great experience so far!</p>
<p>What are the next steps on my journey of learning Kubernetes? Well, I plan to deepen my knowledge by working with it daily, exploring new things, reading and writing articles about it, listening to podcasts, etc.</p>
<p>There are endless possibilities when you have information in the palm of your hand, you just need to filter it out.</p>
<p>See you in the next week's issue, and in the meantime, feel free to share this post or provide feedback in the comments below.</p>

			]]></content>
		</entry>
	
		
		<entry>
			<title>Never judge a book by its cover, or by the worst review on Amazon</title>
			<link href="https://wonderingchimp.com/posts/never-judge-a-book-by-its-cover-or-by-the-worst-review-on-amazon/"/>
			<updated>2022-04-09T00:00:00Z</updated>
			<id>https://wonderingchimp.com/posts/never-judge-a-book-by-its-cover-or-by-the-worst-review-on-amazon/</id>
			<content type="html"><![CDATA[
<p>This is often good to apply to most of the books that I like to call <em>one-idea wonders powered by a loop</em>. This is when an author takes one idea and presents it in several different, often totally unrelated, and sometimes even opposing contexts, so they have enough material for a book. In short, they need to justify writing a book about something that could have been a blog post (sometimes not even a long one) or a newspaper article. I get it, we live in a capitalist society where almost everything is valued by revenue, hence the need for a vast amount of (shitty) content.</p>
<p>Why did I start this? I mean the blog post, not the blog itself (the latter still being in the discovery phase). Well, because I would like to find a better way to read books - and not just read them, but actually interact with them, use them as a tool to broaden my horizons. Sorry if that sounds a bit selfish, but I guess books will understand. Isn't that the <a href="https://www.theguardian.com/childrens-books-site/2015/aug/14/why-do-books-still-exist-asks-a-teenager">reason why they exist</a>, or at least one of the reasons?</p>
<h2>How do I usually read books?</h2>
<p>Well, first - it depends on the book. If it is fiction, I usually go by this calculation: if the book is of the sci-fi or epic fantasy genre, I'll often go and read it, sooner or later. If it's not, I'd need a good argument on why I should read it before I put it on my reading list. I remember somebody said those reading lists should be short - yeah, right?! Then I go and immerse myself in the book; sometimes I finish it, and sometimes not. Usually, when I don't connect with a book, I tend to leave it unfinished.</p>
<p>If the book is non-fiction, I tend to have a different approach - get all the ideas from the book, and get them now! This leads to me scrolling through the vast number of reviews on why one should or shouldn't read the book, trying to find opposing views and the big ideas from the book without reading it... This is not a wrong approach per se, but maybe it's an ill-timed one.</p>
<p>Why? Now that I think of it, maybe this is better and more useful when you have already read the book, or during the process of reading, when you discover some new idea or interesting thought. It definitely is a good approach, especially if you want to develop critical thinking. I most definitely don't do this, at least not while reading a non-fiction book.</p>
<h2>What's my approach to non-fiction books now?</h2>
<p>Just recently, I started experimenting with how to read non-fiction books better. As mentioned above, being impatient doesn't help, so I decided to take things slowly.</p>
<p>In the first step, I usually go into some kind of meditative state so I don't get overwhelmed by the different facts, reviews, and everything related to the book besides the reading itself. Next, I try to find excerpts from the book, or the table of contents, to see if something appeals to me. If yes, I try to find the book in the library or online. I prefer the e-book format to paper because I don't have to wait for it to be delivered. And also the environment, yes... 😬</p>
<p>Then I start reading the book. I had a lot of trouble teaching myself that I don't need to start from the beginning and go chapter by chapter until the end. I tended to lose motivation or focus before even getting halfway through a book, so many times... So, when I get the book, I first go through the table of contents and read the summary, or the first part, of those chapters whose titles I find interesting, and then move on to the other chapters. I do this to decide whether a chapter appeals to me or not. If none of the chapters seem to have any impact on me - hello, Amazon return policy! If some or all of them do, I go ahead and read those. Then I dive into the next, and the next, until I either read the whole book or leave it lying in some deserted <em>archive</em> directory (since it's an e-book).</p>
<p>When I finish reading, I often go through my notes... These notes often become something to explore or even write about. And this is actually the most important thing you should do when reading a book (besides actively reading) - take notes! I will not cover the story of note-taking here; I'll leave it for some future blog post. Until then, I just want to mention this - always, just always, read with a pen in your hand. It will teach you to actively participate and really engage with the book in front of you.</p>
<p>This was a short one about my way of reading (non-fiction) books. Let me know in the comments below if you have any recommendations for me, what I should consider adding to or removing from my current reading process, or leave your general feedback.</p>
<p>See you in two weeks with something new and (hopefully) interesting and thanks for staying until the end!</p>

			]]></content>
		</entry>
	
		
		<entry>
			<title>Have you ever wanted to change history? - The story of Git - part five</title>
			<link href="https://wonderingchimp.com/posts/have-you-ever-wanted-to-change-history-the-story-of-git-part-five/"/>
			<updated>2022-04-02T00:00:00Z</updated>
			<id>https://wonderingchimp.com/posts/have-you-ever-wanted-to-change-history-the-story-of-git-part-five/</id>
			<content type="html"><![CDATA[
<p>History. When we look at it we see a story. A story of how something happened, how it evolved, how it ended up being the thing it is today... It is the same with code - when we look at the code history we would like to see a story of how our code evolved, from the first commit to the first deploy to production, and in the end the release to our users. I say <em>would like</em>, because we oftentimes end up seeing a tangled spider web of how things evolved, and trying to determine what happened when can become hazardous to our health. That is, if you didn't use <code>rebase</code>.</p>
<h2>Joining the code</h2>
<p>Let's start from the beginning - the way you can join your code with the <code>main</code> branch is called <em>merging</em>. In Git, you have two ways to do that - <em>merge</em> and <em>rebase</em>.</p>
<p>First up in line is merge. It is also called a 3-way merge, and you'll see why in this paragraph. This type of joining usually happens when you branched out from trunk, added changes to your branch, and when you want to merge your changes back into it, you see that the trunk has moved forward in time (somebody merged something to it before you did). Since it's not a race of who will be the first to merge, Git will look first at the common parent of the trunk and your branch (in other words - the commit before you diverged from trunk), second at the last commit from the trunk, and third at the last commit from your branch, and it will merge those two branches together. That is why it is called a 3-way merge - Git looks at three different commits. As a result, Git will create a <em>merge commit</em> - a commit that has two parent commits. A bit complex, but that is the default merge in Git. The following snippet shows how merge looks in practice.</p>
<pre><code># before merge
main A------B------C------D------E
                    \
your-branch          C1------C2------C3     

$ git checkout main
$ git merge your-branch

# after merge
main A------B------C------D------E------F
                    \                  /
your-branch          C1------C2------C3     
</code></pre>
<p>In the example above, commit 'F' is a <em>merge commit</em> - it has two parents, 'E' and 'C3'.</p>
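<p>If you want to see this for yourself, the diagram above can be reproduced in a throwaway repository. The branch and file names below are made up for the demo:</p>

```shell
set -e
repo=$(mktemp -d) && cd "$repo"
git init -q && git checkout -qb main
git config user.email "you@example.com" && git config user.name "You"

echo base > file.txt && git add file.txt && git commit -qm "A"
git checkout -qb your-branch                 # branch off the trunk
echo branch-work > branch.txt && git add branch.txt && git commit -qm "C1"

git checkout -q main                         # meanwhile, trunk moves on
echo trunk-work > trunk.txt && git add trunk.txt && git commit -qm "D"

git merge -q --no-edit your-branch           # 3-way merge creates commit F
git rev-list --parents -n 1 HEAD             # F's line shows two parent hashes
```

<p>The last command prints the merge commit followed by its two parents - proof that a merge commit really has two.</p>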
<p>The simpler merge is the fast-forward merge. It happens when you create a branch from the trunk, add some changes, and when you finish and join the branch back to trunk, you see that in the meantime trunk didn't move forward (didn't receive any commits). Git will just put your commits onto the trunk without any additional commit, and move the trunk forward. That is why it's called fast-forward. The example below shows the fast-forward merge, where there is no merge commit.</p>
<pre><code># before merge 
main A------B------C
                    \
your-branch          C1------C2------C3     

$ git checkout main
$ git merge your-branch

# after merge
main A------B------C------C1------C2------C3
</code></pre>
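<p>You can convince yourself that no merge commit appears in a fast-forward, again in a throwaway repository (names are illustrative):</p>

```shell
set -e
repo=$(mktemp -d) && cd "$repo"
git init -q && git checkout -qb main
git config user.email "you@example.com" && git config user.name "You"

echo base > file.txt && git add file.txt && git commit -qm "C"
git checkout -qb your-branch
echo work > work.txt && git add work.txt && git commit -qm "C1"

git checkout -q main               # trunk received no commits meanwhile
git merge -q your-branch           # fast-forward: the main pointer just moves
git rev-list --parents -n 1 HEAD   # a regular commit with a single parent
```

<p>After the merge, <code>main</code> and <code>your-branch</code> point to the exact same commit - no new object was created.</p>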
<p>Last, but definitely not least, the notorious one - rebase. Why notorious? In short - it changes the Git history. This is nothing to worry about; changing Git history is not that dangerous or hard. There are camps for and against it, but we'll get to that later in the text.</p>
<h2>A bit about Rebase</h2>
<p>A rebase means, in one sentence - changing the base of your commits. A bit longer explanation - when you rebase, imagine a hand that takes all of your commits and places them on top of the last commit of the branch you've decided to rebase onto. Let's see it in the example below:</p>
<pre><code># before rebase 
main A------B------C------D------E
                    \
your-branch          C1------C2------C3     

$ git checkout your-branch
$ git rebase main

# after rebase
main A------B------C------D------E
                                  \
your-branch                        C1'------C2'------C3'

# perform fast-forward merge into main
$ git checkout main
$ git merge your-branch

main A------B------C------D------E------C1'------C2'------C3'
</code></pre>
<p>Why do these commits from <code>your-branch</code> have a single quote on them after the rebase (the <code>'</code> sign)? Were they changed? In a way, they were. No, you didn't lose any of your work; the only thing that changed is the parent of the <code>C1</code> commit. But that means a new hash in Git, and therefore a new object. And we now know that you <a href="https://www.wonderingchimp.com/git-story-part-one/">shouldn't be afraid of the hash</a>.</p>
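<p>Here is a small sketch you can run to watch the hash change during a rebase (throwaway repository, illustrative names):</p>

```shell
set -e
repo=$(mktemp -d) && cd "$repo"
git init -q && git checkout -qb main
git config user.email "you@example.com" && git config user.name "You"

echo base > file.txt && git add file.txt && git commit -qm "C"
git checkout -qb your-branch
echo work > work.txt && git add work.txt && git commit -qm "C1"

git checkout -q main                 # trunk moves forward in the meantime
echo trunk > trunk.txt && git add trunk.txt && git commit -qm "E"

git checkout -q your-branch
before=$(git rev-parse HEAD)
git rebase -q main                   # C1 is replayed on top of E as C1'
after=$(git rev-parse HEAD)
test "$before" != "$after"           # same changes, brand new hash
```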
<p>Now if you go on and merge <code>your-branch</code> into <code>main</code>, it will look like a fast-forward merge, with all the commits in linear order, easy to follow. Or maybe not so easy?</p>
<h2>To change or not to change the (Git) history?</h2>
<p>There are two views on this. Should you, or shouldn't you, change the Git history? In other words - should you use rebase or merge? The first view is that the Git history should reflect how your project developed, how it all actually happened, never mind if it is messy. From this point of view it makes no sense to change the Git history; it would be like &quot;lying&quot;.</p>
<p>The second point of view - the Git history should be a story of how a project was made. It is sort of like publishing a book - you wouldn't publish your first draft of the book, but the end result.</p>
<p>Which is better? There is no easy answer - whatever is most suitable for you is the best. The only rule to follow is to <em>not rebase public branches</em>, as it will mess up other people's work. Only rebase branches you haven't published, or that you know nobody other than you uses.</p>
<h2>To wrap things up</h2>
<p>You can always use rebase locally before merging. That way you get the best of both worlds.</p>
<p>My personal preference is to rebase commits before merging to the main branch, with the addition of <em>squashing</em> the commits occasionally. And what is squashing? Well, it's combining all or some of your commits into one. And because I tend to get a bit chatty and commit often, I use squashing. The thing I like about rebase, besides keeping the history clean and easy to track, is that it allows you to do an interactive rebase - a mode where you can choose which commits to pick, squash or even remove while rebasing. Pretty nice, isn't it?</p>
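<p>Squashing can even be scripted: Git reads the interactive-rebase todo list through the <code>GIT_SEQUENCE_EDITOR</code> variable, so you can rewrite it without an editor. The sketch below (a throwaway repository with made-up commit messages) uses <code>fixup</code> - squash's sibling that discards the extra messages - to fold three chatty commits into one:</p>

```shell
set -e
repo=$(mktemp -d) && cd "$repo"
git init -q && git checkout -qb main
git config user.email "you@example.com" && git config user.name "You"

echo base > file.txt && git add file.txt && git commit -qm "base"
git checkout -qb topic
for i in 1 2 3; do echo "$i" >> file.txt && git commit -qam "wip $i"; done

# Rewrite lines 2-3 of the todo list from 'pick' to 'fixup',
# melding "wip 2" and "wip 3" into "wip 1" non-interactively.
GIT_SEQUENCE_EDITOR="sed -i '2,3s/^pick/fixup/'" git rebase -q -i main
git rev-list --count main..topic    # the three commits are now one
```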
<p>To do it, go to your branch that has diverged from <code>main</code> and type <code>git rebase -i main</code>. That will start the rebase in interactive mode, similar to the one below.</p>
<pre><code class="language-shell">pick 6c0746b Test commit
pick d7e3d94 Adding third file

# Rebase 4d3da14..d7e3d94 onto 4d3da14 (2 commands)
#
# Commands:
# p, pick &lt;commit&gt; = use commit
# r, reword &lt;commit&gt; = use commit, but edit the commit message
# e, edit &lt;commit&gt; = use commit, but stop for amending
# s, squash &lt;commit&gt; = use commit, but meld into previous commit
# f, fixup &lt;commit&gt; = like &quot;squash&quot;, but discard this commit's log message
# x, exec &lt;command&gt; = run command (the rest of the line) using shell
# b, break = stop here (continue rebase later with 'git rebase --continue')
# d, drop &lt;commit&gt; = remove commit
# l, label &lt;label&gt; = label current HEAD with a name
# t, reset &lt;label&gt; = reset HEAD to a label
# m, merge [-C &lt;commit&gt; | -c &lt;commit&gt;] &lt;label&gt; [# &lt;oneline&gt;]
# .       create a merge commit using the original merge commit's
# .       message (or the oneline, if no original merge commit was
# .       specified). Use -c &lt;commit&gt; to reword the commit message.
#
# These lines can be re-ordered; they are executed from top to bottom.
#
# If you remove a line here THAT COMMIT WILL BE LOST.
#
# However, if you remove everything, the rebase will be aborted.
#
</code></pre>
<p>As you can see, you've got plenty of options. Choose them wisely. Because, similar to when you first get that long awaited <code>root</code> permissions...</p>
<p><em>With great power, comes great responsibility.</em></p>

			]]></content>
		</entry>
	
		
		<entry>
			<title>Do emojis have an impact on us and the understanding of what is written?</title>
			<link href="https://wonderingchimp.com/posts/do-emojis-have-an-impact-on-us-and-the-understanding-of-what-is-written/"/>
			<updated>2022-03-26T00:00:00Z</updated>
			<id>https://wonderingchimp.com/posts/do-emojis-have-an-impact-on-us-and-the-understanding-of-what-is-written/</id>
			<content type="html"><![CDATA[
<p>How did we end up here? Well, I guess we live in an endless cycle - we were trying to express ourselves with drawings and sketches ages before we invented the written word, and now we are getting back to those roots, those primal instincts - expressing ourselves with sketches and drawings, but in a new, digital sense, with emojis...</p>
<h2>A brief history</h2>
<p>We all remember emoticons, at least some of us do. In the chatrooms of the 90s you could find a vast amount of them - :-), :-/, ;-)... I remember using shortened emoticons like :), ;), and :/ because, back then, you didn't have the internet all around you, and you needed to communicate via SMS, which you didn't get for free - and with only 160 characters (with Serbian Cyrillic and Latin letters even fewer) you needed to be brief, and often send some wink, a smile or a sad face...</p>
<p>The first emojis were created in Japan (as were many other interesting things, including manga, anime, origami...) in 1999 by artist Shigetaka Kurita. He worked for DOCOMO, a Japanese mobile carrier, and he designed a new way to send information, with picture characters - which is actually the meaning of the word <em>emoji</em>. It comes from Japanese (of course): <em>e</em> meaning picture and <em>moji</em> meaning character.</p>
<p>They quickly became popular in Japan, and during the mid 2000s they started to get popular in the rest of the world. In 2007 a team from Google decided to lead the petition for emojis to be recognized by <a href="https://home.unicode.org/">Unicode</a> (sort of like UN for text standards across computers). In 2010, Unicode recognized emojis, which made them accessible everywhere. Literally everywhere!</p>
<p>Fast forward to 2015, and the 😂 emoji was recognized as <a href="https://time.com/4114886/oxford-word-of-the-year-2015-emoji/">the word of the year by Oxford's dictionary</a>. Fun fact - the same year the word <em>lumbersexual</em> was also recognized by Oxford's dictionary (yay hipsters!). Two or three years ago emojis were even considered for characters on license plates in <a href="https://www.brisbanetimes.com.au/national/queensland/queensland-drivers-can-soon-add-emojis-to-their-personalised-plates-20190219-p50yor.html">Queensland, Australia</a> and allowed in <a href="https://www.republicworld.com/world-news/us-news/vermont-becomes-the-first-state-in-the-us-to-allow-emoji-license-plate.html">Vermont, US</a>.</p>
<p>But you haven't heard it all. There is even an <a href="https://www.vice.com/en/article/434pdm/meet-the-worlds-first-emoji-translator">emoji translator</a>. Even though it may sound funny, the following part of this blog post shows that we may actually need them.</p>
<h2>(Miss)understanding of emojis</h2>
<p>Emojis were dubbed the world's first truly universal form of communication, but, as I dug deeper into the story about them, I found out that some of them have different meanings in different cultures. It's not as simple as you might think. For example - the thumbs-up emoji is considered a sign of approval in the West, but it has an offensive and vulgar connotation in Greece and the Middle East. In China, the angel emoji is used as a sign of death and is considered threatening, and applause emojis are a symbol of making love.</p>
<p>In 2018, there was also a case in Israel where, after viewing an apartment, future tenants sent celebratory emojis to the landlord, which made the landlord take the property off the market. Later they backed down from renting the place, for which the landlord sued them. A <a href="https://qz.com/987032/emojis-prove-intent-a-judge-in-israel-ruled/">judge later ruled that the emojis were themselves enough to imply their intent to rent</a>, and fined the tenants around 2,000 USD - or in other words, 4 months' rent for a decent apartment in Belgrade (not sure about Israel though).</p>
<h2>Conclusion</h2>
<p>I like emojis and use them a lot, both on and off work. My favorite ones are 😅 and 😉. Why, I'm not really sure, but I guess there is a personality test that tells you what kind of person you are based on emojis you like. If there isn't such a test, there should be.</p>
<p>When communicating with people that don't use them, I opt out of using them as well, because I don't want to send something that others may find offensive. However, when communicating with people that use emojis, but they are not using them in that one conversation, I always get a bit worried, and ask them if everything is okay, because I've not seen the emojis... I guess this happens to all the people who like and use emojis, right?</p>
<p>To return to the question from the title - Do emojis have an impact on us and the understanding of what is written? In short - yes. Whether you are for, meh or against emojis, use them wisely, sometimes they are appropriate, and sometimes not. And how to know that? You need to figure that out by yourself.</p>
<p>In the end, <em>it isn't about the emoji you've added, but what you wrote with words.</em>[^1]</p>
<h3>To learn more</h3>
<p><em>Disclaimer - a lot of sources came from Wired magazine</em></p>
<ul>
<li><a href="https://www.wired.com/story/guide-emoji/">Emoji guide</a></li>
<li><a href="https://www.bbc.com/future/article/20181211-why-emoji-mean-different-things-in-different-cultures">Emojis in different cultures</a></li>
<li><a href="https://www.wired.com/2016/04/the-science-of-emoji/">The science of emoji</a></li>
<li><a href="https://www.wired.com/story/newest-emoji-unicode-11/">Newest emoji unicode</a></li>
<li><a href="https://www.wired.com/2015/11/emoji-diversity-politics-culture/">Emoji and diversity</a></li>
<li><a href="https://www.wired.com/story/the-delicate-art-of-creating-new-emoji/">Creating new emoji</a></li>
</ul>
<p>[^1]: I heard this quote during some talk I've attended, I noted it down, but I can't remember the source. I wanted to finish with it, because I find it as an appropriate end of this post.</p>

			]]></content>
		</entry>
	
		
		<entry>
			<title>Is it everything about the remote these days? - The story of Git, part four</title>
			<link href="https://wonderingchimp.com/posts/is-it-everything-about-the-remote-these-days-the-story-of-git-part-four/"/>
			<updated>2022-03-19T00:00:00Z</updated>
			<id>https://wonderingchimp.com/posts/is-it-everything-about-the-remote-these-days-the-story-of-git-part-four/</id>
			<content type="html"><![CDATA[
<p>It's been two weeks since my last post - not a long time, I guess... I'll try to maintain this rhythm - one post every Saturday around noon, and after several weeks of that, I'll take a two-week break, just to gather some thoughts and ideas. I hope it doesn't mess up your schedule, but I'm still trying to navigate these <em>online creator waters</em>. ;)</p>
<p>You probably opened this blog post and thought - okay, let me see what he has to say about remote work and all that stuff, but how does Git fit into this discussion? Well, this article is not about remote work and how it may or may not affect us - it is about Git remotes, and how to configure and play with them.</p>
<p>This is the fourth part of my <a href="https://www.wonderingchimp.com/tag/git/">blog post series about Git</a>. In the first part we've covered git objects - blobs, trees and commits. In the second part we've talked about branches, tags and HEAD. Third part introduced the TBD - Trunk-Based Development branching strategy... Now, sit down, relax, take a cup of hot or cold beverage, whatever your preference, and dive with me into the world of git remotes.</p>
<p>Long story short - in Git, you were able to use remotes long before it was cool! ;)</p>
<p>Now to some &quot;serious stuff&quot;.</p>
<h2>What are Git remotes?</h2>
<p>Remotes, or remote repositories, are versions of your project that are hosted on the Internet, somewhere on the network, or even on your machine, just in a different location than your working copy.<a href="https://git-scm.com/book/en/v2/Git-Basics-Working-with-Remotes">^1</a> Why would you need to host your code somewhere? The main reason - collaboration. You can collaborate with others by using and/or creating remote repositories. Those repositories can be private or public - you can share them with specific people, or with the whole world. You decide that when you create your remote repository somewhere on the internet. Some of the platforms where you can easily create and store remote repositories include github.com, gitlab.com, bitbucket.org, etc. There is no best platform for hosting git repositories; all of them have their pros and cons. On some of my projects I prefer using gitlab.com, and on others, github.com is my go-to platform. It really depends on your use case. But that is not the point now. For the sake of this example, I'll use several repositories that I've created on github.com.</p>
<p>The main thing to remember about remote repositories is that there can be one or more remotes configured for your repository. Usually we end up working with one remote repository - the famous <code>origin</code>. But what if we have more than one - how do we manage those repositories?</p>
<h2>Working with remotes</h2>
<p>Firstly, to view remote repositories, you can run <code>git remote</code> or <code>git remote -v</code> to see the complete list. The output will be somewhat similar to the one below.</p>
<pre><code class="language-bash">$ git remote -v
origin  git@github.com:alternaivan/vigilant-couscous.git (fetch)
origin  git@github.com:alternaivan/vigilant-couscous.git (push)
</code></pre>
<p>In the example above, I'm using one that I've created on github.com for testing purposes. Now, what would it be like to add another remote? One use case for another remote: if you, for example, have forked some open-source project, you can add an upstream remote, which will point to the original repository, while your <code>origin</code> remote points to your fork. An example of adding another remote is below.</p>
<pre><code class="language-bash">$ git remote add upstream git@github.com:alternaivan/laughing-robot.git
$ git remote -v
origin  git@github.com:alternaivan/vigilant-couscous.git (fetch)
origin  git@github.com:alternaivan/vigilant-couscous.git (push)
upstream        git@github.com:alternaivan/laughing-robot.git (fetch)
upstream        git@github.com:alternaivan/laughing-robot.git (push)
</code></pre>
<p>As you can see in the output above, we now have a new remote called <code>upstream</code> with both <code>fetch</code> and <code>push</code> set to the url of the new repository. This second repository is also a testing one I've created for this post.</p>
<p>Let's say somebody made a change on <code>upstream</code> and we want to apply it to our copy of the repository and push it to our remote. In order to do that, we do the following.</p>
<pre><code class="language-bash">$ git checkout main

# Showing the logs
$ git lg
* e09d81a - (HEAD -&gt; main, upstream/main, origin/main) Adding upstream (2 hours ago) &lt;Test&gt;
* e7906f8 - my second commit (6 days ago) &lt;Test&gt;
* 6f4b9fd - my first commit (6 days ago) &lt;Test&gt;

# Fetching changes from the upstream
$ git fetch upstream 
remote: Enumerating objects: 5, done.
remote: Counting objects: 100% (5/5), done.
remote: Compressing objects: 100% (2/2), done.
remote: Total 3 (delta 0), reused 3 (delta 0), pack-reused 0
Unpacking objects: 100% (3/3), 296 bytes | 296.00 KiB/s, done.
From github.com:alternaivan/laughing-robot
   e09d81a..f19b14c  main       -&gt; upstream/main

# Showing the logs
$ git lg
* e09d81a - (HEAD -&gt; main, origin/main) Adding upstream (2 hours ago) &lt;Test&gt;
* e7906f8 - my second commit (6 days ago) &lt;Test&gt;
* 6f4b9fd - my first commit (6 days ago) &lt;Test&gt;

# Merging from upstream/main to main
$ git merge upstream/main
Updating e09d81a..f19b14c
Fast-forward
 upstream.md | 2 ++
 1 file changed, 2 insertions(+)

# Showing the logs
$ git lg
* f19b14c - (HEAD -&gt; main, upstream/main) Adding to upstream repo (2 minutes ago) &lt;Test&gt;
* e09d81a - (origin/main) Adding upstream (2 hours ago) &lt;Test&gt;
* e7906f8 - my second commit (6 days ago) &lt;Test&gt;
* 6f4b9fd - my first commit (6 days ago) &lt;Test&gt; 

# Pushing to our (origin) remote repository
$ git push
Enumerating objects: 5, done.
Counting objects: 100% (5/5), done.
Delta compression using up to 8 threads
Compressing objects: 100% (2/2), done.
Writing objects: 100% (3/3), 316 bytes | 316.00 KiB/s, done.
Total 3 (delta 0), reused 0 (delta 0), pack-reused 0
To github.com:alternaivan/vigilant-couscous.git
   e09d81a..f19b14c  main -&gt; main

# Showing the logs
$ git lg
* f19b14c - (HEAD -&gt; main, upstream/main, origin/main) Adding to upstream repo (3 minutes ago) &lt;Test&gt;
* e09d81a - Adding upstream (2 hours ago) &lt;Test&gt;
* e7906f8 - my second commit (6 days ago) &lt;Test&gt;
* 6f4b9fd - my first commit (6 days ago) &lt;Test&gt;
</code></pre>
<p>Now, to explain the output of the commands above. First, we check the logs before getting changes from upstream. In the output of the <code>git lg</code> command we can see that our <code>HEAD/main</code>, <code>upstream/main</code> and <code>origin/main</code> all point to the same commit.</p>
<p>Why <code>git lg</code> and not <code>git log</code>? The first one is an alias which pretty-prints the logs with the info you'll need, while the second is the native Git command. The alias looks like this (since it's set with <code>--global</code>, you can run it from anywhere): <code>git config --global alias.lg &quot;log --color --graph --pretty=format:'%Cred%h%Creset -%C(yellow)%d%Creset %s %Cgreen(%cr) %C(bold blue)&lt;%an&gt;%Creset' --abbrev-commit --&quot;</code>.</p>
<p>Second, in order to see what has changed on the remote <code>upstream</code> repo, we need to get the changes from there. We can do that in two ways - <code>fetch</code> or <code>pull</code>. The first is the &quot;safer&quot; one - it will only fetch the latest changes from the remote repository, while the second will fetch and then try to merge those changes. <em>Why is <code>fetch</code> the safer one?</em> Because <code>pull</code> will always try to merge changes, and if it fails, git will scream at you. I always go for the <code>fetch</code> option, even though when I started tinkering with git I almost always used <code>pull</code>, and on some occasions I even got headaches from git &quot;screaming&quot; at me that it's not able to merge changes in my current working directory... but more on that in some other post.</p>
<p>After that we show the logs again, and you can see that <code>upstream/main</code> is no longer pointing to any commit we have in the logs. Why? Because it was updated and moved forward, so we don't see it in our git log.</p>
<p>The fourth step shows the merging of <code>upstream/main</code> into the <code>main</code> branch. After that we show the logs again and can see how everything except <code>origin/main</code> got updated. The last step is to push our changes to the <code>origin/main</code> branch, as seen above. With that, we finish updating our remote repo with the changes from another remote.</p>
<h2>Additional actions on Git remotes</h2>
<p>Below, you can find some of the additional actions on remote repositories you can perform:</p>
<ol>
<li>show information about the remote</li>
</ol>
<pre><code class="language-bash">$ git remote show upstream
* remote upstream
  Fetch URL: git@github.com:alternaivan/laughing-robot.git
  Push  URL: git@github.com:alternaivan/laughing-robot.git
  HEAD branch: main
  Remote branch:
    main tracked
  Local ref configured for 'git push':
    main pushes to main (up to date)
</code></pre>
<ol start="2">
<li>renaming the remote</li>
</ol>
<pre><code class="language-bash">$ git remote rename upstream up-stream

$ git remote -v
origin  git@github.com:alternaivan/vigilant-couscous.git (fetch)
origin  git@github.com:alternaivan/vigilant-couscous.git (push)
up-stream       git@github.com:alternaivan/laughing-robot.git (fetch)
up-stream       git@github.com:alternaivan/laughing-robot.git (push)
</code></pre>
<ol start="3">
<li>removing the remote</li>
</ol>
<pre><code class="language-bash">$ git remote remove up-stream

$ git remote -v
origin  git@github.com:alternaivan/vigilant-couscous.git (fetch)
origin  git@github.com:alternaivan/vigilant-couscous.git (push)
</code></pre>
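<p>One more action worth knowing: when a repository moves, or when you want to switch between SSH and HTTPS, you can repoint an existing remote with <code>set-url</code> instead of removing and re-adding it. A small sketch, using a throwaway repository and the same test remote as above:</p>

```shell
set -e
repo=$(mktemp -d) && cd "$repo"
git init -q

git remote add upstream git@github.com:alternaivan/laughing-robot.git
# switch the same remote from SSH to HTTPS in place
git remote set-url upstream https://github.com/alternaivan/laughing-robot.git
git remote get-url upstream
```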
<h2>Conclusion</h2>
<p>In order to share your work, or save it somewhere other than your local machine, use git remotes. If you want to contribute to some open-source project, follow their guidelines; usually that requires you to fork the repo, add the main repo as an upstream in your remotes, and interact with it in the way described above. The choice is yours (or theirs, in this sense).</p>
<p>This was all for now. I hope you found this, or some other Git-related or unrelated blog post of mine, useful. Feel free to share the content I'm creating and also to subscribe, if you haven't already - it's free, and it would mean a lot to know that somebody is interested in these scribblings.</p>

			]]></content>
		</entry>
	
		
		<entry>
			<title>One Branch to Rule them all! - The story of Git, part three.</title>
			<link href="https://wonderingchimp.com/posts/one-branch-to-rule-them-all-the-story-of-git-part-three/"/>
			<updated>2022-03-05T00:00:00Z</updated>
			<id>https://wonderingchimp.com/posts/one-branch-to-rule-them-all-the-story-of-git-part-three/</id>
			<content type="html"><![CDATA[
<p>When I was young, one of the things that amazed me the most was looking at the sky through tree branches. Following the sun, seeing the leaves mixing with the blue sky... That scene always left me speechless for a while. Even today, when going hiking, I end up staring at those magnificent treetops with all their branches spread like webs...</p>
<p>The first thought I have when I hear anything about branches is the one above. It is a really nice one, I would say. I just made sure to connect those thoughts when I was first learning about Git and its branches. However, the first version control system that I used was SVN, and my thoughts on branching there were a bit frustrating. Luckily, I soon switched projects and started learning Git, so the branching nightmare was over. Or did it just start?</p>
<p>Jokes aside, this story is going to be about branching strategies in Git, actually, one in particular - Trunk-Based Development. I'll also mention other strategies or flows as you may know them, but I will not go deep into each and every one.</p>
<p>So what actually is a branching strategy? The answer is rather simple - it is a way to organize branches within your version control system. Simple as that. Some of the common, newer, &quot;chic&quot; branching strategies include Git flow, GitHub flow, GitLab flow... As you can see, the new way of naming branching strategies is <em>flow</em>. Okay, no problem, but what about Trunk-Based Development? Well, first of all, it's not a new thing. It has been around for more than 30 years, and the reason it is not so popular and &quot;chic&quot;, in my opinion, is - well, it works! And here's how...</p>
<p>First off - what is Trunk-Based Development? It is a source-control branching model where developers collaborate on code in a single branch called <em>trunk</em> and resist any pressure to create other long-lived development branches by employing documented techniques. They therefore avoid merge hell, do not break the build, and live happily ever after.<a href="https://trunkbaseddevelopment.com/#one-line-summary">^1</a></p>
<p>There are two ways of using Trunk-Based Development strategy:</p>
<ol>
<li>If you are a smaller team - each committer (preferably pair programming duo) should stream small commits straight into the trunk with a check step that runs build before integrating with trunk.</li>
<li>If you are a bigger team - each committer (one person) creates a short-lived topic or feature branch (<strong>alive for a maximum of a couple of days</strong>) and goes through a Pull-Request style of code review &amp; build automation before merging changes into the trunk.</li>
</ol>
<p>With this strategy you satisfy the core requirement of Continuous Integration - all team members commit to trunk at least once every 24 hours. This setup also ensures that Continuous Delivery becomes a reality, with a codebase that is releasable on demand.</p>
<p>When does a team stop being small and become a bigger one? It depends on the number of people and the number of commits, and it is subject to debate. However, the one thing that needs to happen is a pre-integration build before committing/pushing for others to see. The pre-integration build can have stages like compile, unit tests, and integration tests. Ideally, those steps should be run on the developer's workstation.</p>
<p>There are several things to have in mind when considering Trunk-Based Development:</p>
<ul>
<li>Feature branches need to be short-lived, small, and used only for code review and CI. There shouldn't be any artifacts created or published from them; all artifact creation and publication needs to happen after integrating to trunk. In the case of smaller teams, team members can commit directly to trunk.</li>
<li>Depending on when you choose to release, you may opt for creating release branches from trunk just before the release; they should be deleted after some time. Creating a release branch from trunk shouldn't be a team activity. A different strategy is to release from trunk and opt for fix-forward in case of bug fixes.</li>
<li>If there is some change that would take longer to complete, use the <a href="https://www.branchbyabstraction.com/">branch by abstraction technique</a> and feature flags, in order to allow releases to be independent from one another.</li>
<li>In the case of smaller teams working directly on trunk, it is important to have a hook on the build server which ensures that their commits do not break the trunk. And in case you are using short-lived feature branches, there should also be a hook to ensure that the merge back to trunk will not break it.</li>
<li>Development teams can grow and shrink without any impact on the quality of the code. Trunk-based development is one of a set of capabilities that drive higher software delivery and organizational performance. These capabilities were discovered by the DORA State of DevOps research program, an independent, academically rigorous investigation into the practices and capabilities that drive high performance.<a href="https://cloud.google.com/architecture/devops/devops-tech-trunk-based-development" title="page 27">^2</a> <a href="https://services.google.com/fh/files/misc/state-of-devops-2021.pdf">^3</a></li>
</ul>
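<p>The short-lived-branch flow above can be sketched with plain git commands. This runs in a throwaway repository; the branch and file names are made up, <code>git init -b</code> needs Git 2.28+, and <code>git switch</code> needs Git 2.23+:</p>

```shell
# Set up a scratch repo to demonstrate (illustrative names throughout).
repo=$(mktemp -d) && cd "$repo"
git init -q -b main .
git config user.email dev@example.com
git config user.name Dev
echo "v1" > app.txt && git add app.txt && git commit -qm "initial commit"

# The short-lived branch: create, commit, merge back within hours, delete.
git switch -q -c quick-fix
echo "v2" > app.txt && git commit -qam "small change"
git switch -q main
git merge -q --ff-only quick-fix   # fast-forward keeps trunk history linear
git branch -d quick-fix            # the branch lives only minutes or hours
```

<p>The <code>--ff-only</code> flag is a deliberate choice here: it refuses the merge unless trunk can simply fast-forward, which keeps the history linear and forces you to rebase first if trunk has moved.</p>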
<p>What is its relation to other strategies, or flows? Some may say that the Trunk-Based Development strategy is similar to GitHub flow - they share the same idea. However, feature branches in GitHub flow are not as short-lived as in Trunk-Based Development, pull requests are actually tested in an environment before being merged to the main branch<a href="https://docs.microsoft.com/en-us/devops/develop/how-microsoft-develops-devops">^4</a>, and GitHub flow is more suitable for open-source projects, where contributors are not working on them full time.<a href="https://jan.schnasse.org/blog/2021/04/trunk-based-development/">^5</a></p>
<p>And when it comes to Git flow, well, the difference is fairly obvious - Trunk-Based Development is the simpler one, and simpler is the way we want to go.</p>
<p>To summarize (this was a short one), I hope the things I mentioned in this blog post will shift your mind towards the Trunk-Based Development strategy. If not, well, that is not a problem - you should use the strategy which suits you best.</p>

			]]></content>
		</entry>
	
		
		<entry>
			<title>Is there Life in the Pipelines?</title>
			<link href="https://wonderingchimp.com/posts/is-there-life-in-the-pipelines/"/>
			<updated>2022-02-25T00:00:00Z</updated>
			<id>https://wonderingchimp.com/posts/is-there-life-in-the-pipelines/</id>
			<content type="html"><![CDATA[
				<blockquote>
<p>Under the streets of London there's a place most people could never even dream of. A city of monsters and saints, murderers and angels, knights in shiny armor and pale girls in black velvet. This is the city of the people who have fallen between the cracks.<a href="https://www.goodreads.com/book/show/14497.Neverwhere">^1</a></p>
</blockquote>
<p>This is the short description of the book &quot;Neverwhere&quot; by Neil Gaiman. I read it a long time ago and I really liked it. I don't remember the book well, but I remember this short description, and it had an impact on me, both when I was starting the book and now, when I think of it. Neil Gaiman is one of my favorite writers and I think his imagination is really something, out of this world perhaps. The quote above will not be a preface to a review of his book, nope. It will sum up a story and a description of how I see the world of DevOps. I will use this quote to give DevOps another, slightly fantastical, note.</p>
<p>DevOps is the thing we can't actually see - &quot;Under the streets... a place most people could never even dream of...&quot; It is there, or at least it should be there, and it cannot be seen with the naked eye. Those things, although somewhat and sometimes invisible, are present and exist, and those things are - people, processes and tools.</p>
<p>And this is where we kind of part our ways from the quote at the beginning. I just wanted to emphasize the fact that DevOps is often not seen, not one thing, or just a role in the team, it is much more, a city full of various, interesting and sometimes breathtaking things, things that we're not aware of, at least most of the time.</p>
<p>So what exactly is DevOps? In essence, and forgive me if I repeat the same things as many resources online - it is a combination of development (Dev) and operations (Ops) - DevOps is the union of people, process, and technology to continually provide value to customers. And yes, the goal of this somewhat unseen world is to provide value to the people who have an interest in the product we are creating - the end users. <a href="https://azure.microsoft.com/en-us/overview/what-is-devops/#devops-overview">^2</a></p>
<p>How do we provide that value? First, by implementing processes which make sure that the flow is continuous - from defining the things to be built (requirements), through building those things (development), to making them available to the end users (operations). Next, by creating feedback loops from the end users or the application itself (right) back to the team (left) about changes that need to be made, improvements, bugs, etc. And last, but not least - by adopting a culture that empowers continual learning, experimentation and learning from failure.<a href="https://itrevolution.com/the-three-ways-principles-underpinning-devops/">^3</a></p>
<p>I admit, the above line is a mouthful. But let's see it in practice - let's see what a day in the life of a DevOps engineer (yes, I said it) looks like.</p>
<p>TL;DR - many things to take care of and to think about.</p>
<p>The longer version - I usually start my day with a cup (or a bottle) of tea, then quickly go through... Just kidding, I'm not going to write here about my daily routine, at least not yet...</p>
<p>If you didn't guess it by now - I work as a DevOps engineer, and here is an outline of sorts - my main task on projects is to make the developed applications available to the end user. How do I do that? By setting up the processes and pipelines for building, testing, and deploying the application(s) to the adequate environment - this is the <em>processes</em> part. Those processes are automated with the appropriate tools; depending on the process, we use different tools - and you guessed it, this is the <em>tools</em> part. Do I do this alone? No - the whole team is involved in creating and automating certain part(s) of the process(es), experimenting with different ones, learning from failed ones, and keeping the chosen ones up to date - and this is the part of DevOps that is related to <em>people and continuous experimentation and learning</em>.</p>
<p>Some, I specifically say some, of the processes and practices include Continuous Integration, Continuous Delivery or Continuous Deployment. Usually, we (the DevOps engineers), in collaboration with developers, set up the CI pipelines, and (later) CD pipelines, and make sure that the infrastructure for different application stages (development, test, production) is available, that the stages are similar if not identical to one another, and that everything can easily be re-deployed if something goes south. And what if something goes south? One thing to remember is that this will definitely happen at some point, but that shouldn't be a problem, because that is how we learn.</p>
<p>Now the tools part. This is the part that is the most confusing and often thought of as the only part of DevOps. Well, it is <em>a</em> part, not the only one for sure. We have a plethora of different tools for each part of the process, and yes - there is no <em>one tool to rule them all</em> - meaning it is not often that people use only one tool to accomplish everything (yes, I'm looking at you, bash). The thing is to use these tools in the combination which makes the most sense and is the simplest one for you. Usually, the simplest solution is the best one.</p>
<p>There is also the somewhat newer concept of DevOps platforms... What is the deal with them? They are there to make our lives easier (or at least that is their selling point) by providing one location where we can easily configure all of our project needs. Don't get me wrong, I don't have anything against them - they are quite useful if you know what you are doing and what your needs are. I'm quite fond of some of them, to be honest. The thing to remember here is - there is no best platform or combination of tools, there are only those that are the most suitable for you and your project.</p>
<p>To wrap this up, there isn't only one way of doing almost anything in the world, not even DevOps. You just need to think of it as a combination of people, processes and tools and treat it in that sense - not only tools, or only processes. The whole combination of all three needs to be present. I like the whole DevOps culture, the state of mind, for what it represents - tearing down the barriers between different roles. I like living in these <em>pipelines</em>, unseen by the naked eye. To go a bit further, you can even think of yourself as some kind of a ninja turtle...</p>

			]]></content>
		</entry>
	
		
		<entry>
			<title>Training for climbing - to be coached or not?</title>
			<link href="https://wonderingchimp.com/posts/training-for-climbing-to-be-coached-or-not/"/>
			<updated>2022-02-18T00:00:00Z</updated>
			<id>https://wonderingchimp.com/posts/training-for-climbing-to-be-coached-or-not/</id>
			<content type="html"><![CDATA[
				<h2>Climbing newbie</h2>
<p>First things first - if you recently started climbing, I would always recommend having a good coach. In my opinion, it will have a massive impact on your abilities, especially because you are new to the sport and don't know what can be beneficial or not. Here are some benefits of having a good coach when you are starting: well-structured training, concentration on all aspects of climbing (strength, technique, endurance, power endurance), balanced training, starting small, learning a lot...</p>
<p>How can you tell if somebody is a good coach or not? First, you can always ask around the gym, especially the people in your training session, about the coach. Usually, new climbers have group climbing sessions, and people who really like the coach and his or her work will stay with them longer. Next, watch whether the coach has time for each and every person in the training. Holding group training is tough, and if there are too many people and only one coach, it can lead to you or others being neglected. Last but not least, consider the (climbing) experience of the coach.</p>
<h2>Climbing vet</h2>
<p>Okay, but what if you're not that new to climbing - should you have a coach or plan the training yourself? That depends on various things - the time you have, what your goals are, your current achievements and performance, whether there is improvement from your current effort, and so on.</p>
<p>If you don't have the necessary time, but you need structured and well-organized training, yes, definitely find a coach. On the other hand, if you want to explore more on your own and you have time to spare, that is also great - it will help you learn a massive amount of stuff regarding exercises. The list of resources can be a bit overwhelming though. There are numerous books and materials I have looked at, and they were helpful in one way or another. Below you can find some of them.</p>
<ul>
<li>Dave MacLeod's <a href="https://www.goodreads.com/book/show/7489836-9-out-of-10-climbers-make-the-same-mistakes?ac=1&amp;from_search=true&amp;qid=JtX7CMCCH6&amp;rank=1">9 out of 10 climbers</a>, a book containing useful bits of advice on training focus, technique, and fear management.</li>
<li>Eric Hörst's <a href="https://www.goodreads.com/book/show/627297.Training_for_Climbing">Training for Climbing</a> and <a href="https://www.goodreads.com/book/show/7904970-maximum-climbing?ac=1&amp;from_search=true&amp;qid=VQGv8YKNEH&amp;rank=5">Maximum Climbing</a>, books providing a vast amount of different exercises, both on and off the wall, these were really useful when I started reading climbing training books.</li>
<li><a href="https://www.goodreads.com/book/show/501200.The_Rock_Warrior_s_Way">The Rock Warrior's Way</a> by Arno Ilgner - this book is really an instruction manual on how to manage fear in climbing, and it is one of my favorites.</li>
<li>Steve Bechtel's <a href="https://www.goodreads.com/book/show/34587110-logical-progression?ac=1&amp;from_search=true&amp;qid=8IwwOrkl4m&amp;rank=1">Logical Progression</a>, a book about non-linear type of training, really interesting read.</li>
<li><a href="https://latticetraining.com/blog/">Blog from Lattice</a>, useful tips and tricks, training examples, interviews, and so on. They also have a really good <a href="https://www.youtube.com/c/LatticeTraining">YouTube channel</a>.</li>
</ul>
<h2>The Important Stuff</h2>
<p>Some of the things to have in mind when thinking about climbing training, both when hiring a coach or choosing to create your own training plan, are:</p>
<ul>
<li>Have a goal in mind. Good <a href="https://www.youtube.com/watch?v=Fz-Xs8c4PgE">goal setting will do you wonders</a>.</li>
<li>Listen to your body. If you plan to train 5 or 6 days a week, that probably won't work in the long run.</li>
<li>Stick to your plan as much as you can. Whether you created it yourself or had someone else create it for you, you need to make sure that you follow it through.</li>
<li>Measure before, during, and after the training cycle. Usually, a cycle lasts 4, 8, 12, or 16 weeks. The best way to see if the plan is working is to test yourself every 4 weeks or so. You can test different grip positions or the number of kilos you can add to push-ups, for example, or determine your on-sight or maximum route grade before starting the training and measure every four weeks to see whether there is some improvement. The possibilities are endless.</li>
<li>Keep away from new exercises. When you are on a training plan, don't experiment with other exercises you see others do just because they are new and you think they might help you. Be patient.</li>
<li>Prepare yourself for failure. Climbing is 95% failing to do something, make sure to learn as much as you can from that.</li>
</ul>
<p>The best thing about climbing training, and climbing in general, is that it's highly individual - something may work for you, but that doesn't mean it will work for others, and vice versa. And because of this, we should be drawn to it more and more.</p>

			]]></content>
		</entry>
	
		
		<entry>
			<title>Should I worry if my head got detached? - The story of Git, part two</title>
			<link href="https://wonderingchimp.com/posts/should-i-worry-if-my-head-got-detached-the-story-of-git-part-two/"/>
			<updated>2022-02-18T00:00:00Z</updated>
			<id>https://wonderingchimp.com/posts/should-i-worry-if-my-head-got-detached-the-story-of-git-part-two/</id>
			<content type="html"><![CDATA[
<p>If your head, god forbid, got detached, you wouldn't have anything left to worry with, right? Well, in Git, this is a very different thing, and it is not a problem, even though git &quot;screams&quot; at you with a bunch of warnings. I will show you what this means, how to fix it, and a bit more about it.</p>
<p>This is the second article in the series about Git, and in it we will cover the <em>HEAD, branches, and tags</em>. If you want to start from the beginning, check out <a href="https://www.wonderingchimp.com/posts/am-i-fooling-around-or-the-story-of-git-part-one/">the first part of the story</a> where we go through some of the git internals.</p>
<p>Unlike some warm-up videos, we will not take the top-down (head-to-toes) approach; we will start with branches first, then continue with tags, and in the end, last but not least - the HEAD.</p>
<p>What is a <strong>branch</strong>? To put it simply - it is a named reference to a certain commit. And that is just it. The data about branches is stored in the <code>.git/refs/heads</code> directory. This directory contains a file named after the branch, and the content of that file is the hash of the commit it points to. It will look something like the snippet below:</p>
<pre><code class="language-shell">$ ls .git/refs/heads/
master

$ cat .git/refs/heads/master 
e7906f8376b35321d19306260cdbf38f3b071ae8

$ git cat-file -p e7906f8376b35321d19306260cdbf38f3b071ae8
tree fb4e6dfcd4e11cdf2391b817d1e0a463e9dbe2dd
parent 6f4b9fdb4169e8f906f58670fd73a8801a5c0473
author Test &lt;test@example.com&gt; 1644596933 +0000
committer Test &lt;test@example.com&gt; 1644596933 +0000

my second commit
</code></pre>
<p>That is why creating branches in git is cheap - cheap in the sense of the resources used to create one: just a text file with a commit hash in it, not a snapshot of the whole working directory, for example. You can see an example below.</p>
<pre><code class="language-shell">$ cat .git/refs/heads/master 
e7906f8376b35321d19306260cdbf38f3b071ae8

$ git branch
* master

$ git checkout -b new-branch
Switched to a new branch 'new-branch'

$ cat .git/refs/heads/new-branch 
e7906f8376b35321d19306260cdbf38f3b071ae8
</code></pre>
<p>What happens when we add a commit to the <code>new-branch</code>? Let's see.</p>
<pre><code class="language-shell">$ cat .git/refs/heads/new-branch 
e7906f8376b35321d19306260cdbf38f3b071ae8

$ echo &quot;Hello from test!&quot; &gt;&gt; test.md
 
$ cat test.md 
Hello from test!

$ git add test.md
$ git commit -m &quot;commit on new-branch&quot;
[new-branch 6dd7fdc] commit on new-branch
 1 file changed, 1 insertion(+)
 create mode 100644 test.md

$ cat .git/refs/heads/new-branch 
6dd7fdc78ebdf7a3d4447cc19709a1d1a8805155
</code></pre>
<p>From the output above we can see that the <code>.git/refs/heads/new-branch</code> file was updated with the new commit hash. That means that <code>new-branch</code> has now diverged from <code>master</code> by one commit.</p>
<p>Deleting a branch is also cheap in git; however, if we tried to delete <code>new-branch</code>, git would raise a warning and not let us. We can always force it and ignore the warning, but <em>why will it not let us in the first place?</em></p>
<p>Because when deleting the branch <code>new-branch</code>, the commit with hash <code>6dd7fdc78ebdf7a3d4447cc19709a1d1a8805155</code> would become orphaned. Wait, what is an orphaned commit now? To put it simply - it is a commit that has no reference pointing to it. And what happens to that orphaned commit if we do end up deleting the branch? Nothing, at first at least. It remains an orphan for around 30 days, and is then picked up by git garbage collection, deleted, and lost forever. Having orphaned commits is not a problem per se, but they can be deleted at some point, and without a reference they are not really easy to track, especially because the hash value is not very readable. There is a way to track them, but we will write about that later.</p>
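<p>You can see this for yourself in a throwaway repository. This is only a sketch - the hashes on your machine will differ from the ones in this post:</p>

```shell
# Scratch repo with one unmerged commit on a branch.
repo=$(mktemp -d) && cd "$repo" && git init -q .
git config user.email test@example.com
git config user.name Test
git commit -q --allow-empty -m "base commit"

git checkout -q -b new-branch
git commit -q --allow-empty -m "unmerged commit"
git checkout -q -                  # back to the original branch

# Plain -d refuses, because the commit above would become orphaned:
git branch -d new-branch || echo "git refused, as expected"
# Capital -D forces the deletion anyway:
git branch -D new-branch
```

<p>After the forced deletion, the &quot;unmerged commit&quot; object still sits in <code>.git/objects</code> - it is just orphaned, waiting for garbage collection.</p>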
<p>Hopefully, from my rambling above, you've understood how branches work. So, let us now continue to <strong>tags</strong> and what they represent.</p>
<p>Similar to branches, <strong>tags</strong> also point to a specific commit; however, unlike branches, tags do not move! That means that when we switch to a new branch, we can commit to it, and the branch will move to that commit - and that happens with each commit. However, if we tag a commit with a certain value, e.g. <code>v1.0.0</code>, the tag will not move from that commit. You could say that <em>tags are somewhat like immutable branches</em>. They are saved in the <code>.git/refs/tags</code> directory. Here is an excerpt from the <code>git log</code> command showing how tags are represented there.</p>
<pre><code class="language-git">$ git log
commit 27cbac466f04f6d7bde9e2ac61e1da6f03861f45 (HEAD -&gt; new-branch)
Author: Test &lt;test@example.com&gt;
Date:   Mon Feb 14 11:03:23 2022 +0000

    Another commit to a branch

commit 6dd7fdc78ebdf7a3d4447cc19709a1d1a8805155 (tag: v1.0.0)
Author: Test &lt;test@example.com&gt;
Date:   Mon Feb 14 10:47:26 2022 +0000

    commit on new-branch

commit e7906f8376b35321d19306260cdbf38f3b071ae8 (master)
Author: Test &lt;test@example.com&gt;
Date:   Fri Feb 11 16:28:53 2022 +0000

    my second commit

commit 6f4b9fdb4169e8f906f58670fd73a8801a5c0473
Author: Test &lt;test@example.com&gt;
Date:   Fri Feb 11 16:27:26 2022 +0000

    my first commit
</code></pre>
<p>You can see how the branch moved with the new commit, while the tag <code>v1.0.0</code> stayed on the same commit. That's why tags are great for labeling a version, for example. There are also some best practices for manipulating tags - you should never name a tag with the same name as a branch, and you should not move or delete a tag once you have pushed it to the remote, as it can mess up your repository.</p>
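<p>A quick way to convince yourself that tags don't move is to watch the ref file directly. This sketch uses a scratch repo and a lightweight tag, which is just a ref file (annotated tags add an extra tag object):</p>

```shell
# Scratch repo with one commit, then a lightweight tag on it.
repo=$(mktemp -d) && cd "$repo" && git init -q .
git config user.email test@example.com
git config user.name Test
git commit -q --allow-empty -m "first commit"

git tag v1.0.0                  # lightweight tag: just a file with a hash
cat .git/refs/tags/v1.0.0       # the hash of "first commit"

git commit -q --allow-empty -m "second commit"
cat .git/refs/tags/v1.0.0       # unchanged - the tag did not move
```

<p>Both <code>cat</code> commands print the same hash, even though the branch has moved on to the second commit.</p>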
<p>And now let's talk about <strong>HEAD</strong> and how to lose it in git!</p>
<p><strong>HEAD</strong> is also a pointer of sorts - it points to the currently checked-out branch or commit. It is sort of like our compass. When we are on a branch, the <code>.git/HEAD</code> file contains a reference to that branch, for example <code>ref: refs/heads/new-branch</code>. However, if we check out a specific commit (or a tag) rather than a branch, that file gets updated to contain the hash of the commit we are on. And that is when we get the famous <code>You are in 'detached HEAD' state.</code> message - or maybe a scream from git, making you go - go back, go back, go back! Undo, undo, undo!</p>
<p>I remember the first time I saw this, years ago - I thought I had messed up everything! I was really stressed, because I linked a detached head to something bad (sorry for being a living organism with a head). Then I quickly returned to the main branch, or maybe I even cloned the repo into a new directory, as we have all done at some point, and saw that nothing was messed up. Then a quick web search told me that this is nothing to worry about, and I'm now here to let you know that it really isn't something you should worry about.</p>
<p>The reason git &quot;screams&quot; at you is that anything you change and commit in this state will end up orphaned once you switch away. So git advises you, if you do end up adding some commits, to move them to a different branch. The well-known warning looks somewhat like the snippet below.</p>
<pre><code class="language-shell">$ git checkout 6f4b9fdb4169e8f906f58670fd73a8801a5c0473
Note: switching to '6f4b9fdb4169e8f906f58670fd73a8801a5c0473'.

You are in 'detached HEAD' state. You can look around, make experimental
changes and commit them, and you can discard any commits you make in this
state without impacting any branches by switching back to a branch.

If you want to create a new branch to retain commits you create, you may
do so (now or later) by using -c with the switch command. Example:

  git switch -c &lt;new-branch-name&gt;

Or undo this operation with:

  git switch -

Turn off this advice by setting config variable advice.detachedHead to false

HEAD is now at 6f4b9fd my first commit
</code></pre>
<p>So what should you do if you end up in a <code>detached HEAD</code> state? First - don't panic. Second - either return to the branch you were on, or, if you want to keep your changes, just read the git warning slowly; even though it might feel terrible, it really is not. Git shows you how to &quot;fix&quot; things. And that's it!</p>
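<p>Rescuing an experimental commit made in a detached HEAD state can look like this sketch (run in a scratch repo; the branch name is made up, and <code>git switch</code> needs Git 2.23+):</p>

```shell
# Scratch repo with two commits, then a detour into detached HEAD.
repo=$(mktemp -d) && cd "$repo" && git init -q .
git config user.email test@example.com
git config user.name Test
git commit -q --allow-empty -m "first commit"
git commit -q --allow-empty -m "second commit"

git checkout -q HEAD~1                 # now in detached HEAD state
git commit -q --allow-empty -m "experimental commit"
git switch -c rescue-branch            # the commit now has a branch again
```

<p>After the last command, the experimental commit is reachable from <code>rescue-branch</code>, so it will never be garbage-collected.</p>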
<p>Before I finish, there is one last thing I want to mention, or rather to answer (in case you were wondering) - what to do in case you've deleted a branch with some unmerged commits, or if you've committed something while in a <code>detached HEAD</code> state and lost track of those commits? <strong>reflog</strong> to the rescue!</p>
<p>You can look at the <strong>reflog</strong> like the blue dot on a GPS - it shows you where you are and how you got there. It is a log which is updated every time the HEAD moves. The log is structured last in, first out - entries go from the newest to the oldest. You can go to the reflog whenever you think you've messed something up, want to recover an orphaned commit (if it hasn't been deleted yet), or recover a deleted branch. Every clone of the repository has its own reflog, so you can trace your steps from the moment you cloned the repo up until the present. Pretty cool, isn't it?! Here is how the reflog output looks.</p>
<pre><code class="language-shell">$ git reflog
e7906f8 (HEAD -&gt; master) HEAD@{0}: checkout: moving from 27cbac466f04f6d7bde9e2ac61e1da6f03861f45 to master
27cbac4 HEAD@{1}: checkout: moving from master to 27cbac4
e7906f8 (HEAD -&gt; master) HEAD@{2}: checkout: moving from new-branch to master
27cbac4 HEAD@{3}: checkout: moving from 6f4b9fdb4169e8f906f58670fd73a8801a5c0473 to new-branch
6f4b9fd HEAD@{4}: checkout: moving from new-branch to 6f4b9fdb4169e8f906f58670fd73a8801a5c0473
27cbac4 HEAD@{5}: commit: Another commit to a branch
6dd7fdc (tag: v1.0.0) HEAD@{6}: commit: commit on new-branch
e7906f8 (HEAD -&gt; master) HEAD@{7}: checkout: moving from master to new-branch
e7906f8 (HEAD -&gt; master) HEAD@{8}: commit: my second commit
6f4b9fd HEAD@{9}: commit (initial): my first commit
</code></pre>
<p>As you can see, it shows my movements in the repository. A really useful option!</p>
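<p>Here is a sketch of recovering a deleted branch. In practice you would read the lost hash straight from the <code>git reflog</code> output; the variable below just saves us the copy-paste:</p>

```shell
# Scratch repo: create a branch, commit to it, delete it, recover it.
repo=$(mktemp -d) && cd "$repo" && git init -q .
git config user.email test@example.com
git config user.name Test
git commit -q --allow-empty -m "base"

git checkout -q -b doomed
git commit -q --allow-empty -m "precious work"
lost=$(git rev-parse HEAD)     # normally you'd find this in the reflog
git checkout -q -
git branch -D doomed           # oops - the branch is gone

git reflog | head -n 3         # "precious work" is still listed here
git branch recovered "$lost"   # re-attach a branch name to the commit
```

<p>The orphaned commit survives the branch deletion, so pointing a new branch at its hash brings it fully back.</p>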
<p>And we have finally reached the end of the blog post. At last - some of you might think. Well, thank you for sticking with me until the end of this post, and I'm looking forward to seeing you in the next one!</p>

			]]></content>
		</entry>
	
		
		<entry>
			<title>Am I fooling around? - Or the story of Git, part one</title>
			<link href="https://wonderingchimp.com/posts/am-i-fooling-around-or-the-story-of-git-part-one/"/>
			<updated>2022-02-11T00:00:00Z</updated>
			<id>https://wonderingchimp.com/posts/am-i-fooling-around-or-the-story-of-git-part-one/</id>
			<content type="html"><![CDATA[
<p>Well, this is not going to be a story from your childhood, although it has everything a childhood story would have, or some of the things at least. And if you were the sort of child who learned version control systems while growing up (not judging), well, I hope you like how I wrote up my understanding of it.</p>
<p>This is the story - or the series of stories - about some internals of Git, e.g. how Git's data structure works, and some of the things I found interesting to describe and explore. The motivation for this came from a git internals workshop I recently took, and from the curiosity to know git better and explain it better, to myself and to everyone who might want to read.</p>
<p>Topics that we're going to cover in the first part are <em>blobs, trees and commits</em> - what they are, how they interact with each other, what their content is, etc. I'm going to assume that you know the basics of git - how to initialize or clone a repo, create a branch, add files to staging, commit those files, push to a remote repository, etc. For the basic stuff, check out <a href="https://git-scm.com/book/en/v2/Git-Basics-Getting-a-Git-Repository">the basics of git</a> in the docs.</p>
<p>Let us start with <strong>blobs</strong>.</p>
<p>What do you think when you hear blob? To me, this always sounded like a drop of some thick fluid, and, well, the Merriam-Webster dictionary defines this in kind of a similar sense - a small drop or lump of something viscid or thick.<a href="https://www.merriam-webster.com/dictionary/blob">^1</a></p>
<p>In the case of Git, it is a bit different. A <strong>blob</strong> is a git object type which stores the content of the files you are staging (adding) in a git repository. The important thing is that this content is hashed with the SHA-1 algorithm and saved in the <code>.git/objects</code> directory under a two-character directory, something similar to the below:</p>
<pre><code class="language-shell">.git/objects/
├── 73
│   └── 709ba6866a30a566a38ca40aa81d5f0928bce0
</code></pre>
<p>What are all those characters? Well, those make up the hashed value of the content. The directory name is the first two characters of the hash, and the name of the file is the rest of the hash - but more on that later. If you look up this hash value, it will show you something like this:</p>
<pre><code class="language-shell">$ git cat-file -p 73709ba6866a30a566a38ca40aa81d5f0928bce0
Testing
</code></pre>
<p>So, that is a blob - an object which represents the content of a file stored in git. What would happen if we edited this file, added another line, and staged (added) it to git? Well, git would create another blob with the new content - not the difference between the old and new file. It's all about the content! This way of storing files is called a <strong>content-addressable filesystem</strong>, or CAF. The content itself dictates the name under which it will be stored. This also means that if you create another file with the same content and stage it in git, no new blob will be created under the <code>.git/objects</code> directory. Why? Because with blobs, it is all about the content!</p>
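<p>You can verify the &quot;same content, same blob&quot; behaviour yourself with <code>git hash-object</code>, which prints the blob hash for a given file. A sketch in a scratch repo (file names are made up):</p>

```shell
# Two different files with identical content produce ONE blob.
repo=$(mktemp -d) && cd "$repo" && git init -q .
echo "Testing" > a.md
echo "Testing" > b.md            # a second file with identical content

git add a.md b.md
find .git/objects -type f        # only one loose blob object exists

h1=$(git hash-object a.md)
h2=$(git hash-object b.md)
echo "$h1"                       # the same hash is printed for both files
```

<p>Both files hash to the same object name, so <code>.git/objects</code> contains a single blob even though the working directory has two files.</p>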
<p>What happens next? How do we get the file names, permissions, location, etc.? This all happens when we <strong>commit</strong>. The important function triggered before the commit, however, is <strong>write-tree</strong>. What does this function do? It writes the current state of the staging area (the index) into another git object called a <strong>tree</strong>. This object is somewhat similar to UNIX directory entries - it contains one or more entries, each of which is the hashed value of a blob and/or a subtree (subdirectory) with its associated mode (permissions), type (blob or tree), and filename. It will look somewhat similar to this:</p>
<pre><code class="language-shell">$ git write-tree
8894cd99d735c5f89d8c1affbb744f074f47bf79

$ git cat-file -p 8894cd99d735c5f89d8c1affbb744f074f47bf79
100644 blob 73709ba6866a30a566a38ca40aa81d5f0928bce0    readme.md
040000 tree 3c92a605431c9538952ae053957ffd4a0ce6590f    temp
</code></pre>
<p>So we first have the mode (permissions), then the type, followed by the hashed value and the name of the file. The important thing to keep in mind is that a tree contains data about only one directory level. If we want to see what is under the <code>temp</code> directory, we can cat-file its hash:</p>
<pre><code class="language-shell">$ git cat-file -p 3c92a605431c9538952ae053957ffd4a0ce6590f
100644 blob 73709ba6866a30a566a38ca40aa81d5f0928bce0    tst
</code></pre>
<p>And those are the trees. A bit more complex than a regular tree with branches and fruit, you might think. Well, to tell you the truth, it took me a while to understand them, especially this one-level thing.</p>
<p>Okay, we can now move on safely to the commit part. What <strong>is</strong> a <strong>commit</strong>? It is a way of recording the state of the current directory, with all of the files and changes that you decided to store in git. What does it do? It creates a kind of snapshot of the working directory. This snapshot is yet another hashed object, stored in the <code>.git/objects</code> directory, and it holds information about why the snapshot was created (the commit message), who created it (author and committer), and when it was created. It will look somewhat similar to this:</p>
<pre><code class="language-shell"># Creating the commit
$ git commit -m &quot;My first commit&quot;
[main (root-commit) 2d83752] My first commit
 2 files changed, 2 insertions(+)
 create mode 100644 readme.md
 create mode 100644 temp/tst

# Showing the content of the commit hash
$ git cat-file -p 2d83752
tree 8894cd99d735c5f89d8c1affbb744f074f47bf79
author Test &lt;test@example.com&gt; 1644511932 +0000
committer Test &lt;test@example.com&gt; 1644511932 +0000

My first commit
</code></pre>
<p>So this is basically what a commit is. Every time a commit happens, two hashes are created - one for the tree (unless an identical tree already exists) and one for the commit itself - and those are stored in the <code>.git/objects</code> directory.</p>
<p>All of the objects mentioned above - blobs, trees and commits - are <strong>immutable</strong>. Once they are created, they cannot be changed. But <strong>how does Git store those objects?</strong></p>
<p>When git wants to save an object, it creates a header. That header starts by identifying the type of object it wants to save (blob, tree or commit). To that first part of the header, Git adds a space, followed by the size in bytes of the content, and a final null byte. Then it concatenates that header with the original content of the file and calculates the SHA-1 checksum of the result. Git then compresses the result with zlib and writes the compressed content to disk, in a subdirectory named after the first two characters of the SHA-1 value, with the last 38 characters being the filename within that directory.<a href="https://git-scm.com/book/en/v2/Git-Internals-Git-Objects">^2</a> And that's basically it. Pretty neat, isn't it?</p>
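<p>You can even recompute a blob hash by hand. Assuming your shell's <code>printf</code> supports the <code>\0</code> escape and <code>sha1sum</code> is available (GNU coreutils), hashing the header plus content - here the 8-byte string <code>Testing</code> plus a newline - gives the same value as <code>git hash-object</code>:</p>

```shell
cd "$(mktemp -d)" && git init -q .   # scratch repo, just to be safe

# The header for an 8-byte blob is "blob 8" followed by a null byte.
printf 'blob 8\0Testing\n' | sha1sum | cut -d ' ' -f 1

# git computes exactly the same value:
printf 'Testing\n' | git hash-object --stdin
```

<p>Both commands print the same 40-character hash, which is exactly the header-plus-content scheme described above (the zlib compression only affects what is written to disk, not the hash).</p>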
<p>When you come to think of it, a lot of things happen under that directory that we are not aware of. Don't be scared by all of the hashes and everything (don't fear the SHA-1<a href="https://www.youtube.com/watch?v=P6jD966jzlk">^3</a>), when you get the gist of it, it really is easy to understand. And when you have the <code>git cat-file -p &lt;hashed-value&gt;</code> by your side, anything is possible!</p>
<p>This is it for now. If you want to add your comment to this, or provide feedback, feel free to <a href="https://www.wonderingchimp.com/contact/">contact me</a>, else I hope you enjoyed it and see you next Friday with the next, hopefully interesting, article.</p>

			]]></content>
		</entry>
	
		
		<entry>
			<title>Have I been using grep wrong this whole time?</title>
			<link href="https://wonderingchimp.com/posts/have-i-been-using-grep-wrong-this-whole-time/"/>
			<updated>2022-01-30T00:00:00Z</updated>
			<id>https://wonderingchimp.com/posts/have-i-been-using-grep-wrong-this-whole-time/</id>
			<content type="html"><![CDATA[
<p>At some point in our lives we ask ourselves - are we doing the right thing? I've asked myself that question numerous times, most recently - am I using grep wrong?</p>
<p>Let's start from the beginning - what is grep?</p>
<p><code>grep</code> is a pattern-searching command line tool on Linux which goes through a file, searches for the pattern you have provided, and prints out the matching lines. I use it quite a lot, though rather simply, nothing fancy. Here is its description from the man pages<a href="https://www.man7.org/linux/man-pages/man1/grep.1.html">^1</a>:</p>
<pre><code>DESCRIPTION
       grep searches for PATTERNS in each FILE.  PATTERNS is one or more
       patterns separated by newline characters, and grep prints each
       line that matches a pattern.  Typically PATTERNS should be quoted
       when grep is used in a shell command.

       A FILE of “-” stands for standard input.  If no FILE is given,
       recursive searches examine the working directory, and
       nonrecursive searches read standard input.
</code></pre>
<p>Okay, that was straightforward. Now, the reason for me questioning my decisions - I oftentimes use <code>grep</code> in combination with <code>cat</code>. What is <code>cat</code>, then?</p>
<p><code>cat</code> is, despite being a domestic species of small carnivorous mammal<a href="https://en.wikipedia.org/wiki/Cat">^2</a>, also a popular command line tool in Linux, used primarily (at least, this is how I use it) for printing the contents of a file. Here is the short description from the man pages<a href="https://www.man7.org/linux/man-pages/man1/cat.1.html">^3</a>:</p>
<pre><code>DESCRIPTION
       Concatenate FILE(s) to standard output.

       With no FILE, or when FILE is -, read standard input.
</code></pre>
<p>How do I use them together, you may ask?</p>
<p>I use them in combination - <code>cat</code>-ing some file and piping the output to <code>grep</code> to search for a pattern. Something like the snippet below:</p>
<pre><code class="language-bash">$ cat some-file.txt | grep &lt;pattern&gt;
</code></pre>
<p>And what is wrong with the above command? Well, if we don't count spinning up an additional process and typing a bit more, no, nothing is wrong with it, at least I think there isn't.</p>
<p>This got me thinking - why don't I just use <code>grep</code> on its own? Was the pipe actually the faster way to search? As it turns out, this question of mine came up in this Reddit thread<a href="https://www.reddit.com/r/linux/comments/b1fqp/stop_piping_cat_into_grep/">^4</a>, and the person who started it was quite annoyed that people were doing it wrong - the <code>cat | grep</code> way instead of <code>grep</code> alone. The blog post was no longer available, so I had to consult the Wayback Machine to get to the source article<a href="https://web.archive.org/web/20130402064017/http://www.rootninja.com/stop-piping-cat-into-grep/">^5</a>. Okay, it seems I've been doing it wrong this whole time. Never mind that, though - maybe I can actually perform some testing and find out for sure.</p>
<p>Let's find out. Below is the output of the first test I performed on my CentOS 7 machine:</p>
<pre><code class="language-bash">[user@host]$ du -sh file.log
1.4G    file.log
[user@host]$ time cat kubelet.log | grep &quot;E0111&quot; &gt; cat_grep.log

real    0m24.291s
user    0m3.778s
sys     0m7.941s
[user@host]$ time grep &quot;E0111&quot; kubelet.log &gt; grep.log

real    0m22.256s
user    0m0.676s
sys     0m10.507s
[user@host]$ wc -l cat_grep.log
288355 cat_grep.log
[user@host]$ wc -l grep.log
288355 grep.log
</code></pre>
<p>The first command shows the size of the log file. As you can see, it's a big one, taking up 1.4G on the machine. The <code>time</code> prefix in the following commands measures how long each process takes<a href="https://man7.org/linux/man-pages/man1/time.1.html">^6</a>.</p>
<p>In the first part, I'm running <code>cat</code>, piping it to <code>grep</code> and redirecting everything into the <code>cat_grep.log</code> file. Why? Because I want to see the number of lines that <code>grep</code> found, for comparison's sake.</p>
<p>The second part is almost the same, but instead of running <code>cat</code>, I'm running <code>grep</code> directly on the file. The last commands, <code>wc -l</code>, just output the number of lines in each file<a href="https://man7.org/linux/man-pages/man1/wc.1.html">^7</a>.</p>
<p>The thing with the above test is that it might not be the most appropriate one, because it writes the matching lines into a separate file, and disk IO can differ from run to run. I've run it several times and got different numbers each time - sometimes <code>cat | grep</code> came out ahead, and other times <code>grep</code> alone showed better times.</p>
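<p>One way to reduce that noise is to repeat the measurement several times and average the wall-clock durations, while keeping the matches away from the disk. Below is a rough sketch of the idea (a dedicated tool like <code>hyperfine</code> does this far more rigorously; <code>date +%s%N</code> is the GNU coreutils way to get a nanosecond timestamp, and <code>grep -c</code> only prints the match count so the output side stays out of the picture):</p>
<pre><code class="language-shell"># Time the same grep five times and average the durations
times_file=$(mktemp)
for i in 1 2 3 4 5; do
    start=$(date +%s%N)
    grep -c 'E0111' kubelet.log
    end=$(date +%s%N)
    # record this run's duration in milliseconds (tee appends it to the file)
    echo $(( (end - start) / 1000000 )) | tee -a $times_file
done
awk '{ sum += $1 } END { print sum / NR }' $times_file    # average in milliseconds
</code></pre>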
<p>However, if we exclude the writing to disk part, and just pipe the output into a <code>wc</code> command, the numbers are a bit different:</p>
<pre><code>[user@host]$ time cat kubelet.log | grep &quot;E0111&quot; | wc -l
288355

real    0m10.072s
user    0m2.320s
sys     0m6.277s

[user@host]$ time grep &quot;E0111&quot; kubelet.log | wc -l
288355

real    0m15.221s
user    0m1.224s
sys     0m7.518s
</code></pre>
<p>Each time I ran this test, the <code>time</code> command showed a better processing time for <code>cat | grep</code>. That was really interesting to me, especially because I expected <code>grep</code> alone to be faster. Maybe the reason is that I piped everything into the <code>wc</code> command for easier output. Okay, let's run it one last time, but without the final pipe:</p>
<pre><code>[user@host]$ time cat kubelet.log | grep &quot;E0111&quot; 
...
...
real    6m37.486s
user    0m43.496s
sys     0m18.369s

[user@host]$ time grep &quot;E0111&quot; kubelet.log 
...
...
real    6m58.121s
user    0m44.362s
sys     0m23.814s
</code></pre>
<p>The last test again shows that the <code>cat | grep</code> option is faster; however, I understand that many more things are going on below the surface when each of the commands above runs. As to why <code>cat | grep</code> is faster - I can't give a proper answer right now, because I don't know. I might explore this in some other post(s).</p>
<p>For now, I'm going to keep using my <code>cat | grep</code> pattern, and maybe use <code>grep</code> alone from time to time, when I get tired of the extra typing, and I'm totally okay with that, because I feel that in this case - there is no right or wrong! :)</p>

			]]></content>
		</entry>
	
		
		<entry>
			<title>Should we decline assignments we don&#39;t like when we&#39;re just starting our journey?</title>
			<link href="https://wonderingchimp.com/posts/should-we-decline-assignments-we-dont-like-when-we-re-just-starting-our-journey/"/>
			<updated>2022-01-28T00:00:00Z</updated>
			<id>https://wonderingchimp.com/posts/should-we-decline-assignments-we-dont-like-when-we-re-just-starting-our-journey/</id>
			<content type="html"><![CDATA[
				<p>This is kind of a tricky question, and if you want to know my thoughts on this, keep on reading... If not, well, no hard feelings, maybe you're missing out, maybe you aren't.</p>
<p>Okay, let me put this in some context - several days ago, I had a conversation with a friend about how a junior colleague of theirs had declined participation in a project because they didn't like its technology stack. Both of us, me and my friend, work in the IT industry, so we were more or less on the same page. And to both of us this decision was a bit strange. We even felt a bit condescending towards their colleague, wondering whether the person was in a position to decline this simply because they didn't like it.</p>
<p>Was that even possible? Can you decline something at work and not bear (some) consequences? Those might be some of your questions... As it turned out, it was possible, and it should be possible without bearing any consequences. Whether it actually is possible in practice is another question, which I won't address in this text. It depends a lot on the company culture you work in, your confidence, the way you state the issue you have with the assignment, and so on.</p>
<p>The general question of this text is - should you or should you not decline a project if you don't like it? Below are my two cents on this, if two cents can be some random text on some random blog, that is.</p>
<p>This might be one train of thought - that task is not for me! I am aware that I recently started here, but you know what schools I've finished, and I'm just given that task? If this is the case, you are on the wrong train and should consider switching stations. That train will not get you far, unless you are really stubborn, and most people aren't, no matter how smart they are.</p>
<p>You might go in a different direction - okay, this thing is most certainly not for me, not on my radar, but what can I learn from it? I would not decline an opportunity to learn something new, no matter how much I dislike the task at hand. And that is how we get to the most important thing here - learning! It means that you are more of a growth mindset <a href="https://www.amazon.com/Mindset-Psychology-Carol-S-Dweck/dp/0345472322">^1</a> person, which I fancy myself to be, and that you will tackle almost every challenge with a learning mindset. You'll find that the things you learn from such an assignment far outweigh what you would have learned by declining it just because you didn't like it.</p>
<p>What can help us start with the correct mindset is to see something we don't like not as a blocking point that will be a major problem for us, but to concentrate on the things we can learn from it. This all sounds like some buzz phrase, but it really is of great help. Concentrating on what I could learn really helped me when I was tasked with a project I knew nothing about, in a foreign land, doing a job I didn't know. Why did I accept it, you might ask? Well, it was a do-or-do-not decision - so I decided to do it, despite all the impediments I would have to face along the way. It ended up really well for me; I learned a lot about the job, the project, even the country I was in, just because I concentrated on the learning part.</p>
<p>I'll finish off this rambling of mine with an African proverb - Smooth seas do not make skillful sailors.</p>
<p>Until next reading!</p>

			]]></content>
		</entry>
	
</feed>
