<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0"><channel><title><![CDATA[AJ's Blog]]></title><description><![CDATA[AJ's Blog]]></description><link>https://blog.arunjagadish.space</link><generator>RSS for Node</generator><lastBuildDate>Thu, 09 Apr 2026 11:38:34 GMT</lastBuildDate><atom:link href="https://blog.arunjagadish.space/rss.xml" rel="self" type="application/rss+xml"/><language><![CDATA[en]]></language><ttl>60</ttl><item><title><![CDATA[Stop Trying Harder: The Identity Shift That Makes Success Automatic]]></title><description><![CDATA[We’ve all been there.
You start a new diet, a new workout routine, or a new creative project with a burst of motivation. For a few days, or maybe even two weeks, you’re unstoppable.
And then, suddenly, you’re not.
You skip one gym session. You order ...]]></description><link>https://blog.arunjagadish.space/stop-trying-harder-the-identity-shift-that-makes-success-automatic</link><guid isPermaLink="true">https://blog.arunjagadish.space/stop-trying-harder-the-identity-shift-that-makes-success-automatic</guid><category><![CDATA[Identity]]></category><category><![CDATA[Self Improvement ]]></category><dc:creator><![CDATA[Arun SD]]></dc:creator><pubDate>Fri, 14 Nov 2025 18:30:00 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1763268430220/460cd7fd-19b6-44b0-b7c6-5ab1a339dfd1.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>We’ve all been there.</p>
<p>You start a new diet, a new workout routine, or a new creative project with a burst of motivation. For a few days, or maybe even two weeks, you’re unstoppable.</p>
<p>And then, suddenly, you’re not.</p>
<p>You skip one gym session. You order the pizza. You forget about the project “just for today.” Before long, you’re right back where you started, wondering:</p>
<p>“Why can’t I stay disciplined? What’s wrong with me?”</p>
<p>The truth? Nothing is wrong with your discipline. The problem is your identity.</p>
<p>You’re trying to force actions that contradict the person you believe you are. And in the long run, identity always beats willpower.</p>
<hr />
<h3 id="heading-part-1-the-unseen-governor-your-self-image">Part 1: The Unseen Governor: Your Self-Image</h3>
<p>In his classic book <em>Psycho-Cybernetics</em>, Dr. Maxwell Maltz described the self-image as a psychological “thermostat.”</p>
<p>It's not just a label, but an active control system. Your self-image is the temperature setting for your life.</p>
<ul>
<li><p>If your set point is “I’m a 180-pound person,” your habits will keep you there.</p>
</li>
<li><p>If your set point is “I’m disorganized,” your environment will inevitably slide back into chaos.</p>
</li>
</ul>
<p>This is why self-sabotage happens. Whenever your actions rise above your self-image, your mind quietly finds ways to pull you back to what feels familiar, not because you’re weak, but because your system is functioning exactly as it was programmed.</p>
<p>You cannot consistently outperform the person you believe you are. Not without friction, burnout, or relapse.</p>
<hr />
<h3 id="heading-part-2-why-willpower-fails-and-identity-wins">Part 2: Why Willpower Fails (and Identity Wins)</h3>
<p>This is why “grinding harder” never works for long.</p>
<p>Every day becomes a draining negotiation between:</p>
<ul>
<li><p>Your conscious goals (“I want to lose weight”)</p>
</li>
<li><p>Your subconscious identity (“I’m someone who struggles with fitness”)</p>
</li>
</ul>
<p>James Clear’s <em>Atomic Habits</em> explains the difference between two approaches:</p>
<p><strong>Outcome-Based Habits:</strong> “I want to lose 20 pounds.” Every workout requires force.</p>
<p><strong>Identity-Based Habits:</strong> “I am an active, healthy person.” Every workout becomes an act of consistency, not a battle.</p>
<p>Outcome-based thinking:</p>
<blockquote>
<p>“I need willpower to go to the gym.”</p>
</blockquote>
<p>Identity-based thinking:</p>
<blockquote>
<p>“I’m an active person. Active people move their bodies.”</p>
</blockquote>
<p>When identity shifts, behavior becomes dramatically easier. Not effortless, but <em>natural</em>.</p>
<p>You’re not pushing. You’re aligning.</p>
<hr />
<h3 id="heading-part-3-the-blueprint-to-reprogram-your-identity">Part 3: The Blueprint to Reprogram Your Identity</h3>
<p>So how do you change the thermostat?</p>
<p>Not by wishing. Not by pretending. Not by grinding harder.</p>
<p>Identity shifts through a blend of mental rehearsal and physical proof.</p>
<h4 id="heading-step-1-decide-who-you-want-to-be">Step 1: Decide Who You Want to Be</h4>
<p>Not a goal. A role. A simple, clear, present-tense identity.</p>
<ul>
<li><p><strong>Instead of:</strong> “I want to write a book.”</p>
</li>
<li><p><strong>Choose:</strong> “I am a writer.”</p>
</li>
<li><p><strong>Instead of:</strong> “I need to quit junk food.”</p>
</li>
<li><p><strong>Choose:</strong> “I am a healthy person.”</p>
</li>
<li><p><strong>Instead of:</strong> “I want to be more disciplined.”</p>
</li>
<li><p><strong>Choose:</strong> “I am a consistent person.”</p>
</li>
</ul>
<p>Your identity should feel like a direction, not a fantasy.</p>
<h4 id="heading-step-2-rehearse-it-the-maltz-method">Step 2: Rehearse It (The Maltz Method)</h4>
<p>Your mind responds to imagined experiences similarly to real ones. Athletes, performers, and therapists use this every day.</p>
<p>Spend 3 to 5 minutes visualizing yourself acting as your new identity. Don't just see the outcome; see and <em>feel</em> the behavior.</p>
<ul>
<li><p>If you’re <strong>“a writer,”</strong> picture yourself opening the laptop. <em>Feel the slight resistance</em>, then <em>feel the quiet satisfaction</em> of typing one clean sentence.</p>
</li>
<li><p>If you’re <strong>“a healthy person,”</strong> imagine walking past junk food. <em>Feel the brief pull</em>, then <em>feel the sense of pride</em> as you choose something nourishing.</p>
</li>
<li><p>If you’re <strong>“an organized person,”</strong> imagine calmly putting one item back in its place. <em>Feel the small moment of peace</em> it creates.</p>
</li>
</ul>
<p>Visualization prepares your nervous system, making the identity feel familiar before it becomes true.</p>
<h4 id="heading-step-3-prove-it-with-small-wins-james-clear">Step 3: Prove It with Small Wins (James Clear)</h4>
<p>Identity is built through evidence.</p>
<p>Every action you take is a <strong>vote</strong> for the kind of person you’re becoming. You don’t need a landslide victory. Just a steady trickle of votes.</p>
<ul>
<li><p>Write one sentence → <strong>vote for writer</strong></p>
</li>
<li><p>Walk for 5 minutes → <strong>vote for active person</strong></p>
</li>
<li><p>Put one object away → <strong>vote for organized person</strong></p>
</li>
</ul>
<p>Tiny wins. Huge identity impact.</p>
<h4 id="heading-the-feedback-loop-that-changes-you">The Feedback Loop That Changes You</h4>
<p>Here’s where momentum kicks in:</p>
<ol>
<li><p>Mental rehearsal makes taking action easier.</p>
</li>
<li><p>Small action gives your brain proof.</p>
</li>
<li><p>Proof reinforces the identity.</p>
</li>
<li><p>Identity makes the next action even easier.</p>
</li>
</ol>
<p>This is the upward spiral.</p>
<p>At first, you feel like you're faking it. Then you feel like you're practicing it. Then one day you wake up and realize: you became it.</p>
<hr />
<h3 id="heading-part-4-your-new-identity-is-one-vote-away">Part 4: Your New Identity Is One Vote Away</h3>
<p>Identity is not something you “find.”</p>
<p>It’s something you <strong>build</strong>.</p>
<p>Your self-image shapes your actions, and your actions reshape your self-image. You can enter that loop at any moment, including this one.</p>
<p>You don’t need a perfect morning routine. You don’t need motivation. You don’t need a Monday.</p>
<p>You just need one small vote.</p>
<p>So ask yourself:</p>
<p><strong>What’s one tiny action you can take today that your future identity will recognize as its own?</strong></p>
<p>Because the moment you cast that vote, the shift begins.</p>
]]></content:encoded></item><item><title><![CDATA[The Beautiful Mess of Being Human]]></title><description><![CDATA[Ever scroll through social media and feel a wave of inadequacy? Everyone’s life looks so polished, so perfect. Flawless homes, brilliant careers, picture-perfect holidays. It’s easy to look at our own messy, complicated lives and feel like we’re fall...]]></description><link>https://blog.arunjagadish.space/the-beautiful-mess-of-being-human</link><guid isPermaLink="true">https://blog.arunjagadish.space/the-beautiful-mess-of-being-human</guid><category><![CDATA[perfectionism]]></category><category><![CDATA[Self Improvement ]]></category><category><![CDATA[Life lessons]]></category><dc:creator><![CDATA[Arun SD]]></dc:creator><pubDate>Sat, 18 Oct 2025 18:30:00 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1761037684075/41f76b50-1370-482d-8ed1-f90b9e31ce33.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Ever scroll through social media and feel a wave of inadequacy? Everyone’s life looks so polished, so perfect. Flawless homes, brilliant careers, picture-perfect holidays. It’s easy to look at our own messy, complicated lives and feel like we’re falling behind.</p>
<p>We’re all chasing this ghost called “perfection,” and honestly, it’s exhausting. It’s a game we can’t win.</p>
<p>But what if I told you that the goal isn’t to be perfect? What if the most beautiful, valuable, and <em>real</em> parts of you are tangled up in your imperfections? What if our flaws are not failures, but fingerprints of a life fully lived?</p>
<h2 id="heading-1-the-flaw-is-what-makes-it-real"><strong>1. The Flaw is What Makes It Real</strong></h2>
<p>Think about a diamond. A truly flawless diamond is incredibly rare, almost sterile. Often, the most perfect-looking ones are made in a lab. It’s the tiny, natural inclusions, the so-called “flaws”, that prove a diamond is real. They tell the story of its creation, forged under incredible pressure deep inside the earth.</p>
<p>There’s a beautiful Japanese art form called <em>Kintsugi</em>, which means “golden joinery.” When a piece of pottery breaks, it isn’t thrown away. The pieces are carefully reassembled, and the cracks are filled with a lacquer mixed with powdered gold. The philosophy is that the piece is more beautiful <em>because</em> it has been broken. The breaks are part of its history, and they aren’t meant to be hidden. They are meant to be illuminated.</p>
<p><img src="https://miro.medium.com/v2/resize:fit:1400/1*5xa61WSOhdbSphXVW_Dh1g.png" alt /></p>
<p>Your life is the same. Your story isn’t written in the moments you were perfect. It’s written in the cracks, the heartbreaks, the mistakes, the moments you fell and got back up. Those aren’t your flaws; they’re your golden repairs. They are what make you, you.</p>
<h2 id="heading-2-imperfection-isnt-a-final-grade-its-a-guide"><strong>2. Imperfection Isn’t a Final Grade; It’s a Guide</strong></h2>
<p>Imagine telling yourself you’re a failure every time you make a mistake while learning something new. You’d never try anything, would you?</p>
<p>Somehow, we’ve come to see imperfection as a final judgment on our worth. It’s not. <strong>Imperfection is simply a compass.</strong> It’s a piece of data that gently points you toward your next step.</p>
<ul>
<li><p>That bug in your code? It isn’t a verdict on your skills. It’s a puzzle that will make you a stronger problem-solver.</p>
</li>
<li><p>That clumsy conversation you had? It’s not a social failure. It’s a lesson in communication you can carry with you.</p>
</li>
<li><p>That project that didn’t turn out as you hoped? It’s not a waste. It’s the necessary first version that will lead to the brilliant second one.</p>
</li>
</ul>
<p>When you start seeing your imperfections as guides instead of critics, everything changes. It’s a quiet whisper saying, “There’s more to discover over here.”</p>
<h2 id="heading-3-perfection-is-lonely-imperfection-builds-connection"><strong>3. Perfection is Lonely. Imperfection Builds Connection.</strong></h2>
<p>When have you felt most connected to someone?</p>
<p>I bet it wasn’t when they were listing their achievements or showing you how they have it all together. It was probably in a quiet moment when they were brave enough to be vulnerable. When they admitted, “I’m struggling,” or “I was wrong,” or “I have no idea what I’m doing.”</p>
<p>Trying to be perfect builds walls. It keeps people at a distance because it isn’t real, and deep down, we all know it. Imperfection is an open door. It says, “I’m human, just like you.” It gives the people around you permission to take off their own masks and be human, too. This is where real trust and friendship are born, not in the polished highlight reels, but in the messy, beautiful truth.</p>
<h2 id="heading-4-perfection-paralyzes-imperfection-sets-you-free"><strong>4. Perfection Paralyzes. Imperfection Sets You Free.</strong></h2>
<p>Ah, perfectionism. My old friend, and probably yours too. It’s that voice that whispers, “Don’t start yet. You’re not ready. It’s not good enough.” It’s the saboteur that keeps your brilliant ideas locked in a notebook and your dreams on a vision board.</p>
<p>It’s the reason for that blank page, that unlaunched project, that unsent email.</p>
<p>Embracing imperfection is what gives you permission to <em>begin</em>. It’s the simple, powerful idea that “done is better than perfect.” You can’t steer a parked car. You can only improve something that already exists, flaws and all. Give yourself permission to be a beginner. Give yourself permission to be messy. Just start.</p>
<h2 id="heading-you-are-a-work-in-progress"><strong>You Are a Work in Progress</strong></h2>
<p>So, let’s make a deal. Let’s stop chasing a filtered, flawless version of life that doesn’t exist. Let’s start embracing the beautiful, messy, wonderfully imperfect reality of being human.</p>
<p>Look for the Kintsugi in your own story. Follow the compass of your mistakes. Let your vulnerability be the bridge that connects you to others.</p>
<p>Your imperfections are not signs of weakness. They are the beautiful, messy, undeniable proof that you are alive.</p>
<p><a target="_blank" href="https://medium.com/tag/perfectionism?source=post_page-----5904fb374700---------------------------------------">  
</a></p>
]]></content:encoded></item><item><title><![CDATA[The Quiet Rebellion: Finding Joy in a 'Boring' Life]]></title><description><![CDATA[In a world obsessed with viral moments and endless scrolling, admitting you want a "boring" life feels like a quiet act of rebellion. But what if trading chaos for calm isn’t a downgrade? What if it’s the smartest upgrade you could ever make? I’m con...]]></description><link>https://blog.arunjagadish.space/the-quiet-rebellion-finding-joy-in-a-boring-life</link><guid isPermaLink="true">https://blog.arunjagadish.space/the-quiet-rebellion-finding-joy-in-a-boring-life</guid><category><![CDATA[boredom]]></category><category><![CDATA[Life lessons]]></category><dc:creator><![CDATA[Arun SD]]></dc:creator><pubDate>Thu, 16 Oct 2025 17:19:59 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1760635382496/900d40e7-04cb-4ef6-b874-e0e69830e04e.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>In a world obsessed with viral moments and endless scrolling, admitting you want a "boring" life feels like a quiet act of rebellion. But what if trading chaos for calm isn’t a downgrade? What if it’s the smartest upgrade you could ever make? I’m convinced that the steady, uneventful life isn't something to escape, it’s something to build. It’s a sign that things are, for the most part, working.</p>
<h4 id="heading-the-digital-fog-and-the-myth-of-keeping-up"><strong>The Digital Fog and the Myth of 'Keeping Up'</strong></h4>
<p>That feeling of being mentally fried after 20 minutes on social media? It’s not just you. Platforms like TikTok and Instagram run on a diet of digital junk food: short, flashy content that delivers a quick dopamine hit and leaves you hungry for more. This isn't just a harmless distraction; it's rewiring your brain.</p>
<p>The endless scroll creates a kind of mental fog. Your attention span shrinks, and your thinking becomes fragmented. The algorithm trains you to crave constant novelty, making it almost painful to sit through a movie, read a chapter of a book, or just be alone with your thoughts. I’ve watched brilliant, creative friends get lost in this haze, chasing trends until they burn out, feeling disconnected from their own lives. A life built on reacting to the next notification isn't really your life at all.</p>
<p>The way out is simpler than it sounds: unplug. Try giving yourself a hard stop after 30 minutes. Curate your feed ruthlessly. Better yet, take walks without your phone. Reclaim your mental space for things that actually nourish you, not just distract you. A good life often starts when you tune out the noise.</p>
<h4 id="heading-rediscovering-the-power-of-boredom"><strong>Rediscovering the Power of Boredom</strong></h4>
<p>We've been sold the idea that happiness is a perpetual state of stimulation. But that’s exhausting. True well-being has an ebb and a flow, and the calm in between is where boredom works its magic.</p>
<p>Boredom isn't a void to be filled; it's fertile ground. It’s the mental space where your mind can finally wander, untethered from a screen. It’s in those quiet moments, staring out a window, waiting for the kettle to boil, that genuine ideas spark and self-reflection happens. Inventors and artists have long credited these "empty" moments for their biggest breakthroughs.</p>
<p>So the next time you feel the itch for distraction during a quiet moment, try to just sit with it. Let your mind drift. Journal. Tinker with something. Boredom isn't the enemy; it's the reset button your overstimulated brain has been begging for. It’s where contentment begins.</p>
<h4 id="heading-the-quiet-hum-of-a-life-thats-working"><strong>The Quiet Hum of a Life That's Working</strong></h4>
<p>Think about it: what we call "excitement" is often just the relief that follows chaos. It's the thrill of escaping a rut, dodging a disaster, or landing a win after a long struggle. But if your days feel predictable and calm, it’s not a sign you’re stuck. It’s a sign you’re not in survival mode anymore.</p>
<p>A life defined by thrilling highs and dramatic lows is a life spent on a rollercoaster. A "boring" life is a steady, scenic hike. It offers peace, reliable routines, and the space to appreciate small, profound joys: a home-cooked meal, a conversation that flows effortlessly, the comfort of a familiar sunset.</p>
<p>This is a life where problems stay manageable and fulfillment comes from consistency, not spectacle. If your happiness depends on the next big thrill, you’re always one step away from a fall. Aim for the plateau instead. It’s sustainable, it’s peaceful, and it’s profoundly good.</p>
<p>Ultimately, embracing a "boring" life isn't about settling. It’s about choosing depth over distraction. It’s about stripping away the digital haze, welcoming boredom as a guide, and learning to cherish the steady, quiet hum of a life that is truly your own.</p>
]]></content:encoded></item><item><title><![CDATA[A Developer’s Journey to the Cloud 9: Happiness (or So I Thought, Observing the Chaos)]]></title><description><![CDATA[The system was alive. A beautiful, complex organism of services, queues, and databases, all working in concert. It was fast, resilient, and truly scalable. I had built a system I could be proud of.
I was floating on Cloud 9, sipping coffee and starin...]]></description><link>https://blog.arunjagadish.space/a-developers-journey-to-the-cloud-9-happiness-or-so-i-thought-observing-the-chaos</link><guid isPermaLink="true">https://blog.arunjagadish.space/a-developers-journey-to-the-cloud-9-happiness-or-so-i-thought-observing-the-chaos</guid><category><![CDATA[observability]]></category><category><![CDATA[Grafana]]></category><category><![CDATA[OpenTelemetry]]></category><category><![CDATA[AWS]]></category><category><![CDATA[Devops]]></category><dc:creator><![CDATA[Arun SD]]></dc:creator><pubDate>Thu, 21 Aug 2025 18:42:00 GMT</pubDate><content:encoded><![CDATA[<p>The system was alive. A beautiful, complex organism of services, queues, and databases, all working in concert. It was fast, resilient, and truly scalable. I had built a system I could be proud of.</p>
<p>I was floating on <strong>Cloud 9</strong>, sipping coffee and staring at green dashboards, imagining that this was the pinnacle of developer happiness. Everything was perfect. Or so I thought.</p>
<p>But as I looked at the architecture diagram on my screen, a new kind of uncertainty settled in. The system was now so complex, so distributed, that I couldn't hold it all in my head anymore. My Cloud 9 view suddenly felt like being 30,000 feet up in the air with no instruments: beautiful, yet terrifying. A single misstep and I could plummet into chaos.</p>
<p>This reality hit me when a user emailed, "I signed up, but I never got my welcome email."</p>
<p>My blood ran cold. In the old monolithic days, I would just check one log file. But now? The API call was successful, so its logs would just show a cheerful <code>200 OK</code>. The "work" was happening somewhere else, at some other time, in some other container. My internal monologue became a frantic series of questions:</p>
<ul>
<li><p>Did the API successfully publish the event to RabbitMQ in the first place?</p>
</li>
<li><p>Is the message just sitting in the queue, unprocessed, because my workers are all busy or broken?</p>
</li>
<li><p>Or did a worker pick it up and fail silently?</p>
</li>
<li><p>Which worker? I have five identical pods running. How do I find the logs for that specific one?</p>
</li>
<li><p>How do I correlate the initial API request from the user with the background job that ran three seconds later on a completely different container?</p>
</li>
</ul>
<p>I was flying blind on my Cloud 9, surrounded by fluffy green dashboards that masked a storm below.</p>
<hr />
<h2 id="heading-pillar-1-centralized-logging-the-storybook"><strong>Pillar 1: Centralized Logging (The Storybook)</strong></h2>
<p>Logs were scattered across dozens of ephemeral containers. Finding the right one was impossible. I needed all the stories from all my services in one single, searchable library.</p>
<p>Enter <strong>centralized logging</strong>. I set up a stack using <strong>Loki</strong> and <strong>Promtail</strong>. Promtail runs alongside containers, collecting logs and shipping them to Loki, a central log database.</p>
<p>Now, instead of SSHing into pods, I could query a single dashboard:</p>
<pre><code class="lang-plaintext">{app="worker"} |~ "error" and "userId: 123"
</code></pre>
<p>Suddenly, my Cloud 9 had windows. I could finally read the story of what my system was doing.</p>
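<p>Making those queries useful meant changing what my services printed in the first place: structured, JSON-per-line logs instead of free-form text. A minimal sketch with the pino logger for Node (the field names are illustrative; any JSON logger that Promtail can ship works the same way):</p>
<pre><code class="lang-js">// One JSON object per line, so Loki can filter lines with |= "userId"
// or pull fields out with the | json pipeline stage.
const pino = require('pino');
const logger = pino();

logger.info({ userId: 123, event: 'USER_CREATED' }, 'job received');
logger.error({ userId: 123, event: 'USER_CREATED' }, 'welcome email failed');
</code></pre>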
<hr />
<h2 id="heading-pillar-2-metrics-the-health-dashboard"><strong>Pillar 2: Metrics (The Health Dashboard)</strong></h2>
<p>Logs tell you what happened, but not the current health. I needed <strong>metrics</strong>, a dashboard of dials and gauges for my system’s vital signs.</p>
<p>I used <strong>Prometheus</strong> to scrape numerical data:</p>
<pre><code class="lang-text">http_requests_total{method="POST", path="/api/register"} 210
rabbitmq_messages_in_queue{queue="user_processing"} 15
</code></pre>
<p>Then I connected <strong>Grafana</strong> to build dashboards, showing queue depth, API error rates, and database usage.</p>
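<p>On the application side, exposing those numbers took only a few lines. A minimal sketch with the prom-client library (the metric mirrors the counter above; the Express wiring is illustrative):</p>
<pre><code class="lang-js">// metrics.js - expose a /metrics endpoint for Prometheus to scrape
const express = require('express');
const client = require('prom-client');

const httpRequestsTotal = new client.Counter({
  name: 'http_requests_total',
  help: 'Total number of HTTP requests',
  labelNames: ['method', 'path'],
});

const app = express();

app.post('/api/register', (req, res) =&gt; {
  httpRequestsTotal.inc({ method: 'POST', path: '/api/register' });
  res.sendStatus(202);
});

// Prometheus scrapes this endpoint on its own schedule
app.get('/metrics', async (req, res) =&gt; {
  res.set('Content-Type', client.register.contentType);
  res.end(await client.register.metrics());
});

app.listen(3001);
</code></pre>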
<p>Now, I could see trouble brewing before it became an emergency. I was no longer just a historian; I was a doctor monitoring a live patient on my floating Cloud 9.</p>
<hr />
<h2 id="heading-pillar-3-distributed-tracing-the-gps"><strong>Pillar 3: Distributed Tracing (The GPS)</strong></h2>
<p>Metrics and logs helped, but I still couldn’t trace a single request end-to-end. This is where <strong>distributed tracing</strong> comes in. Using <strong>OpenTelemetry</strong>, every API call generated a unique <strong>trace ID</strong>, which was passed along through RabbitMQ to workers.</p>
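<p>The one non-obvious part was carrying the trace across the queue boundary. A sketch of the publisher side using the OpenTelemetry API for Node (the queue name and payload are illustrative); the worker does the mirror-image <code>propagation.extract</code> on the headers it receives:</p>
<pre><code class="lang-js">// Publisher: attach the active trace context to the outgoing message
const { context, propagation } = require('@opentelemetry/api');

function publishUserCreated(channel, user) {
  const headers = {};
  // Writes the traceparent (trace ID + span ID) into the headers object
  propagation.inject(context.active(), headers);

  channel.sendToQueue(
    'user_processing',
    Buffer.from(JSON.stringify({ type: 'USER_CREATED', userId: user.id })),
    { persistent: true, headers }
  );
}
</code></pre>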
<p>With <strong>Jaeger</strong>, I could finally see the journey:</p>
<pre><code class="lang-text">POST /api/register (Trace ID: abc-123) - 50ms
Publish to RabbitMQ (Trace ID: abc-123) - 5ms
Worker Process Job (Trace ID: abc-123) - 6500ms
Resize Image - 4000ms
Call MailChimp API - 1500ms
Call SendGrid API - 1000ms -&gt; ERROR
</code></pre>
<p>The black box now had a window. My Cloud 9 wasn’t just floating; it had railings, lights, and a clear view of the storm below.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1755801211522/1b0db9ea-1778-4125-afc7-2ed4d0f6ddd1.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1755801293850/306d502d-5a2d-4f2c-8b11-0ffd66e695fb.png" alt class="image--center mx-auto" /></p>
<hr />
<h2 id="heading-cloud-9-really"><strong>Cloud 9, Really?</strong></h2>
<p>I did it. I actually did it. I stared into the abyss of distributed systems, and the abyss blinked first. My application is a fortress of resilience, a symphony of automation. It practically runs itself. I’ve conquered Cloud 9.</p>
<p>Until I realize… Cloud 9 is a thin, fluffy layer. Beautiful, yes, but one small mistake and gravity is real. A single missing trace, a quiet failed worker, or a delayed message could ruin the view.</p>
<p>I've earned a break. Time to lean back, put my feet up, and admire the dashboards. Until the next dread, I’m still on Cloud 9, at least for now.</p>
<p><em>(Seriously though, what’s next on the anxiety list? Breaking this perfect monolith into a thousand tiny microservices for fun? Or something truly cursed, like multi-region deployments? Cast your vote for my next adventure in suffering.)</em></p>
]]></content:encoded></item><item><title><![CDATA[A Developer’s Journey to the Cloud 8: Event-Driven Architecture with Rabbitmq]]></title><description><![CDATA[I Went All-In on Serverless, and It Was a Mistake
The system was a thing of beauty. It was a distributed, resilient, auto-scaling marvel. My infrastructure was code, my application was orchestrated, and my database was a distributed tier that could h...]]></description><link>https://blog.arunjagadish.space/a-developers-journey-to-the-cloud-8-event-driven-architecture-with-rabbitmq</link><guid isPermaLink="true">https://blog.arunjagadish.space/a-developers-journey-to-the-cloud-8-event-driven-architecture-with-rabbitmq</guid><category><![CDATA[rabbitmq]]></category><category><![CDATA[event-driven-architecture]]></category><category><![CDATA[Kubernetes]]></category><category><![CDATA[Devops]]></category><category><![CDATA[AWS]]></category><category><![CDATA[aws lambda]]></category><category><![CDATA[caching]]></category><dc:creator><![CDATA[Arun SD]]></dc:creator><pubDate>Wed, 20 Aug 2025 18:30:00 GMT</pubDate><content:encoded><![CDATA[<h2 id="heading-i-went-all-in-on-serverless-and-it-was-a-mistake">I Went All-In on Serverless, and It Was a Mistake</h2>
<p>The system was a thing of beauty. It was a distributed, resilient, auto-scaling marvel. My infrastructure was code, my application was orchestrated, and my database was a distributed tier that could handle immense traffic. I had climbed every mountain and slain every dragon. The application was, by all technical measures, complete.</p>
<p>But my users didn't care about my beautiful architecture. They cared about how the app felt. And lately, it felt slow in weirdly specific places.</p>
<p>The feedback was consistent:</p>
<ul>
<li><p>"Signing up took forever."</p>
</li>
<li><p>"My profile picture took a long time to appear."</p>
</li>
<li><p>"I clicked 'submit order' and the little spinner just spun and spun."</p>
</li>
</ul>
<p>Nothing was crashing. The database wasn't overloaded. The servers weren't stressed. I was staring at a wall of green dashboards, yet the user experience was suffering. My perfectly scaled architecture was still delivering a sluggish experience. It didn't make sense.</p>
<h2 id="heading-the-synchronous-trap">The Synchronous Trap</h2>
<p>I decided to trace a single "slow" request. A new user signs up. They fill out the form, upload a profile picture, and click "Register." I watched the logs for that single API call:</p>
<pre><code class="lang-plaintext">BEGIN TRANSACTION

INSERT INTO users...

COMMIT

-- User created. Now, process the profile picture...

-- Loading image into memory...

-- Resizing image to 3 different sizes...

-- Uploading 3 thumbnails to cloud storage...

-- Image processing complete. Now, add to mailing list...

-- Calling external MailChimp API...

-- MailChimp API success. Now, send welcome email...

-- Calling external SendGrid API...

-- SendGrid API success. Now, send success response to user.
</code></pre>
<p>The whole process took seven seconds. For seven seconds, the user was staring at a loading spinner, waiting for my server to finish its long to-do list. The problem wasn't a bottleneck; it was the process itself. My API was trying to do everything, all at once, in one long, synchronous chain.</p>
<h2 id="heading-the-fork-in-the-road-two-paths-to-asynchronous">The Fork in the Road: Two Paths to Asynchronous</h2>
<p>I needed to decouple the initial request from the slow background work. The answer was to shift to an <strong>event-driven architecture</strong>. My API would just announce that a <code>USER_CREATED</code> event had happened by publishing a message to a queue. Then, something else would listen for that message and do the actual work.</p>
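<p>Whichever path I chose, the shape of the endpoint would be the same. Here's a sketch of the idea, assuming the Express <code>app</code> and the <code>db</code> helper used elsewhere in this series; <code>publishEvent</code> is a hypothetical thin wrapper around whichever queue won the comparison below:</p>
<pre><code class="lang-js">// POST /api/register - now returns in milliseconds
app.post('/api/register', async (req, res) =&gt; {
  // The only synchronous work left: create the user record
  const { rows } = await db.query(
    'INSERT INTO users (email) VALUES ($1) RETURNING id',
    [req.body.email]
  );

  // Announce the event; image resizing, the mailing list, and the
  // welcome email all happen elsewhere, later
  await publishEvent('USER_CREATED', { userId: rows[0].id });

  res.status(202).json({ status: 'registered' });
});
</code></pre>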
<p>My research led me to two distinct architectural paths:</p>
<ol>
<li><p><strong>The Kubernetes-Native Path:</strong> Use RabbitMQ as the message queue and a Worker Process as the consumer. Both would run as containers inside my existing Kubernetes cluster. It felt like a natural extension of my current setup.</p>
</li>
<li><p><strong>The Fully Managed, Serverless Path:</strong> Go all-in on my cloud provider's ecosystem. Use Amazon SQS (Simple Queue Service) as the message queue and AWS Lambda as the consumer.</p>
</li>
</ol>
<p>My gut told me the Kubernetes path made sense. My app had a constant, steady stream of these jobs, and I was already paying for the Kubernetes cluster, so adding one more small process would be cheap and predictable. But my experience screamed otherwise. Every time I had chosen a managed service (RDS for my database, ElastiCache for my cache), it had saved me from anxiety. The lesson seemed obvious: <strong>managed is often better.</strong></p>
<p>Ignoring my instinct, I chose the fully managed, serverless path. I configured my API to send a message to an SQS queue, which in turn would trigger my Lambda function. It felt clean. It felt modern. It felt... expensive.</p>
<h2 id="heading-the-wrong-tool">The Wrong Tool</h2>
<p>At the end of the month, I got my cloud bill, and my jaw dropped. The cost of running millions of SQS API calls and Lambda invocations was shockingly high. Worse, users were still complaining about occasional slowness. I realized that sometimes, when a user was the first to trigger the function in a while, they were experiencing a <strong>"cold start"</strong>: a multi-second delay while the managed service "woke up" the function. I had even hit the 15-minute execution limit on a large video file, causing a job to fail silently.</p>
<p>My solution had created a new set of problems that were even harder to debug. The managed service wasn't a silver bullet. It was a specific tool for a specific job, and I had chosen it for the wrong one. It was perfect for infrequent, unpredictable tasks, but my workload was constant and predictable.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1755800922962/8af68149-ee20-43d5-b346-3a9645682ade.png" alt class="image--center mx-auto" /></p>
<h2 id="heading-the-right-tool">The Right Tool</h2>
<p>Humbled, I deleted the SQS queue and the Lambda function. I went back to my original, instinctual plan.</p>
<p>I deployed RabbitMQ and a simple <code>worker.js</code> process as new containers in my Kubernetes Deployment manifest. The worker's only job was to connect to RabbitMQ and process jobs from the queue.</p>
<pre><code class="lang-yaml"><span class="hljs-comment"># deployment.yaml</span>
<span class="hljs-comment"># ... (existing app container config) ...</span>
      <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">worker-container</span> <span class="hljs-comment"># &lt;-- The new container</span>
        <span class="hljs-attr">image:</span> <span class="hljs-string">myusername/my-awesome-app-worker:v1.1</span>
        <span class="hljs-comment"># No ports, it just does background work</span>
</code></pre>
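<p>The worker itself stayed pleasantly boring. A minimal sketch of that consumer loop with the amqplib client (the queue name and the <code>handleUserCreated</code> job handler are illustrative):</p>
<pre><code class="lang-js">// worker.js - pull jobs off the queue and process them one at a time
const amqp = require('amqplib');

async function main() {
  const conn = await amqp.connect(process.env.RABBITMQ_URL);
  const channel = await conn.createChannel();
  await channel.assertQueue('user_events', { durable: true });
  channel.prefetch(1); // don't take a new job until the current one finishes

  channel.consume('user_events', async (msg) =&gt; {
    const event = JSON.parse(msg.content.toString());
    try {
      await handleUserCreated(event); // resize images, mailing list, email
      channel.ack(msg);               // ack only after the work succeeds
    } catch (err) {
      channel.nack(msg, false, true); // requeue for another attempt
    }
  });
}

main();
</code></pre>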
<p>I deployed the change. The user registration endpoint was still lightning fast. The background jobs were processed reliably. And my cloud bill went back down to a sane number.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1755800978824/f770fe03-ba0b-4aaf-9ff6-e9fc9b35d78a.png" alt class="image--center mx-auto" /></p>
<h2 id="heading-key-lesson-learned">Key Lesson Learned</h2>
<p>The real lesson finally sank in, more profound than just "use the right tool." My previous successes had taught me a simple, dangerous rule: <strong>"outsource your anxiety."</strong> But that rule lacked nuance. The real principle isn't just about outsourcing; it's about understanding <strong>what you're trading for what you're getting</strong>.</p>
<ul>
<li><p>With RDS and ElastiCache, I was trading a small amount of money for a massive reduction in operational complexity and a huge gain in reliability. A fantastic trade.</p>
</li>
<li><p>With the SQS+Lambda stack, I was trading a lot of money and predictable performance for a small reduction in management overhead. I already had a powerful Kubernetes cluster; adding one more container was practically free and gave me total control.</p>
</li>
</ul>
<p>The journey isn't about blindly following a dogma like "managed is always better." It's about looking at your workload, understanding trade-offs, and leveraging investments you've already made. My system was now a collection of small, independent services communicating through a central message bus. It was fast, resilient, and truly scalable.</p>
<p>But as I looked at the architecture diagram on my screen (a web of services, databases, caches, and queues), a new kind of uncertainty settled in. The system was now so complex, so distributed, that I couldn't hold it all in my head anymore. If a user's welcome email never arrived, how would I even begin to debug it? The event was fired, but which worker picked it up? Did it fail? Where are the logs? It was a beautiful, powerful machine, but a complete black box.</p>
<p>up next: <a target="_blank" href="https://blog.arunjagadish.space/a-developers-journey-to-the-cloud-9-happiness-or-so-i-thought-observing-the-chaos">A Developer’s Journey to the Cloud 9: Happiness (or So I Thought, Observing the Chaos)</a></p>
]]></content:encoded></item><item><title><![CDATA[A Developer’s Journey to the Cloud 7: Advanced Database Scaling]]></title><description><![CDATA[After conquering servers with Kubernetes and automating infrastructure with Terraform, I thought I had reached peak scalability. I was wrong.
I had finally reached the summit. Or so I thought. My infrastructure was now code, managed by Terraform. My ...]]></description><link>https://blog.arunjagadish.space/a-developers-journey-to-the-cloud-7-advanced-database-scaling</link><guid isPermaLink="true">https://blog.arunjagadish.space/a-developers-journey-to-the-cloud-7-advanced-database-scaling</guid><category><![CDATA[Databases]]></category><category><![CDATA[PostgreSQL]]></category><category><![CDATA[Devops]]></category><category><![CDATA[AWS]]></category><dc:creator><![CDATA[Arun SD]]></dc:creator><pubDate>Tue, 19 Aug 2025 18:30:00 GMT</pubDate><content:encoded><![CDATA[<p><em>After conquering servers with Kubernetes and automating infrastructure with Terraform, I thought I had reached peak scalability. I was wrong.</em></p>
<p>I had finally reached the summit. Or so I thought. My infrastructure was now code, managed by Terraform. My application was an orchestra, conducted by Kubernetes. I could scale my web servers from three to thirty with a single line change in a YAML file. I could rebuild my entire cloud environment from scratch before my coffee got cold. I had achieved true automation. The system felt less like a fragile application and more like a living, resilient organism.</p>
<p>As more users flocked to the app, I would watch with pride as my Kubernetes cluster effortlessly added more containers to meet the demand. The load balancer distributed the traffic perfectly. The cache absorbed the read-heavy requests. It was a beautiful, self-healing, scalable machine.</p>
<p>But then, the alerts started again. Not the loud, obvious "server down" alerts of my early days, but something more subtle:</p>
<ul>
<li><p><strong>P99 Latency Increased</strong></p>
</li>
<li><p><strong>Database Connection Saturation</strong></p>
</li>
</ul>
<p>The application wasn't crashing, but it was getting… sluggish. Pages would occasionally take seconds to load instead of milliseconds. Users were complaining about timeouts during peak hours.</p>
<p>My powerful, infinitely scalable army of application servers was running perfectly. The problem wasn't with the army; it was with what the army was trying to do.</p>
<h2 id="heading-the-bottleneck-at-the-end-of-the-road">The Bottleneck at the End of the Road</h2>
<p>I dove into the monitoring tools provided by my cloud provider, specifically the Amazon RDS dashboard. The picture was immediately clear. My Kubernetes pods were barely breaking a sweat. My cache was healthy. The problem was the one component I thought I had solved: my managed database. Its CPU utilization graph was pinned at 100%. The "Database Connections" metric was hitting the maximum limit.</p>
<p>I had built a ten-lane superhighway of application servers, but it all led to a single-lane toll booth. Every new user, every new post, every write operation in my system had to go through this one, increasingly overwhelmed primary database. My managed service was doing its best; it was strong and reliable, but it was still just one machine.</p>
<p>I had scaled everything else, but I had forgotten to scale my source of truth.</p>
<h2 id="heading-splitting-the-workload-reads-vs-writes">Splitting the Workload: Reads vs. Writes</h2>
<p>My first thought was to just throw more money at the problem and upgrade to a bigger database instance. But that was vertical scaling, a temporary fix. The real solution had to be horizontal: scale across more than one machine.</p>
<p>Most applications, including mine, perform far more read operations than write operations. This led me to <strong>read replicas</strong>.</p>
<p>A read replica is an exact, real-time, read-only copy of your primary database. My cloud provider could create one with a few clicks. The plan: direct all <code>SELECT</code> queries to the replica, leaving the primary free for <code>INSERT</code>, <code>UPDATE</code>, and <code>DELETE</code> operations.</p>
<p>This required a change in my application's database connection logic. I had to make the code smart enough to know which database to talk to.</p>
<pre><code class="lang-js"><span class="hljs-comment">// dbRouter.js</span>
<span class="hljs-keyword">const</span> { Pool } = <span class="hljs-built_in">require</span>(<span class="hljs-string">'pg'</span>);

<span class="hljs-comment">// Connection pool for the primary (write) database</span>
<span class="hljs-keyword">const</span> writePool = <span class="hljs-keyword">new</span> Pool({
  <span class="hljs-attr">connectionString</span>: process.env.PRIMARY_DB_URL,
});

<span class="hljs-comment">// Connection pool for the read replica database</span>
<span class="hljs-keyword">const</span> readPool = <span class="hljs-keyword">new</span> Pool({
  <span class="hljs-attr">connectionString</span>: process.env.READ_REPLICA_URL,
});

<span class="hljs-built_in">module</span>.exports = {
  <span class="hljs-comment">// Uses writePool for writes</span>
  <span class="hljs-attr">query</span>: <span class="hljs-function">(<span class="hljs-params">text, params</span>) =&gt;</span> writePool.query(text, params),
  <span class="hljs-comment">// Uses readPool for reads</span>
  <span class="hljs-attr">select</span>: <span class="hljs-function">(<span class="hljs-params">text, params</span>) =&gt;</span> readPool.query(text, params),
};
</code></pre>
<p>Throughout the app, I updated calls:</p>
<pre><code class="lang-js"><span class="hljs-comment">// Before</span>
<span class="hljs-keyword">const</span> { rows } = <span class="hljs-keyword">await</span> db.query(<span class="hljs-string">'SELECT * FROM users WHERE id = $1'</span>, [userId]);

<span class="hljs-comment">// After</span>
<span class="hljs-keyword">const</span> { rows } = <span class="hljs-keyword">await</span> db.select(<span class="hljs-string">'SELECT * FROM users WHERE id = $1'</span>, [userId]);

<span class="hljs-comment">// Writes stay the same</span>
<span class="hljs-keyword">await</span> db.query(<span class="hljs-string">'UPDATE users SET last_login = NOW() WHERE id = $1'</span>, [userId]);
</code></pre>
<p>Deploying this change was transformative. CPU on the primary database dropped by 80%, connection limits were no longer an issue, and the application was fast again. I had effectively doubled database capacity by separating reads and writes.</p>
<h2 id="heading-the-final-frontier-sharding">The Final Frontier: Sharding</h2>
<p>Read replicas solved most immediate problems, but what happens when <strong>write traffic alone exceeds what a single primary database can handle</strong>? The answer: <strong>sharding</strong>.</p>
<p>Sharding splits data across multiple independent primary databases. For AWS RDS, this means creating two or more smaller RDS instances (shard-1, shard-2, etc.). The application acts as a router using a <strong>shard key</strong> for example, <code>user_id</code>.</p>
<pre><code class="lang-js"><span class="hljs-function"><span class="hljs-keyword">function</span> <span class="hljs-title">getDbForUser</span>(<span class="hljs-params">userId</span>) </span>{
  <span class="hljs-keyword">if</span> (userId % <span class="hljs-number">2</span> === <span class="hljs-number">0</span>) {
    <span class="hljs-comment">// Even user IDs go to shard 1</span>
    <span class="hljs-keyword">return</span> dbConnectionForShard1;
  } <span class="hljs-keyword">else</span> {
    <span class="hljs-comment">// Odd user IDs go to shard 2</span>
    <span class="hljs-keyword">return</span> dbConnectionForShard2;
  }
}

<span class="hljs-comment">// Fetching a user</span>
<span class="hljs-keyword">const</span> userId = <span class="hljs-number">123</span>; <span class="hljs-comment">// Odd</span>
<span class="hljs-keyword">const</span> db = getDbForUser(userId); <span class="hljs-comment">// Returns shard 2 connection</span>
<span class="hljs-keyword">await</span> db.query(<span class="hljs-string">'SELECT * FROM users WHERE id = $1'</span>, [userId]);
</code></pre>
<p>With this strategy, write capacity can scale almost infinitely by adding more RDS instances and updating routing logic. The database was no longer a single, magical box; it had become a <strong>distributed data tier</strong>, just like the application tier.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1755755079046/deb915a7-85e1-451b-973a-3bce3f58bad3.png" alt class="image--center mx-auto" /></p>
<p>But even with the database healthy, I noticed a new bottleneck. User registration was still slow, not because of the database, but because the process was synchronous: creating a user, resizing profile pictures, adding them to a mailing list, and sending a welcome email, all while the user waited.</p>
<p>My architecture had scaled, but my processes hadn’t. That was the next challenge.</p>
<p><em>next post:</em> <a target="_blank" href="https://blog.arunjagadish.space/a-developers-journey-to-the-cloud-8-event-driven-architecture-with-rabbitmq"><em>A Developer’s Journey to the Cloud 8: Event-Driven Architecture with RabbitMQ.</em></a></p>
]]></content:encoded></item><item><title><![CDATA[A Developer’s Journey to the Cloud 6: My Path to Kubernetes and IaC]]></title><description><![CDATA[From Herding Servers to Building Worlds with Code
I had done it. I had achieved high availability. My application was running on a fleet of two identical servers, managed by a smart load balancer. If one server went down, the other would seamlessly t...]]></description><link>https://blog.arunjagadish.space/a-developers-journey-to-the-cloud-6-my-path-to-kubernetes-and-iac</link><guid isPermaLink="true">https://blog.arunjagadish.space/a-developers-journey-to-the-cloud-6-my-path-to-kubernetes-and-iac</guid><category><![CDATA[Kubernetes]]></category><category><![CDATA[Terraform]]></category><category><![CDATA[AWS]]></category><category><![CDATA[ansible]]></category><category><![CDATA[Cloud]]></category><category><![CDATA[Devops]]></category><dc:creator><![CDATA[Arun SD]]></dc:creator><pubDate>Sat, 16 Aug 2025 18:30:00 GMT</pubDate><content:encoded><![CDATA[<h2 id="heading-from-herding-servers-to-building-worlds-with-code">From Herding Servers to Building Worlds with Code</h2>
<p>I had done it. I had achieved high availability. My application was running on a fleet of two identical servers, managed by a smart load balancer. If one server went down, the other would seamlessly take over. My application was resilient. It was professional. I felt invincible.</p>
<p>That feeling lasted until it was time to deploy a new feature.</p>
<p>My beautiful, simple CI/CD pipeline was now obsolete. It was designed to update one server. How was I supposed to update a whole fleet? My first attempt was a clumsy bash script: a for loop that would SSH into each server, one by one, pull the latest code, and restart the container.</p>
<p>The first time I ran it, my heart was in my throat. I watched the logs scroll by, praying that server #1 would come back online before server #2 went down. It was a "rolling update" in the most literal, terrifying sense of the word. My fleet wasn't a clean, unified entity; it was a messy collection of individuals that I had to wrangle personally. I wasn't a developer anymore; I was the stressed-out admiral of a small, complicated armada, and I was spending all my time just keeping the ships sailing in the same direction.</p>
<h2 id="heading-chapter-1-the-fleet-commander">Chapter 1: The Fleet Commander</h2>
<p>My life was no longer about building features. It was about managing the fleet. My evenings were spent writing and debugging deployment scripts. My anxieties shifted. "What if the deployment script fails halfway through?" "How do I roll back an update across all servers at once?" "What happens when I need to scale from two servers to five? Or ten?" I was back to micromanaging machines, and it felt like a huge step backward.</p>
<p>This constant, low-grade fear of things getting out of sync led me down a late-night research rabbit hole of "container orchestration." My first stop was my cloud provider's own solution, Amazon ECS (Elastic Container Service). It seemed like the logical next step: simple, deeply integrated, and less complex than the other options. It felt like the "easy" path.</p>
<p>But then I hesitated. A familiar feeling crept in: the same feeling I had when I chose the "easy" path of running Redis in a Docker container. Was I about to tie my entire application's fate to a single cloud provider's proprietary system? What if I wanted to move to another cloud in the future? Or run a hybrid setup? All my knowledge of ECS would be useless. I would be locked in. I had learned my lesson: the easy path is often a trap.</p>
<p>This time, I decided to invest in the long term. I chose the other path, the one that was known for being more complex, but also more powerful and universal: Kubernetes.</p>
<p>Learning Kubernetes felt like learning a new language. The initial tutorials were a flood of new concepts: Pods, Services, Deployments, ReplicaSets. It wasn't a tool you could master in an afternoon. But as I pushed through, a fundamental, game-changing idea began to crystallize.</p>
<p>With my bash script, I was giving the servers a list of imperative commands: "Go here. Stop this. Pull that. Start this." I was the micromanager.</p>
<p>Kubernetes didn't want my instructions. It wanted my intent.</p>
<p>I stopped telling my servers what to do. Instead, I wrote a configuration file that declared the state I wanted, and Kubernetes worked tirelessly, like a powerful robot, to make that state a reality.</p>
<p>I no longer commanded; I declared.</p>
<p>Instead of a script that says "update server 1, then update server 2," I now wrote a Deployment manifest: a simple YAML file that acted as the sheet music for my application's orchestra.</p>
<pre><code class="lang-yaml"><span class="hljs-comment"># deployment.yaml</span>
<span class="hljs-attr">apiVersion:</span> <span class="hljs-string">apps/v1</span>
<span class="hljs-attr">kind:</span> <span class="hljs-string">Deployment</span>
<span class="hljs-attr">metadata:</span>
  <span class="hljs-attr">name:</span> <span class="hljs-string">my-awesome-app</span>
<span class="hljs-attr">spec:</span>
  <span class="hljs-attr">replicas:</span> <span class="hljs-number">3</span> <span class="hljs-comment"># &lt;-- I declare I want 3 copies.</span>
  <span class="hljs-attr">template:</span>
    <span class="hljs-attr">spec:</span>
      <span class="hljs-attr">containers:</span>
      <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">app-container</span>
        <span class="hljs-attr">image:</span> <span class="hljs-string">myusername/my-awesome-app:v1.1</span> <span class="hljs-comment"># &lt;-- I declare which version to run.</span>
        <span class="hljs-attr">ports:</span>
        <span class="hljs-bullet">-</span> <span class="hljs-attr">containerPort:</span> <span class="hljs-number">3001</span>
</code></pre>
<p>To perform an update, I just changed the image tag in this file and applied it. Kubernetes handled the zero-downtime rolling update. If a server died, Kubernetes would just reschedule its containers elsewhere. The individual machines had become an invisible, abstract resource. I had finally stopped being an admiral and could go back to being an architect.</p>
<h2 id="heading-chapter-2-the-fragile-ground-beneath-my-feet">Chapter 2: The Fragile Ground Beneath My Feet</h2>
<p>I had done it. My application was now managed by a powerful, automated fleet commander. I felt unstoppable. I decided to create a staging environment: a perfect replica of production for testing. So I went to my cloud provider's console to start building it all again.</p>
<p>And that's when a quiet, sinking feeling set in.</p>
<p>How did I create my production Kubernetes cluster in the first place? I had clicked through dozens of web UI forms. I had configured VPCs, subnets, security groups, and IAM roles manually. It had taken me a whole day. I had no record of what I did. I couldn't remember every setting. My entire production environment, the ground upon which my perfect Kubernetes setup stood, was a fragile, hand-made artifact. How could I ever hope to recreate it perfectly? What if I accidentally deleted something? The whole thing felt like a house of cards.</p>
<p>I had automated my application, but the infrastructure itself was still a manual, brittle mess. This led me to my next discovery: Infrastructure as Code (IaC). The idea is to do for your infrastructure what Docker did for your application environment: define it all in code. For this, I found a powerful duo: Terraform for provisioning the infrastructure, and Ansible for configuring it.</p>
<p>With Terraform, I could write files that described my entire cloud setup: the "what."</p>
<pre><code class="lang-plaintext"># main.tf

# Define the Virtual Private Cloud (VPC)
resource "aws_vpc" "main" {
  cidr_block = "10.0.0.0/16"
}

# Define a managed Kubernetes cluster within our VPC
resource "aws_eks_cluster" "production" {
  name     = "my-awesome-app-cluster"
  role_arn = aws_iam_role.eks_cluster_role.arn

  vpc_config {
    subnet_ids = [ ... ]
  }
}

# Provision a separate EC2 instance to be our secure "bastion" host
resource "aws_instance" "bastion" {
  ami           = "ami-0c55b159cbfafe1f0" # An Amazon Linux 2 AMI
  instance_type = "t2.micro"
  subnet_id     = # ...
}
</code></pre>
<p>Terraform was brilliant at creating the empty house, but how did I install the specific tools I needed on that bastion host? That's where Ansible came in. It handled the "how": the configuration of the software on the machines Terraform built. I wrote an Ansible "playbook" to define the desired state of my bastion server.</p>
<pre><code class="lang-yaml"><span class="hljs-comment"># playbook.yml</span>
<span class="hljs-meta">---</span>
<span class="hljs-bullet">-</span> <span class="hljs-attr">hosts:</span> <span class="hljs-string">bastion_hosts</span> <span class="hljs-comment"># This group is defined in an inventory file</span>
  <span class="hljs-attr">become:</span> <span class="hljs-literal">yes</span>
  <span class="hljs-attr">tasks:</span>
    <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">Ensure</span> <span class="hljs-string">standard</span> <span class="hljs-string">monitoring</span> <span class="hljs-string">tools</span> <span class="hljs-string">are</span> <span class="hljs-string">installed</span>
      <span class="hljs-attr">apt:</span>
        <span class="hljs-attr">name:</span>
          <span class="hljs-bullet">-</span> <span class="hljs-string">htop</span>
          <span class="hljs-bullet">-</span> <span class="hljs-string">ncdu</span>
        <span class="hljs-attr">state:</span> <span class="hljs-string">latest</span>
        <span class="hljs-attr">update_cache:</span> <span class="hljs-literal">yes</span>

    <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">Create</span> <span class="hljs-string">a</span> <span class="hljs-string">specific</span> <span class="hljs-string">user</span> <span class="hljs-string">for</span> <span class="hljs-string">developers</span>
      <span class="hljs-attr">user:</span>
        <span class="hljs-attr">name:</span> <span class="hljs-string">dev_user</span>
        <span class="hljs-attr">state:</span> <span class="hljs-string">present</span>
        <span class="hljs-attr">shell:</span> <span class="hljs-string">/bin/bash</span>
</code></pre>
<p><img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/c9xrj3ed6j81wwlbvmfd.png" alt="Architectural diagram with k8s, terraform and ansible" /></p>
<p>I spent a week converting my entire hand-clicked setup into these declarative files. When I was done, I could destroy and recreate my entire production network, Kubernetes cluster, and all its supporting services from scratch with a single command: <code>terraform apply</code>, followed by <code>ansible-playbook playbook.yml</code>.</p>
<p>My infrastructure was no longer a fragile artifact; it was now a set of version-controlled text files living in my Git repository. Creating an identical staging environment was now as simple as running the same commands with a different variable.</p>
<p>I had finally reached a new level of automation. Everything, from the virtual network cables up to the application replicas, was now code. The system felt truly robust. With a few keystrokes, I could scale my application containers, and with a few more, I could scale the very cluster they ran on. The system felt unstoppable. And as more users flocked to the app, I saw my cluster effortlessly adding more resources to meet the demand. But all this traffic, all these new users, were all being funneled to one place. The bottleneck had moved again. My application servers and infrastructure were an army, but they were all trying to get through a single door: my database.</p>
<p><em>next post:</em> <a target="_blank" href="http://blog.arunjagadish.space/a-developers-journey-to-the-cloud-7-advanced-database-scaling"><em>A Developer’s Journey to the Cloud 7: Advanced Database Scaling.</em></a></p>
]]></content:encoded></item><item><title><![CDATA[A Developer’s Journey to the Cloud 5: Load Balancers & Multiple Servers]]></title><description><![CDATA[My Server Was a Superhero, and That Was the Problem
I had finally done it. My application was a well-oiled machine.The database and cache were offloaded to managed services, so they could scale on their own.My deployments were a one-command, automate...]]></description><link>https://blog.arunjagadish.space/a-developers-journey-to-the-cloud-5-load-balancers-and-multiple-servers</link><guid isPermaLink="true">https://blog.arunjagadish.space/a-developers-journey-to-the-cloud-5-load-balancers-and-multiple-servers</guid><category><![CDATA[AWS]]></category><category><![CDATA[Load Balancing]]></category><category><![CDATA[Devops]]></category><category><![CDATA[Cloud]]></category><dc:creator><![CDATA[Arun SD]]></dc:creator><pubDate>Fri, 15 Aug 2025 18:30:00 GMT</pubDate><content:encoded><![CDATA[<h2 id="heading-my-server-was-a-superhero-and-that-was-the-problem">My Server Was a Superhero, and That Was the Problem</h2>
<p>I had finally done it. My application was a well-oiled machine.<br />The database and cache were offloaded to managed services, so they could scale on their own.<br />My deployments were a one-command, automated dream.<br />My single server was humming along, its memory usage was stable, and the app was faster than ever.</p>
<p>For the first time, I felt like I had a truly <strong>professional setup</strong>.</p>
<p>And then, one Tuesday morning, AWS sent a routine email:</p>
<blockquote>
<p>"Scheduled maintenance for hardware upgrades in your server's host region. Expect a brief reboot..."</p>
</blockquote>
<p>My blood ran cold.</p>
<p>A reboot. A <strong>brief</strong> reboot. My entire application, my whole online presence, was going to just… turn off.<br />Sure, maybe it would only be for five minutes, but in that instant the <em>fragility</em> of my architecture hit me.<br />Everything I had built, every feature, every user account, every bit of hard work, depended entirely on one single machine staying on.</p>
<p>My server wasn't just a server; it was a superhero, single-handedly holding up my entire digital world.<br />And even superheroes have to sleep.</p>
<hr />
<h2 id="heading-the-vertical-scaling-trap">The Vertical Scaling Trap</h2>
<p>My first instinct?<br /><em>"Maybe I just need a better server."</em></p>
<p>It’s an appealing idea: click a button, pay more money, and upgrade to a machine with more CPU cores and RAM.<br />That’s <strong>vertical scaling</strong>: making your one thing bigger and stronger.</p>
<p>But the maintenance email proved a brutal truth: even the biggest, most expensive server is still <strong>just one server</strong>.<br />It still has to be rebooted. Its hard drive can still fail. Its power supply can still die.</p>
<p>Scaling vertically is like buying an “unsinkable” ship: it feels safe, but you’re still betting everything on a single vessel.<br />It doesn’t fix the real flaw: the <strong>single point of failure</strong>.</p>
<p>I didn’t need a bigger boat.<br />I needed a <strong>fleet</strong>.</p>
<hr />
<h2 id="heading-the-power-of-more-not-bigger">The Power of <em>More</em>, Not <em>Bigger</em></h2>
<p>The only way to survive the failure of one thing is to have <strong>more than one</strong> of it.<br />That’s <strong>horizontal scaling</strong>.</p>
<p>Instead of one big server, what if I had two smaller, identical ones?<br />If one went down for maintenance or failed unexpectedly, the other could keep running and my users would never even know.</p>
<p>This was the path to true resilience.<br />But it raised a new question:</p>
<blockquote>
<p>“If I have two servers, which one do my users connect to? And how is the traffic split?”</p>
</blockquote>
<p>That led me to AWS’s dashboard, where I met my new best friend: the <strong>Application Load Balancer (ALB)</strong>.</p>
<hr />
<h2 id="heading-building-the-fleet">Building the Fleet</h2>
<p>Surprisingly, the plan was straightforward.</p>
<ol>
<li><p><strong>Launch a Twin</strong><br /> I spun up a second, identical VM and deployed my same Dockerized application to it.<br /> Now I had <strong>two servers</strong>, side-by-side, each capable of running the whole app.</p>
</li>
<li><p><strong>Hire the Traffic Cop</strong><br /> The ALB doesn’t just point at individual server IPs.<br /> First, I created a <strong>Target Group</strong>, a logical container for my servers.<br /> I set up a health check that pinged <code>/health</code> every 30 seconds.<br /> If it got a <code>200 OK</code>, the server was marked healthy (see the sketch after this list).<br /> <em>(Think of it like a backstage manager making sure every performer is ready before sending them on stage.)</em></p>
</li>
<li><p><strong>Set Up the Listener</strong><br /> On the ALB itself, I configured a <strong>Listener</strong> for port 80.<br /> Its rule was simple:</p>
<blockquote>
<p>“When a request comes in, send it to a healthy server in my Target Group.”<br />The ALB would automatically distribute requests evenly.</p>
</blockquote>
</li>
<li><p><strong>Update the Address</strong><br /> The magic moment: updating my DNS.<br /> Instead of pointing <a target="_blank" href="http://myapp.com"><code>myapp.com</code></a> to my server’s IP, I pointed it to the <strong>public DNS name of the ALB</strong>.</p>
</li>
</ol>
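<p>Under the hood, the health check is nothing exotic; it is roughly what you would do yourself with <code>curl</code> (a sketch, with <code>server_ip</code> as a placeholder):</p>
<pre><code class="lang-bash"># What the ALB does every 30 seconds, per server
# A 200 keeps the server in rotation; anything else marks it unhealthy
curl -i http://server_ip/health
# HTTP/1.1 200 OK
</code></pre>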
<hr />
<p><img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/901kz2zyottxi7zq0ro2.png" alt="Architectural diagram with ALB" /></p>
<h2 id="heading-the-first-test">The First Test</h2>
<p>I shut down one server manually, refreshed my site… and nothing happened.<br />It stayed online, smooth as ever.</p>
<p>Behind the scenes, the load balancer had noticed the outage during a health check and was silently routing all traffic to the surviving server.<br />When I restarted the downed server, the ALB welcomed it back into rotation without any downtime.</p>
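<p>A crude way to watch a failover happen in real time, had I thought to script it (a sketch, polling the public domain once a second):</p>
<pre><code class="lang-bash"># Print only the status code for each request;
# during the failover every line should still read 200
while true; do
  curl -s -o /dev/null -w "%{http_code}\n" http://myapp.com/
  sleep 1
done
</code></pre>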
<p>That was it: I had built <strong>high availability</strong>.<br />No single point of failure.<br />A system that could take a punch and keep running.</p>
<hr />
<h2 id="heading-the-new-problem">The New Problem</h2>
<p>As I admired my two-server fleet, a thought crept in.</p>
<p>My CI/CD pipeline was perfect for one server.<br />But now? Two servers meant two deployments.</p>
<p>What if I needed <strong>five servers</strong>?<br />Or ten?<br />How would I update them all at once without breaking things?</p>
<p>I could already picture the nightmare:<br />half my servers running old code, the other half on a new version, users getting inconsistent results.</p>
<p>I had solved the single-point-of-failure problem…<br />and opened the door to the <strong>complexity-at-scale problem</strong>.</p>
<hr />
<p><strong>Next up:</strong><br /><a target="_blank" href="https://blog.arunjagadish.space/a-developers-journey-to-the-cloud-6-my-path-to-kubernetes-and-iac"><em>A Developer’s Journey to the Cloud 6: Managing Complexity with Kubernetes</em></a></p>
]]></content:encoded></item><item><title><![CDATA[A Developer’s Journey to the Cloud 4: Caching with Redis]]></title><description><![CDATA[My App Was Getting Popular, and It Was Starting to Hurt
For the first time in this journey, I felt a sense of true peace. My deployments were fully automated. I could push a new feature, walk away to make a cup of tea, and return to find it live in p...]]></description><link>https://blog.arunjagadish.space/a-developers-journey-to-the-cloud-4-caching-with-redis</link><guid isPermaLink="true">https://blog.arunjagadish.space/a-developers-journey-to-the-cloud-4-caching-with-redis</guid><category><![CDATA[Redis]]></category><category><![CDATA[Devops]]></category><category><![CDATA[Cloud]]></category><category><![CDATA[AWS]]></category><dc:creator><![CDATA[Arun SD]]></dc:creator><pubDate>Thu, 14 Aug 2025 18:30:00 GMT</pubDate><content:encoded><![CDATA[<h2 id="heading-my-app-was-getting-popular-and-it-was-starting-to-hurt">My App Was Getting Popular, and It Was Starting to Hurt</h2>
<p>For the first time in this journey, I felt a sense of <strong>true peace</strong>. My deployments were fully automated. I could push a new feature, walk away to make a cup of tea, and return to find it live in production.<br />No manual checklists. No “Did I forget to restart that service?” anxiety. The high-wire act of deploying by hand was gone. For a developer, this was bliss. I could finally focus purely on the application itself.</p>
<p>And that’s when I started to notice things.</p>
<p>The main dashboard took <em>just a hair</em> longer to load.<br />A user emailed me to say their profile page “felt sticky.”<br />Nothing was crashing. Nothing was broken. But a new kind of unease began to creep in: the quiet dread of a system silently starting to buckle under its own weight.</p>
<hr />
<h2 id="heading-the-investigation-a-different-kind-of-broken">The Investigation: A Different Kind of Broken</h2>
<p>My first instinct was the usual: check the server’s health.<br />I SSH’d in, ran my standard CPU and memory checks… all green. No spikes. No memory leaks. So why did everything feel <em>sluggish</em>?</p>
<p>I dug one layer deeper, into my managed database’s monitoring dashboard. And that’s when the story changed. The CPU utilization graph looked like an EKG for a hummingbird: constant, jagged peaks. My database was <em>working incredibly hard</em>.</p>
<p>I enabled query logging, leaned back in my chair, and watched the flood of requests pour in.</p>
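<p>For the record, flipping on statement logging is a one-liner on a self-hosted Postgres; on RDS the same setting lives in a parameter group. A sketch:</p>
<pre><code class="lang-bash"># Log every statement (requires superuser; on RDS, set log_statement = 'all'
# in the instance's parameter group instead)
psql "$DATABASE_URL" -c "ALTER SYSTEM SET log_statement = 'all';"
psql "$DATABASE_URL" -c "SELECT pg_reload_conf();"
</code></pre>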
<p>And then I saw it: my application was asking my database <strong>the exact same questions</strong> over and over.<br />It was like sending the same intern to the library hundreds of times a minute to fetch the same book.<br />The database, bless its heart, dutifully sprinted to the shelves every single time, never pausing to wonder if maybe it could just keep a copy on its desk.</p>
<p>My app wasn’t broken. It was just… tired. And it was tiring out my database.</p>
<hr />
<h2 id="heading-the-easy-fix-and-the-mistake-i-didnt-see-coming">The "Easy" Fix and the Mistake I Didn’t See Coming</h2>
<p>The solution seemed obvious: give my app a short-term memory.<br />In other words <strong>caching</strong>.</p>
<p>Redis was the obvious choice. It’s an in-memory, high-speed store designed for exactly this problem.</p>
<p>I already had my <code>docker-compose.yml</code> set up. What’s one more service?<br />It felt clean. Simple. No meetings required.</p>
<pre><code class="lang-yaml"><span class="hljs-comment"># docker-compose.yml (The "easy" but flawed approach)</span>
<span class="hljs-attr">version:</span> <span class="hljs-string">'3.8'</span>
<span class="hljs-attr">services:</span>
  <span class="hljs-attr">app:</span>
    <span class="hljs-comment"># ... my app config ...</span>
    <span class="hljs-attr">depends_on:</span>
      <span class="hljs-bullet">-</span> <span class="hljs-string">db</span>
      <span class="hljs-bullet">-</span> <span class="hljs-string">cache</span> <span class="hljs-comment"># &lt;-- Added this dependency</span>

  <span class="hljs-attr">db:</span>
    <span class="hljs-comment"># ... my managed db is external now, so this is gone ...</span>

  <span class="hljs-attr">cache:</span> <span class="hljs-comment"># &lt;-- The new service</span>
    <span class="hljs-attr">image:</span> <span class="hljs-string">redis:6-alpine</span>
    <span class="hljs-attr">ports:</span>
      <span class="hljs-bullet">-</span> <span class="hljs-string">"6379:6379"</span>
</code></pre>
<p>I wired the caching logic into my code, redeployed, and the results were instant.<br />The app was flying. The once frantic database CPU graph became a calm, glass-smooth line.</p>
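<p>The wiring followed the classic cache-aside pattern: check the cache first, fall back to the database on a miss, and store the answer with a TTL. Here is the idea in miniature, with <code>redis-cli</code> and <code>psql</code> standing in for the app code; the key name and query are invented for illustration:</p>
<pre><code class="lang-bash"># Cache-aside, sketched in shell
profile=$(redis-cli GET "profile:42")
if [ -z "$profile" ]; then
  # Cache miss: ask the database once...
  profile=$(psql "$DATABASE_URL" -t -c "SELECT row_to_json(u) FROM users u WHERE id = 42")
  # ...then keep the answer for 60 seconds so repeat reads never touch the DB
  redis-cli SETEX "profile:42" 60 "$profile"
fi
echo "$profile"
</code></pre>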
<p>For a few days, I walked taller. I’d done it again problem solved.</p>
<hr />
<h2 id="heading-the-crash-that-felt-familiar">The Crash That Felt Familiar</h2>
<p>Then, one afternoon, the alerts started firing.<br />The site wasn’t just slow; it was timing out.</p>
<p>My stomach tightened as I SSH’d into the server and ran:</p>
<pre><code class="lang-bash">top
The truth stared back:  
MEM --- 98.7% used
</code></pre>
<p>My little server’s RAM was choking. Sometimes Redis was the culprit. Other times, my Node.js process. Either way, the system was suffocating.</p>
<p>And there it was: the wave of déjà vu.<br />Just months ago, I’d been losing sleep over my database. Now I was losing sleep over my cache.<br />I hadn’t really solved the problem. I’d just moved the stress around, like shifting a heavy box from one arm to the other.</p>
<p>The lesson was starting to crystallize:<br /><strong>The goal isn’t just to use the right tool; it’s to use it in a way that reduces your operational anxiety, not just relocates it.</strong></p>
<p><img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/xs1ntomqqm6ybc2rsgag.png" alt="Architectural diagram with docker redis" /></p>
<hr />
<h2 id="heading-the-real-fix-outsourcing-my-anxiety-again">The Real Fix: Outsourcing My Anxiety (Again)</h2>
<p>Humbled, I shut down the Redis container.<br />Then I went shopping for the <em>right</em> kind of Redis.</p>
<p>My cloud provider had exactly what I needed: <strong>Amazon ElastiCache</strong>, a fully managed Redis service.<br />A few clicks later, I had a production-grade cache that:</p>
<ul>
<li><p>Didn’t touch my app server’s RAM</p>
</li>
<li><p>Was scalable and secure</p>
</li>
<li><p>Was patched and monitored by people whose <em>full-time job</em> was making Redis run perfectly</p>
</li>
</ul>
<p>The migration was almost embarrassingly simple. All I had to do was swap the connection string in my <code>.env</code>:</p>
<p>From this:</p>
<pre><code class="lang-plaintext">REDIS_URL=redis://cache:6379
</code></pre>
<p>To this:</p>
<pre><code class="lang-plaintext">REDIS_URL=redis://my-app-cache.x1y2z.ng.0001.aps1.cache.amazonaws.com:6379
</code></pre>
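<p>A one-line sanity check from the app server confirmed the swap before I went any further (a sketch; <code>redis-cli -u</code> takes the same URL the app uses):</p>
<pre><code class="lang-bash"># The managed endpoint should answer exactly like the local container did
redis-cli -u "$REDIS_URL" PING
# PONG
</code></pre>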
<p>I redeployed.<br />The app was still fast.<br />My server’s RAM sat at a comfortable 30%.<br />And for the first time in weeks, I wasn’t worried about my cache exploding at 2 a.m.</p>
<p><img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/8vcsrxyjhqndxsz7p9qe.png" alt="Architectural diagram with managed redis" /></p>
<hr />
<h2 id="heading-the-next-problem-already-knocking">The Next Problem, Already Knocking</h2>
<p>But as I watched my healthy server hum along, a new thought crept in.</p>
<p>I’d offloaded my database.<br />I’d offloaded my cache.<br />But my <strong>application code</strong>, the heart of the product, still lived on <em>one</em> single server.</p>
<p>What happens when, even without the extra baggage, my app needs more CPU or RAM than one box can give?<br />What happens when <em>the process itself</em> becomes the bottleneck?</p>
<p><strong>Up next:</strong> <a target="_blank" href="https://blog.arunjagadish.space/a-developers-journey-to-the-cloud-5-load-balancers-and-multiple-servers"><strong><em>A Developer’s Journey to the Cloud Part 5: Load Balancers &amp; Multiple Servers</em></strong></a></p>
]]></content:encoded></item><item><title><![CDATA[A Developer’s Journey to the Cloud 3: Building a CI/CD Pipeline]]></title><description><![CDATA[My Deployments Were a Ritual, Not a Process
We've come so far. Our application is neatly containerized in Docker, and our data is safe and sound in managed cloud services. I had eliminated the "works on my machine" curse and outsourced my 3 AM data-l...]]></description><link>https://blog.arunjagadish.space/a-developers-journey-to-the-cloud-3-building-a-cicd-pipeline</link><guid isPermaLink="true">https://blog.arunjagadish.space/a-developers-journey-to-the-cloud-3-building-a-cicd-pipeline</guid><category><![CDATA[Cloud]]></category><category><![CDATA[ci-cd]]></category><category><![CDATA[GitHub]]></category><category><![CDATA[github-actions]]></category><category><![CDATA[Devops]]></category><category><![CDATA[Git]]></category><dc:creator><![CDATA[Arun SD]]></dc:creator><pubDate>Wed, 13 Aug 2025 18:30:00 GMT</pubDate><content:encoded><![CDATA[<h2 id="heading-my-deployments-were-a-ritual-not-a-process">My Deployments Were a Ritual, Not a Process</h2>
<p>We've come so far. Our application is neatly containerized in Docker, and our data is safe and sound in managed cloud services. I had eliminated the <em>"works on my machine"</em> curse and outsourced my 3 AM data-loss fears. I should have been happy.</p>
<p>But a new kind of dread was creeping in, a dread that arrived every time I had to ship a new feature: <strong>the deployment ritual</strong>.</p>
<hr />
<h2 id="heading-the-manual-dance">The Manual Dance</h2>
<p>It was a clunky, manual dance that I had perfected:</p>
<ol>
<li><p><code>git push</code> my changes.</p>
</li>
<li><p>Open a terminal and SSH into my server.</p>
</li>
<li><p>Navigate to the project folder.</p>
</li>
<li><p>Run <code>docker-compose down</code>, <code>git pull</code>, and finally <code>docker-compose up --build</code>.</p>
</li>
</ol>
<p>Every. Single. Time.</p>
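<p>Strung together, the whole dance amounted to this:</p>
<pre><code class="lang-bash"># On my laptop
git push

# Then, by hand, on the server
ssh user@your_server_ip
cd /home/user/app
docker-compose down
git pull
docker-compose up --build
</code></pre>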
<p>It <em>worked</em>, but it felt wrong. It was slow. It was nerve-wracking: what if I accidentally typed the wrong command on my production server?</p>
<p>Most importantly, it was a <strong>bottleneck</strong>. I was the only one who could do it. I had become the <em>deployment guy</em>, a title I never wanted.</p>
<hr />
<h2 id="heading-the-day-i-locked-myself-out">The Day I Locked Myself Out</h2>
<p>The breaking point came in a coffee shop on shaky Wi-Fi. I SSH'd into my server to deploy a critical hotfix, but my connection dropped midway through.</p>
<p>The application stopped.<br />The server never got the command to bring it back up.<br />The site was down.</p>
<p>It took me ten frantic minutes to get a stable connection and fix it, but the damage was done.</p>
<p>I realized my manual process wasn’t just inefficient, it was <strong>fragile</strong>. It relied on me, my laptop, and a stable internet connection.</p>
<hr />
<h2 id="heading-what-if-deployment-just-happened">What If Deployment Just… Happened?</h2>
<p>That night, I asked myself:</p>
<blockquote>
<p><em>What if deploying wasn't a ritual I had to perform? What if it was just… something that happened automatically when the code was ready?</em></p>
</blockquote>
<p>That question led me to <strong>CI/CD</strong> (Continuous Integration / Continuous Deployment): an <em>assembly line for code</em>.</p>
<p>Tools like GitHub Actions or GitLab CI act as robots on this assembly line: you give them a recipe, and they execute it perfectly, every time.</p>
<hr />
<h2 id="heading-giving-the-robot-my-ritual">Giving the Robot My Ritual</h2>
<p>I built a recipe so that GitHub itself would:</p>
<ol>
<li><p>Build and push my Docker image.</p>
</li>
<li><p>Tell my server to pull and run the new version.</p>
</li>
</ol>
<p>After a day of tinkering, I had my workflow:</p>
<pre><code class="lang-yaml"><span class="hljs-comment"># .github/workflows/deploy.yml</span>
<span class="hljs-attr">name:</span> <span class="hljs-string">Deploy</span> <span class="hljs-string">to</span> <span class="hljs-string">Production</span>

<span class="hljs-attr">on:</span>
  <span class="hljs-attr">push:</span>
    <span class="hljs-attr">branches:</span> [ <span class="hljs-string">main</span> ] <span class="hljs-comment"># Run on every push to main branch</span>

<span class="hljs-attr">jobs:</span>
  <span class="hljs-attr">build-and-deploy:</span>
    <span class="hljs-attr">runs-on:</span> <span class="hljs-string">ubuntu-latest</span>
    <span class="hljs-attr">steps:</span>
      <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">Checkout</span> <span class="hljs-string">Code</span>
        <span class="hljs-attr">uses:</span> <span class="hljs-string">actions/checkout@v3</span>

      <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">Login</span> <span class="hljs-string">to</span> <span class="hljs-string">Docker</span> <span class="hljs-string">Hub</span>
        <span class="hljs-attr">uses:</span> <span class="hljs-string">docker/login-action@v2</span>
        <span class="hljs-attr">with:</span>
          <span class="hljs-attr">username:</span> <span class="hljs-string">${{</span> <span class="hljs-string">secrets.DOCKERHUB_USERNAME</span> <span class="hljs-string">}}</span>
          <span class="hljs-attr">password:</span> <span class="hljs-string">${{</span> <span class="hljs-string">secrets.DOCKERHUB_TOKEN</span> <span class="hljs-string">}}</span>

      <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">Build</span> <span class="hljs-string">and</span> <span class="hljs-string">Push</span> <span class="hljs-string">Docker</span> <span class="hljs-string">Image</span>
        <span class="hljs-attr">uses:</span> <span class="hljs-string">docker/build-push-action@v4</span>
        <span class="hljs-attr">with:</span>
          <span class="hljs-attr">context:</span> <span class="hljs-string">.</span>
          <span class="hljs-attr">push:</span> <span class="hljs-literal">true</span>
          <span class="hljs-attr">tags:</span> <span class="hljs-string">myusername/my-awesome-app:${{</span> <span class="hljs-string">github.sha</span> <span class="hljs-string">}}</span>

      <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">Deploy</span> <span class="hljs-string">to</span> <span class="hljs-string">Server</span>
        <span class="hljs-attr">uses:</span> <span class="hljs-string">appleboy/ssh-action@master</span>
        <span class="hljs-attr">with:</span>
          <span class="hljs-attr">host:</span> <span class="hljs-string">${{</span> <span class="hljs-string">secrets.SERVER_HOST</span> <span class="hljs-string">}}</span>
          <span class="hljs-attr">username:</span> <span class="hljs-string">${{</span> <span class="hljs-string">secrets.SERVER_USER</span> <span class="hljs-string">}}</span>
          <span class="hljs-attr">key:</span> <span class="hljs-string">${{</span> <span class="hljs-string">secrets.SSH_PRIVATE_KEY</span> <span class="hljs-string">}}</span>
          <span class="hljs-attr">script:</span> <span class="hljs-string">|
            cd /home/user/app
            export IMAGE_TAG=${{ github.sha }}
            docker-compose up -d --no-build</span>
</code></pre>
<p>The first time the pipeline ran successfully, it felt like magic.<br />I pushed my code, and a few minutes later, <strong>the changes were live</strong>.</p>
<p>I hadn’t even opened my terminal. My server was no longer a sacred place I had to log into. It was just a machine that ran containers. The “keys” now lived securely in GitHub’s secrets, not in my pocket.</p>
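<p>Registering those keys is a one-time step. A sketch with the GitHub CLI, assuming <code>gh</code> is installed and authenticated (the file names are placeholders):</p>
<pre><code class="lang-bash"># Store the pipeline's credentials as encrypted repository secrets
gh secret set DOCKERHUB_USERNAME --body "myusername"
gh secret set DOCKERHUB_TOKEN &lt; dockerhub_token.txt
gh secret set SERVER_HOST --body "your_server_ip"
gh secret set SERVER_USER --body "user"
gh secret set SSH_PRIVATE_KEY &lt; ~/.ssh/deploy_key
</code></pre>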
<hr />
<h2 id="heading-from-ritual-to-repeatable-code">From Ritual to Repeatable Code</h2>
<p>My deployment process was no longer trapped in my head; it was now <strong>code</strong> in my repository. Version-controlled. Repeatable. Secure.</p>
<p>Deployments went from being a 10-minute, high-anxiety event to a complete non-event. They just… happened.</p>
<hr />
<h2 id="heading-a-new-problem-emerges">A New Problem Emerges</h2>
<p>For the first time, I felt real peace. I could focus entirely on the application itself.</p>
<p>But then I noticed:</p>
<ul>
<li><p>A slight lag when the dashboard loaded.</p>
</li>
<li><p>A user emailing to say their profile page <em>"felt sticky"</em>.</p>
</li>
</ul>
<p>Nothing was crashing. Nothing was broken. But the system was <strong>straining</strong>.<br />The irony? My new, efficient deployments made it easier for more users to sign up, creating the very load that was slowing things down.</p>
<p>Diving into the logs, I saw the truth:<br />My database was getting hammered with the same queries over and over.</p>
<p>The app wasn’t broken. It was tired.<br />It was time to give it a break.</p>
<hr />
<p><strong>Stay tuned for the next post:</strong> <em>A Developer’s Journey to the Cloud 4: Caching with Redis</em></p>
]]></content:encoded></item><item><title><![CDATA[A Developer’s Journey to the Cloud 2: My Database Lived in a Shoebox, and I Didn’t Even Know It]]></title><description><![CDATA[My Database Lived in a Shoebox, and I Didn’t Even Know It
We did it. In the last post, we took our application, boxed it up with Docker, and shipped it to a server. It was running, stable, and consistent. The "works on my machine" curse was broken. I...]]></description><link>https://blog.arunjagadish.space/a-developers-journey-to-the-cloud-2-my-database-lived-in-a-shoebox-and-i-didnt-even-know-it</link><guid isPermaLink="true">https://blog.arunjagadish.space/a-developers-journey-to-the-cloud-2-my-database-lived-in-a-shoebox-and-i-didnt-even-know-it</guid><category><![CDATA[Cloud]]></category><category><![CDATA[Devops]]></category><category><![CDATA[Databases]]></category><category><![CDATA[AWS]]></category><dc:creator><![CDATA[Arun SD]]></dc:creator><pubDate>Tue, 12 Aug 2025 18:30:00 GMT</pubDate><content:encoded><![CDATA[<h2 id="heading-my-database-lived-in-a-shoebox-and-i-didnt-even-know-it">My Database Lived in a Shoebox, and I Didn’t Even Know It</h2>
<p>We did it. In the last post, we took our application, boxed it up with Docker, and shipped it to a server. It was running, stable, and consistent. The "works on my machine" curse was broken. I felt like I had conquered the cloud.</p>
<p>For about a week, I was a DevOps king, basking in the glory of my perfectly containerized world.</p>
<p>Then, one evening, as I was about to shut my laptop, a cold thought washed over me:</p>
<blockquote>
<p><strong>Where does my data actually live?</strong></p>
</blockquote>
<hr />
<h2 id="heading-the-shoebox-realization">The Shoebox Realization</h2>
<p>It hit me like a bad database query: my entire database, every user, every post, every precious row of information, was running inside that same Docker container, on that same single server.</p>
<p>And it wasn’t just the database. My user-uploaded images? Just sitting in a <code>/uploads</code> folder on that same hard drive, quietly piling up like old photos in a forgotten attic.</p>
<p>The whole thing was one fragile digital shoebox. If the lid blew off (or the drive failed), it would all scatter into the void.</p>
<hr />
<h2 id="heading-the-3-am-fear">The 3 AM Fear</h2>
<p>That night I lay in bed thinking about <code>rm -rf /</code> nightmares and spinning disks giving their last click of life.</p>
<p>What if the server’s hard drive failed? It’s just a machine, after all. Everything would be gone. Instantly.</p>
<p>What about backups? Sure, I could write a script, maybe a cron job:</p>
<pre><code class="lang-bash">pg_dump mydb &gt; backup.sql
</code></pre>
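<p>Concretely, I was picturing a crontab line like this (a sketch; the backup path is a placeholder):</p>
<pre><code class="lang-bash"># Nightly dump at 2 AM (in cron, % must be escaped)
0 2 * * * pg_dump mydb &gt; /home/user/backups/backup_$(date +\%F).sql
</code></pre>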
<p>But… where would that backup go? Another folder? On the same server? That’s like hiding your spare house key under the doormat of a house that’s on fire.</p>
<p>The more I thought about it, the more absurd it became.</p>
<h3 id="heading-googling-myself-into-dba-territory">Googling Myself Into DBA Territory</h3>
<p>I started Googling “how to back up a database properly” and promptly fell into a black hole: replication strategies, point-in-time recovery, WAL archiving, security patching.</p>
<p>I wasn’t just a developer anymore; I was now an unwilling, unqualified, and mildly terrified part-time Database Administrator and Storage Manager.</p>
<p>This wasn’t the dream. The dream was building my app, not babysitting a database and a pile of user images like some digital hoarder.</p>
<h3 id="heading-the-clouds-best-kept-secret">The Cloud’s Best-Kept Secret</h3>
<p>Defeated, I wandered through my cloud provider’s dashboard, clicking through services with names I didn’t fully understand.</p>
<p>And then I saw them, two shiny lifeboats in a sea of uncertainty:</p>
<ul>
<li><p>Relational Database Service (RDS): “A managed relational database service... handles provisioning, patching, backup, recovery, failure detection, and repair.”</p>
</li>
<li><p>Simple Storage Service (S3): “Object storage designed to store and retrieve any amount of data... with 99.999999999% durability.”</p>
</li>
</ul>
<p>It was almost comical. Of course the cloud companies were good at this. This is their entire business!</p>
<p>Here I was, ready to script a janky nightly backup, while they had teams of engineers whose only job was to make sure data never disappears.</p>
<h3 id="heading-handing-over-the-keys">Handing Over the Keys</h3>
<p>The next day, I stopped being stubborn and started migrating.</p>
<h4 id="heading-database-migration">Database Migration</h4>
<p>With a few clicks, I spun up an RDS instance. Automatic backups? Done. High availability? Done. Security patches? Done.</p>
<p>I just updated my app’s connection string:</p>
<pre><code class="lang-plaintext">DATABASE_URL=postgres://user:password@database-1.abcdefghij12.us-east-1.rds.amazonaws.com:5432/mydb
</code></pre>
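<p>Before flipping anything over, one command from the app server confirmed the new home was reachable (a sketch):</p>
<pre><code class="lang-bash"># If this prints a PostgreSQL version string, the managed instance is live
psql "$DATABASE_URL" -c "SELECT version();"
</code></pre>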
<h3 id="heading-file-storage-migration">File Storage Migration</h3>
<p>Instead of saving files locally, I integrated the S3 SDK and changed my upload logic:</p>
<pre><code class="lang-javascript">s3.upload({
  <span class="hljs-attr">Bucket</span>: <span class="hljs-string">"my-app-bucket"</span>,
  <span class="hljs-attr">Key</span>: <span class="hljs-string">`uploads/<span class="hljs-subst">${file.name}</span>`</span>,
  <span class="hljs-attr">Body</span>: file.data
});
</code></pre>
<p>Suddenly, my images weren’t trapped in <code>/uploads</code>; they were in a globally redundant, highly durable vault.</p>
<h3 id="heading-a-stronger-foundation">A Stronger Foundation</h3>
<p>From the outside, my app looked exactly the same. But beneath the surface, the foundation had gone from a shoebox on a wobbly shelf to a bank vault inside a fortress.</p>
<p><img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/3ma8sl4wdoobmvzqx28d.png" alt="Image description" /></p>
<p>I was no longer the single point of failure. I could finally focus on writing code without the looming fear of catastrophic data loss.</p>
<h3 id="heading-but-one-problem-remained">But One Problem Remained…</h3>
<p>Even with the data safe, I still had to deploy my code the old-fashioned way: SSH into the server, run some commands, cross my fingers, and hope nothing broke.</p>
<p>It felt clunky. Slow. Archaic. There had to be a better way.</p>
<p>Next up: <a target="_blank" href="https://blog.arunjagadish.space/a-developers-journey-to-the-cloud-3-building-a-cicd-pipeline">A Developer’s Journey to the Cloud 3: Building a CI/CD Pipeline</a>.</p>
]]></content:encoded></item><item><title><![CDATA[A Developer’s Journey to the Cloud 1: From Localhost to Dockerized Deployment]]></title><description><![CDATA[About This Series
Over the past 8 years, I’ve built and deployed a variety of applications, each with its own unique set of challenges, lessons, and occasionally, hard-earned scars. Instead of presenting those experiences as isolated technical write-...]]></description><link>https://blog.arunjagadish.space/a-developers-journey-to-the-cloud-1-from-localhost-to-dockerized-deployment</link><guid isPermaLink="true">https://blog.arunjagadish.space/a-developers-journey-to-the-cloud-1-from-localhost-to-dockerized-deployment</guid><category><![CDATA[Cloud]]></category><category><![CDATA[Docker]]></category><category><![CDATA[hosting]]></category><category><![CDATA[Devops]]></category><dc:creator><![CDATA[Arun SD]]></dc:creator><pubDate>Mon, 11 Aug 2025 18:30:00 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1755629075122/92f7da6b-5e3b-4646-85e0-2df6a9715537.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h2 id="heading-about-this-series">About This Series</h2>
<p>Over the past 8 years, I’ve built and deployed a variety of applications, each with its own unique set of challenges, lessons, and occasionally, hard-earned scars. Instead of presenting those experiences as isolated technical write-ups, I’ve woven them into a single, continuous narrative: A Developer’s Journey to the Cloud.</p>
<p>While the “developer” in this story is fictional, the struggles, breakthroughs, and aha-moments are all drawn from real projects I’ve worked on spanning multiple tech stacks, deployment models, and problem domains. Each post captures the why and what behind key decisions and technologies, without drowning in step-by-step tutorials.</p>
<p>Think of it as a mix between a memoir and a guide, part storytelling, part practical insight, narrating the messy, funny, and sometimes painful path of learning to build in the cloud.</p>
<hr />
<h2 id="heading-that-time-i-thought-localhosthttplocalhost-was-lying-to-me">That Time I Thought <a target="_blank" href="http://localhost">localhost</a> Was Lying to Me</h2>
<p>It all starts with that beautiful, electric feeling. The final line of code clicks into place, and your application, your glorious, bug-free masterpiece, purrs to life on <a target="_blank" href="http://localhost">localhost</a>. You've tested every button, every form, every feature. It's perfect. All that's left is to share it with the world.</p>
<p>"How hard can that be?" I thought, basking in the glow of my monitor. And so began my glorious, agonizing, and unintentionally hilarious journey into the wild.</p>
<h2 id="heading-the-first-launch">The First Launch</h2>
<p>My plan was simple: rent the cheapest virtual server I could find. For a few hundred rupees, I was the proud owner of a blank command line with a public IP address. It felt like being handed the keys to the internet.</p>
<p>With the confidence of someone who had successfully used <code>npm install</code> more than once, I manually installed Node.js and PostgreSQL on the server. Then came the big moment: getting my code onto the server. The process was painfully manual.</p>
<p>First, I prepared my project files locally:</p>
<pre><code class="lang-Bash">
<span class="hljs-comment"># On my laptop</span>
zip -r my-awesome-app.zip .
</code></pre>
<p>Then, I used scp (Secure Copy) to upload the zipped file to the server:</p>
<pre><code class="lang-Bash">
<span class="hljs-comment"># On my laptop</span>
scp my-awesome-app.zip user@your_server_ip:/home/user/
</code></pre>
<p>Finally, I logged into the server to unpack and run everything:</p>
<pre><code class="lang-Bash">
<span class="hljs-comment"># On the server</span>
ssh user@your_server_ip
unzip my-awesome-app.zip -d app
<span class="hljs-built_in">cd</span> app
npm install
npm start
</code></pre>
<p>I typed the IP address into my browser. It worked. My app was alive. I was, for all intents and purposes, a genius.</p>
<p>The first crack appeared a day later. I found a tiny typo. "Easy fix," I thought. I corrected the text, zipped everything up again, and repeated the entire upload-unzip-restart dance.</p>
<p>And the entire thing crashed.</p>
<p>Somehow, in that simple process, something had gone terribly wrong. I had no history, no undo button. It took me an hour of frantic re-uploading to get it back online. I decided the "S" in SCP stood for "shaky."</p>
<h3 id="heading-getting-smarter-a-little">Getting Smarter, a Little</h3>
<p>Okay, no more zip files. I was a professional, and professionals use Git. I SSH'd into my server and set up a "bare" repository, a special repo just for receiving pushes.</p>
<pre><code class="lang-Bash">
<span class="hljs-comment"># On the server</span>
git init --bare /var/repos/app.git
</code></pre>
<p>I configured a hook that would automatically check out the code into my live directory whenever I pushed to it. My deployment process was now a sleek and sophisticated <code>git push production main</code>. I had leveled up.</p>
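<p>The hook itself was only a couple of lines. A minimal sketch, assuming the live directory is <code>/home/user/app</code> and the branch is <code>main</code>:</p>
<pre><code class="lang-bash"># On the server: saved as /var/repos/app.git/hooks/post-receive, marked executable
GIT_WORK_TREE=/home/user/app git checkout -f main

# On my laptop: add the remote once, and every deploy becomes a single push
git remote add production user@your_server_ip:/var/repos/app.git
git push production main
</code></pre>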
<p>This new system worked beautifully for weeks. I built a major new feature, a complex image upload and processing tool. To get access to some new performance improvements, I developed it locally using the <strong>latest</strong> version of Node.js. As always, it ran like a dream on my laptop. I pushed the code, and the deployment hook ran. I restarted the app with a confident smirk.</p>
<p>It crashed. Instantly.</p>
<p>The error message was a nightmare. A function I was using simply didn't exist. But... I had just used it. It was right there in my code. I spent the next six hours in a state of pure disbelief.</p>
<p>Then, it hit me. My server, being a server, had been prudently set up with the stable, Long-Term Support (LTS) version of Node.js. My feature, built with the shiny new tools of the "latest" version, was trying to call a function that hadn't been introduced in the "stable" release yet. It was then I understood that the most soul-crushing phrase in our industry, "but it works on my machine," isn't a joke. It's a curse. My code wasn't the problem; the entire universe it was running in was fundamentally, fatally different.</p>
<p><img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/mqw0z87hsvw8hyzazuhk.png" alt="1 tier server" /></p>
<h3 id="heading-shipping-the-entire-universe">Shipping the Entire Universe</h3>
<p>Defeated, I started searching for answers, and I kept stumbling upon the same word: Docker. The promise was simple: what if, instead of just shipping your code, you could ship your code's entire world along with it?</p>
<p>The idea was to define your exact environment in a text file, a Dockerfile. This file acts as a blueprint to create a "container," a lightweight, standardized box holding your app and its perfect environment.</p>
<p>I spent a weekend tinkering. My Dockerfile looked something like this:</p>
<pre><code class="lang-Dockerfile">
<span class="hljs-comment"># Dockerfile</span>
<span class="hljs-comment"># Use the 'latest' Node.js version we need</span>
<span class="hljs-keyword">FROM</span> node:latest

<span class="hljs-comment"># Set the working directory in the container</span>
<span class="hljs-keyword">WORKDIR</span><span class="bash"> /app</span>

<span class="hljs-comment"># Copy package files and install dependencies</span>
<span class="hljs-keyword">COPY</span><span class="bash"> package*.json ./</span>
<span class="hljs-keyword">RUN</span><span class="bash"> npm install --only=production</span>

<span class="hljs-comment"># Copy the rest of our application code</span>
<span class="hljs-keyword">COPY</span><span class="bash"> . .</span>

<span class="hljs-comment"># Expose the port and start the server</span>
<span class="hljs-keyword">EXPOSE</span> <span class="hljs-number">3001</span>
<span class="hljs-keyword">CMD</span><span class="bash"> [ <span class="hljs-string">"node"</span>, <span class="hljs-string">"index.js"</span> ]</span>
</code></pre>
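<p>With just the Dockerfile, the app could already be built and run anywhere Docker was installed. A sketch of the two commands:</p>
<pre><code class="lang-bash"># Build the image from the Dockerfile, then run it, mapping the exposed port
docker build -t my-awesome-app .
docker run -p 3001:3001 my-awesome-app
</code></pre>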
<p>To run my database alongside my app and manage secrets properly, I created a <code>docker-compose.yml</code> file and a separate <code>.env</code> file for my credentials.</p>
<p>This is the <code>.env</code> file, which should never be committed to Git:</p>
<pre><code class="lang-plaintext">Code snippet

# .env
# Database credentials
POSTGRES_USER=myuser
POSTGRES_PASSWORD=mysecretpassword
POSTGRES_DB=myapp_db
And this is the `docker-compose.yml` that uses it:
</code></pre>
<pre><code class="lang-YAML">
<span class="hljs-comment"># docker-compose.yml</span>
<span class="hljs-attr">version:</span> <span class="hljs-string">'3.8'</span>
<span class="hljs-attr">services:</span>
  <span class="hljs-attr">app:</span>
    <span class="hljs-attr">build:</span> <span class="hljs-string">.</span>
    <span class="hljs-attr">ports:</span>
      <span class="hljs-bullet">-</span> <span class="hljs-string">"3001:3001"</span>
    <span class="hljs-attr">depends_on:</span>
      <span class="hljs-bullet">-</span> <span class="hljs-string">db</span>
    <span class="hljs-comment"># Load environment variables from the .env file</span>
    <span class="hljs-attr">env_file:</span>
      <span class="hljs-bullet">-</span> <span class="hljs-string">.env</span>
    <span class="hljs-comment"># Pass necessary variables to the app container</span>
    <span class="hljs-attr">environment:</span>
      <span class="hljs-attr">PGHOST:</span> <span class="hljs-string">db</span>
      <span class="hljs-attr">PGUSER:</span> <span class="hljs-string">${POSTGRES_USER}</span>
      <span class="hljs-attr">PGPASSWORD:</span> <span class="hljs-string">${POSTGRES_PASSWORD}</span>
      <span class="hljs-attr">PGDATABASE:</span> <span class="hljs-string">${POSTGRES_DB}</span>
      <span class="hljs-attr">PGPORT:</span> <span class="hljs-number">5432</span>

  <span class="hljs-attr">db:</span>
    <span class="hljs-attr">image:</span> <span class="hljs-string">postgres:13</span>
    <span class="hljs-comment"># Use the same .env file to configure the database</span>
    <span class="hljs-attr">env_file:</span>
      <span class="hljs-bullet">-</span> <span class="hljs-string">.env</span>
</code></pre>
<p>I ran <code>docker-compose up</code> on my laptop. It worked. But this was the real test. I installed Docker on my server, copied my project over (including the <code>.env</code> file), and ran the exact same command.</p>
<p>And it just... worked. No errors. No version conflicts. No drama.</p>
<p><img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/25fwep5c0qvgsq8roxhm.png" alt="1 tier server with docker" /></p>
<p>It wasn't magic. It was simply that for the first time, the environment on my server was not just similar to my laptop's; it was identical. I hadn't just deployed my code. I had shipped its entire universe in a box, and it didn't care where it was opened.</p>
<p>That's when it clicked. The goal was never just to get the code onto a server. It was to get a predictable, repeatable result, every single time. And my journey, I realized, was just getting started.</p>
<p>Our code is now safe, but our data is living dangerously. Let's fix that.</p>
<p>Stay tuned for the next post: <a target="_blank" href="https://blog.arunjagadish.space/a-developers-journey-to-the-cloud-2-my-database-lived-in-a-shoebox-and-i-didnt-even-know-it">A Developer’s Journey to the Cloud 2: Managed Databases &amp; Cloud Storage.</a></p>
]]></content:encoded></item></channel></rss>