<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0"><channel><title><![CDATA[Kris' Dev blog]]></title><description><![CDATA[Kris' Dev blog]]></description><link>https://krisfeher.com</link><generator>RSS for Node</generator><lastBuildDate>Mon, 13 Apr 2026 08:34:07 GMT</lastBuildDate><atom:link href="https://krisfeher.com/rss.xml" rel="self" type="application/rss+xml"/><language><![CDATA[en]]></language><ttl>60</ttl><item><title><![CDATA[Add your own domain to your S3 hosted static website]]></title><description><![CDATA[In one of the previous articles we've created an S3 static website. It was using the CloudFront generated URI, which isn't particularly professional 🙂
In this guide we'll connect it to our own domain.
Services we use

Route53 for DNS management

AWS...]]></description><link>https://krisfeher.com/add-your-own-domain-to-your-s3-hosted-static-website</link><guid isPermaLink="true">https://krisfeher.com/add-your-own-domain-to-your-s3-hosted-static-website</guid><category><![CDATA[cloudfront]]></category><category><![CDATA[Static Website]]></category><category><![CDATA[SSL]]></category><category><![CDATA[SSL Certificate]]></category><category><![CDATA[AWS]]></category><category><![CDATA[S3]]></category><dc:creator><![CDATA[Kris F]]></dc:creator><pubDate>Mon, 18 Mar 2024 15:12:53 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1710774708667/1b116b0d-1aa2-4dad-a318-1da7ab0536b9.webp" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>In one of the previous articles we've created an S3 static website. It was using the CloudFront generated URI, which isn't particularly professional 🙂</p>
<p>In this guide we'll connect it to our own domain.</p>
<h2 id="heading-services-we-use">Services we use</h2>
<ul>
<li><p>Route53 for DNS management</p>
</li>
<li><p>AWS Certificate Manager for SSL certificates</p>
</li>
<li><p>CloudFront</p>
</li>
</ul>
<p>This guide assumes you manage your DNS in AWS. I'll briefly mention the two alternatives you have if you don't want to do that within AWS.</p>
<h2 id="heading-step-by-step">Step by step</h2>
<p>This guide assumes you went through with the <a target="_blank" href="https://krisfeher.com/create-a-close-to-0-static-website-on-aws-step-by-step">steps of creating an S3 static website</a>, and therefore will build on that architecture.</p>
<p>In a nutshell these are the steps we'll take:</p>
<ol>
<li><p>create a hosted zone under subdomain (optional)</p>
</li>
<li><p>create 4 NS records in Cloudflare for <a target="_blank" href="http://aws.krisfeher.com">aws.krisfeher.com</a> (optional)</p>
</li>
<li><p>edit CloudFront to add Alternate Domain Name</p>
</li>
<li><p>request a certificate in N. Virginia (CloudFront requires certificates to be issued in this region, us-east-1)</p>
</li>
<li><p>add the DNS validation check entry to Route53</p>
</li>
<li><p>Select the now existing cert in ACM</p>
</li>
<li><p>Wait for deploy</p>
</li>
<li><p>Create alias record in Route53 to direct to the CloudFront distribution</p>
</li>
</ol>
<h3 id="heading-create-a-hosted-zone">Create a hosted zone</h3>
<p>As mentioned, this step is optional; if you already have a hosted zone, just skip ahead.</p>
<p>Go to <code>Route 53 =&gt; Hosted zones =&gt; Create hosted zone</code></p>
<p>Add the domain you want to control and make it public.</p>
<h3 id="heading-direct-ns-queries-from-your-provider-to-aws">Direct NS queries from your provider to AWS</h3>
<p>You may be in the same boat as me, with your DNS managed in Cloudflare (or elsewhere). If that's not the case, you can safely skip this step as well.</p>
<p>Otherwise, you can create an NS record for a subdomain within Cloudflare and direct it to AWS's name servers:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1710771303541/e9fc3fc5-4a19-4832-8ff2-cd2e54e82aac.png" alt class="image--center mx-auto" /></p>
<p>You can of course just delegate the entire root domain if you want to handle it all in AWS.</p>
<p>You can find your name servers in Route53 once you've created a public hosted zone for your domain/subdomain:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1710771368542/f433b2fe-1f63-42b1-a7e3-8717ec137249.png" alt class="image--center mx-auto" /></p>
<p>The advantage of this approach is that SSL termination is handled entirely in AWS, and the setup is very simple.</p>
<p>(Alternatively) If you want to avoid a hosted zone in AWS, you'd ideally need to set up SSL both in Cloudflare (or wherever you manage your DNS) and in AWS.</p>
<p>That's because you want SSL enabled on both legs: from your browser to Cloudflare, and from Cloudflare to AWS.</p>
<p>This process is a little more complicated and the SSL setup entirely depends on your provider.</p>
<h3 id="heading-add-custom-domain-to-cloudfront">Add custom domain to CloudFront</h3>
<p>Go to <code>CloudFront =&gt; Distributions =&gt; EXXXXXXXXT =&gt; Edit settings</code></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1710772502803/c64acf54-8842-41d1-9efa-faeeb7c8f124.png" alt class="image--center mx-auto" /></p>
<p>Here you can add your Alternate Domain name.</p>
<p>Then select the custom SSL certificate field, which will prompt you to request a new one.</p>
<h3 id="heading-request-a-new-cloudfront-ssl-certificate">Request a new CloudFront SSL certificate</h3>
<p>The previous step should take you to N. Virginia region's create certificate console.</p>
<p>If it hasn't, you can navigate to:</p>
<p><code>AWS Certificate Manager =&gt; Certificates =&gt; Request certificate</code></p>
<p>Request a public certificate there, and add your fully qualified domain name or a wildcard if you want to have the certificate cover your entire domain.</p>
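<p>A quick note on what a wildcard actually covers: <code>*.krisfeher.com</code> matches exactly one label, so it covers <code>aws.krisfeher.com</code> but not the bare apex or deeper subdomains. A tiny illustrative sketch (the function is mine, not an AWS API):</p>

```python
def wildcard_covers(hostname: str, cert_name: str) -> bool:
    """Return True if a certificate name (possibly a wildcard) covers a hostname.

    A wildcard such as *.example.com matches exactly one extra label:
    it covers aws.example.com, but not example.com or a.b.example.com.
    """
    hostname = hostname.lower().rstrip(".")
    cert_name = cert_name.lower().rstrip(".")
    if cert_name.startswith("*."):
        base = cert_name[2:]
        # Split off the left-most label; the rest must equal the wildcard base.
        head, sep, tail = hostname.partition(".")
        return bool(head) and sep == "." and tail == base
    return hostname == cert_name
```

<p>This is why certificates often list both the apex domain and a wildcard as subject alternative names.</p>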
<h3 id="heading-add-the-dns-validation-check-entry-to-route53">Add the DNS validation check entry to Route53</h3>
<p>This is required if you selected DNS validation.</p>
<p>AWS will generate a CNAME name and a CNAME value on the certificate you need to add to Route53, which looks like this:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1710773526789/4162cd2b-96ef-432f-96e4-a46cf190f8f4.png" alt class="image--center mx-auto" /></p>
<p>You can add these manually to wherever you control DNS (either to Route53 or to Cloudflare/etc.)</p>
<p>For convenience, AWS provides a "Create records in Route53" button at the top that does this for you.</p>
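<p>If you prefer scripting this step, the validation record is just an UPSERT of a CNAME. Here's a sketch (helper name and record values are placeholders of mine) of the ChangeBatch document that <code>aws route53 change-resource-record-sets --change-batch file://batch.json</code> expects:</p>

```python
def validation_change_batch(cname_name: str, cname_value: str, ttl: int = 300) -> dict:
    """Build a Route 53 ChangeBatch that UPSERTs the ACM validation CNAME."""
    return {
        "Changes": [
            {
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": cname_name,      # the CNAME name ACM shows on the certificate
                    "Type": "CNAME",
                    "TTL": ttl,
                    "ResourceRecords": [{"Value": cname_value}],
                },
            }
        ]
    }
```

<p>Dump it with <code>json.dumps</code> into a file and pass it to the CLI along with your hosted zone ID.</p>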
<h3 id="heading-continue-in-cloudfront-to-add-the-certificate">Continue in Cloudfront to add the certificate</h3>
<p>Once you've done the above step, wait a couple of minutes until your SSL certificate is validated; the certificate's status will change from "Pending..." to "Success".</p>
<p>If that's done, refresh the certificate list on Cloudfront and select the newly created cert.</p>
<p>Click "Save changes" at the bottom of the page and wait until the distribution re-deploys (it took me about 5 minutes).</p>
<h3 id="heading-create-an-alias-in-route53">Create an alias in Route53</h3>
<p>The last step is to direct calls to aws.krisfeher.com to the CloudFront distribution.</p>
<p>Create a new A record in Route53, tick "Alias", then select CloudFront and your distribution name.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1710773921199/8480e356-3776-4324-98ad-f2f60fb70be7.png" alt class="image--center mx-auto" /></p>
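<p>If you'd rather script this record too, here's a hedged sketch of the equivalent Route 53 ChangeBatch (the helper name is mine; <code>Z2FDTNDATAQYW2</code> is the fixed hosted zone ID AWS uses for all CloudFront alias targets):</p>

```python
CLOUDFRONT_HOSTED_ZONE_ID = "Z2FDTNDATAQYW2"  # fixed zone ID for every CloudFront alias target

def alias_change_batch(record_name: str, distribution_domain: str) -> dict:
    """Build a Route 53 ChangeBatch that aliases an A record to a CloudFront distribution."""
    return {
        "Changes": [
            {
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": record_name,  # e.g. your subdomain
                    "Type": "A",
                    # Alias records carry no TTL; Route 53 resolves the target itself.
                    "AliasTarget": {
                        "HostedZoneId": CLOUDFRONT_HOSTED_ZONE_ID,
                        "DNSName": distribution_domain,  # the xxxx.cloudfront.net name
                        "EvaluateTargetHealth": False,
                    },
                },
            }
        ]
    }
```

<p>Note that the alias points at your distribution's <code>xxxx.cloudfront.net</code> name, not at your own domain.</p>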
<p>Done!</p>
<p>Once this is completed, you can access your static site from the new domain name. In my case, the example Vite project:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1710774118065/5491a933-922d-470a-b621-8aa0cccf6805.png" alt class="image--center mx-auto" /></p>
]]></content:encoded></item><item><title><![CDATA[How to AI code on your PC locally and privately (even with an AMD laptop!)]]></title><description><![CDATA[1. Introduction
This is going to be a quick article, as I thought a lot of people could benefit from it.
AI coding is not the future. It's the present! If you haven't tried it yet, then what are you waiting for? 🙂🤖
There're various paid tools avail...]]></description><link>https://krisfeher.com/how-to-ai-code-on-your-pc-locally-and-privately-even-with-an-amd-laptop</link><guid isPermaLink="true">https://krisfeher.com/how-to-ai-code-on-your-pc-locally-and-privately-even-with-an-amd-laptop</guid><category><![CDATA[continue.dev]]></category><category><![CDATA[jan.ai]]></category><category><![CDATA[AI]]></category><category><![CDATA[chatgpt]]></category><category><![CDATA[llm]]></category><category><![CDATA[IDEs]]></category><category><![CDATA[cursor IDE]]></category><dc:creator><![CDATA[Kris F]]></dc:creator><pubDate>Mon, 26 Feb 2024 17:30:38 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1708968090262/62eaaff4-456c-4f0f-ab99-cca7f9817dd5.webp" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h2 id="heading-1-introduction">1. Introduction</h2>
<p>This is going to be a quick article, as I thought a lot of people could benefit from it.</p>
<p>AI coding is not the future. It's the present! If you haven't tried it yet, then what are you waiting for? 🙂🤖</p>
<p>There are various paid tools available to get started, but the stack I'll describe below has two big advantages over the others: price and privacy.</p>
<h2 id="heading-2-why-not-chatgpt">2. Why not chatGPT?</h2>
<p>For 3 reasons:</p>
<ol>
<li><p><em>from my experience</em>, ChatGPT has recently been "dumbed down", as you may have heard elsewhere. It has generally become lazier at writing longer text (a cost-saving measure, I'd guess), which matters a lot for long pieces of code</p>
</li>
<li><p>chatGPT has no context of your code</p>
</li>
<li><p>chatGPT isn't private and they may use your input to train their model.</p>
</li>
</ol>
<p>Of course you can use the ChatGPT API, which would resolve issues #1 and #3, but that's still a pretty dang expensive tool for coding.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1708938609481/4229e3c4-f294-47e2-8a4d-f71aa550b780.png" alt class="image--center mx-auto" /></p>
<h2 id="heading-3-so-whats-the-alternative">3. So what's the alternative?</h2>
<p>Well, your own PC!</p>
<p>If you have a half-decent CPU / GPU, it's worth giving it a go to see how much "speed" you'll get.</p>
<p>You'll VERY LIKELY NOT be able to run bigger models (like a 70-billion-parameter one), as those require a lot of GPU vRAM, which your laptop won't have.</p>
<p>Anyway, to get started, go ahead and download <a target="_blank" href="https://jan.ai/">https://jan.ai/</a><br />This is a tool that allows you to run all those juicy LLMs locally!</p>
<p>The dudes maintaining the application are <strong>amazing</strong>! They have their own discord channel where all development work is OPEN TO PUBLIC! (along with the source code). You can ask them questions, help them test or contribute to the project. They're very responsive.</p>
<p>Additionally, as far as I know, jan.ai is the ONLY solution that works on AMD-powered GPU laptops (for desktop PCs there are other options).</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1708939060813/32ed33f5-70eb-4197-8152-c5f63a2ba6fe.png" alt class="image--center mx-auto" /></p>
<h2 id="heading-4-what-if-i-have-a-crappy-pc">4. What if I have a crappy PC?</h2>
<p>Don't worry, you still have plenty of options.</p>
<p>There are a LOT of <strong>serverless</strong> infrastructure providers that give you access to an API you can use.</p>
<p>Places like <a target="_blank" href="https://www.mystic.ai/">mystic.ai</a> , <a target="_blank" href="http://app.predibase.com">predibase</a> and my favourite <a target="_blank" href="https://www.together.ai/">together.ai</a> . But there's more!</p>
<p>They all give you $20-25 of free credit, so there's no harm in trying them.</p>
<p>Do remember the word "serverless", as those are the ones charging you per token or per second of inference (the other option is "dedicated").</p>
<p>The best part of it?</p>
<p>Much better pricing!</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1708939309972/7103ac10-25d8-42cc-94f4-e0b6640d9ffa.png" alt class="image--center mx-auto" /></p>
<p>Yes, you read that right, it's 1M tokens, not 1k!! That makes this solution about 500-1000 times cheaper than openAI's GPT-4 turbo!</p>
<h2 id="heading-5-what-about-opensource-model-performance">5. What about open-source model performance?</h2>
<p>Good question.</p>
<p>Let me show you a little screenshot:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1708939479934/e2de03c5-50b6-4315-b596-e8338c3890f9.png" alt="https://deepseekcoder.github.io/" class="image--center mx-auto" /></p>
<p>(source: <a target="_blank" href="https://deepseekcoder.github.io/">https://deepseekcoder.github.io/</a>)</p>
<p>It isn't so bad at all!</p>
<p>DeepSeek-Coder Instruct 33B is about as good as GPT-4!</p>
<p>Now, you probably won't be able to run a 33B model on your local PC (unless you have a beefy machine at home), which is why we can use together.ai!</p>
<ul>
<li><em>"Ok, so these are the models. What about the IDE?"</em></li>
</ul>
<h2 id="heading-6-ide-and-extensions">6. IDE and extensions</h2>
<p>Here, you have more than one option. VS Code and JetBrains have their own AI assistants, but they're paid. There are probably others too.<br />Then you have more niche IDEs like "<a target="_blank" href="https://cursor.sh/">Cursor</a>", an AI-driven IDE built from the ground up (using VS Code as its basis). Its GUI is excellent!</p>
<p>This means it's smoother and more integrated, and it's easier to do AI-related tasks within the IDE. I used it for a while, but as you may have guessed, it's paid as well (unless you use your own OpenAI key).</p>
<p>Then recently I discovered an extension called "<a target="_blank" href="https://continue.dev/">Continue</a>". And that's where I stopped!</p>
<p>With <strong><em>continue</em></strong>, you can use either VS Code or JetBrains IDEs:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1708967642671/9db8ffea-7cad-4de0-b7aa-ec489c64cf38.gif" alt class="image--center mx-auto" /></p>
<p>So to get going, install the extension and modify the config file.</p>
<p>Here are 2 example entries for the config file, one local (jan.ai) and one remote (together.ai):</p>
<pre><code class="lang-json">    {
      <span class="hljs-attr">"title"</span>: <span class="hljs-string">"deepseek-coder-6.7b-instruct.Q2_K"</span>,
      <span class="hljs-attr">"model"</span>: <span class="hljs-string">"deepseek-coder-6.7b-instruct.Q2_K"</span>,
      <span class="hljs-attr">"apiBase"</span>: <span class="hljs-string">"http://127.0.0.1:1337/v1"</span>,
      <span class="hljs-attr">"completionOptions"</span>: {},
      <span class="hljs-attr">"provider"</span>: <span class="hljs-string">"openai"</span>
    },
    {
      <span class="hljs-attr">"title"</span>: <span class="hljs-string">"deepseek-ai/deepseek-coder-33b-instruct"</span>,
      <span class="hljs-attr">"model"</span>: <span class="hljs-string">"deepseek-ai/deepseek-coder-33b-instruct"</span>,
      <span class="hljs-attr">"apiKey"</span>: <span class="hljs-string">"your secret API key for together.ai"</span>,
      <span class="hljs-attr">"completionOptions"</span>: {},
      <span class="hljs-attr">"provider"</span>: <span class="hljs-string">"together"</span>
    },
</code></pre>
<p>As you can see, the first item has the provider set to "openai". That's because jan.ai runs a local web server with an OpenAI-compatible API, so other models can be accessed the same way as OpenAI's (and switching between them is easy).</p>
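<p>To see that compatibility in action, here's a minimal Python sketch that talks to the local jan.ai server with only the standard library. The port and path come from the <code>apiBase</code> in the config above, and the payload shape follows the OpenAI chat completions API; the function names are mine:</p>

```python
import json
import urllib.request

def chat_request(prompt: str, model: str = "deepseek-coder-6.7b-instruct.Q2_K") -> dict:
    """Build an OpenAI-style chat-completions payload for the local model."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

def send(payload: dict, base: str = "http://127.0.0.1:1337/v1") -> dict:
    """POST the payload to the local jan.ai server (it must be running)."""
    req = urllib.request.Request(
        base + "/chat/completions",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

<p>Calling <code>send(chat_request("Explain this function"))</code> requires jan.ai to be running with the model loaded; swapping <code>base</code> for another provider's OpenAI-compatible endpoint works the same way.</p>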
<p>And honestly? That's it! That's all you need to set this up.</p>
<p>If you really want, you can download additional models from Hugging Face and copy them into the following folder: <code>C:\Users\youruser\jan\models\deepseek-coder-6.7b-instruct.Q2_K</code></p>
<p>Make sure the model file and the containing folder have identical names.</p>
<p>So to summarize, the tools we used:</p>
<ul>
<li><p>jan.ai</p>
</li>
<li><p>continue.dev</p>
</li>
<li><p>vscode (or vscodium as I did above)</p>
</li>
<li><p>an LLM model</p>
</li>
<li><p>together.ai</p>
</li>
</ul>
<p>Let me know if you're stuck or you have a question and good luck AI-coding!</p>
]]></content:encoded></item><item><title><![CDATA[Create a close to $0 Static Website on AWS: Step-by-Step]]></title><description><![CDATA[1. Introduction
In this guide I'll show you how to create a static website on S3 and serve it via Cloudfront.
You'll end up with something like this:- duhttestg7mi2.cloudfront.net
Which is a generated domain name. In the next article I'll show you ho...]]></description><link>https://krisfeher.com/create-a-close-to-0-static-website-on-aws-step-by-step</link><guid isPermaLink="true">https://krisfeher.com/create-a-close-to-0-static-website-on-aws-step-by-step</guid><category><![CDATA[AWS]]></category><category><![CDATA[S3]]></category><category><![CDATA[cloudfront]]></category><category><![CDATA[Static Website]]></category><category><![CDATA[free]]></category><dc:creator><![CDATA[Kris F]]></dc:creator><pubDate>Tue, 20 Feb 2024 13:53:08 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1708437152088/dd39ce4e-48f5-4429-920a-6b0d1212b8d8.webp" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h2 id="heading-1-introduction">1. Introduction</h2>
<p>In this guide I'll show you how to create a static website on S3 and serve it via Cloudfront.</p>
<p>You'll end up with something like this:<br />- duhttestg7mi2.cloudfront.net</p>
<p>Which is a generated domain name. In the next article I'll show you how to do a custom domain name.</p>
<h4 id="heading-so-why-is-it-free">So why is it free?</h4>
<p>Well... technically not free, but VERY VERY cheap.</p>
<p>The only 2 services you'll use are S3, where storing a small website costs virtually nothing, and CloudFront, which has a free tier that resets every month, forever.</p>
<h4 id="heading-and-what-is-a-static-website">And what is a static website?</h4>
<p>A static website consists of pre-built pages with fixed content, displaying the same information to every visitor. Built from HTML, CSS, and JavaScript files, these sites don't require server-side processing or database management. This simplicity leads to faster load times, improved performance, and easy maintenance, and static sites can be hosted on various platforms, including CDNs, further enhancing speed and reliability. That efficiency and ease of upkeep is what makes them popular among web developers and content creators.</p>
<h4 id="heading-what-examples-can-you-do-with-a-static-website">What examples can you do with a static website?</h4>
<ul>
<li><p>Personal blogs or portfolios showcasing your work</p>
</li>
<li><p>Small business websites providing information about products, services, and contact details</p>
</li>
<li><p>Landing pages for marketing campaigns, product launches, or events</p>
</li>
<li><p>Online documentation or knowledgebases for software products or services</p>
</li>
</ul>
<h4 id="heading-what-are-the-limitations-and-things-you-cannot-accomplish-with-a-static-website">What are the limitations and things you CANNOT accomplish with a static website?</h4>
<ul>
<li><p>Inability to handle complex, dynamic content like user-generated content or real-time data feeds</p>
</li>
<li><p>Limited support for e-commerce functionality, such as shopping carts and payment processing</p>
</li>
<li><p>Challenges in implementing advanced search capabilities or personalized content based on user preferences</p>
</li>
<li><p>Difficulty in managing large-scale websites with frequently updated content, as each update requires regenerating the entire site</p>
</li>
</ul>
<p>Okay, now that that's out of the way, let's get to work!</p>
<h2 id="heading-2-services-we-will-use">2. Services we will use</h2>
<p>S3 =&gt; to host the website<br />Cloudfront =&gt; to cache the content and give it HTTPS</p>
<p><strong>What this guide will not cover?</strong></p>
<ul>
<li>a custom domain name. You'll end up with an autogenerated site name (you'll find the custom domain setup in the next article)</li>
</ul>
<h2 id="heading-3-create-a-bucket">3. Create a bucket</h2>
<p>There's nothing special in here.</p>
<p>Go to <code>Amazon =&gt; S3 Buckets</code> and create a bucket.</p>
<p>Contrary to what you may have heard, you <strong>don't</strong> need to do any of the following:</p>
<ul>
<li><p>Enable static website hosting (don't do it)</p>
</li>
<li><p>Enable public access to the bucket (don't do it)</p>
</li>
</ul>
<p>The reason for the above is that we'll block public access to S3 and make our static site available only through CloudFront via OAC (Origin Access Control).</p>
<p>What we will need later on, however, is:</p>
<ul>
<li>A bucket policy to allow access from CloudFront</li>
</ul>
<p>The only step you do here is the following:</p>
<ul>
<li>Upload your static website to S3. If you need an easy way to upload your website, you can use my previous article about <a target="_blank" href="https://krisfeher.com/how-to-access-s3-like-its-your-local-files">accessing S3 from your PC</a> (or just use the AWS Console)</li>
</ul>
<p>I'll just use one of the example Vite projects I created a short while ago.</p>
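<p>One thing worth checking before uploading: S3 serves each object with the Content-Type it was stored with, and a wrong type can make browsers download pages instead of rendering them. A small sketch (the helper name is mine) to preview the keys and guessed types of a built site folder before you upload it with the console, Cyberduck, or the CLI:</p>

```python
import mimetypes
from pathlib import Path

def upload_manifest(site_dir: str) -> list[tuple[str, str]]:
    """List (S3 key, content type) pairs for every file under a built site directory."""
    pairs = []
    root = Path(site_dir)
    for path in sorted(root.rglob("*")):
        if path.is_file():
            # S3 keys use forward slashes regardless of the local OS.
            key = path.relative_to(root).as_posix()
            ctype, _ = mimetypes.guess_type(path.name)
            pairs.append((key, ctype or "application/octet-stream"))
    return pairs
```

<p>Run it against your <code>dist/</code> folder and eyeball the output; anything falling back to <code>application/octet-stream</code> deserves a second look.</p>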
<h2 id="heading-4-cloudfront">4. Cloudfront</h2>
<p>We can put CloudFront in front of our S3 website to provide secure access (S3 alone doesn't provide us with an SSL certificate).</p>
<ul>
<li><p>Go to Cloudfront and create a new distribution.</p>
</li>
<li><p>Make sure you enter the S3 bucket name and NOT the bucket endpoint</p>
</li>
<li><p>Add Origin access control settings and create a new OAC with default settings. Make sure you select "<strong>Sign Requests (recommended)</strong>" here, otherwise you'll get an Access Denied from CloudFront</p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1708435596258/2b05dde5-17cd-4506-b5de-85dd3d63d5dd.png" alt class="image--center mx-auto" /></p>
</li>
<li><p>Select : "Redirect HTTP to HTTPS" at the Viewer section.</p>
</li>
<li><p>Add <code>index.html</code> to the "Default root object <em>- optional"</em> section</p>
</li>
</ul>
<p>You can leave the rest as default or modify it to your liking.<br />Click <code>Create</code>.</p>
<p>After a few minutes of deploying, you'll be provided a distribution domain name like <a target="_blank" href="https://dwi0g1ihg4pye.cloudfront.net">https://xxxxxxx.cloudfront.net</a> that you can use to access your website.</p>
<p>While you wait you can do the bucket policy update.<br />When you created the CF distribution, you probably had a yellow alert on top of the page:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1708435917771/1fe25328-fbe9-427d-9640-bce3485a3ede.png" alt class="image--center mx-auto" /></p>
<p>You can either click the provided "Copy policy" button or add a policy like this to your bucket:</p>
<pre><code class="lang-json">{
    <span class="hljs-attr">"Version"</span>: <span class="hljs-string">"2008-10-17"</span>,
    <span class="hljs-attr">"Id"</span>: <span class="hljs-string">"PolicyForCloudFrontPrivateContent"</span>,
    <span class="hljs-attr">"Statement"</span>: [
        {
            <span class="hljs-attr">"Sid"</span>: <span class="hljs-string">"AllowCloudFrontServicePrincipal"</span>,
            <span class="hljs-attr">"Effect"</span>: <span class="hljs-string">"Allow"</span>,
            <span class="hljs-attr">"Principal"</span>: {
                <span class="hljs-attr">"Service"</span>: <span class="hljs-string">"cloudfront.amazonaws.com"</span>
            },
            <span class="hljs-attr">"Action"</span>: <span class="hljs-string">"s3:GetObject"</span>,
            <span class="hljs-attr">"Resource"</span>: <span class="hljs-string">"arn:aws:s3:::staticwebsiteexample123456/*"</span>,
            <span class="hljs-attr">"Condition"</span>: {
                <span class="hljs-attr">"StringEquals"</span>: {
                    <span class="hljs-attr">"AWS:SourceArn"</span>: <span class="hljs-string">"arn:aws:cloudfront::123456767:distribution/EOXIN7AR0ZZO2"</span>
                }
            }
        }
    ]
}
</code></pre>
<p>Of course, use your own CloudFront distribution and S3 bucket ARNs.</p>
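<p>If you script your deployments, you can also template that policy instead of editing the JSON by hand. A sketch mirroring the policy above (the function name is mine):</p>

```python
import json

def cloudfront_bucket_policy(bucket: str, distribution_arn: str) -> str:
    """Render the OAC bucket policy for your own bucket and distribution ARNs."""
    policy = {
        "Version": "2008-10-17",
        "Id": "PolicyForCloudFrontPrivateContent",
        "Statement": [
            {
                "Sid": "AllowCloudFrontServicePrincipal",
                "Effect": "Allow",
                "Principal": {"Service": "cloudfront.amazonaws.com"},
                "Action": "s3:GetObject",
                # Objects only; the bucket itself stays private.
                "Resource": f"arn:aws:s3:::{bucket}/*",
                "Condition": {"StringEquals": {"AWS:SourceArn": distribution_arn}},
            }
        ],
    }
    return json.dumps(policy, indent=4)
```

<p>The <code>SourceArn</code> condition ensures only your specific distribution can fetch objects, not any CloudFront distribution in any account.</p>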
<p>Once you have all these you should have your static site deployed.</p>
<p>In the next article I'll show you how to have a custom domain name!</p>
]]></content:encoded></item><item><title><![CDATA[How to access S3 like it's your local files]]></title><description><![CDATA[At the end of this quick guide you'll be able to access S3's unlimited storage locally form your PC.
After setting this up once, you don't need to even open a browser!
1. The tools
We'll use the following tools:

S3

IAM for permissions

Cyberduck (o...]]></description><link>https://krisfeher.com/how-to-access-s3-like-its-your-local-files</link><guid isPermaLink="true">https://krisfeher.com/how-to-access-s3-like-its-your-local-files</guid><category><![CDATA[AWS]]></category><category><![CDATA[S3]]></category><category><![CDATA[Cyberpunk]]></category><category><![CDATA[files]]></category><category><![CDATA[IAM]]></category><dc:creator><![CDATA[Kris F]]></dc:creator><pubDate>Thu, 15 Feb 2024 14:16:54 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1708006538212/d466dfdd-2470-41e7-b722-81530e1d5cf5.webp" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>At the end of this <strong>quick</strong> guide you'll be able to access S3's unlimited storage locally from your PC.</p>
<p>After setting this up once, you don't need to even open a browser!</p>
<h2 id="heading-1-the-tools">1. The tools</h2>
<p>We'll use the following tools:</p>
<ul>
<li><p>S3</p>
</li>
<li><p>IAM for permissions</p>
</li>
<li><p>Cyberduck (or any other file explorer client with S3 support)</p>
</li>
</ul>
<p>That's all!</p>
<h2 id="heading-2-s3-setup">2. S3 setup</h2>
<p>Go to S3 and create a bucket: <code>Amazon S3 =&gt; Buckets =&gt; Create bucket</code></p>
<p>Give it a name, and leave everything else default (for now).</p>
<div data-node-type="callout">
<div data-node-type="callout-emoji">💡</div>
<div data-node-type="callout-text">S3 bucket names are globally unique.</div>
</div>

<h2 id="heading-3-iam-setup">3. IAM setup</h2>
<p>As Cyberduck (and many other 3rd-party tools) only accepts an access key and secret key, we need to generate one with appropriate permissions.</p>
<p>If you're doing this as an admin, you don't want to have all permissions associated with your account provided to Cyberduck.</p>
<p>In security, this is called the principle of least privilege, which is:</p>
<blockquote>
<p>... a security concept that ensures users or entities <strong>only</strong> have access to the necessary data, resources, and applications for completing a task, and no more.</p>
</blockquote>
<p>Let's create a new user for this purpose at <code>IAM =&gt; Users =&gt; Create user</code> . Give it a name.</p>
<p>Then create a new policy at <code>IAM =&gt; Policies =&gt; Create policy</code></p>
<p>And add a JSON for the policy with your own bucket name:</p>
<pre><code class="lang-json">{
    <span class="hljs-attr">"Version"</span>: <span class="hljs-string">"2012-10-17"</span>,
    <span class="hljs-attr">"Statement"</span>: [
        {
            <span class="hljs-attr">"Effect"</span>: <span class="hljs-string">"Allow"</span>,
            <span class="hljs-attr">"Action"</span>: [
                <span class="hljs-string">"s3:ListBucket"</span>
            ],
            <span class="hljs-attr">"Resource"</span>: <span class="hljs-string">"arn:aws:s3:::cyberduck-bucket"</span>
        },
        {
            <span class="hljs-attr">"Effect"</span>: <span class="hljs-string">"Allow"</span>,
            <span class="hljs-attr">"Action"</span>: [
                <span class="hljs-string">"s3:GetObject"</span>,
                <span class="hljs-string">"s3:PutObject"</span>,
                <span class="hljs-string">"s3:DeleteObject"</span>,
                <span class="hljs-string">"s3:AbortMultipartUpload"</span>,
                <span class="hljs-string">"s3:ListMultipartUploadParts"</span>
            ],
            <span class="hljs-attr">"Resource"</span>: <span class="hljs-string">"arn:aws:s3:::cyberduck-bucket/*"</span>
        }
    ]
}
</code></pre>
<p>Save this policy under a name, then go back to your user and attach the policy to it.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1708005578307/1b88f593-9b1e-4d47-b1b3-bd32761c9193.png" alt class="image--center mx-auto" /></p>
<p>Once done, create an access key for the user <code>IAM Users =&gt; cyberduck-user =&gt; Create access key</code></p>
<p>Once you generated one, you can go to the next section and enter it in Cyberduck.</p>
<h2 id="heading-4-file-explorer">4. File explorer</h2>
<p>Download <a target="_blank" href="https://cyberduck.io/download/">Cyberduck</a> (or your tool of choice) and install it.</p>
<p>Create a new "bookmark" and fill in these details.</p>
<p>Add the Access Key ID of the user we created in the previous section.</p>
<p>Make sure you add the bucket name to the "Path" field. As you can see, it's also appended to the URL above. If you're using a different client, it may ask for the full URL instead.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1708004574804/58839f62-f678-4c60-949a-8e0df66c5f22.png" alt class="image--center mx-auto" /></p>
<p>Once done, you can drag and drop files to and from S3 without resorting to S3's own console or the terminal.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1708005839466/fb488888-b5b4-4682-bc08-8f48d07a8f96.png" alt class="image--center mx-auto" /></p>
<p>That's it!</p>
<p>Hope this quick tutorial helped you get more comfortable with S3.</p>
]]></content:encoded></item><item><title><![CDATA[How to create AI reminders]]></title><description><![CDATA[It's Friday!
Well, it's Friday afternoon, and I had way too much coffee, so this article will be a little light hearted ! If you're not up for it, and you want serious content.... well, I guess I have some AWS-related content you can have a look inst...]]></description><link>https://krisfeher.com/how-to-create-ai-reminders</link><guid isPermaLink="true">https://krisfeher.com/how-to-create-ai-reminders</guid><category><![CDATA[rubbish]]></category><category><![CDATA[n8n]]></category><category><![CDATA[llm]]></category><category><![CDATA[chatgpt]]></category><category><![CDATA[AI]]></category><dc:creator><![CDATA[Kris F]]></dc:creator><pubDate>Fri, 09 Feb 2024 15:38:28 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1707493026330/2ce644da-127a-452a-a209-8ce28a9a6291.webp" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h2 id="heading-its-friday">It's Friday!</h2>
<p>Well, it's Friday afternoon, and I had way too much coffee, so this article will be a little <em>light hearted</em> ! If you're not up for it, and you want serious content.... well, I guess I have some AWS-related content you can have a look instead 🙂</p>
<p>My dream is to have a Jarvis-like AI available in my home and help me out when I'm building my iron man su........when I'm doing aws stuff.</p>
<p>Today I'll show you how to remind yourself of important things we can't live without. Things that make the world go round, things that give people satisfaction and a reason to live.</p>
<p>Yes, I'm talking about <strong>taking out the bins</strong>! Yep, that bin. Not the virtual one on your PC, but the real-life one that you forgot to take out and now looks like a tower built from trash.</p>
<p>Here's what we'll do today:</p>
<p><strong>"We'll have an AI notify us every time the garbage collector comes to take our bins away!"</strong></p>
<h2 id="heading-tools-well-use">Tools we'll use</h2>
<p>We'll use the following tools for this purpose:</p>
<h4 id="heading-n8n">N8N</h4>
<p>This is similar to Zapier: an automation framework that'll greatly simplify building all of this. Why not Zapier? Because N8N can be self-hosted on my own homelab, with access to secure private information (and my home assistant setup). Also, N8N self-hosting is free, and always has been. Big thanks to the folks at <a target="_blank" href="https://n8n.io/">N8N</a> for making this possible!</p>
<p>I'll not cover installing N8N, but if you have docker installed, it takes like 2 minutes. If you don't use Docker yet....well then what are you waiting for?</p>
<p><a target="_blank" href="https://docs.n8n.io/hosting/installation/docker/">https://docs.n8n.io/hosting/installation/docker/</a></p>
<h4 id="heading-chatgpt-openai-api">ChatGPT / OpenAI API</h4>
<ul>
<li><p>Yep, we'll use the big evil here.</p>
</li>
<li><p>Yes, I don't like it either.</p>
</li>
<li><p>Yes it'd be better locally.</p>
</li>
<li><p>Do I have disposable £2000 to build a local LLM machine running at decent speed? Maybe.</p>
</li>
<li><p>Can I justify the spending for my wife? No. 🙂</p>
</li>
</ul>
<p>As a silver lining, our AI masters have promised not to use data sent through the API as training data, so we can feel relatively safe:</p>
<p><a target="_blank" href="https://openai.com/enterprise-privacy"><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1707487629586/73f75f70-71f6-4c12-841d-4b67c16eae60.png" alt class="image--center mx-auto" /></a></p>
<p>Once you have openAI api access, you can generate an API key for it.</p>
<p>It's also possible to use Hugging Face models via inference (which may actually be free), but I'm not sure which model would perform well.</p>
<h4 id="heading-telegrampushoveremail-whatever">Telegram/Pushover/Email, whatever</h4>
<p>So here we'll set up a Lambda function via Eventbridge and use SNS to send a text.......nah, just kidding, not on a FRIDAY AFTERNOON!</p>
<p>Basically just pick any notification service. Pick your preferred one. Check <a target="_blank" href="https://n8n.io/integrations/">N8N integrations</a>, there's around 700 different integrations.</p>
<p>I'll not detail this part, as it'll be different for everyone, and N8N makes it incredibly easy to set it up with whatever preference you have.</p>
<p>You can even have it create a reminder for you in your google calendar if you wish.</p>
<p>I personally use a Telegram chatbot, because I find it more flexible than anything else. An easy option as well is <a target="_blank" href="https://pushover.net/">Pushover</a>, a service that's been around for about a decade and does extremely simple phone notifications. Both of them are just an API call, so it's not rocket science 🙂<br />Oh, and both are free.</p>
<h4 id="heading-a-bin-collection-website">A bin collection website</h4>
<p>For our particular area, there's a website that provides the bin collection dates. You type in the postcode, and it gives you a list of which bins are being collected and on what date.</p>
<p>If you don't find it immediately, check the browser dev tools for any sort of API calls. If you're lucky, you'll be able to call it from your own machine. Here's how mine looked:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1707488986173/f058ab21-0cd2-4620-a0d6-c6d59f177b3c.png" alt class="image--center mx-auto" /></p>
<p>Which means I can take a note of the request URL and retrieve the bin collection dates anytime I want specific to my address. Keep a note of your URL, as you'll need it later on.</p>
<h2 id="heading-step-1-http-request">Step 1: HTTP request</h2>
<p>Not much to say here: grab the HTTP Request node and paste the URL in with a GET request.</p>
<p><strong>Extract content</strong></p>
<p>Get an HTML node and extract the specific content. Here I had a block of text with an ID assigned to it (I got lucky), but you can pick any CSS selectors; try to make sure it's somewhat unique and not a dynamic element that may move around the page.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1707489440972/f280ae19-4073-473c-a1db-b8a6a99b5ca6.png" alt class="image--center mx-auto" /></p>
<p>You can go further, extracting more stuff from this using even more HTML nodes, and get as specific as you want. I wanted to remove my address from the response, so I did some gymnastics using a few HTML nodes and merges to do that.</p>
<p>At the end, this is how I made my response look:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1707494172253/c7abe8ed-e605-4f13-97ff-2d23efe46443.png" alt class="image--center mx-auto" /></p>
<p>Which resulted in something like this:</p>
<p><code>[{"response": "Friday 9th February - ResidualBin"}]</code></p>
<p>This is two lists, one containing the dates, the other the bin types. Then, with a code block, I returned the first hit, which is the next collection date and bin type.</p>
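<p>A minimal sketch of what that code block roughly does. The list contents are made-up examples here; in a real N8N Code node they would come from the previous HTML nodes via <code>$input</code>:</p>
<pre><code class="lang-javascript">// Hypothetical input: the two lists produced by the HTML extraction nodes.
// In a real N8N Code node these would come from $input instead.
const dates = ["Friday 9th February", "Friday 16th February"];
const binTypes = ["ResidualBin", "RecyclingBin"];

// Pair each date with its bin type, e.g. "Friday 9th February - ResidualBin".
const combined = dates.map(function (date, i) {
    return date + " - " + binTypes[i];
});

// Keep only the first hit, i.e. the next collection.
const result = [{ response: combined[0] }];
// In an N8N Code node you'd finish with: return result;
</code></pre>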
<div data-node-type="callout">
<div data-node-type="callout-emoji">💡</div>
<div data-node-type="callout-text">If you wish, you can do the data extraction with GPT-4 as well; it's entirely possible, I just found this approach to be more reliable.</div>
</div>

<p><strong>Checking if date is tomorrow</strong></p>
<p>For that I wrote a disgusting piece of code that determines whether that piece of text refers to tomorrow. I had ChatGPT generate the code. I'm not crazy enough to do this myself.</p>
<p>There's probably easier ways to do this, but how could we call ourselves engineers if we didn't pick the most complicated way of doing something? 😊</p>
<pre><code class="lang-javascript"><span class="hljs-function"><span class="hljs-keyword">function</span> <span class="hljs-title">isDateTomorrow</span>(<span class="hljs-params">dateString</span>) </span>{
    <span class="hljs-comment">// Extract date components from the string</span>
    <span class="hljs-keyword">const</span> dateComponents = dateString.match(<span class="hljs-regexp">/(\w+)\s+(\d+)\w+\s+(\w+)/</span>);
    <span class="hljs-keyword">if</span> (!dateComponents) {
        <span class="hljs-keyword">return</span> <span class="hljs-literal">false</span>; <span class="hljs-comment">// Format not matched</span>
    }

    <span class="hljs-comment">// Map month names to month numbers</span>
    <span class="hljs-keyword">const</span> monthNames = [<span class="hljs-string">"January"</span>, <span class="hljs-string">"February"</span>, <span class="hljs-string">"March"</span>, <span class="hljs-string">"April"</span>, <span class="hljs-string">"May"</span>, <span class="hljs-string">"June"</span>, <span class="hljs-string">"July"</span>, <span class="hljs-string">"August"</span>, <span class="hljs-string">"September"</span>, <span class="hljs-string">"October"</span>, <span class="hljs-string">"November"</span>, <span class="hljs-string">"December"</span>];
    <span class="hljs-keyword">const</span> monthNumber = monthNames.indexOf(dateComponents[<span class="hljs-number">3</span>]);

    <span class="hljs-keyword">if</span> (monthNumber === <span class="hljs-number">-1</span>) {
        <span class="hljs-keyword">return</span> <span class="hljs-literal">false</span>; <span class="hljs-comment">// Invalid month name</span>
    }

    <span class="hljs-comment">// Create a date object from the extracted components</span>
    <span class="hljs-keyword">const</span> year = <span class="hljs-keyword">new</span> <span class="hljs-built_in">Date</span>().getFullYear(); <span class="hljs-comment">// Assuming the year is the current year</span>
    <span class="hljs-keyword">const</span> parsedDate = <span class="hljs-keyword">new</span> <span class="hljs-built_in">Date</span>(year, monthNumber, <span class="hljs-built_in">parseInt</span>(dateComponents[<span class="hljs-number">2</span>], <span class="hljs-number">10</span>));

    <span class="hljs-comment">// Get today's date for comparison</span>
    <span class="hljs-keyword">const</span> today = <span class="hljs-keyword">new</span> <span class="hljs-built_in">Date</span>();
    today.setHours(<span class="hljs-number">0</span>, <span class="hljs-number">0</span>, <span class="hljs-number">0</span>, <span class="hljs-number">0</span>); <span class="hljs-comment">// Reset time to 00:00:00 for date-only comparison</span>
    today.setDate(today.getDate() + <span class="hljs-number">1</span>)

    <span class="hljs-comment">// Compare the parsed date with today's date</span>
    <span class="hljs-keyword">return</span> parsedDate.getTime() === today.getTime();
}

<span class="hljs-keyword">const</span> isTomorrow = isDateTomorrow($input.last().json.response);
<span class="hljs-keyword">const</span> resp = { <span class="hljs-attr">isTomorrow</span>: isTomorrow };

<span class="hljs-keyword">if</span> (isTomorrow) {
    resp.when = $input.last().json.response;
}

<span class="hljs-keyword">return</span> [{<span class="hljs-attr">response</span>: resp }]
</code></pre>
<p>After a tiny bit of modification it worked fine.</p>
<p>For every return statement please make sure it conforms to the following structure:</p>
<p><code>return [{some object}]</code>, or in our case: <code>[ { "response": { "isTomorrow": true, "when": "Friday 9th February - ResidualBin" } } ]</code></p>
<p>i.e. an object (or multiple) returned in a list. If you don't do it like that, N8N will complain 🙂</p>
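<p>To make that contract concrete, here's a quick illustration (the values are hypothetical):</p>
<pre><code class="lang-javascript">// Valid: an array containing one (or more) objects - N8N accepts this shape.
const ok = [{ response: { isTomorrow: true, when: "Friday 9th February - ResidualBin" } }];

// Invalid: a bare object, not wrapped in a list - N8N will complain about this.
const notOk = { isTomorrow: true };
</code></pre>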
<p>Then once you have this, just return the value:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1707490321195/0e2eb584-efae-495c-a013-dc2c5d99f46e.png" alt class="image--center mx-auto" /></p>
<p>So for now our flow does the following:</p>
<ul>
<li>On being called, it returns the date of next collection, along with the fact if it's tomorrow or not.</li>
</ul>
<p>Here's how the whole flow looks:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1707490394384/ec15bbfa-aaf1-452d-bc02-e3bc0f53513a.png" alt class="image--center mx-auto" /></p>
<h2 id="heading-step-2-ai-agent">Step 2: AI Agent</h2>
<p>Wow, that sounds fancy, doesn't it? 🙂</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1707490602630/26b31790-1c5a-4c81-9161-b0ad0656b7da.png" alt class="image--center mx-auto" /></p>
<p>This will be the agent you can ask to do anything (i.e. you can re-use it later for other tasks).</p>
<p>First of all, add your OpenAI API key to the <code>credentials</code> menu on the left. It'll then apply to all further OpenAI related nodes.</p>
<p>A few nodes to mention that you can see on the screenshot</p>
<ul>
<li><p>Execute Workflow Trigger =&gt; it's basically a node that lets other workflows trigger this workflow.</p>
</li>
<li><p>On a new manual Chat message =&gt; this will be added automatically when you add a new agent. It just means you can test the agent from this screen via a chat interface</p>
</li>
<li><p>Model: here's where you add the model you want to use. Nothing else required here</p>
</li>
<li><p>Agent: Again, nothing else here besides the value to use from the previous node, which is: <code>{{ $json.chat_input }}</code></p>
</li>
<li><p>Window Buffer memory : this is to keep your Agent's conversation somewhere, so it remembers your previous message. For this guide it's irrelevant.</p>
</li>
<li><p>Tools: you can see a few tools attached. These are basically other workflows you created in N8N that the agent can use.</p>
<p>  Here's what the tool we created in the previous step looks like:</p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1707491266913/a10eefb2-770d-4704-9746-a148d4c9b840.png" alt class="image--center mx-auto" /></p>
<p>  Take a note of your workflow ID. You'll find it in the URL of the previous workflow you worked on.</p>
</li>
</ul>
<p>The agent will make an educated guess on which tool to use.<br />Unfortunately I haven't had much luck with complex tool usage, especially if the agent doesn't get the expected output. It just keeps on calling the tool until it gets bored. Literally.</p>
<p>Working with an AI Agent is like working with a kid. At least that's how it feels!</p>
<h2 id="heading-step-3-scheduling">Step 3: Scheduling</h2>
<p>This is a short step, and can be done in other ways.</p>
<p>I set up a schedule workflow separately that calls the Agent with a specific input:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1707491572606/b8ae29ff-b8d6-4e1e-9e9e-a2b41139d4be.png" alt class="image--center mx-auto" /></p>
<p>The schedule triggers daily at 8pm. The Edit Fields node contains the following:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1707491631785/e2e41d61-0988-46c6-aa58-08018c594899.png" alt class="image--center mx-auto" /></p>
<p>with a value of: <em>"Please check if there's bin collection tomorrow. Once you did that, send me a mobile message if it's tomorrow with the details. If it's NOT tomorrow, don't send me any message."</em></p>
<p>You can see from the above that it kept sending me messages even when it wasn't tomorrow 😅 Yes, we got to the point where an AI agent is harassing me.</p>
<p>Then the last node is just to call the workflow with the specific ID.</p>
<p>So the full flow so far:</p>
<ol>
<li><p>Schedule triggers and calls Agent</p>
</li>
<li><p>Agent gets instruction and uses bin collection tool</p>
</li>
<li><p>Bin collection tool scrapes website and returns bin date</p>
</li>
<li><p>Agent decides to send message or not based on response</p>
</li>
<li><p>(not done yet) Agent uses messaging tool to send message</p>
</li>
</ol>
<p>As you can see it's fairly simple, and looking at it now I can see that the <strong>AI agent is a completely redundant part of the flow</strong> (you could just send a message straight away when <code>isTomorrow</code> is true).</p>
<p>But I wrote all these up already, I won't delete the entire article, sorry 😅<br />At least we learned how to use an agent! And N8N!</p>
<p>One thing is left though, which is sending the notification to your phone.</p>
<h2 id="heading-step-4-notification">Step 4: Notification</h2>
<p>Again, this will be different for every person, but here's my flow:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1707491982368/df5fe7e9-70d5-43ef-adfe-77eb3a34420e.png" alt class="image--center mx-auto" /></p>
<p>In the middle is an API call to Telegram, with an error and a success node at the end to lead the Agent down the right path.</p>
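<p>For reference, the Telegram part really is just one Bot API call (<code>sendMessage</code>). A minimal sketch of the request the HTTP node sends; the token and chat ID are placeholders you'd get from @BotFather and your own chat:</p>
<pre><code class="lang-javascript">// Placeholders - use a real bot token from @BotFather and your own chat ID.
const TOKEN = "123456:your-bot-token";
const CHAT_ID = "123456789";

// sendMessage is the standard Telegram Bot API method for text messages.
const url = "https://api.telegram.org/bot" + TOKEN + "/sendMessage";
const payload = { chat_id: CHAT_ID, text: "Bin collection is tomorrow!" };

// In N8N the HTTP Request node performs the call for you; in plain JS it
// would be a single POST of JSON.stringify(payload) to url.
</code></pre>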
<p>Once you have this, save it and add it as a tool for your AI Agent.<br />Now your AI agent has 2 tools!</p>
<p>That's it!</p>
<p>As you can see it's fairly easy to add more tools to your agent and do other stuff for you.</p>
<p>Let me know if you have any questions.</p>
<p>N8N and LLMs are something I'm actively playing with, so you can expect more of these articles in the future (besides AWS).</p>
]]></content:encoded></item><item><title><![CDATA[How to set up CI/CD pipeline on AWS using BitBucket, ECS, ECR]]></title><description><![CDATA[If you missed the first part of this series, you can click here to see it: https://krisfeher.com/how-to-set-up-cicd-pipeline-on-aws-using-bitbucket-ecs-ecr

to continue.... :

1. Setting up EC2
cluster
Let's set up an ECS cluster to host our applicat...]]></description><link>https://krisfeher.com/how-to-set-up-cicd-pipeline-on-aws-using-bitbucket-ecs-ecr-1</link><guid isPermaLink="true">https://krisfeher.com/how-to-set-up-cicd-pipeline-on-aws-using-bitbucket-ecs-ecr-1</guid><category><![CDATA[AWS]]></category><category><![CDATA[ci-cd]]></category><category><![CDATA[ECS]]></category><category><![CDATA[ec2]]></category><category><![CDATA[Bitbucket]]></category><dc:creator><![CDATA[Kris F]]></dc:creator><pubDate>Fri, 09 Feb 2024 08:00:34 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1707321222094/07d788a8-62b1-4d5e-8aeb-9b081e66fe6d.webp" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>If you missed the first part of this series, you can click here to see it: <a target="_blank" href="https://krisfeher.com/how-to-set-up-cicd-pipeline-on-aws-using-bitbucket-ecs-ecr">https://krisfeher.com/how-to-set-up-cicd-pipeline-on-aws-using-bitbucket-ecs-ecr</a></p>
<blockquote>
<p>to continue.... :</p>
</blockquote>
<h2 id="heading-1-setting-up-ec2">1. Setting up EC2</h2>
<h3 id="heading-cluster">Cluster</h3>
<p>Let's set up an ECS cluster to host our application. If you have this already available you can skip this step and jump to the next one.</p>
<p>In ECS, click on "Create cluster", and in the infrastructure section you can set which EC2 instance type you'd like your Docker host to be.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1706868908087/d7260a84-86cd-42a1-8331-ceab701bfeac.png" alt class="image--center mx-auto" /></p>
<p>The network configuration will be already filled in with all your subnets and VPC, but you can customize it if you want.</p>
<p>Select the default public subnets.</p>
<p>This will create an EC2 instance along with an auto-scaling group that scales your instances (with default dynamic scaling) and a launch template, which is just a blueprint for your EC2 instances.</p>
<p>Your next step should be to modify the launch template, as the default one is totally wrong 🙂</p>
<p>We need a few things in the new launch template version:<br /><code>EC2 =&gt; Launch templates =&gt; Modify template (Create new version)</code></p>
<ul>
<li><p>pick an AMI that's ECS optimized. Search for "ECS" within the AWS Marketplace AMIs, and pick the operating system you wish. Here's the one I used for Amazon Linux in us-east-1: <code>ami-04d4dd7b34e293332</code></p>
</li>
<li><p>change the user_data: open up the "Advanced" section and dunk this script in, where "test-cluster" is the name of your cluster. If it's already there, happy days.</p>
<pre><code class="lang-json">  #!/bin/bash 
  echo ECS_CLUSTER=test-cluster &gt;&gt; /etc/ecs/ecs.config;
</code></pre>
</li>
</ul>
<p>There are a few things that need to happen in order for an EC2 instance to attach to an ECS cluster, namely:  </p>
<ul>
<li><p>user data with the cluster name as above</p>
</li>
<li><p>ECS agent installed, which is included in the AMI</p>
</li>
<li><p>linked to capacity provider, which is done by AWS when you created the cluster</p>
</li>
<li><p>IAM instance profile for ECS agent, done as well by AWS</p>
</li>
<li><p>Outgoing security groups to provide communication, done as well.</p>
</li>
</ul>
<p>I'll provide a terraform template later on that includes all these.</p>
<h2 id="heading-2-setting-up-ecs">2. Setting up ECS</h2>
<h3 id="heading-task-definition">task definition</h3>
<p>This is a blueprint for our containers that defines their configuration (Docker image to use, CPU and memory requirements, environment variables, and networking settings).</p>
<p>To create one, within ECS on the left hand menu click on "Task definition", then give it a name. You can fill in each section as below:</p>
<p><strong>Infrastructure requirements</strong></p>
<p>You can leave most of this as default. Only a few things to change:</p>
<ul>
<li><p>Launch type is EC2</p>
</li>
<li><p>Network mode is <strong>bridge</strong> (this is important, as awsvpc doesn't support dynamic port mapping)</p>
</li>
<li><p>(optional) Task size: 512 MB (we'll not require much memory for this)</p>
</li>
</ul>
<p><strong>Container</strong></p>
<p>Some of the changes I've made:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1707319806481/c554a6ce-0825-4f26-b061-a7bf1a3646b2.png" alt class="image--center mx-auto" /></p>
<ul>
<li><p>Add a name and the ECR image URL.</p>
</li>
<li><p><strong>host port: 0</strong> (this is to indicate dynamic port mapping)</p>
</li>
</ul>
<p>Once done, click create and it'll create revision 1 of your task definition.</p>
<p><em>Again, I'll provide a TF template for this.</em></p>
<p><strong>Service</strong></p>
<p>Next up is the service. Why not a task, you may ask?</p>
<ul>
<li><p>A Task Definition outlines container requirements, such as Docker image, ports, resources, and environment variables; a Task runs the defined containers, suitable for <strong>short-lived jobs</strong>, and is not replaced automatically if stopped.</p>
</li>
<li><p>A Service ensures a set number of Tasks are constantly running, replaces failed Tasks, can balance them across resources and zones, and can be configured with a load balancer, unlike standalone Tasks.</p>
</li>
</ul>
<p>For our purpose, a service is better.</p>
<p>Here are the sections one by one:</p>
<ul>
<li>environment</li>
</ul>
<p>Choose the default capacity provider strategy we created earlier.</p>
<ul>
<li>deployment configuration</li>
</ul>
<p>Give a name to your service, add the task definition we created earlier and select "Replica" as the service type.<br />You can then select how many tasks (clones) you want to run on this service. For the sake of simplicity I selected one.</p>
<ul>
<li>load balancing</li>
</ul>
<p>This is an important one. Make sure you select your current load balancer and the target group you defined above. If you're unable to do so (it's greyed out or not available), then make sure you have the following:</p>
<ol>
<li><p>your target group exists, does not have targets, and is an <em>instance</em> type target group</p>
</li>
<li><p>your target group is assigned to the load balancer, within the correct VPC</p>
</li>
<li><p>your task definition includes "bridge" networking</p>
</li>
<li><p>your container definition includes host port of 0</p>
</li>
</ol>
<p>The rest of the options you can leave as default. There's a lot more to pick here, but we'll not delve into those options.</p>
<p>Once this is done, you can see the tasks provisioning for the service (depending on how many tasks you picked earlier):</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1707216560781/b848c58f-f511-446a-9f5b-a58808f73a8c.png" alt class="image--center mx-auto" /></p>
<h2 id="heading-3-terraform-templates">3. Terraform templates</h2>
<p>Here are the Terraform templates:</p>
<p>EC2.tf</p>
<pre><code class="lang-json">resource <span class="hljs-string">"aws_launch_template"</span> <span class="hljs-string">"EC2LaunchTemplate"</span> {
  name_prefix   = <span class="hljs-attr">"ECSLaunchTemplate-"</span>
  image_id      = <span class="hljs-attr">"ami-04d4dd7b34e293332"</span>  #ECS-OPTIMIZED AMI
  instance_type = <span class="hljs-attr">"t3.medium"</span>
  key_name      = aws_key_pair.sshkey.key_name

  iam_instance_profile {
    arn = aws_iam_instance_profile.ecsInstanceRole.arn
  }

  vpc_security_group_ids = [aws_security_group.ecs_sg.id]

  user_data = base64encode(<span class="hljs-string">"#!/bin/bash\necho ECS_CLUSTER=${var.cluster_name} &gt;&gt; /etc/ecs/ecs.config"</span>)
}

resource <span class="hljs-string">"aws_key_pair"</span> <span class="hljs-string">"sshkey"</span> {
  key_name   = <span class="hljs-attr">"ssh-key"</span>
  public_key = <span class="hljs-attr">"add your own SSH public key here you generated on your PC"</span>
}

resource <span class="hljs-string">"aws_autoscaling_group"</span> <span class="hljs-string">"AutoScalingGroup"</span> {
  name_prefix      = <span class="hljs-attr">"ecs-asg-"</span>
  min_size         = 1
  max_size         = 2
  desired_capacity = 1
  launch_template {
    id      = aws_launch_template.EC2LaunchTemplate.id
    version = <span class="hljs-attr">"$Latest"</span>
  }

  vpc_zone_identifier = [aws_subnet.public_subnet_1.id, aws_subnet.public_subnet_2.id]
  health_check_type   = <span class="hljs-string">"EC2"</span>
}

resource <span class="hljs-string">"aws_iam_role"</span> <span class="hljs-string">"ecsInstanceRole"</span> {
  name_prefix = <span class="hljs-attr">"ecsInstanceRole-"</span>
  assume_role_policy = jsonencode({
    Version = <span class="hljs-attr">"2012-10-17"</span>
    Statement = [{
      Action = <span class="hljs-attr">"sts:AssumeRole"</span>
      Effect = <span class="hljs-attr">"Allow"</span>
      Principal = {
        Service = <span class="hljs-attr">"ec2.amazonaws.com"</span>
      }
    }]
  })
}

resource <span class="hljs-string">"aws_security_group"</span> <span class="hljs-string">"ecs_sg"</span> {
  name_prefix = <span class="hljs-attr">"ecs-sg-"</span>
  vpc_id      = aws_vpc.my_vpc.id

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = <span class="hljs-attr">"-1"</span>
    cidr_blocks = [<span class="hljs-attr">"0.0.0.0/0"</span>]
  }

  ingress {
    from_port   = 22
    to_port     = 22
    protocol    = <span class="hljs-attr">"tcp"</span>
    cidr_blocks = [<span class="hljs-attr">"0.0.0.0/0"</span>]
  }
}

resource <span class="hljs-string">"aws_iam_instance_profile"</span> <span class="hljs-string">"ecsInstanceRole"</span> {
  name = aws_iam_role.ecsInstanceRole.name
  role = aws_iam_role.ecsInstanceRole.name
}

resource <span class="hljs-string">"aws_iam_role_policy_attachment"</span> <span class="hljs-string">"ecs_role_policy"</span> {
  role       = aws_iam_role.ecsInstanceRole.name
  policy_arn = <span class="hljs-attr">"arn:aws:iam::aws:policy/service-role/AmazonEC2ContainerServiceforEC2Role"</span>
}
</code></pre>
<p>ECS.tf</p>
<pre><code class="lang-json">resource <span class="hljs-string">"aws_ecr_repository"</span> <span class="hljs-string">"ECRRepository"</span> {
  name = <span class="hljs-attr">"cicd-example"</span>
}

resource <span class="hljs-string">"aws_ecs_cluster"</span> <span class="hljs-string">"ECSCluster"</span> {
  name = var.cluster_name

  capacity_providers = [aws_ecs_capacity_provider.ecs_capacity_provider.name]

  default_capacity_provider_strategy {
    capacity_provider = aws_ecs_capacity_provider.ecs_capacity_provider.name
    weight            = 1
    base              = 1
  }
}

resource <span class="hljs-string">"aws_iam_policy"</span> <span class="hljs-string">"ecs_logs_policy"</span> {
  name        = <span class="hljs-attr">"ecsLogsPolicy"</span>
  description = <span class="hljs-attr">"Allow ECS tasks to interact with CloudWatch Logs"</span>

  policy = jsonencode({
    Version = <span class="hljs-attr">"2012-10-17"</span>,
    Statement = [
      {
        Effect = <span class="hljs-attr">"Allow"</span>,
        Action = [
          <span class="hljs-attr">"logs:CreateLogStream"</span>,
          <span class="hljs-attr">"logs:PutLogEvents"</span>,
          <span class="hljs-attr">"logs:CreateLogGroup"</span>,
          <span class="hljs-attr">"logs:DescribeLogStreams"</span>
        ],
        Resource = <span class="hljs-attr">"arn:aws:logs:*:*:*"</span>
      }
    ]
  })
}

resource <span class="hljs-string">"aws_iam_policy_attachment"</span> <span class="hljs-string">"ecs_logs_policy_attachment"</span> {
  name       = <span class="hljs-attr">"ecs-logs-policy-attachment"</span>
  roles      = [aws_iam_role.ecsTaskExecutionRole.name]
  policy_arn = aws_iam_policy.ecs_logs_policy.arn
}


resource <span class="hljs-string">"aws_iam_role"</span> <span class="hljs-string">"ecsTaskExecutionRole"</span> {
  name = <span class="hljs-attr">"ecsTaskExecutionRole"</span>

  assume_role_policy = jsonencode({
    Version = <span class="hljs-attr">"2012-10-17"</span>,
    Statement = [
      {
        Effect = <span class="hljs-attr">"Allow"</span>,
        Principal = {
          Service = <span class="hljs-attr">"ecs-tasks.amazonaws.com"</span>
        },
        Action = <span class="hljs-string">"sts:AssumeRole"</span>
      }
    ]
  })
}

resource <span class="hljs-string">"aws_iam_role_policy_attachment"</span> <span class="hljs-string">"ecsTaskExecutionRole_policy"</span> {
  role       = aws_iam_role.ecsTaskExecutionRole.name
  policy_arn = <span class="hljs-attr">"arn:aws:iam::aws:policy/service-role/AmazonECSTaskExecutionRolePolicy"</span>
}

resource <span class="hljs-string">"aws_ecs_task_definition"</span> <span class="hljs-string">"ECSTaskDefinition"</span> {
  family                   = <span class="hljs-attr">"example-react-project"</span>
  execution_role_arn       = aws_iam_role.ecsTaskExecutionRole.arn
  network_mode             = <span class="hljs-attr">"bridge"</span>
  requires_compatibilities = [<span class="hljs-attr">"EC2"</span>]
  cpu                      = <span class="hljs-attr">"1024"</span>
  memory                   = <span class="hljs-attr">"512"</span>
  container_definitions = templatefile(<span class="hljs-attr">"container_definitions.json.tpl"</span>, {
    account_id = data.aws_caller_identity.current.account_id
  })
}

resource <span class="hljs-string">"aws_ecs_service"</span> <span class="hljs-string">"ECSService"</span> {
  name                               = <span class="hljs-attr">"react-service"</span>
  cluster                            = aws_ecs_cluster.ECSCluster.arn
  task_definition                    = aws_ecs_task_definition.ECSTaskDefinition.arn
  desired_count                      = 2
  deployment_maximum_percent         = 200
  deployment_minimum_healthy_percent = 100
  scheduling_strategy                = <span class="hljs-attr">"REPLICA"</span>

  load_balancer {
    target_group_arn = aws_lb_target_group.exampleTG.arn
    container_name   = <span class="hljs-attr">"react-container"</span>
    container_port   = 80
  }
}

resource <span class="hljs-string">"aws_ecs_capacity_provider"</span> <span class="hljs-string">"ecs_capacity_provider"</span> {
  name = <span class="hljs-attr">"EC2CapacityProvider"</span>

  auto_scaling_group_provider {
    auto_scaling_group_arn = aws_autoscaling_group.AutoScalingGroup.arn

    managed_scaling {
      maximum_scaling_step_size = 1
      minimum_scaling_step_size = 1
      status                    = <span class="hljs-attr">"ENABLED"</span>
      target_capacity           = 100
    }
  }
}
</code></pre>
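<p>The templates above reference a <code>cluster_name</code> variable and the current account ID without defining them. Assuming you don't already declare these elsewhere, a minimal supporting file could look like this (the default value is a placeholder):</p>
<pre><code class="lang-json">variable "cluster_name" {
  description = "Name of the ECS cluster"
  type        = string
  default     = "test-cluster"  # placeholder - use your own cluster name
}

# Used by the container definition template to build the ECR image URL.
data "aws_caller_identity" "current" {}
</code></pre>
<p>The VPC, subnet, and target group resources referenced here (<code>aws_vpc.my_vpc</code>, <code>aws_subnet.public_subnet_1</code>/<code>_2</code>, <code>aws_lb_target_group.exampleTG</code>) are assumed to come from the networking setup in the first part of this series.</p>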
<p>And the container definition:</p>
<p>container_definition.json.tpl</p>
<pre><code class="lang-json">[
  {
    <span class="hljs-attr">"name"</span>: <span class="hljs-string">"react-container"</span>,
    <span class="hljs-attr">"image"</span>: <span class="hljs-string">"${account_id}.dkr.ecr.us-east-1.amazonaws.com/cicd-example:main"</span>,
    <span class="hljs-attr">"cpu"</span>: <span class="hljs-number">0</span>,
    <span class="hljs-attr">"portMappings"</span>: [
      {
        <span class="hljs-attr">"containerPort"</span>: <span class="hljs-number">80</span>,
        <span class="hljs-attr">"hostPort"</span>: <span class="hljs-number">0</span>,
        <span class="hljs-attr">"protocol"</span>: <span class="hljs-string">"tcp"</span>
      }
    ],
    <span class="hljs-attr">"essential"</span>: <span class="hljs-literal">true</span>,
    <span class="hljs-attr">"environment"</span>: [],
    <span class="hljs-attr">"mountPoints"</span>: [],
    <span class="hljs-attr">"volumesFrom"</span>: [],
    <span class="hljs-attr">"logConfiguration"</span>: {   
      <span class="hljs-attr">"logDriver"</span>: <span class="hljs-string">"awslogs"</span>,
      <span class="hljs-attr">"options"</span>: {
        <span class="hljs-attr">"awslogs-create-group"</span>: <span class="hljs-string">"true"</span>,
        <span class="hljs-attr">"awslogs-group"</span>: <span class="hljs-string">"/ecs/example-react-project"</span>,
        <span class="hljs-attr">"awslogs-region"</span>: <span class="hljs-string">"us-east-1"</span>,
        <span class="hljs-attr">"awslogs-stream-prefix"</span>: <span class="hljs-string">"ecs"</span>
      }
    }
  }
]
</code></pre>
<p>The above Terraform templates represent the manual steps you've done before.</p>
<p>Because AWS does a lot of the wiring automatically in the console, you don't have to do it there; that isn't the case for TF templates, hence the lengthy configuration.</p>
<p>One more thing. Before you deploy the ECS service, make sure you run your bitbucket pipeline, otherwise it won't deploy.</p>
<h2 id="heading-4-next-steps">4. Next steps</h2>
<p>So far you should have an architecture that deploys your image to ECR just fine; however, it doesn't yet re-trigger the deployment in ECS.</p>
<p>An easy way to get around this issue is to trigger a re-deploy via this Bitbucket pipe:</p>
<p><a target="_blank" href="https://bitbucket.org/product/features/pipelines/integrations?search=ecs&amp;p=atlassian/aws-ecs-deploy">https://bitbucket.org/product/features/pipelines/integrations?search=ecs&amp;p=atlassian/aws-ecs-deploy</a></p>
<p>You can simply add a few new lines to your <code>bitbucket-pipelines.yml</code> file:</p>
<pre><code class="lang-json">- pipe: atlassian/aws-ecs-deploy:<span class="hljs-number">1.12</span><span class="hljs-number">.1</span>
              variables:
                AWS_ACCESS_KEY_ID: $AWS_ACCESS_KEY_ID
                AWS_SECRET_ACCESS_KEY: $AWS_SECRET_ACCESS_KEY
                AWS_DEFAULT_REGION: <span class="hljs-string">"us-east-1"</span>
                CLUSTER_NAME: <span class="hljs-string">"cicd-cluster"</span>
                SERVICE_NAME: <span class="hljs-string">"react-service"</span>
                FORCE_NEW_DEPLOYMENT: <span class="hljs-string">"true"</span>
</code></pre>
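<p>Under the hood, this pipe essentially calls ECS's <code>UpdateService</code> API with <code>forceNewDeployment</code> set. A rough boto3 equivalent is sketched below; the fake client stands in for <code>boto3.client("ecs")</code> so the snippet runs without AWS credentials:</p>

```python
def force_redeploy(ecs_client, cluster: str, service: str) -> dict:
    """Redeploy a service with its current task definition, pulling the latest image.

    Equivalent to the pipe's FORCE_NEW_DEPLOYMENT: "true".
    In real use, ecs_client would be boto3.client("ecs", region_name="us-east-1").
    """
    return ecs_client.update_service(cluster=cluster, service=service, forceNewDeployment=True)


class FakeECS:
    """Stand-in for boto3.client("ecs") so the sketch can run locally."""
    def update_service(self, **kwargs):
        return kwargs


print(force_redeploy(FakeECS(), "cicd-cluster", "react-service"))
```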
<p>Now, on every push, the service will redeploy:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1707320935997/9da4f8e7-186f-42ac-aa08-adb9d1d3b55d.png" alt class="image--center mx-auto" /></p>
<p>There you go! You now have a very basic pipeline that deploys to ECS.</p>
<p>Needless to say, please don't use this in production, as this isn't meant for that. This is meant to show how you could do the same, and provides a baseline you can improve later on.</p>
]]></content:encoded></item><item><title><![CDATA[How to set up CI/CD pipeline on AWS using BitBucket, ECS, ECR]]></title><description><![CDATA[What you should already have

A bitbucket repository with existing SSH keys to connect to and 2FA enabled

An existing repository with some code to build

An ALB (Application Load balancer)

Your application dockerized


The simplified flow

Develope...]]></description><link>https://krisfeher.com/how-to-set-up-cicd-pipeline-on-aws-using-bitbucket-ecs-ecr</link><guid isPermaLink="true">https://krisfeher.com/how-to-set-up-cicd-pipeline-on-aws-using-bitbucket-ecs-ecr</guid><category><![CDATA[AWS]]></category><category><![CDATA[ci-cd]]></category><category><![CDATA[Bitbucket]]></category><category><![CDATA[ECS]]></category><category><![CDATA[networking]]></category><dc:creator><![CDATA[Kris F]]></dc:creator><pubDate>Wed, 07 Feb 2024 15:07:06 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1707318276984/3561808b-20fa-464b-9006-609a714b9d96.webp" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h2 id="heading-what-you-should-already-have">What you should already have</h2>
<ul>
<li><p>A bitbucket repository with existing SSH keys to connect to and 2FA enabled</p>
</li>
<li><p>An existing repository with some code to build</p>
</li>
<li><p>An ALB (Application Load balancer)</p>
</li>
<li><p>Your application dockerized</p>
</li>
</ul>
<h2 id="heading-the-simplified-flow">The simplified flow</h2>
<ol>
<li><p>Developer pushes up code to bitbucket on developer branch, then creates pull request to main branch</p>
</li>
<li><p>Somebody approves the pull request after reviewing code changes</p>
</li>
<li><p>Bitbucket triggers build on their server and tests are run</p>
</li>
<li><p>Docker image is built and pushed up to ECR</p>
</li>
<li><p>Based on new image, ECS service is re-deployed</p>
</li>
</ol>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1706777650828/b6ac2c1d-1ff6-4d7d-ab03-a90cf2afef6b.png" alt class="image--center mx-auto" /></p>
<h2 id="heading-things-this-guide-will-not-cover">Things this guide will not cover</h2>
<h4 id="heading-1-changes-to-database-schema-on-push">1. Changes to database schema on push</h4>
<p>Sometimes a change is required in the database schema. This is generally more complex, depending on how the database is set up, whether an ORM is in place, how the database differs between environments, whether there are data fixtures, etc. This requires a lot more thought and is beyond the scope of this article.</p>
<h4 id="heading-2-backend-layer">2. Backend layer</h4>
<p>This largely depends on the architecture, but in a simple 3-tier, the backend would be just another similar ECS service beside the front-end.</p>
<h4 id="heading-3-complex-code">3. Complex code</h4>
<p>With simple code and architecture it's easier to understand how the flow works, so adding complex code (or testing code) isn't necessary. For the same reason we'll not have a separate web server container or worry about a database.</p>
<h4 id="heading-4-branching-strategy">4. Branching strategy</h4>
<p>While it's an important part of the pipeline, for this article I'll only have one branch (main).</p>
<p>In an ideal scenario you would want to have multiple branches associated with multiple environments, say "feature branches", "staging", "preprod/UAT" and "production" as an example. Then each developer would create pull requests to one of the environment branches, get the code approved and merge the changes.</p>
<p>At release-time, you would tag a commit as a release (with the release number), and deploy that commit. We'll omit all of this today, but will come back to it later.</p>
<h2 id="heading-1-the-setup">1. The setup</h2>
<h3 id="heading-aws">AWS</h3>
<h4 id="heading-terraform-template">Terraform template</h4>
<p>In this section we need to have an example architecture, which in our case is:</p>
<ul>
<li><p>1x ECR repository to store our docker images</p>
</li>
<li><p>1x EC2 machine set up as a Docker host for ECS (yes, in this guide we'll not use Fargate)</p>
</li>
<li><p>+ other parts within ECS: cluster, service, task (+ task definition)</p>
</li>
<li><p>+ AWS wiring, like permissions, auto-scaling, networking, etc.</p>
</li>
</ul>
<p>This is boring to set up manually, so I'll show you a Terraform template to speed things up. It'll create the necessary bits.</p>
<p>If you don't know how to run Terraform, first you need to install the AWS CLI. Here's a quick tutorial I found for that: <a target="_blank" href="https://medium.com/@simonazhangzy/installing-and-configuring-the-aws-cli-7d33796e4a7c">https://medium.com/@simonazhangzy/installing-and-configuring-the-aws-cli-7d33796e4a7c</a></p>
<p>Then you can get started with Terraform: <a target="_blank" href="https://developer.hashicorp.com/terraform/tutorials/aws-get-started/infrastructure-as-code">https://developer.hashicorp.com/terraform/tutorials/aws-get-started/infrastructure-as-code</a></p>
<p><strong>networking.tf</strong></p>
<pre><code class="lang-json">resource <span class="hljs-string">"aws_vpc"</span> <span class="hljs-string">"my_vpc"</span> {
  cidr_block           = <span class="hljs-attr">"10.0.0.0/16"</span>
  enable_dns_support   = true
  enable_dns_hostnames = true
}

resource <span class="hljs-string">"aws_subnet"</span> <span class="hljs-string">"public_subnet_1"</span> {
  vpc_id                  = aws_vpc.my_vpc.id
  cidr_block              = <span class="hljs-attr">"10.0.1.0/24"</span>
  map_public_ip_on_launch = true
  availability_zone       = <span class="hljs-attr">"us-east-1a"</span>
}

resource <span class="hljs-string">"aws_subnet"</span> <span class="hljs-string">"public_subnet_2"</span> {
  vpc_id                  = aws_vpc.my_vpc.id
  cidr_block              = <span class="hljs-attr">"10.0.4.0/24"</span>
  map_public_ip_on_launch = true
  availability_zone       = <span class="hljs-attr">"us-east-1b"</span>
}

resource <span class="hljs-string">"aws_internet_gateway"</span> <span class="hljs-string">"internet_gw"</span> {
  vpc_id = aws_vpc.my_vpc.id
}

resource <span class="hljs-string">"aws_route_table"</span> <span class="hljs-string">"public_route_table"</span> {
  vpc_id = aws_vpc.my_vpc.id
  route {
    cidr_block = <span class="hljs-attr">"0.0.0.0/0"</span>
    gateway_id = aws_internet_gateway.internet_gw.id
  }
}

resource <span class="hljs-string">"aws_route_table_association"</span> <span class="hljs-string">"public_subnet_association"</span> {
  subnet_id      = aws_subnet.public_subnet_1.id
  route_table_id = aws_route_table.public_route_table.id
}

resource <span class="hljs-string">"aws_route_table_association"</span> <span class="hljs-string">"public_subnet_2_association"</span> {
  subnet_id      = aws_subnet.public_subnet_2.id
  route_table_id = aws_route_table.public_route_table.id
}

resource <span class="hljs-string">"aws_security_group"</span> <span class="hljs-string">"allow_web"</span> {
  vpc_id = aws_vpc.my_vpc.id

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = <span class="hljs-attr">"-1"</span>
    cidr_blocks = [<span class="hljs-attr">"0.0.0.0/0"</span>]
  }

  ingress {
    from_port   = 80
    to_port     = 80
    protocol    = <span class="hljs-attr">"tcp"</span>
    cidr_blocks = [<span class="hljs-attr">"0.0.0.0/0"</span>]
  }
}
</code></pre>
<p>This will create a brand-new VPC, along with two public subnets, a route table, an internet gateway, and a security group.</p>
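<p>If you adapt the CIDR blocks, it's worth double-checking that each subnet actually fits inside the VPC range; Python's standard <code>ipaddress</code> module makes this a quick sanity check:</p>

```python
import ipaddress

# The ranges from networking.tf above
vpc = ipaddress.ip_network("10.0.0.0/16")
subnets = [ipaddress.ip_network("10.0.1.0/24"), ipaddress.ip_network("10.0.4.0/24")]

for subnet in subnets:
    # subnet_of() is available in Python 3.7+
    print(subnet, "inside VPC:", subnet.subnet_of(vpc))
```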
<p><strong>variables.tf</strong></p>
<pre><code class="lang-json">data <span class="hljs-string">"aws_caller_identity"</span> <span class="hljs-string">"current"</span> {}

variable <span class="hljs-string">"ami_id"</span> {
  description = <span class="hljs-attr">"The AMI ID to use for EC2"</span>
  type        = string
  default     = <span class="hljs-attr">"ami-04d4dd7b34e293332"</span>
}

variable <span class="hljs-string">"instance_type"</span> {
  description = <span class="hljs-attr">"The instance type of the EC2 instance"</span>
  type        = string
  default     = <span class="hljs-attr">"t3.medium"</span>
}

variable <span class="hljs-string">"cluster_name"</span> {
  description = <span class="hljs-attr">"The name of the ECS cluster"</span>
  type        = string
  default     = <span class="hljs-attr">"cicd-cluster"</span>
}
</code></pre>
<p>Here you can add your specific requirements for your EC2 host (for ECS), and the ECS cluster name. (we'll have a TF template for the cluster later on)</p>
<div data-node-type="callout">
<div data-node-type="callout-emoji">💡</div>
<div data-node-type="callout-text">Please note, that this guide is done in mind using an ECS optimized AMI, which means the ECS agent is already installed on the EC2 machine. If you pick a different AMI, you'll need to install the ECS agent yourself.</div>
</div>

<p><strong>load_balancing.tf</strong></p>
<pre><code class="lang-json">resource <span class="hljs-string">"aws_lb"</span> <span class="hljs-string">"alb1"</span> {
  name               = <span class="hljs-attr">"alb1"</span>
  internal           = false
  load_balancer_type = <span class="hljs-attr">"application"</span>
  security_groups    = [aws_security_group.allow_web.id]
  subnets            = [aws_subnet.public_subnet_1.id, aws_subnet.public_subnet_2.id]
}

resource <span class="hljs-string">"aws_lb_target_group"</span> <span class="hljs-string">"exampleTG"</span> {
  name        = <span class="hljs-attr">"exampleTG"</span>
  port        = 80
  protocol    = <span class="hljs-attr">"HTTP"</span>
  target_type = <span class="hljs-attr">"instance"</span>
  vpc_id      = aws_vpc.my_vpc.id

  health_check {
    enabled = true
    path    = <span class="hljs-attr">"/"</span>
  }
}

resource <span class="hljs-string">"aws_lb_listener"</span> <span class="hljs-string">"http_listener"</span> {
  load_balancer_arn = aws_lb.alb1.arn
  port              = 80
  protocol          = <span class="hljs-attr">"HTTP"</span>

  default_action {
    type             = <span class="hljs-attr">"forward"</span>
    target_group_arn = aws_lb_target_group.exampleTG.arn
  }
}
</code></pre>
<p>The above will create a load balancer, a target group, and an HTTP listener.</p>
<h4 id="heading-target-groups">Target Groups</h4>
<p>As you can see, the above template includes a target group. You can also create it manually instead of using Terraform.<br />Once you have a load balancer, you need a target group to direct traffic to.</p>
<p>You can create the Target group on the console here:<br /><code>EC2 =&gt; Target groups =&gt; Create target group</code></p>
<p>A few things to watch out for :</p>
<ul>
<li><p>target-type: instance (IP does not work for dynamic port mapping)</p>
</li>
<li><p>VPC: select your VPC, it'll default to......the default VPC 🙂</p>
</li>
<li><p>DO NOT assign any targets to the target group. Leave it empty and save it.</p>
</li>
</ul>
<h3 id="heading-bitbucket">Bitbucket</h3>
<ul>
<li><p>Enable deployments for your repository in the "Deployments" section. For this you'll require 2FA to be set up.</p>
</li>
<li><p>Enable pipelines as well in the repository settings</p>
</li>
<li><p>At the same place go to Repository Variables and add your AWS access keys:</p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1706802841704/98f38801-bf5d-4a07-88b8-4d71ad90ec3f.png" alt class="image--center mx-auto" /></p>
</li>
</ul>
<h2 id="heading-2-setting-up-the-pipeline-to-run">2. Setting up the pipeline to run</h2>
<h3 id="heading-dockerfile">dockerfile</h3>
<p>In order to build a Docker image in the pipeline, we need a Dockerfile.</p>
<p>A very simple example for a React Vite application could be this:</p>
<pre><code class="lang-json">FROM node:<span class="hljs-number">20</span> as build
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
RUN npm run build

FROM nginx:stable-alpine
COPY --from=build /app/dist /usr/share/nginx/html
EXPOSE <span class="hljs-number">80</span>
CMD [<span class="hljs-string">"nginx"</span>, <span class="hljs-string">"-g"</span>, <span class="hljs-string">"daemon off;"</span>]
</code></pre>
<h3 id="heading-bitbucket-pipelinesyml">bitbucket-pipelines.yml</h3>
<p>Here's the example I've used.</p>
<pre><code class="lang-yaml"><span class="hljs-attr">image:</span> <span class="hljs-string">node:20</span>

<span class="hljs-attr">options:</span>
  <span class="hljs-attr">docker:</span> <span class="hljs-literal">true</span>

<span class="hljs-attr">pipelines:</span>
  <span class="hljs-attr">branches:</span>
    <span class="hljs-attr">main:</span>
      <span class="hljs-bullet">-</span> <span class="hljs-attr">step:</span>
          <span class="hljs-attr">name:</span> <span class="hljs-string">Build</span> <span class="hljs-string">and</span> <span class="hljs-string">Push</span> <span class="hljs-string">Docker</span> <span class="hljs-string">Image</span>
          <span class="hljs-attr">script:</span>
            <span class="hljs-comment"># Install AWS CLI v2</span>
            <span class="hljs-bullet">-</span> <span class="hljs-string">echo</span> <span class="hljs-string">" --&gt; Installing AWS CLI..."</span>
            <span class="hljs-bullet">-</span> <span class="hljs-string">curl</span> <span class="hljs-string">"https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip"</span> <span class="hljs-string">--create-dirs</span> <span class="hljs-string">-o</span> <span class="hljs-string">"/tmp/awscli/awscliv2.zip"</span>
            <span class="hljs-bullet">-</span> <span class="hljs-string">unzip</span> <span class="hljs-string">-qq</span> <span class="hljs-string">/tmp/awscli/awscliv2.zip</span> <span class="hljs-string">-d</span> <span class="hljs-string">/tmp/</span>
            <span class="hljs-bullet">-</span> <span class="hljs-string">/tmp/aws/install</span>

            <span class="hljs-comment"># Login to Amazon ECR</span>
            <span class="hljs-bullet">-</span> <span class="hljs-string">echo</span> <span class="hljs-string">" --&gt; Logging into Amazon ECR..."</span>
            <span class="hljs-bullet">-</span> <span class="hljs-string">aws</span> <span class="hljs-string">ecr</span> <span class="hljs-string">get-login-password</span> <span class="hljs-string">--region</span> <span class="hljs-string">us-east-1</span> <span class="hljs-string">|</span> <span class="hljs-string">docker</span> <span class="hljs-string">login</span> <span class="hljs-string">--username</span> <span class="hljs-string">AWS</span> <span class="hljs-string">--password-stdin</span> <span class="hljs-number">529768619555.</span><span class="hljs-string">dkr.ecr.us-east-1.amazonaws.com</span>

            <span class="hljs-comment"># Build Docker image with cache-from option and build arguments</span>
            <span class="hljs-bullet">-</span> <span class="hljs-string">echo</span> <span class="hljs-string">" --&gt; Building Docker image..."</span>
            <span class="hljs-bullet">-</span> <span class="hljs-string">docker</span> <span class="hljs-string">build</span> <span class="hljs-string">.</span> <span class="hljs-string">--progress=plain</span> <span class="hljs-string">--tag=cicd-example</span>
            <span class="hljs-bullet">-</span> <span class="hljs-string">docker</span> <span class="hljs-string">images</span>

            <span class="hljs-comment"># Push the Docker image to Amazon ECR</span>
            <span class="hljs-bullet">-</span> <span class="hljs-string">echo</span> <span class="hljs-string">" --&gt; Pushing Docker image to Amazon ECR..."</span>
            <span class="hljs-bullet">-</span> <span class="hljs-attr">pipe:</span> <span class="hljs-string">atlassian/aws-ecr-push-image:1.5.0</span>
              <span class="hljs-attr">variables:</span>
                <span class="hljs-attr">AWS_ACCESS_KEY_ID:</span> <span class="hljs-string">$AWS_ACCESS_KEY_ID</span>
                <span class="hljs-attr">AWS_SECRET_ACCESS_KEY:</span> <span class="hljs-string">$AWS_SECRET_ACCESS_KEY</span>
                <span class="hljs-attr">AWS_DEFAULT_REGION:</span> <span class="hljs-string">"us-east-1"</span>
                <span class="hljs-attr">IMAGE_NAME:</span> <span class="hljs-string">"cicd-example"</span>
                <span class="hljs-attr">TAGS:</span> <span class="hljs-string">"${BITBUCKET_BRANCH} ${BITBUCKET_COMMIT}"</span>
</code></pre>
<div data-node-type="callout">
<div data-node-type="callout-emoji">💡</div>
<div data-node-type="callout-text">Make sure you set the build tag and the IMAGE_NAME the same thing as your ECR repository name.</div>
</div>

<p>Once you push this file up, a build will trigger.</p>
<p>You can follow what each step does via the comments. The "pipe" section is something specific to Bitbucket (Bitbucket Pipes): pipes are small, self-contained Docker functions that each perform one specific task.</p>
<p>You can read about this particular pipe here:<br /><a target="_blank" href="https://bitbucket.org/bitbucket/product/features/pipelines/integrations?search=ecr">https://bitbucket.org/bitbucket/product/features/pipelines/integrations?search=ecr</a></p>
<p>Once this pipeline finishes, you should see something like this in ECR:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1706873676272/e402ea42-26bb-4b95-a56b-e6a6f33d089f.png" alt class="image--center mx-auto" /></p>
<p>For now, we'll leave it at this.</p>
<blockquote>
<p>In the next part we'll continue with ECS and EC2 setup. If you have any question feel free to contact me, happy to help!</p>
</blockquote>
]]></content:encoded></item><item><title><![CDATA[How to prevent public AWS Lambda abuse using API Gateway]]></title><description><![CDATA[The Risks of Public AWS Lambda Functions
Public AWS Lambda functions are powerful, but without proper safeguards, they can be misused. Here’s what can go wrong:

Open Access: If anyone can trigger your Lambda function, it might be used for the wrong ...]]></description><link>https://krisfeher.com/how-to-prevent-public-aws-lambda-abuse-using-api-gateway</link><guid isPermaLink="true">https://krisfeher.com/how-to-prevent-public-aws-lambda-abuse-using-api-gateway</guid><category><![CDATA[AWS]]></category><category><![CDATA[lambda]]></category><category><![CDATA[API Gateway]]></category><category><![CDATA[Security]]></category><category><![CDATA[apikey]]></category><category><![CDATA[cloudformation]]></category><dc:creator><![CDATA[Kris F]]></dc:creator><pubDate>Thu, 01 Feb 2024 08:00:15 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1706633347913/74284bd3-0bdb-439f-979b-464db10f7501.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h2 id="heading-the-risks-of-public-aws-lambda-functions">The Risks of Public AWS Lambda Functions</h2>
<p>Public AWS Lambda functions are powerful, but without proper safeguards, they can be misused. Here’s what can go wrong:</p>
<ol>
<li><p><strong>Open Access:</strong> If anyone can trigger your Lambda function, it might be used for the wrong reasons, leading to overuse or sensitive data exposure.</p>
</li>
<li><p><strong>Cost Issues:</strong> Lambda charges are based on usage. If someone repeatedly triggers your function, your bills could skyrocket.</p>
</li>
<li><p><strong>Data Theft:</strong> A Lambda function dealing with sensitive data can be a target for data leaks if not secured properly.</p>
</li>
<li><p><strong>Service Disruption:</strong> Excessive traffic, whether intentional or not, can overload your Lambda functions, disrupting the service.</p>
</li>
</ol>
<p>In this guide I'll focus on #2.</p>
<h3 id="heading-the-steps"><strong>The steps</strong></h3>
<p>Here are the steps we'll follow, in order:</p>
<ol>
<li><p>Create a Lambda function + test</p>
</li>
<li><p>Add an API gateway endpoint to it</p>
</li>
<li><p>Create an API key with limited usage on it + test</p>
</li>
</ol>
<h2 id="heading-the-lambda-function">The Lambda function</h2>
<p>First we need to create an example Lambda function:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1706615329628/8ae6c65d-1eeb-46a9-9701-26006d252d4a.png" alt class="image--center mx-auto" /></p>
<p>Let's fill in the lambda function with code that adds a unix timestamp to the request JSON:</p>
<pre><code class="lang-python"><span class="hljs-keyword">import</span> json
<span class="hljs-keyword">import</span> time

<span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">lambda_handler</span>(<span class="hljs-params">event, context</span>):</span>
    <span class="hljs-comment"># Parsing the JSON body from the event</span>
    data = json.loads(event[<span class="hljs-string">'body'</span>])

    <span class="hljs-comment"># Append the current Unix timestamp</span>
    data[<span class="hljs-string">'timestamp'</span>] = int(time.time())

    <span class="hljs-comment"># Return the modified data as JSON</span>
    <span class="hljs-keyword">return</span> {
        <span class="hljs-string">'statusCode'</span>: <span class="hljs-number">200</span>,
        <span class="hljs-string">'body'</span>: json.dumps(data)
    }
</code></pre>
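<p>Before deploying, you can sanity-check the handler logic locally with a plain Python script (no AWS needed). This sketch inlines the same handler and feeds it a minimal stand-in event; only the <code>body</code> field matters to the handler:</p>

```python
import json
import time

def lambda_handler(event, context):
    # Same logic as the function above: parse the body, append a Unix timestamp
    data = json.loads(event['body'])
    data['timestamp'] = int(time.time())
    return {
        'statusCode': 200,
        'body': json.dumps(data)
    }

# Minimal fake API Gateway event (only 'body' is read by the handler)
fake_event = {'body': json.dumps({'name': 'John Doe', 'email': 'johndoe@example.com'})}
response = lambda_handler(fake_event, None)

print(response['statusCode'], json.loads(response['body']))
```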
<p>Click Deploy.</p>
<p>Once done, you can add a new test event and test with this example that imitates an API gateway JSON:</p>
<pre><code class="lang-json">{
    <span class="hljs-attr">"resource"</span>: <span class="hljs-string">"/formsubmit"</span>,
    <span class="hljs-attr">"path"</span>: <span class="hljs-string">"/formsubmit"</span>,
    <span class="hljs-attr">"httpMethod"</span>: <span class="hljs-string">"POST"</span>,
    <span class="hljs-attr">"headers"</span>: {
        <span class="hljs-attr">"Content-Type"</span>: <span class="hljs-string">"application/json"</span>,
        <span class="hljs-attr">"Accept"</span>: <span class="hljs-string">"application/json"</span>
    },
    <span class="hljs-attr">"queryStringParameters"</span>: <span class="hljs-literal">null</span>,
    <span class="hljs-attr">"multiValueQueryStringParameters"</span>: <span class="hljs-literal">null</span>,
    <span class="hljs-attr">"pathParameters"</span>: <span class="hljs-literal">null</span>,
    <span class="hljs-attr">"stageVariables"</span>: <span class="hljs-literal">null</span>,
    <span class="hljs-attr">"requestContext"</span>: {
        <span class="hljs-attr">"requestTime"</span>: <span class="hljs-string">"30/Jan/2024:12:31:45 +0000"</span>,
        <span class="hljs-attr">"path"</span>: <span class="hljs-string">"/prod/formsubmit"</span>,
        <span class="hljs-attr">"protocol"</span>: <span class="hljs-string">"HTTP/1.1"</span>,
        <span class="hljs-attr">"stage"</span>: <span class="hljs-string">"prod"</span>,
        <span class="hljs-attr">"domainName"</span>: <span class="hljs-string">"api.example.com"</span>,
        <span class="hljs-attr">"requestId"</span>: <span class="hljs-string">"123456789"</span>,
        <span class="hljs-attr">"requestTimeEpoch"</span>: <span class="hljs-number">1580389905401</span>,
        <span class="hljs-attr">"accountId"</span>: <span class="hljs-string">"123456789012"</span>,
        <span class="hljs-attr">"apiId"</span>: <span class="hljs-string">"abcdefghij"</span>
    },
    <span class="hljs-attr">"body"</span>: <span class="hljs-string">"{\"name\": \"John Doe\", \"email\": \"johndoe@example.com\"}"</span>,
    <span class="hljs-attr">"isBase64Encoded"</span>: <span class="hljs-literal">false</span>
}
</code></pre>
<p>In the execution results you should see something like this, having the timestamp added:</p>
<pre><code class="lang-json">Response
{
  <span class="hljs-attr">"statusCode"</span>: <span class="hljs-number">200</span>,
  <span class="hljs-attr">"body"</span>: <span class="hljs-string">"{\"name\": \"John Doe\", \"email\": \"johndoe@example.com\", \"timestamp\": 1706616281}"</span>
}
</code></pre>
<h2 id="heading-api-gateway">API Gateway</h2>
<p>At this point you have a lambda function that can be used within AWS.</p>
<p>In order to call it from outside your architecture, you can have an API Gateway endpoint triggering your Lambda function.</p>
<p>Something like this:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1706623292540/24b34600-e83e-42e0-8848-240c46ba4b0e.png" alt class="image--center mx-auto" /></p>
<p>API Gateway will give us the power to restrict incoming calls using API keys.</p>
<p>So, as a next step, create a trigger in your Lambda function and select API Gateway.</p>
<p>Create a REST Api:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1706625355643/8c5b2f2e-49d3-4e96-9c3c-c616e8c38158.png" alt class="image--center mx-auto" /></p>
<p>Then visit your newly created API.</p>
<p>Deploy this to any stage (I used <code>default</code>), then take note of the invoke URL, which will look similar to the one below, and append your function name to it:</p>
<p>https://123456asdfg.execute-api.us-east-1.amazonaws.com/default/timestamper</p>
<p>You can use this URL to test your endpoint.</p>
<p>Here's the fun part, let's try it!</p>
<p>You can curl the endpoint from any terminal:</p>
<pre><code class="lang-bash">curl -X POST <span class="hljs-string">"https://12345asdfg.execute-api.us-east-1.amazonaws.com/default/timestamper"</span> \
     -H <span class="hljs-string">"Content-Type: application/json"</span> \
     -d <span class="hljs-string">"{\"name\": \"John Doe\", \"email\": \"johndoe@example.com\"}"</span>
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100   129  100    77  100    52    180    122 --:--:-- --:--:-- --:--:--   304{<span class="hljs-string">"name"</span>: <span class="hljs-string">"John Doe"</span>, <span class="hljs-string">"email"</span>: <span class="hljs-string">"johndoe@example.com"</span>, <span class="hljs-string">"timestamp"</span>: 1706626702}
</code></pre>
<p>As you can see, the response came back, and our gateway endpoint is completely public 😱</p>
<p>What we need to do now is limit access to it via an API key.</p>
<h2 id="heading-gateway-api-keys">Gateway API keys</h2>
<p>Within the <code>API Gateway =&gt; APIs =&gt; API keys =&gt; Create API key</code> section, create an API key. You only need to give it a name and have the value autogenerated.</p>
<p>Similarly, just above that, you can create a usage plan: <code>API Gateway =&gt; APIs =&gt; Usage plans =&gt; Create usage plan</code></p>
<p>And this is where the magic happens.</p>
<p>Right here you can restrict the API in various ways, as an example:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1706627467223/ca0bda19-3dc8-4229-865a-dfa6a93e9aa3.png" alt class="image--center mx-auto" /></p>
<p>This will provide</p>
<ul>
<li><p>10 calls a day with this API key</p>
</li>
<li><p>max. 1 call a second</p>
</li>
<li><p>max. 1 call at a time</p>
</li>
</ul>
<p>Of course this is fairly restrictive, but it will prevent anyone from calling your endpoint hundreds of times a second and racking up a hefty Lambda and API Gateway bill for you.</p>
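<p>Conceptually, the rate and burst limits behave like a token bucket: each request consumes a token, tokens refill at the rate limit, and the burst is the bucket size. A simplified, self-contained model of that idea (not API Gateway's actual implementation):</p>

```python
class TokenBucket:
    """Toy model of rate+burst throttling: refill at `rate` tokens/sec, hold at most `burst`."""
    def __init__(self, rate: float, burst: int):
        self.rate, self.burst = rate, burst
        self.tokens = float(burst)  # start full
        self.last = 0.0

    def allow(self, now: float) -> bool:
        # Refill based on elapsed time, capped at the burst size
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False  # would map to HTTP 429 "Too Many Requests"

# Mirrors the example above: 1 request/second, burst of 1
bucket = TokenBucket(rate=1, burst=1)
print([bucket.allow(t) for t in (0.0, 0.2, 1.2)])  # → [True, False, True]
```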
<p>Once done, go back to your API key and add it to the usage plan. You can find this option under the "Actions" button.</p>
<p>You can now go back to your API and modify the resource with the EDIT button:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1706628199605/d671bc23-7774-4919-be8c-a4e8b591ffb2.png" alt class="image--center mx-auto" /></p>
<p>Once clicked, you can turn on "API key required", then save.</p>
<p>Before you re-try your curl, make sure you "Deploy API" and wait a few minutes.</p>
<p>After that, your response should be something like this:  </p>
<pre><code class="lang-bash">  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100    75  100    23  100    52     59    133 --:--:-- --:--:-- --:--:--   193{<span class="hljs-string">"message"</span>:<span class="hljs-string">"Forbidden"</span>}
</code></pre>
<p>Great! Now our endpoint requires an API key.</p>
<p>That's great, but how do I send the API key? 🤔</p>
<p>Well, here's how:</p>
<pre><code class="lang-bash">curl -X POST <span class="hljs-string">"https://12345asdfg.execute-api.us-east-1.amazonaws.com/default/timestamper2"</span> \
      -H <span class="hljs-string">"Content-Type: application/json"</span> \
      -H <span class="hljs-string">"x-api-key: 1234567890asdfghjkl"</span> \
      -d <span class="hljs-string">"{\"name\": \"John Doe\", \"email\": \"johndoe@example.com\"}"</span>
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100   129  100    77  100    52    138     93 --:--:-- --:--:-- --:--:--   232{<span class="hljs-string">"name"</span>: <span class="hljs-string">"John Doe"</span>, <span class="hljs-string">"email"</span>: <span class="hljs-string">"johndoe@example.com"</span>, <span class="hljs-string">"timestamp"</span>: 1706632463}
</code></pre>
<p>If we send requests too quickly, we'll get this response:</p>
<pre><code class="lang-bash">  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100    83  100    31  100    52     71    120 --:--:-- --:--:-- --:--:--   193{<span class="hljs-string">"message"</span>:<span class="hljs-string">"Too Many Requests"</span>}
</code></pre>
<p>And if we exhaust the daily limit, we'll get this:</p>
<pre><code class="lang-bash">  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100    80  100    28  100    52     81    151 --:--:-- --:--:-- --:--:--   234{<span class="hljs-string">"message"</span>:<span class="hljs-string">"Limit Exceeded"</span>}
</code></pre>
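<p>On the client side, a 429 is a signal to back off and retry rather than fail outright. Here's a hedged sketch of that pattern; the <code>send</code> callable is injected so the example runs without a real endpoint (in practice it would wrap your actual HTTP POST with the <code>x-api-key</code> header):</p>

```python
import time

def post_with_retry(send, max_retries: int = 3, backoff: float = 0.01):
    """Retry a request while it's throttled (HTTP 429); give up after max_retries."""
    for attempt in range(max_retries + 1):
        status = send()
        if status != 429:
            return status
        time.sleep(backoff * (2 ** attempt))  # exponential backoff between retries
    return 429

# Simulate an endpoint that throttles the first two calls, then succeeds
responses = iter([429, 429, 200])
print(post_with_retry(lambda: next(responses)))  # → 200
```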
<h2 id="heading-closing-thoughts">Closing thoughts</h2>
<p>There are, of course, plenty of other ways to secure your Lambda function; to name a few:</p>
<ul>
<li><p>AWS WAF for an extra security layer. It’s like having a guard to block unwanted traffic. With this, you can selectively deny traffic.</p>
</li>
<li><p>Control with IAM who can invoke your Lambda functions.</p>
</li>
<li><p>Use CloudWatch for detailed logging and alerts.</p>
</li>
<li><p>Set limits on how many instances of your Lambda function can run at once via Lambda concurrency. This prevents your system from getting overwhelmed by too much traffic.</p>
</li>
</ul>
]]></content:encoded></item><item><title><![CDATA[How to set up an example AWS environment with secure EC2 access]]></title><description><![CDATA[Introduction
This guide will provide a simple way to set up an EC2 and give secure access to a user without opening any ports on the EC2.
What we'll end up with is:

an EC2 machine in our current VPC with appropriate permissions

additional permissio...]]></description><link>https://krisfeher.com/how-to-set-up-an-example-aws-environment-with-secure-ec2-access</link><guid isPermaLink="true">https://krisfeher.com/how-to-set-up-an-example-aws-environment-with-secure-ec2-access</guid><category><![CDATA[AWS]]></category><category><![CDATA[ec2]]></category><category><![CDATA[systems manager]]></category><category><![CDATA[Security]]></category><category><![CDATA[cloudformation]]></category><dc:creator><![CDATA[Kris F]]></dc:creator><pubDate>Tue, 30 Jan 2024 08:00:11 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1706602377375/4d1f74a5-de71-4132-99e8-7ca042981eb9.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h3 id="heading-introduction">Introduction</h3>
<p>This guide will provide a simple way to set up an EC2 and give secure access to a user without opening any ports on the EC2.</p>
<p>What we'll end up with is:</p>
<ul>
<li><p>an EC2 machine in our current VPC with appropriate permissions</p>
</li>
<li><p>additional permissions for a chosen user, allowing sessions to the above EC2</p>
</li>
</ul>
<p>Prerequisites:</p>
<ul>
<li><p>A VPC</p>
</li>
<li><p>A user with access key and ID set up on your local machine</p>
</li>
<li><p><a target="_blank" href="https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html">AWS CLI</a> installed on your local machine</p>
</li>
</ul>
<p>The advantages of this setup:</p>
<ul>
<li><p>As it's all CloudFormation, it's easy to tear down once it's no longer needed</p>
</li>
<li><p>No inbound ports need to be open on the EC2, so there's no way to break in via port 22 or any other port</p>
</li>
<li><p>Still full shell access from your own local terminal application</p>
</li>
</ul>
<h2 id="heading-the-cloudformation-template">The CloudFormation template</h2>
<pre><code class="lang-yaml">AWSTemplateFormatVersion: '<span class="hljs-number">2010</span><span class="hljs-number">-09</span><span class="hljs-number">-09</span>'
Description: EC2 with Session access to a user

Parameters:
  InstanceType:
    Type: String
    Description: EC2 instance type
    Default: t2.micro
  VPCId:
    Type: AWS::EC2::VPC::Id
    Description: VPC ID where the instance will be launched
  AMIId:
    Type: String
    Description: AMI ID for the EC2 instance
    Default: ami<span class="hljs-number">-0</span>a3c3a20c09d6f377
  IAMUser:
    Type: String
    Description: The name of the existing IAM user

Resources:
  MyInstance:
    Type: 'AWS::EC2::Instance'
    Properties:
      ImageId: !Ref AMIId
      InstanceType: !Ref InstanceType
      IamInstanceProfile: !Ref SSMInstanceProfile

  SSMInstanceProfile:
    Type: 'AWS::IAM::InstanceProfile'
    Properties:
      Path: <span class="hljs-string">"/"</span>
      Roles:
        - !Ref SSMRole

  SSMRole:
    Type: 'AWS::IAM::Role'
    Properties:
      AssumeRolePolicyDocument:
        Version: '<span class="hljs-number">2012</span><span class="hljs-number">-10</span><span class="hljs-number">-17</span>'
        Statement:
          - Effect: Allow
            Principal:
              Service: ec2.amazonaws.com
            Action: 'sts:AssumeRole'
      Path: <span class="hljs-string">"/"</span>
      ManagedPolicyArns:
        - arn:aws:iam::aws:policy/AmazonSSMManagedInstanceCore

  IAMUserPolicy:
    Type: 'AWS::IAM::Policy'
    Properties:
      PolicyName: 'StartSessionPolicy'
      Users:
        - !Ref IAMUser
      PolicyDocument:
        Version: '<span class="hljs-number">2012</span><span class="hljs-number">-10</span><span class="hljs-number">-17</span>'
        Statement:
          - Effect: <span class="hljs-string">"Allow"</span>
            Action: <span class="hljs-string">"ssm:StartSession"</span>
            Resource: !Sub <span class="hljs-string">"arn:aws:ec2:${AWS::Region}:${AWS::AccountId}:instance/${MyInstance}"</span>
          - Effect: <span class="hljs-string">"Allow"</span>
            Action:
              - <span class="hljs-string">"ssm:DescribeSessions"</span>
              - <span class="hljs-string">"ssm:GetConnectionStatus"</span>
              - <span class="hljs-string">"ssm:DescribeInstanceProperties"</span>
              - <span class="hljs-string">"ec2:DescribeInstances"</span>
            Resource: <span class="hljs-string">"*"</span>
          - Effect: <span class="hljs-string">"Allow"</span>
            Action:
              - <span class="hljs-string">"ssm:TerminateSession"</span>
              - <span class="hljs-string">"ssm:ResumeSession"</span>
            Resource: !Sub <span class="hljs-string">"arn:aws:ssm:${AWS::Region}:${AWS::AccountId}:session/${IAMUser}-*"</span>

Outputs:
  InstanceId:
    Description: The Instance ID
    Value: !Ref MyInstance
</code></pre>
<p>To run this, go to CloudFormation and upload a new template.</p>
<p>You'll be requested to enter a few details:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1706542533173/3cfb81da-aca1-4bfb-974f-69ea765cc3ad.png" alt class="image--center mx-auto" /></p>
<p><strong>Stack name</strong>: doesn't matter, let your imagination run free 🙂<br /><strong>IAMUser</strong>: the user you want to grant permission to access the new EC2<br /><strong>AMIId</strong>: the ID of the Amazon Machine Image to use. You can find one by manually starting the EC2 creation wizard. It depends on your operating system requirements and region, but feel free to copy mine (us-east-1).<br /><strong>InstanceType</strong>: the type and size of instance you want to deploy.<br /><strong>VPCId</strong>: a dropdown with all your VPCs. Pick one.</p>
<p>Once you've filled this in, click Next a few times and accept the checkbox acknowledging that IAM resources will be created.</p>
<p>Wait a few minutes until the stack completes and you'll be presented with your EC2 machine.</p>
<p>This machine will have the appropriate instance profile, and your chosen user will receive the permissions needed to access it.</p>
<h2 id="heading-accessing-the-machine">Accessing the machine</h2>
<p>Port 22 SSH access will not work, as no inbound ports are open. However, you can access your machine from your terminal as follows:</p>
<ol>
<li><p>Make sure your <code>credentials</code> file contains your access key ID and secret access key, and your <code>config</code> file contains the correct region.</p>
</li>
<li><p>Make a note of the instance ID. You can find it in the "Outputs" tab of your CloudFormation stack.</p>
</li>
<li><p>In your terminal, run <code>aws ssm start-session --target i-1234567890abcdef</code> (substituting your instance ID)</p>
</li>
<li><p>This will drop you into the default shell. If you want, say, bash, then run <code>/bin/bash</code></p>
</li>
</ol>
<p>You should now have an EC2 with secure access.</p>
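<p>If you prefer the terminal over the console for step 2, <code>aws cloudformation describe-stacks --stack-name &lt;your-stack&gt;</code> prints the stack's outputs as JSON, and a small helper can extract the instance ID. A sketch in Node (the <code>Stacks[].Outputs[]</code> shape is the standard CloudFormation response format; the stack name and values below are made up):</p>

```javascript
// Extract an output value from an `aws cloudformation describe-stacks`
// JSON response. Stacks[].Outputs[] is the standard response shape.
function getStackOutput(describeStacksJson, outputKey) {
  const outputs = describeStacksJson.Stacks[0].Outputs || [];
  const match = outputs.find(o => o.OutputKey === outputKey);
  return match ? match.OutputValue : undefined;
}

// Abridged example response, as the CLI would return it:
const response = {
  Stacks: [{
    StackName: 'my-ec2-stack',
    Outputs: [{ OutputKey: 'InstanceId', OutputValue: 'i-1234567890abcdef' }]
  }]
};

console.log(getStackOutput(response, 'InstanceId')); // prints i-1234567890abcdef
```

You can then feed that value straight into <code>aws ssm start-session --target ...</code>.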
<h2 id="heading-tearing-everything-down">Tearing everything down</h2>
<p>Go back to CloudFormation and click the "Delete" button on the stack.<br />This will remove all resources that were created.</p>
]]></content:encoded></item><item><title><![CDATA[Dynamic Image content resizing with Lambda]]></title><description><![CDATA[When should you read this guide
If:

You have a significant amount, possibly millions of images with various usage patterns that you serve to website guests

If these images are not static images, but dynamic, generated on the user request

If you ca...]]></description><link>https://krisfeher.com/dynamic-image-content-resizing-with-lambda</link><guid isPermaLink="true">https://krisfeher.com/dynamic-image-content-resizing-with-lambda</guid><category><![CDATA[AWS]]></category><category><![CDATA[lambda]]></category><category><![CDATA[images]]></category><category><![CDATA[image processing]]></category><category><![CDATA[SEO]]></category><category><![CDATA[API Gateway]]></category><dc:creator><![CDATA[Kris F]]></dc:creator><pubDate>Mon, 29 Jan 2024 11:31:53 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1706520446962/6c57f0df-6990-4d20-8438-1f7ea20bf64c.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h2 id="heading-when-should-you-read-this-guide">When should you read this guide</h2>
<p>If:</p>
<ol>
<li><p>You have a significant number of images, possibly millions, with varied usage patterns, that you serve to website visitors</p>
</li>
<li><p>These images are not static but dynamic, generated per user request</p>
</li>
<li><p>You care about load times, SEO impact, and cost</p>
</li>
</ol>
<p>You should not use this guide:</p>
<ol>
<li><p>If you have a few static images that can be easily pre-sized to appropriate size</p>
</li>
<li><p>If you're unable to make front-end code changes to image fetching.</p>
</li>
</ol>
<h2 id="heading-introduction">Introduction</h2>
<p>When serving images to web and mobile applications, it's common to face image-size optimization challenges: images are often sent without proper resizing, causing long transfer times, especially when handling many images at once.</p>
<p>This is typically not a problem with static images that can be pre-sized to various resolutions. However, if your web app doesn't yet know which images it needs to render, this issue can become more significant.</p>
<p>This solution involves enabling any front-end to request images in various sizes. This would be determined by the visitor’s device resolution, allowing for the delivery of an image that is optimally sized for each specific device.</p>
<p>Let's start with four image sizes (how many you pick is up to you):</p>
<ul>
<li><p>Small</p>
</li>
<li><p>Medium</p>
</li>
<li><p>Large</p>
</li>
<li><p>Extra Large</p>
</li>
</ul>
<p>Users can request the size most suitable for their needs. For instance, a mobile device might require a 'small' image, whereas a desktop could be better served with an 'extra large' one.</p>
<p>Each size category scales the original image as follows, while maintaining the aspect ratio for height:</p>
<ul>
<li><p>Small: 150px width</p>
</li>
<li><p>Medium: 300px width</p>
</li>
<li><p>Large: 600px width</p>
</li>
<li><p>Extra Large: 1200px width</p>
</li>
</ul>
<p>Again, these sizes can be different.</p>
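<p>The resize math behind these categories is simple: look up the target width for the label and scale the height by the same factor. A quick sketch of what the Lambda will do later (the helper name is mine):</p>

```javascript
// Map of size labels to target widths in pixels
const SIZES = { small: 150, medium: 300, large: 600, xlarge: 1200 };

// Compute target dimensions for a size label, preserving aspect ratio
function targetDimensions(label, origWidth, origHeight) {
  const width = SIZES[label];
  if (!width) throw new Error(`Unknown size label: ${label}`);
  // Scale the height by the same factor as the width
  const height = Math.round(origHeight * (width / origWidth));
  return { width, height };
}

console.log(targetDimensions('small', 600, 400)); // { width: 150, height: 100 }
```

<p>Note that a "small" request for an image that is already narrower than 150px would upscale it; in practice you may want to skip resizing in that case.</p>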
<p>Avoiding exact measurements for each image, such as 500x300, is strategic. It prevents the creation of an excessive number of image sizes (we'll store them later on in S3), which would otherwise increase storage costs and potentially slow down network transfers.</p>
<h2 id="heading-aws-tech-well-use">AWS tech we'll use</h2>
<ul>
<li><p>S3 (static site, redirection rules, generic S3 knowledge)</p>
</li>
<li><p>Lambda function (node js, lambda layers)</p>
</li>
<li><p>API gateway (generic API gateway knowledge)</p>
</li>
<li><p>npm, building Node projects, etc.</p>
</li>
<li><p>Cloudfront (for caching images)</p>
</li>
</ul>
<h2 id="heading-high-level-overview">High level overview</h2>
<p>There are three players in this game.</p>
<p>1. S3 bucket (this stores the original images, and the resized images)<br />2. API gateway (this is used as an entry point for non-resized images)<br />3. Lambda function (this is used to resize images and redirect to the new image)</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1706524847297/1626827e-c08b-4090-ad57-a849c9577d4e.png" alt class="image--center mx-auto" /></p>
<ol>
<li><p>User requests image via Cloudfront URL. Something like <a target="_blank" href="https://d12nyqr7iviw59.cloudfront.net/small/img/d23edd174f789a1f37706f28ed59eb78.png"><code>https://somethingsomething.cloudfront.net/small/img/catimage.png</code></a></p>
</li>
<li><p>Cloudfront checks its own cache if it has the image. If yes, it serves it back to the user. In this case, the process stops.</p>
</li>
<li><p>If Cloudfront doesn't have it in its cache, it goes to S3 to retrieve the resized file. If S3 has the resized file, it returns it to Cloudfront, which caches it and returns it to the user.</p>
</li>
<li><p>If S3 also doesn't have the resized image, it returns a 404, and the bucket's redirection rule forwards the call to API gateway.</p>
</li>
<li><p>API gateway in return forwards the call to the Lambda function, which resizes the image</p>
</li>
<li><p>Lambda then places the resized image in the S3 bucket and redirects the user to its new URL.</p>
</li>
</ol>
<p>As you see this would only need to <strong>execute once for each image</strong>. Any further calls would then get the resized image from Cloudfront.</p>
<h2 id="heading-the-setup"><strong>The setup</strong></h2>
<p>Below you’ll find the steps to set up the solution.</p>
<p>I’ll not detail a generic S3, Lambda or API Gateway setup or permissions. There’re plenty of tutorials online. I’ll mention all the peculiarities though.</p>
<h3 id="heading-s3-initial-setup">S3 initial setup</h3>
<ol>
<li><p>Set up an S3 bucket and dump some images on it</p>
</li>
<li><p>Set up public access to it</p>
</li>
<li><p>Set the permissions to allow public access (later on you can limit this to Cloudfront)</p>
</li>
<li><p>Enable static website hosting</p>
</li>
</ol>
<p>At this point, you should be able to access all files in the bucket from anywhere.  </p>
<h3 id="heading-lambda-api-gateway-setup">Lambda + API gateway setup</h3>
<ol>
<li><p>Create a function and attach an API gateway trigger.</p>
</li>
<li><p>Leave the gateway open to the public.</p>
</li>
<li><p>Set up Lambda permissions to S3 bucket</p>
</li>
</ol>
<p>Here’s the policy I used (for logging and S3 <code>PutObject</code>)</p>
<pre><code class="lang-json">{
    <span class="hljs-attr">"Version"</span>: <span class="hljs-string">"2012-10-17"</span>,
    <span class="hljs-attr">"Statement"</span>: [
        {
            <span class="hljs-attr">"Effect"</span>: <span class="hljs-string">"Allow"</span>,
            <span class="hljs-attr">"Action"</span>: [
                <span class="hljs-string">"logs:CreateLogGroup"</span>,
                <span class="hljs-string">"logs:CreateLogStream"</span>,
                <span class="hljs-string">"logs:PutLogEvents"</span>
            ],
            <span class="hljs-attr">"Resource"</span>: <span class="hljs-string">"arn:aws:logs:*:*:*"</span>
        },
        {
            <span class="hljs-attr">"Effect"</span>: <span class="hljs-string">"Allow"</span>,
            <span class="hljs-attr">"Action"</span>: <span class="hljs-string">"s3:PutObject"</span>,
            <span class="hljs-attr">"Resource"</span>: <span class="hljs-string">"arn:aws:s3:::img.yourbucket/*"</span>
        }
    ]
}
</code></pre>
<p>4. Create a node project on your local PC and add an index.js file with the following content. This is the code that does the resizing. Feel free to modify it as you wish. We'll upload this to Lambda.</p>
<pre><code class="lang-javascript"><span class="hljs-meta">'use strict'</span>;

<span class="hljs-keyword">const</span> AWS = <span class="hljs-built_in">require</span>(<span class="hljs-string">'aws-sdk'</span>);
<span class="hljs-keyword">const</span> S3 = <span class="hljs-keyword">new</span> AWS.S3({
  <span class="hljs-attr">signatureVersion</span>: <span class="hljs-string">'v4'</span>,
});
<span class="hljs-keyword">const</span> Sharp = <span class="hljs-built_in">require</span>(<span class="hljs-string">'sharp'</span>);

<span class="hljs-keyword">const</span> BUCKET = process.env.BUCKET;
<span class="hljs-keyword">const</span> URL = process.env.URL;

<span class="hljs-built_in">exports</span>.handler = <span class="hljs-function"><span class="hljs-keyword">function</span>(<span class="hljs-params">event, context, callback</span>) </span>{
  <span class="hljs-keyword">const</span> key = event.queryStringParameters.key;
  <span class="hljs-keyword">const</span> match = key.match(<span class="hljs-regexp">/(small|medium|large|xlarge)\/(.*)\.(jpg|jpeg|png)/</span>);
  <span class="hljs-comment">// Guard: bail out early if the key doesn't match the expected pattern</span>
  <span class="hljs-keyword">if</span> (!match) <span class="hljs-keyword">return</span> callback(<span class="hljs-literal">null</span>, {<span class="hljs-attr">statusCode</span>: <span class="hljs-string">'404'</span>, <span class="hljs-attr">body</span>: <span class="hljs-string">''</span>});

  <span class="hljs-keyword">const</span> scale = match[<span class="hljs-number">1</span>];
  <span class="hljs-keyword">const</span> imageName = match[<span class="hljs-number">2</span>];
  <span class="hljs-keyword">const</span> imageExtension = match[<span class="hljs-number">3</span>];
  <span class="hljs-keyword">const</span> imagePath = imageName + <span class="hljs-string">'.'</span> + imageExtension;

  <span class="hljs-keyword">const</span> sizes = <span class="hljs-keyword">new</span> <span class="hljs-built_in">Map</span>();
  sizes.set(<span class="hljs-string">'small'</span>, <span class="hljs-number">150</span>);
  sizes.set(<span class="hljs-string">'medium'</span>, <span class="hljs-number">300</span>);
  sizes.set(<span class="hljs-string">'large'</span>, <span class="hljs-number">600</span>);
  sizes.set(<span class="hljs-string">'xlarge'</span>, <span class="hljs-number">1200</span>);

  <span class="hljs-keyword">const</span> newSize = sizes.get(scale);
  <span class="hljs-keyword">let</span> contentType;
  <span class="hljs-keyword">if</span>  ([<span class="hljs-string">"png"</span>].includes(imageExtension.toLowerCase())) contentType = <span class="hljs-string">'image/png'</span>;
  <span class="hljs-keyword">if</span>  ([<span class="hljs-string">"jpg"</span>, <span class="hljs-string">"jpeg"</span>].includes(imageExtension.toLowerCase())) contentType = <span class="hljs-string">'image/jpeg'</span>;


  S3.getObject({<span class="hljs-attr">Bucket</span>: BUCKET, <span class="hljs-attr">Key</span>: imagePath}).promise()
    .then(<span class="hljs-function"><span class="hljs-params">data</span> =&gt;</span> Sharp(data.Body)
      .resize({
        <span class="hljs-attr">fit</span>: Sharp.fit.contain,
        <span class="hljs-attr">width</span>: newSize
      })
      .toBuffer()
    )
    .then(<span class="hljs-function"><span class="hljs-params">buffer</span> =&gt;</span> S3.putObject({
        <span class="hljs-attr">Body</span>: buffer,
        <span class="hljs-attr">Bucket</span>: BUCKET,
        <span class="hljs-attr">ContentType</span>: contentType,
        <span class="hljs-attr">Key</span>: key,
      }).promise()
    )
    .then(<span class="hljs-function">() =&gt;</span> callback(<span class="hljs-literal">null</span>, {
        <span class="hljs-attr">statusCode</span>: <span class="hljs-string">'301'</span>,
        <span class="hljs-attr">headers</span>: {<span class="hljs-string">'location'</span>: <span class="hljs-string">`<span class="hljs-subst">${URL}</span>/<span class="hljs-subst">${key}</span>`</span>},
        <span class="hljs-attr">body</span>: <span class="hljs-string">''</span>,
      })
    )
    .catch(<span class="hljs-function"><span class="hljs-params">err</span> =&gt;</span> callback(err))
}
</code></pre>
<p>Here’s the package.json.</p>
<pre><code class="lang-json">{
  <span class="hljs-attr">"name"</span>: <span class="hljs-string">"image-resize"</span>,
  <span class="hljs-attr">"version"</span>: <span class="hljs-string">"1.0.0"</span>,
  <span class="hljs-attr">"description"</span>: <span class="hljs-string">"Serverless image resizing"</span>,
  <span class="hljs-attr">"readme"</span>: <span class="hljs-string">"Serverless image resizing"</span>,
  <span class="hljs-attr">"main"</span>: <span class="hljs-string">"index.js"</span>,
  <span class="hljs-attr">"scripts"</span>: {
    <span class="hljs-attr">"build-copy"</span>: <span class="hljs-string">"npm install &amp;&amp; mkdir -p nodejs &amp;&amp; cp -r node_modules nodejs/ &amp;&amp; zip -r  {file-name}.zip nodejs"</span>
  },
  <span class="hljs-attr">"devDependencies"</span>: {
    <span class="hljs-attr">"aws-sdk"</span>: <span class="hljs-string">"^2.1046.0"</span>,
    <span class="hljs-attr">"sharp"</span>: <span class="hljs-string">"^0.29.3"</span>
  }
}
</code></pre>
<p>5. Run <code>npm install --arch=x64 --platform=linux</code>. This will create a node_modules folder and a package-lock file.<br />Because Sharp ships platform-specific native binaries, and Lambda runs on x64 Linux, you need to install the Linux variant even if you develop on macOS or Windows. Please see <a target="_blank" href="https://javascript.plainenglish.io/image-manipulation-with-sharp-aws-lambda-functions-layers-and-claudia-js-876d3dadcdb4">here</a>.</p>
<p>6. Create a nodejs folder and copy the node_modules folder into it. Zip the whole nodejs folder.<br />7. Create a new layer in Lambda and upload this zip as your layer. This is needed so you can separate your dependencies from your actual code.</p>
<p>8. Zip your package.json, your package-lock.json and index.js. Upload this file as your code to Lambda.</p>
<p>9. Attach the previously created layer to this code. This can be version one.</p>
<p>10. Add new environment variables. One for the bucket and one for the url path. See example below</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1706526423103/1bf3cc1d-ea42-4d43-91e5-e9f6af306ec3.png" alt class="image--center mx-auto" /></p>
<p>11. At this point your lambda function should work and you can test it by calling your API gateway EP. You can see your error logs for lambda in Cloudwatch.<br />The URL will be something like this: <strong>https://123456789.execute-api.us-west-1.amazonaws.com/default/image-resizer</strong></p>
<p>Where <code>/default/</code> is the stage you set and <code>/image-resizer</code> is your Lambda function name.</p>
<h3 id="heading-add-redirection-to-s3-bucket">Add redirection to S3 bucket</h3>
<p>To have this all working together, you need to tell the S3 bucket to redirect calls that fail to retrieve any object.</p>
<p>Add a redirection rule to your static site settings:</p>
<pre><code class="lang-json">[
    {
        <span class="hljs-attr">"Condition"</span>: {
            <span class="hljs-attr">"HttpErrorCodeReturnedEquals"</span>: <span class="hljs-string">"404"</span>
        },
        <span class="hljs-attr">"Redirect"</span>: {
            <span class="hljs-attr">"HostName"</span>: <span class="hljs-string">"123456789.execute-api.eu-west-1.amazonaws.com"</span>,
            <span class="hljs-attr">"HttpRedirectCode"</span>: <span class="hljs-string">"307"</span>,
            <span class="hljs-attr">"Protocol"</span>: <span class="hljs-string">"https"</span>,
            <span class="hljs-attr">"ReplaceKeyPrefixWith"</span>: <span class="hljs-string">"default/image-resizer?key="</span>
        }
    }
]
</code></pre>
<p>Where <code>HttpErrorCodeReturnedEquals</code> is a condition that triggers when a 404 (not found) is returned from the bucket on a particular path.</p>
<p><code>HostName</code> is your API gateway hostname</p>
<p><code>HttpRedirectCode</code> is what you tell the user’s browser in the response (307 is a temporary redirect)</p>
<p><code>Protocol</code> is the secure http protocol</p>
<p><code>ReplaceKeyPrefixWith</code> is used to replace incoming paths with other paths (along with <code>KeyPrefixEquals</code> tag in the condition block, which isn’t there now). In our case it simply adds the <code>default/image-resizer?key=</code> just after the domain name, making the rest of the path a query parameter instead, which we can parse in the Lambda function.</p>
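<p>To make the rewrite concrete, here's a tiny function simulating what the rule does to the path of a missing object (the host name is the placeholder from the rule above):</p>

```javascript
// Simulate the S3 redirect rule: a request path that 404s on the bucket
// is rewritten to the API gateway URL, with the original object key
// carried over as a query parameter that the Lambda can parse.
function redirectUrl(objectKey, hostName) {
  // ReplaceKeyPrefixWith inserts "default/image-resizer?key=" before the key
  return `https://${hostName}/default/image-resizer?key=${objectKey}`;
}

console.log(redirectUrl(
  'small/img/cat.png',
  '123456789.execute-api.eu-west-1.amazonaws.com'
));
// https://123456789.execute-api.eu-west-1.amazonaws.com/default/image-resizer?key=small/img/cat.png
```

<p>That <code>key</code> query parameter is exactly what the Lambda reads from <code>event.queryStringParameters.key</code>.</p>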
<h3 id="heading-add-cloudfront-https">Add Cloudfront + HTTPS</h3>
<p>Add a CloudFront distribution in front of the S3 bucket that contains the images. When you add your S3 bucket as the origin, make sure you <strong>don’t pre-select the bucket AWS offers you</strong>.<br />If you do, AWS will not use the static site endpoint (which has the redirection set up). The issue you’ll face is a tricky one: your existing images will serve perfectly fine via Cloudfront, but new (not yet resized) images will return an “Access Denied” response. This response is misleading, because CloudFront returns it whenever an object doesn’t exist on S3.</p>
<p>So to avoid that, make sure you use the static site EP URL.</p>
<p>Additionally, change your Lambda function's URL environment variable to point to Cloudfront instead of the S3 bucket. This helps your browser serve HTTPS images at all times (even on the first, uncached call)</p>
<p>This will have the following advantages:</p>
<ul>
<li><p>it'll save you money by serving from Cloudfront instead of S3</p>
</li>
<li><p>speed up network requests for users</p>
</li>
</ul>
<h2 id="heading-additional-improvements-to-consider">Additional improvements to consider</h2>
<h3 id="heading-add-lifecycle-rules-for-resized-images-only">Add Lifecycle rules for resized images only</h3>
<p>This would be quite handy, as some images are only accessed once, and never again. Still, we’ll end up paying for the storage costs of those images. Deleting those images when not needed anymore would “free up space” on the bucket.</p>
<p>Reason why it’s <strong>recommended</strong>:<br />- not paying to store images that are never requested again saves money.</p>
<h3 id="heading-add-webp-conversion-recommended-but-later">Add WEBP conversion (recommended, but later)</h3>
<p>WebP is a modern image format designed for the web. It is smaller than JPEG at comparable quality.<br />(see: <a target="_blank" href="https://developers.google.com/speed/webp/docs/webp_study">WebP Compression Study | Google for Developers</a>)</p>
<p>Sharp gives you the option to convert images to WebP on the fly, further reducing the image size.</p>
<p>Reason why it’s <strong>recommended</strong>:<br />- smaller image size means smaller load times for users and faster pages<br />- extremely easy to implement and switch to webP at any time on our lambda function<br />- if you replace all your jpg and png with WebP, you'll save money on storage costs in S3</p>
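<p>If you go the WebP route, one simple approach is content negotiation: serve WebP only to browsers that advertise support in their <code>Accept</code> header. A minimal sketch (a pure helper with my own naming, not part of the Lambda code above):</p>

```javascript
// Decide whether the client can receive WebP, based on its Accept header.
// Browsers that support WebP include "image/webp" in the Accept header
// they send for image requests.
function prefersWebp(acceptHeader) {
  return typeof acceptHeader === 'string' &&
    acceptHeader.toLowerCase().includes('image/webp');
}

console.log(prefersWebp('image/avif,image/webp,image/*,*/*;q=0.8')); // true
console.log(prefersWebp('image/png,image/*;q=0.8'));                 // false
```

<p>In the Lambda, a positive result would let you call Sharp's WebP conversion and store the .webp variant alongside (or instead of) the original format.</p>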
]]></content:encoded></item><item><title><![CDATA[Essential tips for securing your first AWS account]]></title><description><![CDATA[At account creation
A secure password
At account creation make sure you pick a strong password. Understandably "strong password" is a vague explanation, but you type in something similar to a service like this:https://bitwarden.com/password-strength/...]]></description><link>https://krisfeher.com/essential-tips-for-securing-your-first-aws-account</link><guid isPermaLink="true">https://krisfeher.com/essential-tips-for-securing-your-first-aws-account</guid><category><![CDATA[AWS]]></category><category><![CDATA[Security]]></category><category><![CDATA[IAM]]></category><category><![CDATA[accounting]]></category><category><![CDATA[MFA]]></category><dc:creator><![CDATA[Kris F]]></dc:creator><pubDate>Wed, 24 Jan 2024 14:49:25 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1706520768838/b087e236-5923-4f0f-bf07-7bed93897443.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h2 id="heading-at-account-creation">At account creation</h2>
<h3 id="heading-a-secure-password">A secure password</h3>
<p>At account creation, make sure you pick a strong password. Understandably, "strong password" is a vague instruction, but you can type something similar into a service like this:<br /><a target="_blank" href="https://bitwarden.com/password-strength/">https://bitwarden.com/password-strength/</a></p>
<p>Please note that even though Bitwarden is a trusted site, I would by no means enter a real password of mine. Type in something of a similar format and length to get a rough estimate of how secure your chosen password is.</p>
<p>Here's an example I tried. It's deceptively easy to remember:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1706096058978/1dbf488f-584d-4e48-bc59-4917148f267c.png" alt class="image--center mx-auto" /></p>
<p>Once you've gone through account creation, you can log into your root account.</p>
<h2 id="heading-after-logging-in-with-root-account">After logging in with root account</h2>
<h3 id="heading-setting-up-mfa">Setting up MFA</h3>
<p>The AWS root account generally isn't meant for day-to-day operations.<br />It holds all the power associated with your account. It can do literally anything.</p>
<p>To avoid abuse, the second thing you should do is enable MFA on your root account.</p>
<ul>
<li>To do this, navigate to your security credentials:</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1706096296260/c0247389-8cf3-4ade-b1f7-4ac2af31fcb8.png" alt="security credentials menu screenshot" class="image--center mx-auto" /></p>
<ul>
<li><p>At the <strong>Multi-factor authentication (MFA)</strong> section click on "<strong>Add MFA Device</strong>"</p>
</li>
<li><p>Add a name above to associate it with the device and click on Authenticator APP</p>
</li>
<li><p>Follow the on-screen instructions to add your device:</p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1706096581592/ca2d8bdf-2749-49d6-8ce2-5d6a1f5bb6d7.png" alt="screenshot of set up device option in AWS" class="image--center mx-auto" /></p>
</li>
<li><p>It asks for two consecutive MFA codes; to get the second one, just wait about 30 seconds for your MFA app to show the next code.</p>
</li>
<li><p>Once done, you can log out of your account and test your MFA login  </p>
</li>
</ul>
<h3 id="heading-setting-up-a-budget-alert">Setting up a budget alert</h3>
<p>To alert you if your spending goes beyond a certain threshold, you can set up budget alerts.</p>
<p>To do this, navigate to <code>Billing and Cost Management =&gt; Budgets =&gt; Create budget</code></p>
<p>You can then set a monthly amount you're comfortable with, along with an alert email.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1706103277103/4a6bf6e6-a041-4cae-827e-24c0c590857f.png" alt class="image--center mx-auto" /></p>
<p>This will alert you any time you're close to reaching your budget; however, it'll do nothing else!</p>
<div data-node-type="callout">
<div data-node-type="callout-emoji">❗</div>
<div data-node-type="callout-text">These alerts will <strong>NOT </strong>send you an email immediately once the threshold is reached, but rather at some point during the day.</div>
</div>

<h3 id="heading-update-billing-preferences">Update Billing Preferences</h3>
<p>Navigate to <code>Billing and Cost Management =&gt; Billing Preferences</code></p>
<p>Select both these options:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1706103725671/1bf476a1-8412-45ac-9c20-33c4a6dfc566.png" alt class="image--center mx-auto" /></p>
<p>The first option will send you alerts if you exceed Free Tier limits.<br />The second option enables CloudWatch alarms to be created around billing.</p>
<div data-node-type="callout">
<div data-node-type="callout-emoji">💡</div>
<div data-node-type="callout-text">Cloudwatch billing alerts are at most 6h late. They're not immediate.</div>
</div>

<h3 id="heading-create-a-cloudwatch-billing-alarm">Create a cloudwatch billing alarm</h3>
<p>Navigate to <code>CloudWatch =&gt; Alarms =&gt; Create alarm</code></p>
<p>For metric, select <code>All =&gt; Billing =&gt; Total Estimated Charge</code></p>
<p>Set the metric statistic to <code>maximum</code> over a <code>6h</code> period. As noted above, anything shorter than 6h will cause the warning below, as billing data isn't updated more often than every 6h.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1706104143860/2ebba00a-67d3-4254-a2de-1c802be688f1.png" alt class="image--center mx-auto" /></p>
<p>Choose a condition to trigger when it's greater than your set amount. As an example:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1706104260847/6eea0d7d-ec01-4d64-9421-c2cc49e13ffa.png" alt class="image--center mx-auto" /></p>
<p>Click next, and on the next page create a target notification for the alarm.</p>
<p>Create a new topic and set your email address there as below:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1706104410836/2a75b784-ad7c-469a-ab1a-1897c64c1f64.png" alt class="image--center mx-auto" /></p>
<p>Create this topic.</p>
<p>On the next page, give it a name and then review your alarm.</p>
<p>Once you've done that, you'll receive an email that you need to confirm.</p>
<p>After all this, your alarm is active!</p>
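<p>If you prefer to script the console steps above, here's a rough boto3-style sketch of the same alarm. The threshold and SNS topic ARN are placeholders for your own values, and note that billing metrics only exist in <code>us-east-1</code>.</p>

```python
# Build the parameters for the CloudWatch billing alarm described above.
# The SNS topic ARN and threshold are placeholders -- use your own values.
import json

def billing_alarm_params(threshold_usd, sns_topic_arn):
    """Parameters for a CloudWatch alarm on total estimated charges."""
    return {
        "AlarmName": f"billing-over-{threshold_usd}-usd",
        "Namespace": "AWS/Billing",
        "MetricName": "EstimatedCharges",
        "Dimensions": [{"Name": "Currency", "Value": "USD"}],
        "Statistic": "Maximum",
        "Period": 6 * 60 * 60,  # 6h -- billing data isn't updated more often
        "EvaluationPeriods": 1,
        "Threshold": float(threshold_usd),
        "ComparisonOperator": "GreaterThanThreshold",
        "AlarmActions": [sns_topic_arn],
    }

params = billing_alarm_params(10, "arn:aws:sns:us-east-1:123456789012:billing-alerts")
print(json.dumps(params, indent=2))
# To actually create the alarm (requires credentials and the boto3 package):
# import boto3
# boto3.client("cloudwatch", region_name="us-east-1").put_metric_alarm(**params)
```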
<h3 id="heading-do-nots">Do NOTs</h3>
<ul>
<li><p>do not use the root account for day-to-day operations</p>
</li>
<li><p>do not create an access key for the root account</p>
</li>
<li><p>do not share your credentials with anyone</p>
</li>
</ul>
<h3 id="heading-creating-a-new-day-to-day-user">Creating a new day-to-day user</h3>
<p>To avoid using your AWS root account, create a new user for logging in and performing daily activities.</p>
<p>Go to <code>IAM =&gt; Users =&gt; Create user</code><br />Create a new user with <code>AWS Management Console access</code>, and select<br /><code>I want to create an IAM user</code></p>
<p>Set them a password, then on the permissions page assign <a target="_blank" href="https://us-east-1.console.aws.amazon.com/iam/home?region=us-east-1#/policies/details/arn%3Aaws%3Aiam%3A%3Aaws%3Apolicy%2FAdministratorAccess"><code>AdministratorAccess</code></a> to them.</p>
<p>Click on create user and then log out and log back in with your newly created user.</p>
<p>On the newly created user, enable MFA just like you did with the root account.</p>
<p>Once you've done the above, your <code>IAM =&gt; Dashboard</code> should look like this:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1706106035004/155bd678-0e6d-4cd8-8390-67ba13456065.png" alt class="image--center mx-auto" /></p>
<p>And that's done!</p>
<p>Of course, there's much more you can do to enhance security, but the steps outlined above provide a solid starting point for those just beginning their journey.</p>
]]></content:encoded></item><item><title><![CDATA[How to do cross-cloud backup replication]]></title><description><![CDATA[Introduction
A good practice to do backups is to follow the 3-2-1 rule. That's having

3 backups of the same data

2 of them on separate media

1 of them off-site


Translating this to the cloud we can do:

3 backups of the same data

2 of them are o...]]></description><link>https://krisfeher.com/how-to-do-cross-cloud-backup-replication</link><guid isPermaLink="true">https://krisfeher.com/how-to-do-cross-cloud-backup-replication</guid><category><![CDATA[S3]]></category><category><![CDATA[GCP]]></category><category><![CDATA[cloud functions]]></category><category><![CDATA[backups]]></category><category><![CDATA[storage transfer service]]></category><dc:creator><![CDATA[Kris F]]></dc:creator><pubDate>Fri, 03 Nov 2023 09:36:15 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1706521188160/5a479113-baea-4632-8b9b-48dd5391d8eb.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h2 id="heading-introduction">Introduction</h2>
<p>A good practice to do backups is to follow the 3-2-1 rule. That's having</p>
<ul>
<li><p>3 backups of the same data</p>
</li>
<li><p>2 of them on separate media</p>
</li>
<li><p>1 of them off-site</p>
</li>
</ul>
<p>Translating this to the cloud we can do:</p>
<ul>
<li><p>3 backups of the same data</p>
</li>
<li><p>2 of them are on separate clouds</p>
</li>
<li><p>1 of them offline</p>
</li>
</ul>
<p>In this guide we'll deal with the "separate cloud" approach.</p>
<h2 id="heading-options-you-may-have">Options you may have</h2>
<p>There are various options out there, and they generally fall into these categories:</p>
<ol>
<li><p>Self-build scripts</p>
</li>
<li><p>Native Cloud products (AWS, GCP, Azure, etc.)</p>
</li>
<li><p>Off-the-shelf 3rd party products</p>
</li>
</ol>
<p>While any of the above could solve your problem, this guide will focus on #2, specifically AWS =&gt; GCP.</p>
<h2 id="heading-initial-setup">Initial setup</h2>
<p>You'll need the following:</p>
<ul>
<li><p>An S3 bucket on AWS with something to back up</p>
</li>
<li><p>A GCP bucket you'll clone the files to</p>
</li>
</ul>
<p>The tools we'll use:</p>
<ul>
<li><p>AWS S3</p>
</li>
<li><p>AWS IAM</p>
</li>
<li><p>GCP Storage Transfer Service</p>
</li>
<li><p>GCP Cloud Storage</p>
</li>
<li><p>GCP Pub/Sub</p>
</li>
<li><p>GCP Cloud Functions</p>
</li>
<li><p>(GCP EventArc)</p>
</li>
</ul>
<p>How will it work?</p>
<ol>
<li><p>Item is uploaded to AWS S3</p>
</li>
<li><p>GCP Storage Transfer Service fires weekly/daily/etc. to copy over files</p>
</li>
<li><p>On fail/success, Storage service publishes a message on GCP Pub/Sub</p>
</li>
<li><p>On a Pub/Sub message, EventArc is triggered, which calls GCP Cloud Functions</p>
</li>
<li><p>GCP Cloud function takes the event, extracts the status and sends it to Slack</p>
</li>
</ol>
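<p>Step 5 boils down to pulling two attributes out of the Pub/Sub message. Here's a minimal Python sketch of that extraction (the actual Cloud Function later in this guide is written in Node.js; the field names follow the Storage Transfer Service notification format):</p>

```python
def slack_payload(cloud_event):
    """Extract transfer status and job name from a Storage Transfer
    Service Pub/Sub notification (attributes per the event format)."""
    attrs = (cloud_event.get("data", {})
                        .get("message", {})
                        .get("attributes", {}))
    return {
        "status": attrs.get("eventType", "UNKNOWN"),
        "description": f"JOB: {attrs.get('transferJobName', 'unknown')}",
    }

# A sample event shaped like what the transfer job publishes
sample = {"data": {"message": {"attributes": {
    "eventType": "TRANSFER_OPERATION_FAILED",
    "transferJobName": "transferJobs/12345",
}}}}
print(slack_payload(sample))
```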
<h2 id="heading-step-by-step-tutorial-on-how-to-do-the-above">Step-by-step tutorial on how to do the above</h2>
<h3 id="heading-create-a-gcp-bucket">Create a GCP bucket</h3>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1699000681015/4d4db3f1-d880-4b5f-b0b0-d33c666c597a.png" alt class="image--center mx-auto" /></p>
<p>This is a fairly simple task: you can pick a name, storage class, region, etc. for your bucket.</p>
<p>Make sure the bucket isn't public, and its name is unique.</p>
<h3 id="heading-create-an-iam-user-and-access-key-in-aws">Create an IAM user and access key in AWS</h3>
<p>This will be used by your GCP service.</p>
<p>Here’s an example permission:</p>
<pre><code class="lang-json">{
    <span class="hljs-attr">"Version"</span>: <span class="hljs-string">"2012-10-17"</span>,
    <span class="hljs-attr">"Statement"</span>: [
        {
            <span class="hljs-attr">"Effect"</span>: <span class="hljs-string">"Allow"</span>,
            <span class="hljs-attr">"Action"</span>: [
                <span class="hljs-string">"s3:GetObject"</span>,
                <span class="hljs-string">"s3:GetObjectAcl"</span>,
                <span class="hljs-string">"s3:ListBucket"</span>
            ],
            <span class="hljs-attr">"Resource"</span>: [
                <span class="hljs-string">"arn:aws:s3:::your-db-backups"</span>,
                <span class="hljs-string">"arn:aws:s3:::your-db-backups/*"</span>
            ]
        }
    ]
}
</code></pre>
<p>Then create an Access Key and Secret Key for this user and take (temporary) note of them.</p>
<p>That's everything we need from AWS for this tutorial. On to GCP!</p>
<h3 id="heading-create-a-storage-transfer-service-job">Create a Storage Transfer service job</h3>
<ol>
<li><p>Choose source (S3) and destination (GCP)</p>
</li>
<li><p>Set up the source details with the Access key you generated in the previous step</p>
</li>
<li><p>Choose destination GCP bucket (you may need to enable permissions here for the service account)</p>
</li>
<li><p>Choose how often to run the job (or just run it once)</p>
</li>
<li><p>Choose further misc. options (like deletion, overwrite, etc.) and make sure "<code>Get transfer operation status updates via Cloud Pub/Sub notifications</code>" is clicked.<br /> Create a new Pub/Sub topic here and select that.</p>
</li>
<li><p>Done. At this point you can already test the job.</p>
</li>
</ol>
<h3 id="heading-create-a-cloud-function">Create a Cloud function</h3>
<ol>
<li><p>Go to your Pub/Sub topic that was created on storage service creation</p>
</li>
<li><p>Click on “Trigger Cloud Function”, which will prompt you to create a new function there</p>
</li>
<li><p>Add anything there (for now). This will create a basic function along with EventArc trigger, and hook them up together</p>
</li>
<li><p>Go to your function and write the code to call Slack. Here's the one I used:</p>
</li>
</ol>
<pre><code class="lang-javascript">const axios = require('axios');

const functions = require('@google-cloud/functions-framework');
const url = `to be filled in`;

functions.cloudEvent('notifySlack', async cloudEvent =&gt; {
  let payload;

  try {
    console.log('CloudEvent:', JSON.stringify(cloudEvent, null, 2));

    payload = {
      status: cloudEvent?.data?.message?.attributes?.eventType,
      description: `JOB: ${cloudEvent?.data?.message?.attributes?.transferJobName}`
    };
  }
  catch (error) {
    payload = {
      status: "GCP db backup status",
      description: "An exception has happened in GCP Cloud Functions, the backup has failed."
    };
  }

  // Post the payload to the Slack webhook; rethrow on failure so the
  // function execution is marked as failed.
  try {
    const response = await axios.post(url, payload);
    console.log('Slack response:', response.data);
  }
  catch (error) {
    console.error('Failed to post to Slack:', error);
    throw error;
  }
});
</code></pre>
<ol start="5">
<li>You can test it via CloudShell and see the console log outputting some text.</li>
</ol>
<h3 id="heading-create-your-slack-webhook">Create your Slack Webhook</h3>
<p>You can of course create any other notification method, or call an endpoint, but here we'll use Slack's built-in webhooks.</p>
<ol>
<li><p>Find the "Workflow builder" in Slack Automations and create a new "Workflow"</p>
</li>
<li><p>As trigger create a Webhook and take a note of it</p>
</li>
<li><p>As an action send a message to your slack channel</p>
</li>
<li><p>Go back to your Cloud function and add the webhook URL</p>
</li>
<li><p>Done!</p>
</li>
</ol>
<p>At this point you should be able to test your implementation.</p>
<p>A good way to test it with minimal side effects is to disable the IAM access key used by the transfer job.<br />This will make the Storage Transfer job fail and send a message to Slack.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1699003833165/5475aa8c-e43c-49b5-98eb-94205958dd0a.png" alt class="image--center mx-auto" /></p>
]]></content:encoded></item><item><title><![CDATA[How to set up automatic SQL dumps from an EC2 hosted DB to S3 bucket]]></title><description><![CDATA[This guide
This guide covers automated, regular SQL dumps to S3.
The AWS services to use for this are:

Systems manager “runcommand” feature to start the backup

S3 to have the backup stored

A Lambda function that does the runcommand operation

Even...]]></description><link>https://krisfeher.com/how-to-set-up-automatic-sql-dumps-from-an-ec2-hosted-db-to-s3-bucket</link><guid isPermaLink="true">https://krisfeher.com/how-to-set-up-automatic-sql-dumps-from-an-ec2-hosted-db-to-s3-bucket</guid><category><![CDATA[AWS]]></category><category><![CDATA[aws lambda]]></category><category><![CDATA[Backup]]></category><category><![CDATA[AWS EventBridge]]></category><category><![CDATA[S3]]></category><dc:creator><![CDATA[Kris F]]></dc:creator><pubDate>Mon, 31 Jul 2023 13:56:11 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1706521505904/ae793082-dc2d-477d-b51e-fdfb8858a812.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h2 id="heading-this-guide">This guide</h2>
<p>This guide covers automated, regular SQL dumps to S3.</p>
<p>The AWS services to use for this are:</p>
<ul>
<li><p><strong>Systems manager</strong> “runcommand” feature to start the backup</p>
</li>
<li><p><strong>S3</strong> to have the backup stored</p>
</li>
<li><p>A <strong>Lambda</strong> function that does the runcommand operation</p>
</li>
<li><p><strong>Eventbridge</strong> to schedule the lambda execution</p>
</li>
</ul>
<p>Prerequisites:</p>
<p><a target="_blank" href="https://hashnode.com/post/clkcfi7l3000709l89zkfebv2">AWS Session Manager</a> (within Systems Manager)</p>
<p>Advantages:</p>
<ul>
<li><p>You're not required to have a separate machine doing backups or use the DB host EC2 for this purpose.</p>
</li>
<li><p>No need to open any inbound ports (only outbound port 443 to the Systems Manager endpoint)</p>
</li>
<li><p>You can add logging and alerts to Cloudwatch to complement your existing logging</p>
</li>
<li><p>All backup orchestration would still be in a central place, within Lambda and EventBridge</p>
</li>
<li><p>Data goes straight from the database to S3 (no intermediary)</p>
</li>
</ul>
<p>Disadvantages:</p>
<ul>
<li><p>Takes a little while to set up</p>
</li>
<li><p>Costs a tiny bit of money<br />  <em>No charge for</em> =&gt; RunCommand, Session manager, EventBridge(free tier), Lambda (free tier),<br />  <em>Small charge for</em> =&gt; S3, depending on DB size</p>
</li>
</ul>
<h2 id="heading-set-up">Set up</h2>
<h3 id="heading-session-manager">Session manager</h3>
<p>Make sure you have Session manager enabled and working with the EC2 that hosts the database</p>
<h3 id="heading-lambda-setup">Lambda setup</h3>
<p>Create a lambda with the following code:</p>
<pre><code class="lang-python"><span class="hljs-keyword">import</span> boto3
<span class="hljs-keyword">from</span> botocore.exceptions <span class="hljs-keyword">import</span> ClientError

<span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">lambda_handler</span>(<span class="hljs-params">event, context</span>):</span>
    ssm = boto3.client(<span class="hljs-string">'ssm'</span>)
    instance_id = <span class="hljs-string">'i-123456789'</span> 
    database_user = <span class="hljs-string">'db_user'</span> 
    database_password = <span class="hljs-string">'db_password'</span> 
    database_name = <span class="hljs-string">'db_name'</span>
    bucket_name = <span class="hljs-string">'db-backups-bucket'</span>
    backup_file = <span class="hljs-string">'backup_filename'</span> 

    <span class="hljs-comment"># The command to backup the database and upload to S3</span>
    backup_command = <span class="hljs-string">f"mysqldump -h 127.0.0.1 -u <span class="hljs-subst">{database_user}</span> -p<span class="hljs-subst">{database_password}</span> <span class="hljs-subst">{database_name}</span> | aws s3 cp - s3://<span class="hljs-subst">{bucket_name}</span>/<span class="hljs-subst">{backup_file}</span>_$(date +%Y%m%d).sql"</span>

    <span class="hljs-keyword">try</span>:
        response = ssm.send_command(
            InstanceIds=[
                instance_id,
            ],
            DocumentName=<span class="hljs-string">'AWS-RunShellScript'</span>,  <span class="hljs-comment"># runs commands in the EC2 instance</span>
            Parameters={
                <span class="hljs-string">'commands'</span>: [backup_command]
            },
        )
        print(<span class="hljs-string">f"Command sent to instance <span class="hljs-subst">{instance_id}</span>. Response: <span class="hljs-subst">{response}</span>"</span>)
    <span class="hljs-keyword">except</span> ClientError <span class="hljs-keyword">as</span> e:
        <span class="hljs-comment"># Return early so we don't index into a response that doesn't exist</span>
        print(<span class="hljs-string">f"Unexpected error when sending command to EC2 instance: <span class="hljs-subst">{e}</span>"</span>)
        <span class="hljs-keyword">return</span> {
            <span class="hljs-string">'statusCode'</span>: <span class="hljs-number">500</span>,
            <span class="hljs-string">'body'</span>: <span class="hljs-string">f"Failed to send command to instance <span class="hljs-subst">{instance_id}</span>: <span class="hljs-subst">{e}</span>"</span>
        }

    <span class="hljs-keyword">return</span> {
        <span class="hljs-string">'statusCode'</span>: <span class="hljs-number">200</span>,
        <span class="hljs-string">'body'</span>: <span class="hljs-string">f"Command sent to instance <span class="hljs-subst">{instance_id}</span>. Command ID: <span class="hljs-subst">{response[<span class="hljs-string">'Command'</span>][<span class="hljs-string">'CommandId'</span>]}</span>"</span>
    }
</code></pre>
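<p>Before wiring everything up, it can help to sanity-check locally the command string the Lambda builds. A small sketch using the same placeholder values as above:</p>

```python
# Rebuild the backup command locally with the same placeholder values
# as the Lambda above, to verify its shape before deploying.
database_user = "db_user"
database_password = "db_password"
database_name = "db_name"
bucket_name = "db-backups-bucket"
backup_file = "backup_filename"

backup_command = (
    f"mysqldump -h 127.0.0.1 -u {database_user} -p{database_password} "
    f"{database_name} | aws s3 cp - "
    f"s3://{bucket_name}/{backup_file}_$(date +%Y%m%d).sql"
)
# $(date +%Y%m%d) is expanded by the shell on the EC2 instance,
# so each day's dump gets its own object key.
print(backup_command)
```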
<p>This will create an IAM ROLE for this Lambda function. Add this extra permission to the role (beyond the Log creation one that was added automatically on creation)</p>
<pre><code class="lang-json">{
    <span class="hljs-attr">"Version"</span>: <span class="hljs-string">"2012-10-17"</span>,
    <span class="hljs-attr">"Statement"</span>: [
        {
            <span class="hljs-attr">"Effect"</span>: <span class="hljs-string">"Allow"</span>,
            <span class="hljs-attr">"Action"</span>: <span class="hljs-string">"ssm:SendCommand"</span>,
            <span class="hljs-attr">"Resource"</span>: [
                <span class="hljs-string">"arn:aws:ec2:region:account:instance/i-123456789"</span>,
                <span class="hljs-string">"arn:aws:ssm:region::document/AWS-RunShellScript"</span>
            ]
        }
    ]
}
</code></pre>
<p>This will give permission to your lambda function to run a shell script on a particular EC2 machine.</p>
<div data-node-type="callout">
<div data-node-type="callout-emoji">💡</div>
<div data-node-type="callout-text">Lambda will ask you about logging. If you use CloudWatch for logs, turn the log retention down to your liking, as the default is "never expire".</div>
</div>

<p>You can test this code by clicking "Test" and run it with no test input.</p>
<p><em>If you see an error</em> =&gt; it's probably a permission issue</p>
<p><em>If you see no error</em> =&gt; your shell script ran, but it may still have failed.</p>
<p>You can look at the execution logs in Run Command within Systems Manager and see something like this:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1690810643434/32b70d8a-565d-4c8b-af04-d8ec0c3e7010.png" alt class="image--center mx-auto" /></p>
<p>Clicking on the Failed item, you can see some details about the error:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1690810757067/fde2c401-9977-42d2-a4a5-46924d7af124.png" alt class="image--center mx-auto" /></p>
<p>As at this point this isn't set up fully, you'll probably see an error corresponding to not having access to S3. Let's do that next!</p>
<h2 id="heading-s3-setup">S3 setup</h2>
<p>Make sure you have a bucket you want to place the backups in. Please don't have this bucket public 🙂</p>
<p>Add an IAM role to the EC2 machine to allow upload to S3 (this way Lambda isn't required to handle large amounts of data or wait until execution finishes)</p>
<p>This policy worked for me to access the bucket. It omits delete permission for safety.</p>
<pre><code class="lang-json">{
    <span class="hljs-attr">"Version"</span>: <span class="hljs-string">"2012-10-17"</span>,
    <span class="hljs-attr">"Statement"</span>: [
        {
            <span class="hljs-attr">"Effect"</span>: <span class="hljs-string">"Allow"</span>,
            <span class="hljs-attr">"Action"</span>: [
                <span class="hljs-string">"s3:ListAllMyBuckets"</span>
            ],
            <span class="hljs-attr">"Resource"</span>: <span class="hljs-string">"arn:aws:s3:::*"</span>
        },
        {
            <span class="hljs-attr">"Effect"</span>: <span class="hljs-string">"Allow"</span>,
            <span class="hljs-attr">"Action"</span>: [
                <span class="hljs-string">"s3:ListBucket"</span>,
                <span class="hljs-string">"s3:GetBucketLocation"</span>
            ],
            <span class="hljs-attr">"Resource"</span>: <span class="hljs-string">"arn:aws:s3:::db-backups-bucket"</span>
        },
        {
            <span class="hljs-attr">"Effect"</span>: <span class="hljs-string">"Allow"</span>,
            <span class="hljs-attr">"Action"</span>: [
                <span class="hljs-string">"s3:PutObject"</span>,
                <span class="hljs-string">"s3:PutObjectAcl"</span>,
                <span class="hljs-string">"s3:GetObject"</span>,
                <span class="hljs-string">"s3:GetObjectAcl"</span>,
                <span class="hljs-string">"s3:AbortMultipartUpload"</span>,
                <span class="hljs-string">"s3:ListMultipartUploadParts"</span>,
                <span class="hljs-string">"s3:ListBucketMultipartUploads"</span>
            ],
            <span class="hljs-attr">"Resource"</span>: [
                <span class="hljs-string">"arn:aws:s3:::db-backups-bucket"</span>,
                <span class="hljs-string">"arn:aws:s3:::db-backups-bucket/*"</span>
            ]
        }
    ]
}
</code></pre>
<p>Once this is done, re-test your lambda and it should work.</p>
<p>You can verify that the SQL backup file is in S3.</p>
<h2 id="heading-eventbridge">Eventbridge</h2>
<p>Go to EventBridge Scheduler (a newer EventBridge feature).</p>
<p>Create a daily schedule to run a Lambda function:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1690811429902/2dc531c1-29be-49d4-9638-eb9515b60948.png" alt class="image--center mx-auto" /></p>
<p>EventBridge will then create an IAM role for itself with permission to execute the Lambda function (or you can use your own).</p>
<p>Once this is done, you have a functioning daily backup.</p>
<h2 id="heading-some-things-to-improve">Some things to improve</h2>
<ul>
<li><p>As S3 will currently keep every backup forever, you can expire old backups automatically with S3 lifecycle rules.</p>
</li>
<li><p>Create an alarm via CloudWatch to notify you of any issues with the backup</p>
</li>
<li><p>Regularly check the integrity of the backups</p>
</li>
</ul>
]]></content:encoded></item><item><title><![CDATA[How to whitelist a large number of IPs for an EC2]]></title><description><![CDATA[Introduction
You may occasionally come across services that need access to your AWS infrastructure, or an EC2 machine.
One example of this is allowing Bitbucket build machines to SSH into EC2 within AWS.
A quick guide

create a file called bitbucket...]]></description><link>https://krisfeher.com/how-to-whitelist-a-large-number-of-ips-for-an-ec2</link><guid isPermaLink="true">https://krisfeher.com/how-to-whitelist-a-large-number-of-ips-for-an-ec2</guid><category><![CDATA[cli]]></category><category><![CDATA[IAM]]></category><category><![CDATA[Security]]></category><category><![CDATA[Bitbucket]]></category><category><![CDATA[IP]]></category><dc:creator><![CDATA[Kris F]]></dc:creator><pubDate>Mon, 24 Jul 2023 13:10:20 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1706521613816/04cf54ef-04a5-45af-9c80-73abcb72a9ec.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h2 id="heading-introduction">Introduction</h2>
<p>You may occasionally come across services that need access to your AWS infrastructure, or to an EC2 machine.</p>
<p>One example of this is allowing Bitbucket build machines to SSH into EC2 within AWS.</p>
<h2 id="heading-a-quick-guide">A quick guide</h2>
<ol>
<li><p>Create a file called <code>bitbucket_whitelist.txt</code></p>
<p> For convenience, here's the full list:<br /> <a target="_blank" href="https://support.atlassian.com/bitbucket-cloud/docs/what-are-the-bitbucket-cloud-ip-addresses-i-should-use-to-configure-my-corporate-firewall/">https://support.atlassian.com/bitbucket-cloud/docs/what-are-the-bitbucket-cloud-ip-addresses-i-should-use-to-configure-my-corporate-firewall/</a></p>
</li>
</ol>
<p>This will give you one IP address per line.</p>
<ol start="2">
<li>Create a <code>.sh</code> file with the following:</li>
</ol>
<pre><code class="lang-bash"><span class="hljs-meta">#!/bin/bash</span>

SECURITY_GROUP_ID=sg-123456789 <span class="hljs-comment"># This is your existing security group ID</span>

<span class="hljs-keyword">while</span> <span class="hljs-built_in">read</span> -r CIDR
<span class="hljs-keyword">do</span>
    aws ec2 authorize-security-group-ingress --group-id <span class="hljs-variable">$SECURITY_GROUP_ID</span> --protocol tcp --port 22 --cidr <span class="hljs-variable">$CIDR</span>
<span class="hljs-keyword">done</span> &lt; bitbucket_whitelist.txt
</code></pre>
<p>Make sure you save this with Linux line endings. In Visual Studio you can find this option in the bottom-right corner:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1690203970268/3d11b458-45fa-4f1e-9659-7530342185ed.png" alt class="image--center mx-auto" /></p>
<p>3. Add yourself permission in IAM:</p>
<pre><code class="lang-json">{
    <span class="hljs-attr">"Version"</span>: <span class="hljs-string">"2012-10-17"</span>,
    <span class="hljs-attr">"Statement"</span>: [
        {
            <span class="hljs-attr">"Effect"</span>: <span class="hljs-string">"Allow"</span>,
            <span class="hljs-attr">"Action"</span>: <span class="hljs-string">"ec2:AuthorizeSecurityGroupIngress"</span>,
            <span class="hljs-attr">"Resource"</span>: <span class="hljs-string">"arn:aws:ec2:region:accountnumber:security-group/sg-123456789"</span>
        }
    ]
}
</code></pre>
<p>Done.</p>
<p>Once you run this, it should add the CIDR ranges one by one to the inbound allow list of your security group.</p>
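<p>If you'd rather avoid one API call per CIDR, the same whitelist can be applied in a single <code>authorize_security_group_ingress</code> call via boto3. A sketch (the security group ID matches the bash example; the two CIDR ranges shown are just sample entries standing in for the Atlassian list):</p>

```python
# Build a single IpPermissions entry that covers every CIDR range,
# instead of one API call per range as in the bash loop above.
def ssh_ingress_permissions(cidrs, port=22):
    """One tcp/port ingress rule spanning all given CIDR ranges."""
    return [{
        "IpProtocol": "tcp",
        "FromPort": port,
        "ToPort": port,
        "IpRanges": [{"CidrIp": c, "Description": "Bitbucket"} for c in cidrs],
    }]

cidrs = ["104.192.136.0/21", "185.166.140.0/22"]  # sample entries
perms = ssh_ingress_permissions(cidrs)
print(perms)
# To apply (requires credentials and the boto3 package):
# import boto3
# boto3.client("ec2").authorize_security_group_ingress(
#     GroupId="sg-123456789", IpPermissions=perms)
```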
]]></content:encoded></item><item><title><![CDATA[How to set up Session manager with existing Linux EC2s]]></title><description><![CDATA[Enable the EC2 to be controlled by Systems Manager
Enabling session manager and accessing EC2 terminal is a safer way than doing it via port 22 and SSH-ing into the machine. For the below, you don't need to enable any incoming ports, however you need...]]></description><link>https://krisfeher.com/how-to-set-up-session-manager-with-linux-ec2s</link><guid isPermaLink="true">https://krisfeher.com/how-to-set-up-session-manager-with-linux-ec2s</guid><category><![CDATA[AWS]]></category><category><![CDATA[Session]]></category><category><![CDATA[Security]]></category><category><![CDATA[Tutorial]]></category><dc:creator><![CDATA[Kris F]]></dc:creator><pubDate>Fri, 21 Jul 2023 10:18:50 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1706544013298/f6a74759-1ca9-4b84-aa28-61be0d9271c4.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h2 id="heading-enable-the-ec2-to-be-controlled-by-systems-manager">Enable the EC2 to be controlled by Systems Manager</h2>
<p>Enabling Session Manager and accessing the EC2 terminal through it is safer than opening port 22 and SSH-ing into the machine. For the below, you don't need to open any inbound ports; however, you need (at least) port 443 open for outbound traffic.</p>
<ol>
<li>Install the Session Manager agent on the EC2 machine. Amazon Linux 2 machines have it installed by default, however the version may be old, so it's still worth updating.</li>
</ol>
<pre><code class="lang-bash">sudo systemctl status amazon-ssm-agent <span class="hljs-comment"># check if it's already enabled</span>
wget https://s3.amazonaws.com/ec2-downloads-windows/SSMAgent/latest/debian_amd64/amazon-ssm-agent.deb <span class="hljs-comment"># download the agent</span>
sudo dpkg -i amazon-ssm-agent.deb <span class="hljs-comment"># install</span>
sudo systemctl <span class="hljs-built_in">enable</span> amazon-ssm-agent <span class="hljs-comment"># enable the agent</span>
sudo systemctl status amazon-ssm-agent <span class="hljs-comment"># check if it's working OK</span>
</code></pre>
<p>Alternatively if the above doesn't work, here's the snap installation:</p>
<p><a target="_blank" href="https://docs.aws.amazon.com/systems-manager/latest/userguide/agent-install-ubuntu-64-snap.html">Install SSM Agent on Ubuntu Server 22.04 LTS, 20.10 STR &amp; 20.04, 18.04, and 16.04 LTS 64-bit (Snap) - AWS Systems Manager</a></p>
<ol start="2">
<li>Attach an IAM role to the EC2 machine that allows access to Session Manager:</li>
</ol>
<p><a target="_blank" href="https://docs.aws.amazon.com/aws-managed-policy/latest/reference/AmazonSSMManagedInstanceCore.html">AmazonSSMManagedInstanceCore</a></p>
<p>Make sure you wait a few minutes for the permission changes to propagate.<br />You may need to restart the Session Manager agent for it to pick them up instantly.</p>
<pre><code class="lang-bash">sudo systemctl restart amazon-ssm-agent
<span class="hljs-comment"># or </span>
sudo systemctl restart snap.amazon-ssm-agent.amazon-ssm-agent.service
</code></pre>
<p>After this, you should see the instance appear in<br />AWS Systems Manager =&gt; Fleet Manager</p>
<h2 id="heading-enable-the-users-to-log-into-the-ec2-terminal">Enable the users to log into the EC2 terminal</h2>
<p>Grant the following permission to the desired user. Remember to replace the placeholder with your EC2 instance ID.</p>
<pre><code class="lang-json">{
    <span class="hljs-attr">"Version"</span>: <span class="hljs-string">"2012-10-17"</span>,
    <span class="hljs-attr">"Statement"</span>: [
        {
            <span class="hljs-attr">"Effect"</span>: <span class="hljs-string">"Allow"</span>,
            <span class="hljs-attr">"Action"</span>: [
                <span class="hljs-string">"ssm:StartSession"</span>
            ],
            <span class="hljs-attr">"Resource"</span>: [
                <span class="hljs-string">"arn:aws:ec2:REGION:ACCOUNT:instance/INSTANCEID"</span>
            ]
        },
        {
            <span class="hljs-attr">"Effect"</span>: <span class="hljs-string">"Allow"</span>,
            <span class="hljs-attr">"Action"</span>: [
                <span class="hljs-string">"ssm:DescribeSessions"</span>,
                <span class="hljs-string">"ssm:GetConnectionStatus"</span>,
                <span class="hljs-string">"ssm:DescribeInstanceProperties"</span>,
                <span class="hljs-string">"ec2:DescribeInstances"</span>
            ],
            <span class="hljs-attr">"Resource"</span>: <span class="hljs-string">"*"</span>
        },
        {
            <span class="hljs-attr">"Effect"</span>: <span class="hljs-string">"Allow"</span>,
            <span class="hljs-attr">"Action"</span>: [
                <span class="hljs-string">"ssm:TerminateSession"</span>,
                <span class="hljs-string">"ssm:ResumeSession"</span>
            ],
            <span class="hljs-attr">"Resource"</span>: [
                <span class="hljs-string">"arn:aws:ssm:*:*:session/${aws:userid}-*"</span>
            ]
        }
    ]
}
</code></pre>
<p>At this point your users can now access the terminal of your EC2.</p>
<p>However, you'll notice that the default shell is used. To switch it to bash, go to the following:</p>
<p>AWS Systems Manager =&gt; Session Manager and add a linux shell profile: <code>/bin/bash</code></p>
<p>You'll now be greeted with this: <code>ssm-user@ip-172-31-4-102:/usr/bin$</code></p>
<h2 id="heading-guide-for-users-to-access-terminal">Guide for users to access terminal</h2>
<p>What you need:</p>
<ol>
<li><p>AWS CLI</p>
</li>
<li><p>AWS user credentials</p>
</li>
</ol>
<p>First of all, you need to install the AWS CLI:<br /><a target="_blank" href="https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html#getting-started-install-instructions">https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html#getting-started-install-instructions</a></p>
<p>Run <code>aws configure</code> to bring up the interactive setup, where you can add your AWS credentials and region.</p>
<p>Alternatively, you can find your <code>credentials</code> and <code>config</code> files and modify them manually.<br />On Windows they're in <code>C:\Users\username\.aws\</code>; for other operating systems, you can find the locations here: <a target="_blank" href="https://docs.aws.amazon.com/sdkref/latest/guide/file-location.html">https://docs.aws.amazon.com/sdkref/latest/guide/file-location.html</a></p>
<p>You also need to install the session manager plugin for your PC:<br /><a target="_blank" href="https://docs.aws.amazon.com/systems-manager/latest/userguide/session-manager-working-with-install-plugin.html">https://docs.aws.amazon.com/systems-manager/latest/userguide/session-manager-working-with-install-plugin.html</a></p>
<p>Once you've done that, you can access the EC2 terminal with:</p>
<p><code>aws ssm start-session --target i-01fc8012335d9639b</code></p>
<p>This gets you into your EC2 machine.</p>
]]></content:encoded></item><item><title><![CDATA[Track free disk space with Cloudwatch]]></title><description><![CDATA[Introduction
In this guide I'll explain how to send data from a NON-AWS linux server to Cloudwatch to track disc space.
This guide though doesn't include creating alerts based on the metric.This guide will also omit creating guide on basic IAM policy...]]></description><link>https://krisfeher.com/track-free-disk-space-with-cloudwatch</link><guid isPermaLink="true">https://krisfeher.com/track-free-disk-space-with-cloudwatch</guid><category><![CDATA[AWS]]></category><category><![CDATA[#CloudWatch]]></category><category><![CDATA[observability]]></category><category><![CDATA[monitoring]]></category><category><![CDATA[IAM]]></category><dc:creator><![CDATA[Kris F]]></dc:creator><pubDate>Tue, 20 Jun 2023 13:10:10 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1706544162904/d3b2097a-1d27-414d-9f19-ef722c748835.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h2 id="heading-introduction">Introduction</h2>
<p>In this guide I'll explain how to send data from a non-AWS Linux server to CloudWatch to track disk space.</p>
<p>This guide doesn't cover creating alerts based on the metric.<br />It also omits basic IAM policy creation.</p>
<h2 id="heading-configure-cwagent">Configure CWAgent</h2>
<p>Download the latest version:</p>
<pre><code class="lang-bash">wget https://s3.amazonaws.com/amazoncloudwatch-agent/ubuntu/amd64/latest/amazon-cloudwatch-agent.deb
</code></pre>
<p>Install the agent</p>
<pre><code class="lang-bash">sudo dpkg -i -E ./amazon-cloudwatch-agent.deb
</code></pre>
<p>Create the configuration file the agent will consume:</p>
<pre><code class="lang-bash">sudo vim /opt/aws/amazon-cloudwatch-agent/etc/amazon-cloudwatch-agent.json
</code></pre>
<p>Here's a simple example config file.</p>
<pre><code class="lang-json">{
  <span class="hljs-attr">"agent"</span>: {
    <span class="hljs-attr">"metrics_collection_interval"</span>: <span class="hljs-number">60</span>,
    <span class="hljs-attr">"run_as_user"</span>: <span class="hljs-string">"cwagent"</span>
  },
  <span class="hljs-attr">"metrics"</span>: {
    <span class="hljs-attr">"metrics_collected"</span>: {
      <span class="hljs-attr">"disk"</span>: {
        <span class="hljs-attr">"measurement"</span>: [
          <span class="hljs-string">"used_percent"</span>
        ],
        <span class="hljs-attr">"metrics_collection_interval"</span>: <span class="hljs-number">60</span>,
        <span class="hljs-attr">"resources"</span>: [
          <span class="hljs-string">"/"</span>
        ]
      }
    }
  }
}
</code></pre>
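<p>Before starting the agent, it's worth checking that this file is valid JSON, since a malformed config is one reason the agent can later show as "stopped" or "unconfigured". Here's a small sketch; it validates an inline copy of the sample config written to <code>/tmp</code>, but in practice you'd point <code>python3</code> at the real file path above:</p>

```shell
# Write the sample config to a temp file and validate it as JSON.
# In practice, validate:
#   /opt/aws/amazon-cloudwatch-agent/etc/amazon-cloudwatch-agent.json
cat > /tmp/amazon-cloudwatch-agent.json <<'EOF'
{
  "agent": {
    "metrics_collection_interval": 60,
    "run_as_user": "cwagent"
  },
  "metrics": {
    "metrics_collected": {
      "disk": {
        "measurement": ["used_percent"],
        "metrics_collection_interval": 60,
        "resources": ["/"]
      }
    }
  }
}
EOF
python3 -m json.tool /tmp/amazon-cloudwatch-agent.json > /dev/null \
  && echo "config is valid JSON"
```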
<h2 id="heading-permissions">Permissions</h2>
<p>If AWS credentials are already set up on your Linux machine, you can use those.</p>
<p>Your AWS credentials are in <code>~/.aws/credentials</code> , and your config is in <code>~/.aws/config</code></p>
<p>The AWS credentials file should look something like this:</p>
<pre><code class="lang-yaml">[<span class="hljs-string">default</span>]
<span class="hljs-string">aws_access_key_id</span> <span class="hljs-string">=</span> <span class="hljs-string">your_access_key_id</span>
<span class="hljs-string">aws_secret_access_key</span> <span class="hljs-string">=</span> <span class="hljs-string">your_secret_key</span>
</code></pre>
<p><code>[default]</code> is the profile name. Alternatively, you can duplicate the above and add another profile below it with a different access_key_id and secret_access_key, like so:</p>
<pre><code class="lang-yaml">[<span class="hljs-string">default</span>]
<span class="hljs-string">aws_access_key_id</span> <span class="hljs-string">=</span> <span class="hljs-string">your_access_key_id</span>
<span class="hljs-string">aws_secret_access_key</span> <span class="hljs-string">=</span> <span class="hljs-string">your_secret_key</span>
[<span class="hljs-string">CWagent</span>]
<span class="hljs-string">aws_access_key_id</span> <span class="hljs-string">=</span> <span class="hljs-string">your_CWagent_access_key_id</span>
<span class="hljs-string">aws_secret_access_key</span> <span class="hljs-string">=</span> <span class="hljs-string">your_CWagent_secret_key</span>
</code></pre>
<p>Your config file mentioned earlier should look something like this:</p>
<pre><code class="lang-yaml">[<span class="hljs-string">default</span>]
<span class="hljs-string">region</span> <span class="hljs-string">=</span> <span class="hljs-string">eu-west-1</span>
</code></pre>
<p>Similarly, you can duplicate the profile and add a different region.</p>
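<p>If you prefer to set the extra profile up non-interactively, you can append it to both files from the shell. This is just a sketch with placeholder keys; also note that in the <code>config</code> file (unlike <code>credentials</code>), named profiles use a <code>[profile name]</code> header:</p>

```shell
# Append a CWagent profile to the shared credentials and config files.
# The placeholder keys must be replaced with the IAM user's real keys.
creds="${AWS_SHARED_CREDENTIALS_FILE:-$HOME/.aws/credentials}"
conf="${AWS_CONFIG_FILE:-$HOME/.aws/config}"
mkdir -p "$(dirname "$creds")" "$(dirname "$conf")"

cat >> "$creds" <<'EOF'

[CWagent]
aws_access_key_id = your_CWagent_access_key_id
aws_secret_access_key = your_CWagent_secret_key
EOF

# Named profiles in the config file use the "profile " prefix.
cat >> "$conf" <<'EOF'

[profile CWagent]
region = eu-west-1
EOF
echo "CWagent profile added"
```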
<p>If you don't have these files, you're missing the AWS CLI. Follow the installation steps to download and install it: <a target="_blank" href="https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html">https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html</a></p>
<p>The profiles mentioned above, with their respective access keys, need to correspond to an IAM user with the appropriate permission.</p>
<p>The permission that user needs in order to create logs is:<br /><a target="_blank" href="https://docs.aws.amazon.com/aws-managed-policy/latest/reference/CloudWatchAgentServerPolicy.html">https://docs.aws.amazon.com/aws-managed-policy/latest/reference/CloudWatchAgentServerPolicy.html</a></p>
<p>That's the only permission it requires.</p>
<p>Once you have the permissions and have configured the AWS CLI with an appropriate profile, you need to modify a file in the agent config:</p>
<pre><code class="lang-bash">sudo vim /opt/aws/amazon-cloudwatch-agent/etc/common-config.toml
</code></pre>
<p>You'll see something like this. Uncomment the <code>[credentials]</code> section and add your own path and profile:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1687265816893/97bd1d61-4aca-4d40-8a40-4a5adf8e7262.png" alt class="image--center mx-auto" /></p>
<p>That's all the configuration!</p>
<p>Once done, you can start CWagent:</p>
<pre><code class="lang-bash">sudo /opt/aws/amazon-cloudwatch-agent/bin/amazon-cloudwatch-agent-ctl -a fetch-config -m onPremise -c file:/opt/aws/amazon-cloudwatch-agent/etc/amazon-cloudwatch-agent.json -s
</code></pre>
<p>This starts CWAgent with the config file you crafted above.<br />At some point CWAgent "consumes" this file and creates a .toml file from it, so don't be surprised if it disappears.<br />You should also see some output in your terminal as it starts up.</p>
<p>You can then check the status after that with:</p>
<pre><code class="lang-bash">sudo /opt/aws/amazon-cloudwatch-agent/bin/amazon-cloudwatch-agent-ctl -m onPremise -a status
</code></pre>
<p>which should have an output like this:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1687266077060/9914897b-c8aa-47e9-a725-d73420f4232a.png" alt class="image--center mx-auto" /></p>
<p>You may see "stopped" or "unconfigured", which indicates that something is wrong with your config file. In that case, re-create the JSON file above and start CWAgent again.</p>
<p>You can also check the logs:</p>
<pre><code class="lang-bash">cat /opt/aws/amazon-cloudwatch-agent/logs/amazon-cloudwatch-agent.log
</code></pre>
<p>which usually tells you what the issue is.</p>
<p>Once this is done, you can find your new metrics under CloudWatch =&gt; Metrics =&gt; CWAgent.</p>
<p>Done!</p>
]]></content:encoded></item><item><title><![CDATA[Give minimal IAM permissions for S3 actions]]></title><description><![CDATA[Why a separate guide on this?
AWS offers the option to generate permissions based on Cloudtrail logs.

That's great for management events, however it doesn't work well for data events. But why?
AWS' policy generation gives you nothing:

And Cloudtrai...]]></description><link>https://krisfeher.com/give-minimal-iam-permissions-for-s3-actions</link><guid isPermaLink="true">https://krisfeher.com/give-minimal-iam-permissions-for-s3-actions</guid><category><![CDATA[Security]]></category><category><![CDATA[AWS]]></category><category><![CDATA[cloudtrail]]></category><category><![CDATA[S3]]></category><category><![CDATA[IAM]]></category><dc:creator><![CDATA[Kris F]]></dc:creator><pubDate>Fri, 09 Jun 2023 08:29:41 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1706544461189/2d4ce9a5-4246-42c9-8d91-75f0c0cb0050.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h2 id="heading-why-a-separate-guide-on-this">Why a separate guide on this?</h2>
<p>AWS offers the option to generate permissions based on Cloudtrail logs.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1686296905978/fb55b817-ed3a-4f48-be20-3eda4c0da36c.png" alt class="image--center mx-auto" /></p>
<p>That's great for management events; however, it doesn't work well for data events. But why?</p>
<p>AWS' policy generation gives you nothing:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1686297212146/5e3927de-7ece-40bd-804d-86f9a6b8c14e.png" alt class="image--center mx-auto" /></p>
<p>And Cloudtrail doesn't record data events in the Event History tab (only Management events).</p>
<p>So when would you even need this?<br />Here's the specific use case:</p>
<blockquote>
<p><em>"I want to find out what S3 buckets a user accesses and what kind of actions it requires, so I can generate the minimum amount of IAM permissions for the user/users"</em></p>
</blockquote>
<h2 id="heading-how-to">How-to</h2>
<ol>
<li><p>You need to have CloudTrail logging already set up. I won't cover that in this guide (maybe later), but once you have it, let it run for a few days or weeks to allow some actions to accumulate. These logs will be saved in an S3 bucket; keep a note of it.</p>
</li>
<li><p>Go to <em>Cloudtrail</em> =&gt; <em>Event history</em> and click on <em>Create Athena table</em></p>
</li>
<li><p>This will generate a create-table SQL command, which for me was wrong. Make sure the <em>Storage location</em> is set to the bucket that holds your data events, and make sure the LOCATION at the bottom is correct (it wasn't for me).</p>
</li>
<li><p>If it's correct, go ahead and create the table, then go to Athena</p>
</li>
<li><p>If not, copy the create-table SQL, go to Athena and try to create the table there, pasting in your SQL query with the corrected LOCATION property (unfortunately, you can't edit the query from CloudTrail 🤷‍♂️)</p>
</li>
<li><p>Once there, you can use this SQL to list the buckets and the actions executed on them:</p>
<pre><code class="lang-sql"> <span class="hljs-keyword">SELECT</span> 
     <span class="hljs-keyword">DISTINCT</span> 
     split_part(resource.arn, <span class="hljs-string">'/'</span>, <span class="hljs-number">1</span>) <span class="hljs-keyword">AS</span> <span class="hljs-keyword">bucket</span>, 
     eventname <span class="hljs-keyword">AS</span> <span class="hljs-keyword">action</span>
 <span class="hljs-keyword">FROM</span> 
     <span class="hljs-string">"default"</span>.<span class="hljs-string">"bucketxzy_cloudtrail_logs_s3_events"</span> 
 <span class="hljs-keyword">CROSS</span> <span class="hljs-keyword">JOIN</span> 
     <span class="hljs-keyword">UNNEST</span>(resources) <span class="hljs-keyword">AS</span> t(<span class="hljs-keyword">resource</span>)
 <span class="hljs-keyword">WHERE</span> 
     useridentity.arn = <span class="hljs-string">'arn:aws:iam::123456789012:user/john'</span> 
 <span class="hljs-keyword">ORDER</span> <span class="hljs-keyword">BY</span> 
     <span class="hljs-keyword">bucket</span>, <span class="hljs-keyword">action</span>;
</code></pre>
</li>
<li><p>The query will return something like this:</p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1686298585190/48d207cd-87b0-4c59-91a6-bb5b26af4f89.png" alt class="image--center mx-auto" /></p>
</li>
<li><p>Once you have this, you can generate an IAM policy from the results, which will look something like this (you'd need to find the appropriate permission for each action above):</p>
<pre><code class="lang-json"> {
     <span class="hljs-attr">"Version"</span>: <span class="hljs-string">"2012-10-17"</span>,
     <span class="hljs-attr">"Statement"</span>: [
         {
             <span class="hljs-attr">"Effect"</span>: <span class="hljs-string">"Allow"</span>,
             <span class="hljs-attr">"Action"</span>: [
                 <span class="hljs-string">"s3:DeleteObject"</span>,
                 <span class="hljs-string">"s3:GetObject"</span>,
                 <span class="hljs-string">"s3:ListBucket"</span>,
                 <span class="hljs-string">"s3:PutObject"</span>
             ],
             <span class="hljs-attr">"Resource"</span>: [
                 <span class="hljs-string">"arn:aws:s3:::your_bucket/*"</span>
             ]
         }
     ]
 }
</code></pre>
</li>
</ol>
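<p>The last step, mapping each (bucket, action) row to a policy statement, can also be scripted. Below is a minimal sketch: it assumes the Athena results were exported as a two-column CSV of bucket ARN and event name (the sample rows and file paths are made up for illustration), and that each event name maps directly to an <code>s3:</code> action, which isn't always true (for example, <code>ListObjects</code> events correspond to the <code>s3:ListBucket</code> permission):</p>

```shell
# Sample (bucket, action) rows, as exported from the Athena query results.
cat > /tmp/s3_actions.csv <<'EOF'
arn:aws:s3:::your_bucket,GetObject
arn:aws:s3:::your_bucket,PutObject
arn:aws:s3:::your_bucket,DeleteObject
EOF

# Group actions per bucket and emit one Allow statement per bucket.
python3 - <<'EOF' > /tmp/minimal_s3_policy.json
import csv, json
from collections import defaultdict

actions = defaultdict(set)
with open("/tmp/s3_actions.csv") as f:
    for bucket, event in csv.reader(f):
        actions[bucket].add("s3:" + event)

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": sorted(acts),
            "Resource": [bucket + "/*"],
        }
        for bucket, acts in actions.items()
    ],
}
print(json.dumps(policy, indent=2))
EOF
cat /tmp/minimal_s3_policy.json
```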
<h2 id="heading-further-considerations">Further considerations</h2>
<p>Now the above gives you an overview of what buckets a user is accessing, but you can take this further. As an example:</p>
<ul>
<li><p>you can modify the SQL to bring back all users rather than just one</p>
</li>
<li><p>you can request the full path to the object that's accessed, creating an even more granular permission set.</p>
</li>
</ul>
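<p>Both extensions can be combined into a single query. Here's a sketch that writes the SQL to a file so it can be pasted into the Athena query editor; it assumes the same table name as above (note that the resources array also contains bucket-level entries, so expect bucket ARNs alongside full object ARNs):</p>

```shell
# Broaden the query to every user and keep the full object ARN.
# Paste the resulting SQL into the Athena query editor.
cat > /tmp/all_users_s3_access.sql <<'EOF'
SELECT DISTINCT
    useridentity.arn AS user_arn,
    resource.arn     AS object_arn,
    eventname        AS action
FROM "default"."bucketxzy_cloudtrail_logs_s3_events"
CROSS JOIN UNNEST(resources) AS t(resource)
ORDER BY user_arn, object_arn, action;
EOF
cat /tmp/all_users_s3_access.sql
```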
<p>Additionally, please note that none of these services are free: Athena, S3 and CloudTrail all cost money.</p>
]]></content:encoded></item></channel></rss>