In this post, I will leverage a Node.js-based tool called s3-website that I discovered in an earlier project.

Install awscli

As mentioned here, install awscli using:

curl "https://awscli.amazonaws.com/AWSCLIV2.pkg" -o "AWSCLIV2.pkg"
sudo installer -pkg AWSCLIV2.pkg -target /

Check awscli version using:

which aws
aws --version

Install s3-website

npm install -g s3-website

This requires valid credentials to be present in ~/.aws/credentials.
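If you have not configured credentials yet, aws configure will prompt for an access key pair and write this file for you, and aws sts get-caller-identity is a quick way to confirm the credentials actually work:

aws configure                  # prompts for access key, secret key, default region and output format
aws sts get-caller-identity    # verifies the credentials are valid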

Create the S3 buckets

s3-website create www.n0c0de.com
s3-website create blog.n0c0de.com
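To confirm the buckets were created:

aws s3 ls | grep n0c0de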

Setting up www.n0c0de.com

create a new hugo site

hugo new site --force  www-n0c0de

add a theme

cd www-n0c0de
git submodule add https://github.com/jugglerx/hugo-serif-theme.git themes/hugo-serif-theme

add theme to hugo site

echo 'theme = "hugo-serif-theme"' >> config.toml

add example content from the theme

cp -a themes/hugo-serif-theme/exampleSite/. .

start the hugo server

hugo server -t hugo-serif-theme    # preview the site with the theme
hugo server -p8086 -D              # alternatively: serve on port 8086 and include draft content

build hugo

HUGO_ENV=production hugo    # builds the production site into ./public/
# hugo -D                   # alternatively, include draft content in the build

create a certificate

Create an AWS SSL certificate at https://console.aws.amazon.com/acm/home?region=us-east-1#/privatewizard

DNS verify the certificate using the Manage DNS section in Namecheap
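I used the console wizard, but the same request can be made from awscli if you prefer (a sketch; note that the certificate must be in us-east-1 for CloudFront to use it):

aws acm request-certificate \
  --domain-name www.n0c0de.com \
  --subject-alternative-names n0c0de.com \
  --validation-method DNS \
  --region us-east-1

# shows the CNAME name/value pair to add under Manage DNS in Namecheap
aws acm describe-certificate --certificate-arn <certificate-arn> --region us-east-1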

Setup CDN

As mentioned here, create a CDN for www.n0c0de.com

With CloudFront we get a CDN, enforce that the bucket can only be accessed through CloudFront and over HTTPS, and handle pretty URLs in combination with a Lambda@Edge function.

  • Go to https://console.aws.amazon.com/cloudfront/

  • Click Create distribution

  • On the Select a delivery method for your content page, click Get Started under Web

  • Enter ONLY the following data:

    • Origin Domain Name: click inside the text input and a list of your buckets will appear; select www.n0c0de.com.s3.amazonaws.com
    • Restrict Bucket Access: select Yes; more options will appear
      • Origin Access Identity: select or create an access identity (I used Create a New Identity)
      • Grant Read Permissions on Bucket: select Yes, Update Bucket Policy so Amazon automatically handles your S3 bucket permissions
    • In the Default Cache Behavior Settings section:
      • Viewer Protocol Policy: select Redirect HTTP to HTTPS
      • Compress Objects Automatically: select Yes to compress content when possible
    • In the Distribution Settings section:
      • Alternate Domain Names (CNAMEs): enter www.n0c0de.com
      • Default Root Object: enter index.html
      • SSL Certificate: select Custom SSL Certificate (www.n0c0de.com)
        • If you don't have a certificate yet, press the Request or Import a Certificate with ACM button. You will be redirected to AWS Certificate Manager; there, add the two domain names n0c0de.com and www.n0c0de.com.
        • Then click Next and validate your certificate.
        • After you have the certificate, go back to the CloudFront settings page and select the newly created certificate from the list under Distribution Settings/SSL Certificate/Custom SSL Certificate.
  • Click Create distribution at the bottom of the page
  • Make sure CloudFront can access your S3 bucket: go to the Origins and Origin Groups tab, edit the existing origin, set Grant Read Permissions on Bucket to Yes, Update Bucket Policy, and save the changes. This will automatically generate a bucket policy like the following:
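(A sketch of what AWS generates; the OAI ID below is a placeholder, yours will differ.)

{
    "Version": "2008-10-17",
    "Id": "PolicyForCloudFrontPrivateContent",
    "Statement": [
        {
            "Sid": "1",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::cloudfront:user/CloudFront Origin Access Identity EXXXXXXXXXXXXXX"
            },
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::www.n0c0de.com/*"
        }
    ]
}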

Handle Pretty URLs

As mentioned here, we can handle pretty URLs for the hugo website.

First a bit of background.

By default, Hugo generates web pages as .../index.html files in its public/ directory.

Because our S3 bucket is private, a request for a page like www.n0c0de.com/hello/ will not automatically fetch www.n0c0de.com/hello/index.html. To handle this we use a Lambda@Edge function.

This function does (source):

  • URI paths that end in .../index.html are redirected to .../ with an HTTP status code 301 Moved Permanently (the same as an “external” redirect by a webserver).

  • URI paths that do not have an extension and do not end with a / are redirected to the same path with an appended /, again with a 301 Moved Permanently (also an “external” redirect).

  • In addition, URI paths that end with a / are rewritten internally to .../index.html before the request is forwarded to the origin; this is the part that makes our private bucket serve /hello/index.html for /hello/.

Lambda@Edge Function Installation

We use the function standard-redirects-for-cloudfront and install it via the Serverless Application Repository:

  1. Go to AWS Serverless Application Repository

  2. Press the Deploy button to use the application standard-redirects-for-cloudfront.

  3. It opens a description of the app, hit Deploy again to finish deploying it.

  4. After it has been created, locate the button View CloudFormation stack or go directly to the Cloudformation Console

  5. In the Resources tab, locate the AWS::IAM::Role and open its Physical ID; this takes you to the IAM console

  6. Go to the Trust Relationships tab and choose Edit trust relationship so that CloudFront can execute this function as a Lambda@Edge function. Set the policy to:
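     (This is the standard Lambda@Edge trust policy from the AWS documentation; it lets both the Lambda and Lambda@Edge services assume the role.)

     {
       "Version": "2012-10-17",
       "Statement": [
         {
           "Effect": "Allow",
           "Principal": {
             "Service": [
               "lambda.amazonaws.com",
               "edgelambda.amazonaws.com"
             ]
           },
           "Action": "sts:AssumeRole"
         }
       ]
     }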

  7. Go back to the CloudFormation stack detail page and, in the Outputs tab, locate the key StandardRedirectsForCloudFrontVersionOutput and note down its Value (it will look something like: arn:aws:lambda:us-east-1:XXXXXXXXXXX:function:aws-serverless-repository-StandardRedirectsForClou-XXXXXXXXXXXX:2 ). We will use it in the next steps, as this is the ARN (Amazon Resource Name) of the Lambda function that we will use in CloudFront.

  8. Go back to the CloudFront console and select your www.n0c0de.com distribution

  9. Go to the Behaviors tab and edit the default behavior.

  10. Now we wire in the Lambda function: under Lambda Function Associations, select Event Type: Origin Request and enter the Lambda function's StandardRedirectsForCloudFrontVersionOutput ARN value from the previous step.

  11. Wait for the CloudFront distribution to deploy.
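The distribution status can also be checked from awscli while you wait (use your own distribution ID; the one below is a placeholder):

aws cloudfront get-distribution --id EXXXXXXXXXXXXX --query 'Distribution.Status'

It reports InProgress until the distribution reaches Deployed.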

Error page

If we try to access a URL that doesn't exist in our S3 bucket, like https://example.com/not-existing-page, we will get a 403 Forbidden error code because CloudFront tries to access an object that doesn't exist. To handle this properly, we should return a 404 error response instead.

Setting up the error page on S3 wouldn’t have any effect because this is an error that should be handled by Cloudfront.

To do this, we configure CloudFront to respond with Hugo's custom error page (built from /layouts/404.html into /404.html) whenever the origin returns an HTTP 403 permission denied.

  1. Go to Cloudfront console: https://console.aws.amazon.com/cloudfront
  2. Select your www.n0c0de.com distribution
  3. Choose the Error Pages tab.
  4. Press the Create Custom Error Response button and enter:
     • HTTP Error Code: 403: Forbidden
     • Customize Error Response: Yes
     • Response Page Path: /404.html
     • HTTP Response Code: 404: Not Found

Deploy to s3

s3-website deploy public
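If s3-website ever gives you trouble at this step, plain awscli can push the same public/ directory (a sketch; assumes the www.n0c0de.com bucket created earlier):

# --delete removes objects in the bucket that no longer exist locally
aws s3 sync public/ s3://www.n0c0de.com --delete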

Create a Certificate

Create an AWS SSL certificate at https://console.aws.amazon.com/acm/home?region=us-east-1#/privatewizard

For DNS verification on Namecheap, which hosts my domain, I used the following for the CNAME record:

HOST: _2d07d8f@#$#@$#@$#c667546480.blog. (NOTE, I omitted everything after the subdomain!)
VALUE: _669cb@#$#@$#@$#cdb.zdx@#$#@$#@$#@gtt.acm-validations.aws. (This is as provided in the cert wizard!)

Copy the certid: ef3#@@#2-@#$8-#@$1-@#$d-5b@#$@%$2803

Setup a CDN for blog.n0c0de.com

Follow instructions at Deploying a Hugo website to AWS in 6 steps (CDN+HTTPS) | Simple IT 🤘 Rocks

Build hugo

HUGO_ENV=production hugo
# hugo -D

Deploy to s3

s3-website deploy public

TO DOs