RNG

thoughts on software development and everything else

Hugo in the pipeline

2018-09-14

Setting up a simple CI/CD pipeline for my website was easier than I expected. I had worked with Gitlab Pipelines before, so I used Gitlab as my remote Git repository, with a simple gitlab-ci.yml like this (variables omitted):

hugo build:
  stage: build
  only:
    - master
  image: monachus/hugo
  script:
    - hugo
  artifacts:
    paths:
      - public

deploy to production:
  stage: deploy
  only:
    - master
  environment:
    name: production
    url: https://www.ronniegane.kiwi
  image: garland/aws-cli-docker
  script: 
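  # CloudFront commands were a preview feature in older versions of the AWS CLI, so enable them first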
  - aws configure set preview.cloudfront true
  - aws s3 sync ./public s3://$S3_BUCKET_NAME --delete
  - aws cloudfront create-invalidation --distribution-id $CF_DISTRO_ID --paths "/*"

pages:
  stage: deploy
  except:
    - master
  environment: 
    name: staging
    url: $GPAGES_BASE_URL
  image: monachus/hugo
  script:
    - hugo --baseURL=$GPAGES_BASE_URL
  artifacts:
    paths:
      - public

The Gitlab CI documentation is well written for exactly this use case: building a static website and deploying it to S3.

Deployment is a one-step process for staging and a two-step process for production. The production jobs run only on the master branch.
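
The jobs above reference stage: build and stage: deploy without declaring a stages: list. That works because Gitlab CI's default stages already include build and deploy; an explicit declaration for this file would simply be:

stages:
  - build
  - deploy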

The staging deployment is hosted via Gitlab Pages.

To build for staging:

  1. hugo is called with a base URL parameter. This is important because when hosting on Gitlab Pages the base URL contains a path after the TLD, for example http://username.gitlab.io/projectname/index.html. Without overriding the base URL, the links Hugo generates will not include the project path and will be broken (see the sketch after this list).
  2. To host on Gitlab Pages, all we need is for this one job to output the static website to the public folder, and for the job to be named pages. That's it.
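
To make the base URL point concrete, here is a sketch; the config.toml value and the css/main.css path are illustrative assumptions, not taken from the post:

# config.toml normally pins the production base URL (assumed value):
#   baseURL = "https://www.ronniegane.kiwi/"
# Overriding it at build time points generated links at the Pages subpath instead:
hugo --baseURL=http://username.gitlab.io/projectname/
# e.g. a stylesheet link in the output then resolves to
#   http://username.gitlab.io/projectname/css/main.css
# rather than against the production root.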

For production:

  1. hugo is called in a Docker image with Hugo installed, which outputs the static site files to public/.
  2. The second task runs in a Docker image with the AWS CLI installed. It does two things (annotated below):
    1. Syncs the contents of the public folder to the S3 bucket, deleting any remote files that no longer exist locally.
    2. Invalidates the whole CloudFront cache, so the newer version of the website will be distributed to all the CloudFront edge locations.
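
For reference, here are the two commands from the deploy job again, annotated:

# Upload changed files from public/ and delete remote files that no longer exist locally
aws s3 sync ./public s3://$S3_BUCKET_NAME --delete
# Invalidate every cached path ("/*") so CloudFront edge locations fetch the new version
aws cloudfront create-invalidation --distribution-id $CF_DISTRO_ID --paths "/*"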