Setting up a simple CI/CD pipeline for my website was easier than I expected. I had experience working with Gitlab Pipelines before, so I used Gitlab as my remote git repository and have a simple `.gitlab-ci.yml` like so (variables omitted):
```yaml
hugo build:
  stage: build
  only:
    - master
  image: monachus/hugo
  script:
    - hugo
  artifacts:
    paths:
      - public

deploy to production:
  stage: deploy
  only:
    - master
  environment:
    name: production
    url: https://www.ronniegane.kiwi
  image: garland/aws-cli-docker
  script:
    - aws configure set preview.cloudfront true
    - aws s3 sync ./public s3://$S3_BUCKET_NAME --delete
    - aws cloudfront create-invalidation --distribution-id $CF_DISTRO_ID --paths "/*"

pages:
  stage: deploy
  except:
    - master
  environment:
    name: staging
    url: $GPAGES_BASE_URL
  image: monachus/hugo
  script:
    - hugo --baseUrl=$GPAGES_BASE_URL
  artifacts:
    paths:
      - public
```
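The variables I've omitted are just CI/CD variables set in the Gitlab project settings rather than committed to the repo. Roughly speaking, the jobs above expect something like the following (the `AWS_*` names are the standard credentials the AWS CLI picks up from the environment; the rest are the variables referenced in the YAML):

```sh
# Sketch of the CI/CD variables (set as masked variables in Gitlab, not in the repo)
export AWS_ACCESS_KEY_ID=...       # read automatically by the AWS CLI
export AWS_SECRET_ACCESS_KEY=...   # read automatically by the AWS CLI
export AWS_DEFAULT_REGION=...      # region of the S3 bucket
export S3_BUCKET_NAME=...          # target bucket for the production site
export CF_DISTRO_ID=...            # CloudFront distribution to invalidate
export GPAGES_BASE_URL=...         # Gitlab Pages URL used for the staging build
```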
The Gitlab CI documentation is pretty well written for exactly this use case: creating a static website and deploying it to S3.
Deployment is a one-step process for staging, and two-step for production. The production deployment is only carried out on the master branch.
The staging deployment is hosted via Gitlab Pages.
To build for staging:
- `hugo` is called with a base URL parameter. This is important because when hosting on Gitlab Pages the base URL contains a path after the TLD, for example `http://username.gitlab.io/projectname/index.html`. Without setting a base URL param, all the links generated by Hugo will be relative, and will be broken.
- To host on Gitlab Pages, all we need is for this one step to output a static website to the `public` folder, and for the task to be called `pages`. That's it.
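If you want to see what the staging build produces, you can run roughly the same thing locally. The URL below is just an example of the Gitlab Pages shape; the actual job uses `$GPAGES_BASE_URL`:

```sh
# Rough local equivalent of the pages job's build step (example URL, not the real variable)
hugo --baseUrl=https://username.gitlab.io/projectname/
# The generated site lands in ./public, which is the folder Gitlab Pages serves
```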
To deploy to production:
- `hugo` is called in a simple Docker image with Hugo installed, which outputs the static site files to the `public` folder.
- The second task takes place in a Docker image with the AWS CLI installed. It does two things:
  - Syncs all the `public` folder files with the S3 bucket.
  - Invalidates the root cache in CloudFront, so the newer version of the website will be distributed to all the CloudFront endpoints.
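For reference, the production deploy step is just two plain AWS CLI calls, so the same thing can be run by hand with the same variables if a build ever needs to be pushed outside of CI:

```sh
# Same commands as the "deploy to production" job, using the variables from the YAML above
aws s3 sync ./public "s3://$S3_BUCKET_NAME" --delete
aws cloudfront create-invalidation --distribution-id "$CF_DISTRO_ID" --paths "/*"
```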