Making a Hugo Website The Full Stack Way pt 3 - Basic Infrastructure as Code (IaC) with Terraform

In the previous tutorial, we deployed a Google Cloud Storage bucket by manually clicking through the Google Cloud console to create resources in the cloud. This is fine for a basic project, but what if we wanted to utilize more complex cloud resources or had multiple people working on the same project?

As we make more manual edits, it gets harder and harder to keep track of the state of our cloud infrastructure (and the associated billing!). What if we could declare our infrastructure as code so that we could requisition (and tear down) cloud resources at will? This would also be helpful for projects involving multiple people, since a record of changes and the current state of our infrastructure would be recorded on Github for all to see. Enter Terraform.

What is Terraform?

Terraform is a tool for declaring Infrastructure as Code (IaC). Unlike other similar tools, Terraform can be used across multiple different cloud providers. It is also modular and has a declarative syntax, which means that you don’t have to worry about successive deployments of the same code causing issues. For example, if you ask for 3 buckets, Terraform won’t add 3 more to however many currently exist. It will check how many buckets there are and add or remove however many it takes to reach exactly 3.
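As a sketch of this declarative behavior (the resource and bucket names here are hypothetical):

```hcl
# Declares that exactly three buckets should exist. Running
# `terraform apply` repeatedly never creates extras; Terraform
# compares this declaration against real infrastructure and only
# adds or removes buckets to converge on a count of 3.
resource "google_storage_bucket" "assets" {
  count    = 3
  name     = "mysite-assets-${count.index}"
  location = "US"
}
```

Deleting this block and re-applying would likewise remove all three buckets.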

Setting up Terraform

First install Terraform for your OS.

Then create a terraform directory and sub-directories using the following command in your project root.

# No spaces!
$ mkdir -p terraform/{modules/bucket,prod}

The directory structure should look like this:

├── modules
│   └── bucket
└── prod

For security purposes, we will also want to add some Terraform-specific file extensions to our .gitignore. You can copy a standard Terraform .gitignore from Github.
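A minimal Terraform .gitignore typically covers local state, the provider cache, and variable files that may hold secrets:

```
# Local .terraform directories (provider binaries, module cache)
**/.terraform/*

# State files may contain sensitive values
*.tfstate
*.tfstate.*

# Variable definition files often hold secrets
*.tfvars
*.tfvars.json

# Crash logs
crash.log
```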

Creating a module for our static site bucket

Under modules/bucket/ create three files:

$ touch variables.tf main.tf outputs.tf

  • variables.tf defines the inputs we use to declare our “bucket” infrastructure
  • main.tf defines the actual resources
  • outputs.tf defines the outputs

We can think of our bucket module as being like a function with clearly defined inputs (variables.tf) and outputs (outputs.tf), with the innards (main.tf) largely abstracted away for simplicity.

Defining our module is simple. variables.tf holds the inputs, which we can alter in the future to deploy different websites in different projects:

variable "project_id" {
  type = string
}

variable "website_domain_name" {
  type        = string
  description = "Domain name and bucket name for the site"
}

variable "bucket_location" {
  type    = string
  default = "US"
}

variable "storage_class" {
  type    = string
  default = "STANDARD"
}

main.tf holds the bucket definition configured for website hosting:

resource "google_storage_bucket" "static-site" {
  name          = var.website_domain_name
  location      = var.bucket_location
  storage_class = var.storage_class
  force_destroy = true
  uniform_bucket_level_access = true
  website {
    main_page_suffix = "index.html"
    not_found_page   = "404.html"
  }
  cors {
    origin          = ["*"]
    method          = ["GET", "HEAD", "PUT", "POST", "DELETE"]
    response_header = ["*"]
    max_age_seconds = 3600
  }
}

resource "google_storage_bucket_iam_member" "viewers" {
  bucket = google_storage_bucket.static-site.name
  role   = "roles/storage.objectViewer"
  member = "allUsers"
  depends_on = [
    google_storage_bucket.static-site
  ]
}

And finally, we can (optionally) output some information on our website in outputs.tf:

output "bucket_link" {
  description = "Website static site link"
  value       = google_storage_bucket.static-site.self_link
}

Using the bucket module

Now we need to create a root module from which to run our bucket module.

Create a main.tf and a variables.tf under prod/.

Under main.tf, add a Google provider and a bucket module:

terraform {
  required_providers {
    google = {
      source  = "hashicorp/google"
      version = "4.32.0"
    }
  }
}

provider "google" {
  region      = var.region
  credentials = file(var.key_file)
  project     = var.project_id
}

module "bucket" {
  source              = "../modules/bucket"
  project_id          = var.project_id
  website_domain_name = var.website_domain_name
  storage_class       = "STANDARD"
}

Notice how we can reference our bucket module with ../modules/bucket and alter some of the variables for that module.
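For instance, a second call to the same module could override its defaults to host another site in a different location (the module name and domain below are made up for illustration):

```hcl
module "bucket_eu" {
  source              = "../modules/bucket"
  project_id          = var.project_id
  # Hypothetical values -- each call site can override the defaults
  website_domain_name = "eu.example.com"
  bucket_location     = "EU"
  storage_class       = "STANDARD"
}
```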

Just like with the bucket module, we will add variables in variables.tf:

variable "project_id" {
  type        = string
  description = "Project id"
}

variable "region" {
  type        = string
  description = "Google Cloud region for resources"
}

variable "key_file" {
  type        = string
  description = "Key file used for Google Cloud authentication"
}

variable "website_domain_name" {
  type        = string
  description = "Domain name and bucket name for the site"
}

Finally we will need to create a terraform.tfvars to actually populate our variables:

# Shared Vars
project_id = "mysite-123"
region = "us-west1"
key_file = "~/gcp/access_keys.json"

# Bucket vars
website_domain_name = ""

Warning! For security, we generally don’t want the terraform.tfvars to wind up under source control, so make sure to put it into your .gitignore!

Also notice how we pass our key_file to terraform in this module. This key_file should correspond to your service account.

(Important) Pre-requisite - Updating service account permissions

The service account used by our IaC tool (Terraform) will need permission to modify IAM policies on the bucket. Without this ability, our service account (and hence Terraform) will be unable to update the bucket permissions to make the site visible on the internet. This will have to be done in Google Cloud:

Allowing the service account used by Terraform to update IAM permissions
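One way to grant this is via the gcloud CLI. The project id and service account name below are hypothetical; substitute your own:

```
# Hypothetical project and service-account names -- substitute your own.
# roles/storage.admin lets the account set IAM policy on buckets.
gcloud projects add-iam-policy-binding mysite-123 \
  --member="serviceAccount:terraform@mysite-123.iam.gserviceaccount.com" \
  --role="roles/storage.admin"
```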

Deploying our bucket

The project structure should now look like:

└── terraform
    ├── modules
    │   └── bucket
    │       ├── main.tf
    │       ├── outputs.tf
    │       └── variables.tf
    └── prod
        ├── main.tf
        ├── terraform.tfvars
        └── variables.tf

To deploy, first initialize Terraform in the prod directory:

$ cd terraform/prod
$ terraform init

View your deployment plan

$ terraform plan -var-file terraform.tfvars -out terraform.tfplan

And apply your plan

$ terraform apply terraform.tfplan

If you encounter any issues at this step, you may need to confirm domain ownership with your service account.

At this point, if everything went smoothly, you should see a new bucket in the Google Cloud Storage interface!

Google Cloud Storage

If you followed tutorial 1 of this series, you know you can upload your site with

$ GOOGLE_APPLICATION_CREDENTIALS=$HOME/gcp/my_access_keys.json hugo deploy --target=$DEPLOYMENT_TARGET

Now, if you want to take down all the infrastructure for your project, simply run:

$ terraform plan -destroy -var-file terraform.tfvars -out terraform.tfplan
$ terraform apply terraform.tfplan


The power behind Terraform is its modularity. Because we structured our bucket as a module, we can reuse it in different projects (i.e., deploying different websites on different domains) simply by altering terraform.tfvars. Although our bucket example is simple enough that Terraform doesn’t save us much time, with larger, more complex projects and multiple contributors, tools like Terraform become essential for managing infrastructure (and costs).
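For example, a hypothetical second site could reuse the exact same modules with nothing changed but terraform.tfvars (all values below are made up):

```
# Shared Vars
project_id = "othersite-456"
region     = "us-west1"
key_file   = "~/gcp/other_access_keys.json"

# Bucket vars
website_domain_name = "www.othersite.com"
```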

To learn how to use Terraform to automate CI/CD, read Part 4 of this series -> Using Terraform + Github Actions for CI/CD.
