notifications-api/terraform/README.md
2024-04-10 19:30:05 -07:00


Terraform

This directory holds the Terraform modules for maintaining Notify.gov's infrastructure. You can read about the structure or get set up to develop.

Retrieving existing bucket credentials

📗 New developers start here!

Assuming initial setup is complete — which it should be if Notify.gov is online — Terraform state is stored in a shared remote backend. If you are going to be writing Terraform for any of our deployment environments you'll need to hook up to this backend. (You don't need to do this if you are just writing code for the development module, because it stores state locally on your laptop.)

  1. Enter the bootstrap module with cd bootstrap
  2. Run ./import.sh to pull existing terraform state into the local state
  3. Follow instructions under Use bootstrap credentials

Use bootstrap credentials

  1. Run ./run.sh show -json.
  2. In the output, locate access_key_id and secret_access_key within the bucket_creds resource. These values are secret, so don't share them with anyone and don't copy them anywhere online.
  3. Add the following to ~/.aws/credentials:
    [notify-terraform-backend]
    aws_access_key_id = <access_key_id>
    aws_secret_access_key = <secret_access_key>
    
  4. Check which AWS profile you are using with aws configure list. If needed, use export AWS_PROFILE=notify-terraform-backend to switch to the profile and credentials you just added.
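The JSON from ./run.sh show -json can be long, so a jq filter helps pull out just the two values instead of scanning the dump by eye. This is a sketch only: the resource address ending in bucket_creds and the attribute nesting below are assumptions, so adjust the filter to match your actual output.

```shell
# Sample shaped like `terraform show -json` output; the nesting and the
# "bucket_creds" address here are assumptions -- check your real output.
sample='{"values":{"root_module":{"resources":[{"address":"cloudfoundry_service_key.bucket_creds","values":{"credentials":{"access_key_id":"AKIAEXAMPLE","secret_access_key":"s3cr3texample"}}}]}}}'

# Select the bucket_creds resource and pull out the two secret fields.
access_key_id=$(printf '%s' "$sample" | jq -r '
  .values.root_module.resources[]
  | select(.address | endswith("bucket_creds"))
  | .values.credentials.access_key_id')
secret_access_key=$(printf '%s' "$sample" | jq -r '
  .values.root_module.resources[]
  | select(.address | endswith("bucket_creds"))
  | .values.credentials.secret_access_key')

echo "aws_access_key_id = $access_key_id"
```

In real use you would pipe ./run.sh show -json straight into jq rather than using a sample variable.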

These credentials will allow Terraform to access the AWS/Cloud.gov bucket in which developers share Terraform state files.

Initial setup

These instructions were used for deploying the project for the first time, years ago. We should not have to perform these steps again. They are provided here for reference.

  1. Manually run the bootstrap module following instructions under Terraform State Credentials
  2. Set up the CI/CD pipeline to run Terraform
    1. Copy bootstrap credentials to your CI/CD secrets using the instructions in the base README
    2. Create a cloud.gov SpaceDeployer by following the instructions under SpaceDeployers
    3. Copy SpaceDeployer credentials to your CI/CD secrets using the instructions in the base README
  3. Manually run Terraform
    1. Follow instructions under Set up a new environment manually to create your infrastructure

Terraform state credentials

The bootstrap module creates an s3 bucket in which later Terraform runs store their state.

Bootstrapping the state storage s3 buckets for the first time

  1. Within the bootstrap directory, run terraform init
  2. Run ./run.sh plan to verify that the changes are what you expect
  3. Run ./run.sh apply to set up the bucket
  4. Follow instructions under Use bootstrap credentials
  5. Ensure that import.sh includes an import line with the correct ID for each resource created
  6. Run ./teardown_creds.sh to remove the space deployer account used to create the s3 bucket
  7. Copy bucket from bucket_credentials output to the backend block of staging/providers.tf and production/providers.tf
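For reference, the backend block in staging/providers.tf and production/providers.tf ends up looking roughly like this. The bucket name comes from the bucket_credentials output; the key and region shown here are illustrative assumptions, not the project's actual values:

```hcl
terraform {
  backend "s3" {
    bucket = "<bucket from the bucket_credentials output>"
    key    = "terraform.tfstate.staging"  # assumed: one state key per environment
    region = "us-gov-west-1"              # assumed: cloud.gov's AWS GovCloud region
  }
}
```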

To make changes to the bootstrap module

This should not be necessary in most cases.

  1. Run terraform init
  2. If you don't have terraform state locally:
    1. run ./import.sh
    2. optionally run ./run.sh apply to include the existing outputs in the state file
  3. Make your changes
  4. Continue from step 2 of the bootstrapping instructions

SpaceDeployers

A SpaceDeployer account is required to run terraform or deploy the application from the CI/CD pipeline. Create a new account by running:

./create_service_account.sh -s <SPACE_NAME> -u <ACCOUNT_NAME>

Set up a new environment manually

The below steps rely on you first configuring access to the Terraform state in s3 as described in Terraform State Credentials.

  1. cd to the environment you are working in

  2. Set up a SpaceDeployer

    # create a space deployer service instance that can log in with just a username and password
    # the value of <SPACE_NAME> should be `staging` or `prod`, depending on where you are working
    # the value of <ACCOUNT_NAME> can be anything, although we recommend
    # something that communicates the purpose of the deployer,
    # for example: circleci-deployer for the credentials CircleCI uses to
    # deploy the application, or <your_name>-terraform for credentials to run terraform manually
    ./create_service_account.sh -s <SPACE_NAME> -u <ACCOUNT_NAME> > secrets.auto.tfvars
    

    The script will output the username (as cf_user) and password (as cf_password) for your <ACCOUNT_NAME>. Read more in the cloud.gov service account documentation.

    The easiest way to use this script is to redirect its output directly into the secrets.auto.tfvars file where it needs to be used, as shown above.
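    For illustration, the resulting secrets.auto.tfvars looks something like this (the values below are made up; the cf_user and cf_password names come from the script's output). Keep this file out of version control:

```hcl
# secrets.auto.tfvars -- generated by create_service_account.sh; do not commit
cf_user     = "staging-deployer-example"
cf_password = "example-generated-password"
```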

  3. Run terraform from your new environment directory with

    terraform init
    terraform plan
    

    If the terraform init command fails, you may need to run terraform init -upgrade to make sure new module versions are picked up.

  4. Apply changes with terraform apply.

  5. Remove the space deployer service instance if it won't be needed again, for example after a one-off manual terraform run.

    # <SPACE_NAME> and <ACCOUNT_NAME> have the same values as used above.
    ./destroy_service_account.sh -s <SPACE_NAME> -u <ACCOUNT_NAME>
    

Structure

The terraform directory contains sub-directories (staging, production, etc.) named for deployment environments. Each of these is a module, which is just Terraform's word for a directory with some .tf files in it. Each module governs the infrastructure of the environment for which it is named. This directory structure forms "bulkheads" which isolate Terraform commands to a single environment, limiting accidental damage.

The development module is rather different from the other environment modules. While the other environments can be used to create (or destroy) cloud resources, the development module mostly just sets up access to pre-existing resources needed for local software development.

The bootstrap directory is not an environment module. Instead, it sets up infrastructure needed to deploy Terraform in any of the environments. If you are new to the project, this is where you should start. Similarly, shared is not an environment; this module lends code to all the environments.

Files within these directories look like this:

- bootstrap/
  |- main.tf
  |- providers.tf
  |- variables.tf
  |- run.sh
  |- teardown_creds.sh
  |- import.sh
- <env>/
  |- main.tf
  |- providers.tf
  |- secrets.auto.tfvars
  |- variables.tf

In the environment-specific modules:

  • providers.tf lists the required providers
  • main.tf calls the shared Terraform code, but this is also a place where you can add any other services, resources, etc, which you would like to set up for that environment
  • variables.tf lists the variables that will be needed, either to pass through to the child module or for use in this module
  • secrets.auto.tfvars is a file which contains the information about the service-key and other secrets that should not be shared
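As a sketch of the pattern (the module source path and variable names below are illustrative assumptions, not the project's actual code), an environment's main.tf typically just wires up the shared module with that environment's values:

```hcl
module "environment" {
  # assumed path to the shared module described below
  source = "../shared"

  # credentials supplied via secrets.auto.tfvars
  cf_user     = var.cf_user
  cf_password = var.cf_password
}
```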

In the bootstrap module:

  • providers.tf lists the required providers
  • main.tf sets up the s3 bucket to be shared across all environments. It lives in prod to communicate that it should not be deleted
  • variables.tf lists the variables that will be needed. Most values are hard-coded in this module
  • run.sh is a helper script that sets up a space deployer and runs terraform; the terraform action (show/plan/apply/destroy) is passed as an argument
  • teardown_creds.sh is a helper script that removes the space deployer set up by run.sh
  • import.sh is a helper script that creates a new local state file in case terraform changes are needed

Troubleshooting

Expired token

    The token expired, was revoked, or the token ID is incorrect. Please log back in to re-authenticate.

This error means you need to re-authenticate with the Cloud Foundry CLI:

cf login -a api.fr.cloud.gov --sso

You may also need to log in again to the Cloud.gov website.