Terraform

Table of Contents

Overview

Reference

Configurations

backend

resource "aws_s3_bucket" "bucket" {
  bucket = "my-bucket"
}

resource "aws_dynamodb_table" "locktable" {
  name           = "my-locktable"
  read_capacity  = 5
  write_capacity = 5
  hash_key       = "LockID"

  attribute {
    name = "LockID" # "LockID" is the hash key name Terraform requires for state locking
    type = "S"
  }
}

terraform {
  backend "s3" {
    bucket         = "my-bucket"
    dynamodb_table = "my-locktable"
    key            = "my.tfstate"
    region         = "ap-northeast-1"
  }
}
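
After adding or changing the backend configuration, run terraform init so that Terraform initializes (and, if necessary, migrates) the state to the S3 backend:

terraform init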

path

source_file = "${path.cwd}/main.py"    # Path to the current working directory
source_file = "${path.root}/main.py"   # Path to the root module's directory
source_file = "${path.module}/main.py" # Path to the current module's directory

depends_on

resource "aws_eip" "ip" {
  instance   = "${aws_instance.example.id}"

  # Use 'depends_on' to make the dependency explicit.
  # It's redundant in this case because this resource already depends on 'aws_instance.example'
  # by referencing its id
  depends_on = ["aws_instance.example"]
}

element

vars {
  ip = "${element(aws_instance.cluster.*.private_ip, count.index)}"
}
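
Note that element() wraps around when the index exceeds the list length: with a 3-element list, element(var.list, 4) returns the second element (index 4 % 3 = 1).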

ignore_changes

There was a problem when I defined multiple aws_eip resources associated with aws_instance resources.

resource "aws_instance" "foo" {
  count = 10
  # ...
}

resource "aws_eip" "bar" {
  count = 10
  instance = "${element(aws_instance.foo.*.id, count.index)}"
}

Terraform plans to change all of the associations when only the count changes. To work around this, use ignore_changes:

resource "aws_eip" "bar" {
  count = 10
  instance = "${element(aws_instance.foo.*.id, count.index)}"
  lifecycle {
    ignore_changes = ["instance"]
  }
}

null_resource

resource "null_resource" "docker_run" {
  count = "${var.count}"

  triggers {
    cluster_instance_ids = "${join(",", aws_instance.cluster.*.id)}"
  }

  connection {
    type        = "ssh"
    user        = "ubuntu"
    host        = "${element(aws_instance.cluster.*.private_ip, count.index)}"
    private_key = "${file(var.key_path)}"
  }

  provisioner "remote-exec" {
    inline = [
      "sudo docker stop etcd || true",
      "sudo docker rm -f etcd || true",
      "${element(data.template_file.docker_run_command.*.rendered, count.index)}",
    ]
  }
}

local-exec

provisioner "local-exec" {
  command = "run.sh ${var.args}"
}

remote-exec

connection {
  type        = "ssh"
  user        = "ubuntu"
  host        = "${aws_instance.main.private_ip}" # can be omitted when running from within the instance
  private_key = "${file(var.key_path)}"
}

provisioner "remote-exec" {
  inline = [
    "curl -sSL https://get.docker.com/ | sh",
  ]
}

archive_file

data "archive_file" "code" {
  type        = "zip"
  source_file = "${path.module}/main.py"
  output_path = "${path.module}/lambda.zip"
}
resource "aws_lambda_function" "main" {
  function_name    = "foo"
  filename         = "${data.archive_file.code.output_path}"
  source_code_hash = "${data.archive_file.code.output_base64sha256}"
  # ...
}

template_file

data "template_file" "curl" {
  count    = "${var.count}"
  template = "curl http://$${ip}"
  vars {
    ip = "${element(aws_instance.cluster.*.private_ip, count.index)}"
  }
}
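
The rendered result is referenced per index, e.g. inside a provisioner (the attribute shown here is only illustrative):

command = "${element(data.template_file.curl.*.rendered, count.index)}"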

Commands

plan

terraform plan
terraform plan -var 'access_key=foo' -var 'secret_key=bar'
terraform plan -var 'amis={us-east-1 = "foo", us-west-2 = "bar"}'
terraform plan -out=my.plan

apply

terraform apply
terraform apply 'my.plan'

import

terraform import aws_instance.main i-abcd1234
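
Note that import only records the existing resource in the state; the matching resource block still has to be written by hand. Once it is, the plan should show no changes:

terraform plan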

state

  1. mv
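
For example, mv renames a resource in the state or moves it into a module (the addresses below are illustrative):

terraform state mv aws_instance.foo aws_instance.bar
terraform state mv aws_instance.foo module.servers.aws_instance.foo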

taint

terraform taint aws_instance.main
terraform taint -module=my_module aws_instance.main

force-unlock

Error locking state: Error acquiring the state lock: ConditionalCheckFailedException: The conditional request failed
        status code: 400, request id: <...>
Lock Info:
  ID:        abcdef01-ef34-abcd-5678-abc123def456
  Path:      <...>
  Operation: OperationTypePlan
  Who:       <...>
  Version:   0.9.8
  Created:   2017-06-13 11:00:23.886816353 +0000 UTC
  Info:

...
# Unlock using the ID shown in the lock info above
terraform force-unlock "abcdef01-ef34-abcd-5678-abc123def456"

Topics

Terraform files

Resources and Data Sources

The use cases of these are as follows:

You can provision servers by defining them as resources.
For specifying server configurations, you can reference existing security groups, VPCs, and the like by defining them as data sources.
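
For example (the names, AMI, and IDs below are illustrative), an instance is provisioned as a resource while an existing security group is looked up as a data source:

data "aws_security_group" "default" {
  name = "default"
}

resource "aws_instance" "web" {
  ami                    = "ami-12345678"
  instance_type          = "t2.micro"
  vpc_security_group_ids = ["${data.aws_security_group.default.id}"]
}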

State file

Variable files

Resource Addressing

[module path][resource spec]

# Module path:
module.A.module.B.module.C...

# Resource spec:
resource_type.resource_name[N]

resource "aws_instance" "web" {
  # ...
  count = 4
}

aws_instance.web[3]  # the fourth instance
aws_instance.web     # all of the instances

Interpolation

${self.private_ip_address}  # the resource's own attributes
${aws_instance.web.id}
${aws_instance.web.0.id}    # a specific instance when the resource is plural ('count' is set)
${aws_instance.web.*.id}    # this is a list
${module.foo.bar}           # an output from a module
... and many more, including built-in functions

Limitations

No dynamic interpolation over count

  1. Update

    • Interpolations can now be used for count, as long as the value is not computed (i.e., it is known before the graph walk).
    • Now

      count = "${length(var.other_list)}"

      is valid.

  2. Previous limitation

    Previously, you couldn't use interpolations that reference other resources to specify count, because of the way Terraform handles count.

    variable my_count {
      default = 10
    }
    
    resource "something" "foo" {
      count = "${var.my_count}"   # ok
    }
    
    resource "something" "bar" {
      count = "${something.foo.count}"  # error
    }
    

    We should definitely do this, the tricky part comes from the fact that count expansion is currently done statically, before the primary graph walk, which means we can't support "computed" counts right now. (A "computed" value in TF is one that's flagged as not known until all its dependencies are calculated.)

No integer variable
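
In the Terraform versions these notes appear to target (pre-0.12), variable types are limited to string, list, and map, so numbers are passed around as strings. A minimal sketch (the variable name is illustrative):

variable "instance_count" {
  default = 10   # stored as the string "10"
}

resource "aws_instance" "foo" {
  count = "${var.instance_count}"   # converted back to a number where one is expected
}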

No list of maps

variable "cluster_config" {
  type = "map"
}

resource "aws_elasticsearch_domain" "main" {
  cluster_config = "${var.cluster_config}"  # Not supported
}

Because the actual schema is:

"cluster_config": {
      Type:     schema.TypeList,
      Optional: true,
      Computed: true,
      Elem: &schema.Resource{
          Schema: map[string]*schema.Schema{

Modules

How-to

Make manual modifications to tfstates stored in a remote backend

# Download the tfstate from the remote backend
$ terraform state pull > terraform.tfstate

# Most terraform state commands modify './terraform.tfstate' by default
$ terraform import ADDR ID

# Push the modified tfstate back
$ terraform state push terraform.tfstate
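
Note that state push checks the lineage and serial of the state and refuses to overwrite a newer remote state; -force overrides this check.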

Migrate existing AWS infrastructure with terraforming

Generate documents automatically

This simple tool automatically generates a Markdown or JSON document based on the variable and output blocks.

Use multiple providers (or AWS regions)


provider "aws" {
  region = "ap-northeast-1"
}

provider "aws" {
  alias  = "test"
  region = "us-east-1"
}

# Set provider with alias
resource "aws_instance" "foo" {
  provider = "aws.test"

  # ...
}
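
Resources that don't set provider use the default (non-aliased) provider configuration.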