Terraform
Table of Contents
Overview
Reference
Configurations
backend
resource "aws_s3_bucket" "bucket" {
bucket = "my-bucket"
}
resource "aws_dynamodb_table" "locktable" {
name = "my-locktable"
read_capacity = 5
write_capacity = 5
hash_key = "LockID"
attribute {
name = "LockID" # Reserved name by terraform
type = "S"
}
}
terraform {
  backend "s3" {
    bucket         = "my-bucket"
    dynamodb_table = "my-locktable"
    key            = "my.tfstate"
    region         = "ap-northeast-1"
  }
}
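After defining or changing the backend, it has to be initialized before use; a typical workflow:
terraform init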
path
source_file = "${path.cwd}/main.py" # Path to the current working directory
source_file = "${path.root}/main.py" # Path to the root module
source_file = "${path.module}/main.py" # Path to the current module
depends_on
- depends_on is a parameter which is available on any resource
resource "aws_eip" "ip" {
instance = "${aws_instance.example.id}"
# Use 'depends_on' to make the dependency explicit.
# It's redundant in this case because this resource already depends on 'aws_instance.example'
# by referencing its id
depends_on = ["aws_instance.example"]
}
element
- If index is greater than len(list), index modulo len(list) is used
vars {
  ip = "${element(aws_instance.cluster.*.private_ip, count.index)}"
}
ignore_changes
There was a problem when I defined multiple aws_eips associated with aws_instances.
resource "aws_instance" "foo" {
count = 10
..
}
resource "aws_eip" "bar" {
count = 10
instance = "${element(aws_instance.foo.*.i, count.index}"
}
Terraform plans to change all the associations when I only change the count. To work around this, use ignore_changes:
resource "aws_eip" "bar" {
count = 10
instance = "${element(aws_instance.foo.*.i, count.index}"
lifecycle {
ignore_changes = ["instance"]
}
}
null_resource
- Allows running provisioners that are not directly associated with a single existing resource
resource "null_resource" "docker_run" {
count = "${var.count}"
triggers {
cluster_instance_ids = "${join(",", aws_instance.cluster.*.id)}"
}
connection {
type = "ssh"
user = "ubuntu"
host = "${element(aws_instance.cluster.*.private_ip, count.index)}"
private_key = "${file(var.key_path)}"
}
provisioner "remote-exec" {
inline = [
"sudo docker stop etcd || true",
"sudo docker rm -f etcd || true",
"${element(data.template_file.docker_run_command.*.rendered, count.index)}",
]
}
}
local-exec
provisioner "local-exec" {
command = "run.sh ${var.args}"
}
remote-exec
connection {
  type        = "ssh"
  user        = "ubuntu"
  host        = "${aws_instance.main.private_ip}" # can be omitted when running within the instance
  private_key = "${file(var.key_path)}"
}

provisioner "remote-exec" {
  inline = [
    "curl -sSL https://get.docker.com/ | sh",
  ]
}
archive_file
- Useful for provisioning resources that require zip files
data "archive_file" "code" {
type = "zip"
source_file = "${path.module}/main.py"
output_path = "${path.module}/lambda.zip"
}
resource "aws_lambda_function" "main" {
function_name = "foo"
filename = "${data.archive_file.code.output_path}"
source_code_hash = "${data.archive_file.code.output_base64sha256}"
...
}
template_file
- Use $$ in template to escape $
data "template_file" "curl" {
count = "${var.count}"
template = "curl http://$${ip}"
vars {
ip = "${element(aws_instance.cluster.*.private_ip, count.index)}"
}
}
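The rendered results can then be referenced like any other attribute; a minimal sketch with a hypothetical output:
output "curl_commands" {
  value = "${join("\n", data.template_file.curl.*.rendered)}"
}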
Commands
plan
terraform plan
terraform plan -var 'access_key=foo' -var 'secret_key=bar'
terraform plan -var 'amis={us-east-1 = "foo", us-west-2 = "bar"}'
terraform plan -out=my.plan
apply
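A plan file saved with -out can be applied directly:
terraform apply
terraform apply my.plan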
import
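Maps an existing object into the state; a sketch with a hypothetical instance ID:
terraform import aws_instance.main i-abcd1234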
state
mv
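Renames a resource address or moves it into a module; a sketch with hypothetical addresses:
terraform state mv aws_instance.foo aws_instance.bar
terraform state mv aws_instance.foo module.app.aws_instance.foo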
taint
- You can taint resources within modules (see the sketch below)
- It seems that tainting a whole module is currently impossible
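A sketch of tainting a resource inside a module (module and resource names are hypothetical):
terraform taint -module=app aws_instance.foo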
force-unlock
- LockID will be printed out when commands fail:
Error locking state: Error acquiring the state lock: ConditionalCheckFailedException: The conditional request failed
status code: 400, request id: <...>
Lock Info:
ID: abcdef01-ef34-abcd-5678-abc123def456
Path: <...>
Operation: OperationTypePlan
Who: <...>
Version: 0.9.8
Created: 2017-06-13 11:00:23.886816353 +0000 UTC
Info:
...
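Pass the printed ID to release the lock:
terraform force-unlock abcdef01-ef34-abcd-5678-abc123def456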
Topics
Terraform files
- All .tf files are loaded
- .tf files are declarative, so the order of loading files doesn't matter, except for override files
- Override files are .tf files named override.tf or {name}_override.tf
- Override files are loaded last, in alphabetical order
- Configurations in override files are merged into the existing configuration, not appended (see the example below)
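For illustration, an override merging over a base definition (all values are hypothetical):
# main.tf
resource "aws_instance" "web" {
  ami           = "ami-12345678"
  instance_type = "t2.micro"
}

# main_override.tf
resource "aws_instance" "web" {
  instance_type = "t2.large" # replaces only this argument; ami is kept
}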
Resources and Data Sources
- Resources are infrastructure managed by terraform
- Data sources are not managed by terraform
The use case is as follows: you can provision servers by defining them as resources, and for specifying server configurations you can reference existing security groups, VPCs, and the like by defining them as data sources.
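A minimal sketch of the distinction (the subnet ID is hypothetical):
# Data source: an existing object that terraform only reads
data "aws_subnet" "existing" {
  id = "subnet-abcd1234"
}

# Resource: an object terraform creates, updates, and destroys
resource "aws_instance" "app" {
  subnet_id = "${data.aws_subnet.existing.id}"
  # ...
}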
State file
- State about the real managed infrastructure, stored in terraform.tfstate by default
- Formatted in JSON
- While terraform files describe the "to be" state, the state file describes the "as is" state
- State is refreshed before performing most operations, like terraform plan and terraform apply
- Basic modifications can be done through terraform state [sub] commands
- Importing existing infrastructure can be done using terraform import
- Importing is related to resources, not data sources
- Which means terraform can destroy the existing infrastructure once it is imported
Variable files
- A file named terraform.tfvars is automatically loaded
- Use the -var-file flag to specify other .tfvars files, as shown below
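For example, with a hypothetical prod.tfvars:
# terraform.tfvars is loaded automatically
$ terraform plan
# other .tfvars files must be passed explicitly
$ terraform plan -var-file=prod.tfvars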
Resource Addressing
[module path][resource spec]
module.A.module.B.module.C...  # module path
resource_type.resource_name[N] # resource spec
resource "aws_instance" "web" {
# ...
count = 4
}
aws_instance.web[3] # the fourth instance
aws_instance.web    # all four instances
Interpolation
${self.private_ip_address} # attributes of their own
${aws_instance.web.id}
${aws_instance.web.0.id} # a specific one when the resource is plural (a 'count' attribute exists)
${aws_instance.web.*.id} # this is a list
${module.foo.bar} # outputs from module
... and many more, including functions
Limitations
No dynamic interpolation over count
Update
- Interpolations other than computed values can be used for count. Now count = "${length(var.other_list)}" is valid.
Previous limitation
For now, you can't use interpolation referencing other resources to specify count, because of the way that terraform handles count.
variable "my_count" {
  default = 10
}

resource "something" "foo" {
  count = "${var.my_count}" # ok
}

resource "something" "bar" {
  count = "${something.foo.count}" # error
}
We should definitely do this, the tricky part comes from the fact that count expansion is currently done statically, before the primary graph walk, which means we can't support "computed" counts right now. (A "computed" value in TF is one that's flagged as not known until all its dependencies are calculated.)
No integer variable
No list of maps
- The type of most mapping arguments is actually a list of maps
variable "cluster_config" {
type = "map"
}
resource aws_elasticsearch_domain "main" {
cluster_config = "${var.cluster_config}" # Not supported
}
Because the actual schema is:
"cluster_config": {
Type: schema.TypeList,
Optional: true,
Computed: true,
Elem: &schema.Resource{
Schema: map[string]*schema.Schema{
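A common workaround is to pass scalar variables and write the nested block in place; a sketch with a hypothetical variable:
variable "es_instance_type" {
  default = "t2.small.elasticsearch"
}

resource "aws_elasticsearch_domain" "main" {
  domain_name = "main"

  cluster_config {
    instance_type = "${var.es_instance_type}"
  }
}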
Modules
- When you run terraform apply, the current working directory holding the Terraform files is called the root module.
- With local file paths, Terraform will create a symbolic link to the original directory, so any changes are automatically available (a sketch follows).
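A minimal sketch of calling a module through a local file path (the module name and its cidr input are hypothetical):
module "network" {
  source = "./modules/network" # local path; symlinked, so edits show up immediately
  cidr   = "10.0.0.0/16"
}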
How-to
Make manual modifications on remote-backed tfstates
# Download the remote tfstate
$ terraform state pull > terraform.tfstate
# Most terraform state commands modify './terraform.tfstate' by default
$ terraform import ADDR ID
# Push the modified tfstate back
$ terraform state push terraform.tfstate
Migrate existing AWS Infra with terraforming
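Assuming terraforming is installed and AWS credentials are configured, typical usage looks like this (the s3 subcommand is one example):
# Generate .tf definitions for existing S3 buckets
$ terraforming s3 > s3.tf
# Generate the corresponding tfstate
$ terraforming s3 --tfstate > terraform.tfstate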
Generate documents automatically
This simple tool automatically generates a markdown or JSON document based on variable and output blocks.
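Such tools read descriptions from blocks like these (a hypothetical example):
variable "instance_type" {
  description = "EC2 instance type to launch"
  default     = "t2.micro"
}

output "private_ip" {
  description = "Private IP of the instance"
  value       = "${aws_instance.main.private_ip}"
}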
Use multiple providers (or AWS regions)
provider "aws" {
region = "ap-northeast-1"
}
provider "aws" {
alias = "test"
region = "us-east-1"
}
# Set provider with alias
resource "aws_instance" "foo" {
provider = "aws.test"
# ...
}