Which Way to Write IAM Policy Documents in Terraform

There are many ways to write IAM policy documents in terraform. In this article, we’ll cover each pattern and explain why we do or don’t use it.

For each pattern, we’ll create an example policy using the last statement of this AWS example. It’s a good test case because it references both an S3 bucket name and an IAM user name, which we’ll handle differently.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "s3:*",
            "Resource": [
                "arn:aws:s3:::bucket-name/home/${aws:username}",
                "arn:aws:s3:::bucket-name/home/${aws:username}/*"
            ]
        }
    ]
}


Inline jsonencode() Function

This is what we use. You’ll also see it in HashiCorp examples.

resource "aws_s3_bucket" "test" {
  bucket_prefix = "test"
  acl           = "private"
}

resource "aws_iam_policy" "jsonencode" {
  name = "jsonencode"
  path = "/"

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Action = [
          "s3:*",
        ]
        Effect = "Allow"
        Resource = [
          "${aws_s3_bucket.test.arn}/home/$${aws:username}",
          "${aws_s3_bucket.test.arn}/home/$${aws:username}/*"
        ]
      },
    ]
  })
}
  • ${aws_s3_bucket.test.arn} interpolates the ARN of the bucket we’re granting access to.
  • $${aws:username} escapes interpolation to render a literal ${aws:username} string. ${aws:username} is an AWS IAM policy variable. IAM’s policy variable syntax collides with terraform’s string interpolation syntax. We have to escape it, otherwise terraform expects a variable named aws:username.
  • If you need it, the policy JSON can be referenced with aws_iam_policy.jsonencode.policy, as in the sketch below.
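
For example, you could surface the rendered JSON through an output (a minimal sketch; the output name is our own):

output "jsonencode_policy_json" {
  value = aws_iam_policy.jsonencode.policy
}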

Why we like this pattern:

  • It declares everything in one resource.
  • The policy is written in HCL. Terraform handles the conversion to JSON.
  • There are no extra lines or files like there are in the following patterns. It only requires the lines to declare the resource and the lines that will go into the policy.

aws_iam_policy_document Data Source

The next-best option is the aws_iam_policy_document data source. It’s 95% as good as jsonencode().

resource "aws_s3_bucket" "test" {
  bucket_prefix = "test"
  acl           = "private"
}

data "aws_iam_policy_document" "test" {
  statement {
    actions = [
      "s3:*",
    ]
    resources = [
      "${aws_s3_bucket.test.arn}/home/&{aws:username}",
      "${aws_s3_bucket.test.arn}/home/&{aws:username}/*",
    ]
  }
}

resource "aws_iam_policy" "aws_iam_policy_document" {
  name = "aws_iam_policy_document"
  path = "/"

  policy = data.aws_iam_policy_document.test.json
}
  • The bucket interpolation works the same as in the jsonencode() pattern above.
  • &{aws:username} is an alternate way to escape interpolation that’s specific to this data source. See the note in its docs. Like above, it renders a literal ${aws:username} string. You can still use $${} interpolation in these data sources. The &{} syntax is just another option.

Why we think this is only 95% as good as jsonencode():

  • It requires two resources instead of one.
  • It requires several more lines of code.
  • The different options for escaping interpolation can get mixed together in one declaration, which makes for messy code (see the sketch after this list).
  • The alternate interpolation escape syntax is specific to this data source. If it’s used as a reference when writing other code, it can cause surprises.
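
Here’s the kind of mixing we mean (a contrived sketch; both escape syntaxes render the same literal ${aws:username}):

data "aws_iam_policy_document" "mixed" {
  statement {
    actions = [
      "s3:*",
    ]
    resources = [
      "${aws_s3_bucket.test.arn}/home/&{aws:username}",
      "${aws_s3_bucket.test.arn}/home/$${aws:username}/*",
    ]
  }
}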

These aren’t big problems. We’ve used this resource plenty of times without issues. It’s a fine way to render policies, we just think the jsonencode() pattern is a little cleaner.

Template File

Instead of writing the policy directly in one of your .tf files, you can put it in a .tpl template file and render it later with templatefile(). If you don’t need any variables, you could use file() instead of templatefile().
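
For instance, a fully static policy file could be loaded like this (a sketch; the file name is hypothetical):

resource "aws_iam_policy" "static" {
  name = "static"
  path = "/"

  policy = file("${path.module}/static_policy.json")
}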

First, you need a template. We’ll call ours test_policy_jsonencode.tpl.

${jsonencode(
  {
    Version = "2012-10-17",
    Statement = [
      {
        Effect = "Allow",
        Action = "s3:*",
        Resource = [
          "${bucket}/home/$${aws:username}",
          "${bucket}/home/$${aws:username}/*"
        ]
      }
    ]
  }
)}

Then, you can render the template into your resources.

resource "aws_s3_bucket" "test" {
  bucket_prefix = "test"
  acl           = "private"
}

resource "aws_iam_policy" "template_file_jsonencode" {
  name = "template_file_jsonencode"
  path = "/"

  policy = templatefile(
    "${path.module}/test_policy_jsonencode.tpl",
    { bucket = aws_s3_bucket.test.arn }
  )
}
  • The interpolation and escape syntax is the same as in the jsonencode() example above.
  • The jsonencode() call wrapped around the contents of the .tpl file allows us to write HCL instead of JSON.
  • You could write a .tpl file containing raw JSON instead of wrapping jsonencode() around HCL, but then you’d be mixing another language into your module. We recommend standardizing on HCL and letting terraform convert to JSON.
  • templatefile() requires you to explicitly pass every variable you want to interpolate in the .tpl file, like bucket in this example.

Why we don’t use this pattern:

  • It splits the policy declaration across two files. We find this makes modules harder to read.
  • It requires two variable references for every interpolation. One to pass it through to the template, and another to resolve it into the policy. These are tedious to maintain.

In the past, we used these for long policies to help keep our .tf files short. Today, we use the jsonencode() pattern and declare long aws_iam_policy resources in dedicated .tf files. That keeps the policy separate but avoids the overhead of passing through variables.

Heredoc Multi-Line String

You can use heredoc multi-line strings to construct JSON, but the HashiCorp docs specifically say not to do this. Since they do, we won’t include an example of building policy JSON with them. If you have policies rendered in blocks like this:

<<EOT
{
    "Version": "2012-10-17",
    ...
}
EOT

We recommend replacing them with the jsonencode() pattern.

Happy automating!


Allowing AWS IAM Users to Manage their Passwords, Keys, and MFA

We do these three things for IAM users that belong to humans:

  • Set a console access password and rotate it regularly. We don’t manage resources in the console, but its graphical UI is handy for inspection and diagnostics.
  • Create access keys and rotate them regularly. We use these with aws-vault to run things like terraform.
  • Enable a virtual Multi-Factor Authentication (MFA) device. AWS accounts are valuable resources. It’s worthwhile to protect them with a second factor of authentication.

There’s much more to managing IAM users, like setting password policies and enforcing key rotation. These are just three good practices we follow.

Users with the AdministratorAccess policy can do all three, but that’s a lot of access. Often, we don’t need that much. Maybe we’re just doing investigation and ReadOnlyAccess is enough. Maybe users have limited permissions and instead switch into roles with elevated privileges (more on this in a future article). In cases like those, we need a policy that allows users to manage their own authentication. Here’s what we use.

This article is about enabling human operators to responsibly manage their accounts. Service accounts used by automation and security policy enforcement are both topics for future articles.


Console Access Policy Statements

This one is easy. The AWS docs have a limited policy that works.

{
    "Sid": "GetAccountPasswordPolicy",
    "Effect": "Allow",
    "Action": "iam:GetAccountPasswordPolicy",
    "Resource": "*"
},
{
    "Sid": "ChangeSelfPassword",
    "Effect": "Allow",
    "Action": "iam:ChangePassword",
    "Resource": "arn:aws:iam::[account id without hyphens]:user/${aws:username}"
}

Access Key Policy Statements

This one is also easy. The AWS docs have a limited policy that works. We made a small tweak.

{
    "Sid": "ManageSelfKeys",
    "Effect": "Allow",
    "Action": [
        "iam:UpdateAccessKey",
        "iam:ListAccessKeys",
        "iam:GetUser",
        "iam:GetAccessKeyLastUsed",
        "iam:DeleteAccessKey",
        "iam:CreateAccessKey"
    ],
    "Resource": "arn:aws:iam::[account id without hyphens]:user/${aws:username}"
}
  • The AWS policy uses * in the account ID component of the ARN. We like to set the account ID so we’re granting the most specific access we can. Security scanning tools also often check for * characters, and removing them reduces the number of flags.
  • Like above, ${aws:username} is an IAM policy variable. See links there for how to handle this in terraform.
  • We changed the sid from “ManageOwn” to “ManageSelf” so it doesn’t sound like it allows taking ownership of keys for other users.

MFA Device Policy Statements

This one was trickier. We based our policy on an example from the AWS docs, but we made several changes.

{
    "Sid": "ManageSelfMFAUserResources",
    "Effect": "Allow",
    "Action": [
        "iam:ResyncMFADevice",
        "iam:ListMFADevices",
        "iam:EnableMFADevice",
        "iam:DeactivateMFADevice"
    ],
    "Resource": "arn:aws:iam::[account id without hyphens]:user/${aws:username}"
},
{
    "Sid": "ManageSelfMFAResources",
    "Effect": "Allow",
    "Action": [
        "iam:DeleteVirtualMFADevice",
        "iam:CreateVirtualMFADevice"
    ],
    "Resource": "arn:aws:iam::[account id without hyphens]:mfa/${aws:username}"
}
  • Like we talked about above, our goal is to enable users to follow good practices. We selected statements that enable those practices, not ones that enforce them.
  • The AWS example included arn:aws:iam::*:mfa/* in the resources for iam:ListMFADevices. According to the AWS docs for the IAM service’s actions, this permission only supports user in the resources list. We removed the mfa resource.
  • Also according to the AWS docs for the IAM service’s actions, iam:DeleteVirtualMFADevice and iam:CreateVirtualMFADevice support different resources from iam:ResyncMFADevice and iam:EnableMFADevice. We split them into separate statements that limit each one to its supported resources. This probably doesn’t change the access level, but our routine is to limit resource lists as much as possible. That helps make it clear to future readers what the policy enables.
  • Like above, ${aws:username} is an IAM policy variable. See links there for how to handle this in terraform.
  • We continued our convention from above of naming sids for “self” to indicate they’re limited to the user who has the policy.

Complete Policy Document

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "GetAccountPasswordPolicy",
            "Effect": "Allow",
            "Action": "iam:GetAccountPasswordPolicy",
            "Resource": "*"
        },
        {
            "Sid": "ChangeSelfPassword",
            "Effect": "Allow",
            "Action": "iam:ChangePassword",
            "Resource": "arn:aws:iam::[account id without hyphens]:user/${aws:username}"
        },
        {
            "Sid": "ManageSelfKeys",
            "Effect": "Allow",
            "Action": [
                "iam:UpdateAccessKey",
                "iam:ListAccessKeys",
                "iam:GetUser",
                "iam:GetAccessKeyLastUsed",
                "iam:DeleteAccessKey",
                "iam:CreateAccessKey"
            ],
            "Resource": "arn:aws:iam::[account id without hyphens]:user/${aws:username}"
        },
        {
            "Sid": "ManageSelfMFAUserResources",
            "Effect": "Allow",
            "Action": [
                "iam:ResyncMFADevice",
                "iam:ListMFADevices",
                "iam:EnableMFADevice",
                "iam:DeactivateMFADevice"
            ],
            "Resource": "arn:aws:iam::[account id without hyphens]:user/${aws:username}"
        },
        {
            "Sid": "ManageSelfMFAResources",
            "Effect": "Allow",
            "Action": [
                "iam:DeleteVirtualMFADevice",
                "iam:CreateVirtualMFADevice"
            ],
            "Resource": "arn:aws:iam::[account id without hyphens]:mfa/${aws:username}"
        }
    ]
}

User Guide

  1. Replace [account id without hyphens] with the ID for your account in the policy above.
  2. Attach the policy to users (we like to do this through groups; see the sketch after this list).
  3. Tell users to edit their authentication from My Security Credentials in the user dropdown. This policy won’t let them access their user through the IAM console. My Security Credentials may not appear in the dropdown if the user has switched into a role.
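
If you manage IAM with terraform, attaching through a group might look like this (a sketch; the resource names are ours, and the hypothetical JSON file holds the policy document above):

resource "aws_iam_group" "humans" {
  name = "humans"
}

resource "aws_iam_policy" "manage_self_credentials" {
  name   = "manage-self-credentials"
  policy = file("${path.module}/manage_self_credentials.json")
}

resource "aws_iam_group_policy_attachment" "manage_self_credentials" {
  group      = aws_iam_group.humans.name
  policy_arn = aws_iam_policy.manage_self_credentials.arn
}

Loading the document with file() keeps terraform from trying to interpolate the ${aws:username} policy variables.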

Happy automating!


Creating Terraform Resources in Multiple Regions

In most terraform modules, resources are created in one region using one provider declaration.

provider "aws" {
  region = "us-west-1"
}

data "aws_region" "primary" {}

resource "aws_ssm_parameter" "param" {
  name  = "/${data.aws_region.primary.name}/param"
  type  = "String"
  value = "notavalue"
}

Sometimes, you need to create resources in multiple regions. Maybe the module has to support disaster recovery to an alternate region. Maybe one of the AWS services you’re using doesn’t support your primary region. When this article was written, AWS Certificate Manager certificates had to be created in us-east-1 to work with Amazon CloudFront. In cases like these, terraform supports targeting multiple regions.

We recommend using this feature cautiously. Resources should usually be created in the same region. If you’re sure your module should target multiple, here’s how to do it.

  1. Declare a provider for the alternate region. You’ll now have two providers. The original one for your primary region, and the new one for your alternate.
  2. Give the new provider an alias.
  3. Declare resources that reference the new alias in their provider attribute with the format aws.[alias]. This also works for data sources, which is handy for dynamically interpolating region names into resource properties like their name.
provider "aws" {
  alias  = "alternate_region"
  region = "us-west-2"
}

data "aws_region" "alternate" {
  provider = aws.alternate_region
}

resource "aws_ssm_parameter" "alt_param" {
  provider = aws.alternate_region

  name  = "/${data.aws_region.alternate.name}/param"
  type  = "String"
  value = "notavalue"
}

terraform plan doesn’t show what regions it’ll create resources in, so this example interpolates the region name into the resource name to make it visible.

...
Terraform will perform the following actions:

  # aws_ssm_parameter.alt_param will be created
  + resource "aws_ssm_parameter" "alt_param" {
      + arn       = (known after apply)
      + data_type = (known after apply)
      + id        = (known after apply)
      + key_id    = (known after apply)
      + name      = "/us-west-2/param"
      + tags_all  = (known after apply)
      + tier      = "Standard"
      + type      = "String"
      + value     = (sensitive value)
      + version   = (known after apply)
    }
...

To confirm the resources ended up in the right places, we checked each region’s parameters in the AWS web console: one in us-west-1 and another in us-west-2, as expected.

Happy automating!


Terraform Map and Object Patterns

Terraform variables implement both a map and an object type. They mostly work the same. The docs even say, “The distinctions are only useful when restricting input values for a module or resource.” They can be defined and accessed in several ways. There’s some automatic conversion back and forth between them.

This article distills these details into patterns you can copy and paste, while highlighting some of the subtleties.

Here’s the main detail you need:

Maps contain many things of one type. Objects contain a specific set of things of many types.

This is a simplification. It doesn’t cover all the behavior of terraform’s maps and objects (like loss that can happen in conversions back and forth between them), but it’s enough for the patterns you’re likely to need day to day.


Style

Key Names

You can quote the key names in map definitions.

variable "quoted_map" {
  default = {
    "key_1" = "value_1"
    "key_2" = "value_2"
  }
}

But you don’t have to.

variable "unquoted_map" {
  default = {
    key_1 = "value_1"
    key_2 = "value_2"
  }
}

We prefer the unquoted format, partly because the syntax is lighter and partly because it only works when key names are valid identifiers, so it forces us to use ones that are. When the key names are identifiers, the interior of a map looks similar to the rest of our terraform variables, and we can also use dotted notation to reference the values.

Commas

You can separate key/value pairs with commas.

variable "comma_map" {
  default = {
    key_1 = "value_1",
    key_2 = "value_2",
  }
}

But you don’t have to.

variable "no_comma_map" {
  default = {
    key_1 = "value_1"
    key_2 = "value_2"
  }
}

We prefer no commas because the syntax is lighter.

References

You can reference values by attribute name with quotes and square brackets.

output "brackets" {
  value = var.unquoted_map["key_2"]
}

But you can also use the dotted notation.

output "dots" {
  value = var.unquoted_map.key_2
}

We prefer the dotted notation because the syntax is lighter. This also requires the key names to be identifiers, but they will be if you use the unquoted pattern for defining them.

Patterns

  • Each pattern implements a map containing a value_2 string that we’ll read into an output.
  • Examples set values with variable default values, but they work the same with tfvars, etc.
  • The types of values in these examples are known, so they’re set explicitly. There’s also an any keyword for cases where you’re not sure (see the sketch after this list). We recommend explicit types whenever possible.
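
For example, any lets terraform infer a single value type when you can’t pin one down (a sketch; the variable name is ours):

variable "any_map" {
  default = {
    key_1 = "value_1"
    key_2 = 2
  }
  # any makes terraform infer one value type for the whole map; here
  # the number converts and the variable becomes a map of strings
  type = map(any)
}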

Untyped Flat Map

This is the simplest pattern. We don’t recommend it. Use a typed map instead.

variable "untyped_flat_map" {
  default = {
    key_1 = "value_1"
    key_2 = "value_2"
  }
}
output "untyped_flat_map" {
  value = var.untyped_flat_map.key_2
}

Typed Flat Map

This is sufficient for simple cases.

variable "typed_flat_map" {
  default = {
    key_1 = "value_1"
    key_2 = "value_2"
  }
  type = map(string)
}
output "typed_flat_map" {
  value = var.typed_flat_map.key_2
}

With the type set, if a module mistakenly passes a value of a type our code wasn’t expecting, terraform throws an error.

variable "typed_flat_map_bad_value" {
  default = {
    key_1 = []
    key_2 = "value_2"
  }
  type = map(string)
}
│ Error: Invalid default value for variable
│ 
│   on main.tf line 49, in variable "typed_flat_map_bad_value":
│   49:   default = {
│   50:     key_1 = []
│   51:     key_2 = "value_2"
│   52:   }
│ 
│ This default value is not compatible with the variable's type constraint: element "key_1": string required.

Except when it doesn’t: if we set key_1 to a number or boolean, it’ll be automatically converted to a string (see the sketch below). This is generic terraform behavior, not something specific to maps.
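
For example, this passes validation even though key_1 isn’t a string (a sketch; terraform converts the 5 to "5"):

variable "typed_flat_map_converted_value" {
  default = {
    key_1 = 5
    key_2 = "value_2"
  }
  type = map(string)
}
output "typed_flat_map_converted_value" {
  # renders as the string "5"
  value = var.typed_flat_map_converted_value.key_1
}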

Untyped Nested Map

We don’t recommend this, either. Use a typed nested map instead.

variable "untyped_nested_map" {
  default = {
    key_1 = "value_1"
    key_2 = {
      nested_key_1 = "value_2"
    }
  }
}
output "untyped_nested_map" {
  value = var.untyped_nested_map.key_2.nested_key_1
}

Typed Nested Map, Values are Same Type

Like the flat map, this pattern protects us against types of inputs the code isn’t written to handle. This only works when the values of the keys within each map all share the same type.

variable "typed_nested_map_values_same_type" {
  default = {
    key_1 = {
      nested_key_1 = "value_1"
    }
    key_2 = {
      nested_key_2 = "value_2"
    }
  }
  type = map(map(string))
}
output "typed_nested_map_values_same_type" {
  value = var.typed_nested_map_values_same_type.key_2.nested_key_2
}

Typed Nested Map, Values are Different Types

This is where the differences between maps and objects start to show up in implementations. Remembering our distillation of the docs from the start:

Maps contain many things of one type. Objects contain a specific set of things of many types.

variable "typed_nested_map_values_different_types" {
  default = {
    key_1 = "value_1"
    key_2 = {
      nested_key_1 = "value_2"
    }
  }
  type = object({
    key_1 = string,
    key_2 = map(string)
  })
}
output "typed_nested_map_values_different_types" {
  value = var.typed_nested_map_values_different_types.key_2.nested_key_1
}

In this nested map, one value is a string and the other is a map. That means we need an object to define the constraint. We can’t do it with just a map, because maps contain one type of value and we need two.

Flexible Number of Typed Nested Maps, Values are Different Types

This is the most complex case. It lets us read in a map that has an arbitrary number of nested maps like the ones above.

variable "flexible_number_of_typed_nested_maps" {
  default = {
    map_1 = {
      key_1 = "value_1"
      key_2 = {
        nested_key_1 = "value_2"
      }
    }
    map_2 = {
      key_1 = "value_3"
      key_2 = {
        nested_key_1 = "value_4"
      }
    }
  }
  type = map(
    object({
      key_1 = string,
      key_2 = map(string)
    })
  )
}
output "flexible_number_of_typed_nested_maps" {
  value = var.flexible_number_of_typed_nested_maps.map_1.key_2.nested_key_1
}

We could add a map_3 (or as many more as we wanted) without getting type errors. Again remembering our simplification:

Maps contain many things of one type. Objects contain a specific set of things of many types.

Inside, we use objects because their keys have values that are different types. Outside, we use a map because we want an arbitrary number of those objects.

The inside objects all have the same structure, so they can be defined with the same type expression. That satisfies the requirement that everything in a map has the same type.

Happy automating!


Terratest Good Practices: Table-Driven Tests

Hello!

Terratest is a common way to run integration tests against terraform modules. I use it on many of the modules I develop. If you haven’t used it before, check out its quickstart for an example of how it works.

For simple cases, the pattern in that quickstart is all you need. But, bigger modules mean more tests and pretty soon you can end up swimming in all the cases you have to define. Go has a tool to help: table-driven tests. Here’s what you need to get them set up for terratest (Dave Cheney also has a great article on them if you want to go deeper).

First, let’s look at a couple simple tests that aren’t table-driven:

package test

import (
	"testing"

	"github.com/gruntwork-io/terratest/modules/terraform"
	"github.com/stretchr/testify/assert"
)

func TestOutputsExample(t *testing.T) {
	terraformOptions := &terraform.Options{
		TerraformDir: ".",
	}
	defer terraform.Destroy(t, terraformOptions)
	terraform.InitAndApply(t, terraformOptions)

	one := terraform.Output(t, terraformOptions, "One")
	assert.Equal(t, "First.", one)
	two := terraform.Output(t, terraformOptions, "Two")
	assert.Equal(t, "Second.", two)
}

Easy. Just repeat the calls to terraform.Output and assert.Equal for each output you want to check. Not a problem, unless you have dozens or hundreds of tests. Then you end up with a lot of duplication.
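
For reference, the module under test would need outputs matching the assertions, something like this (an assumption; the article doesn’t show the module itself):

output "One" {
  value = "First."
}

output "Two" {
  value = "Second."
}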

You can de-duplicate the repeated calls by defining your test cases in a slice of structs (the “table”) and then looping over them. Like this:

package test

import (
	"testing"

	"github.com/gruntwork-io/terratest/modules/terraform"
	"github.com/stretchr/testify/assert"
)

func TestOutputsTableDrivenExample(t *testing.T) {
	terraformOptions := &terraform.Options{
		TerraformDir: ".",
	}
	defer terraform.Destroy(t, terraformOptions)
	terraform.InitAndApply(t, terraformOptions)

	outputTests := []struct {
		outputName    string
		expectedValue string
	}{
		{"One", "First."},
		{"Two", "Second."},
	}

	for _, testCase := range outputTests {
		outputValue := terraform.Output(t, terraformOptions, testCase.outputName)
		assert.Equal(t, testCase.expectedValue, outputValue)
	}
}

Now, there’s just one statement each for terraform.Output and assert.Equal. With only two tests it actually takes a bit more code to use a table, but once you have a lot of tests it’ll save you.

That’s it! That’s all table-driven tests are. Just a routine practice in Go that works as well in terratest as anywhere.

Happy testing,

Adam


CloudWatch Logs: Preventing Orphaned Log Groups

Hello!

When you need to publish logs to CloudWatch (e.g. from a lambda function), you need an IAM role with access to CloudWatch. It’s tempting to use a simple policy like the one in the AWS docs. You might write a CloudFormation template like this:

# Don't use this!
 
AWSTemplateFormatVersion: '2010-09-09'
 
Resources:
  DemoRole:
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Version: '2012-10-17'
        Statement:
        - Effect: Allow
          Principal:
            Service:
            - lambda.amazonaws.com
          Action:
          - sts:AssumeRole
      Path: '/'
      Policies:
      - PolicyName: lambda-logs
        PolicyDocument:
          Version: '2012-10-17'
          Statement:
          - Effect: Allow
            Action:
            - logs:CreateLogGroup
            - logs:CreateLogStream
            - logs:DescribeLogStreams
            - logs:PutLogEvents
            Resource: arn:aws:logs:*:*:*
 
  DemoFunction:
    Type: AWS::Lambda::Function
    Properties:
      Code:
        ZipFile: |
          def handler(event, context):
              print('Demo!')
      FunctionName: demo-function
      Handler: index.handler
      Role: !GetAtt DemoRole.Arn
      Runtime: python3.7

Obviously, the role is too permissive: arn:aws:logs:*:*:* allows creating and writing to any log group in the account.

But, there’s another problem: it grants logs:CreateLogGroup.

Here’s what happens:

  1. Launch a stack from this template
  2. Run demo-function
  3. Because we granted it permission, demo-function automatically creates the /aws/lambda/demo-function log group in CloudWatch Logs
  4. Delete the stack
  5. CloudFormation doesn’t delete the /aws/lambda/demo-function log group

CloudFormation doesn’t know about the function’s log group because it didn’t create that group, so it doesn’t know anything needs to be deleted. Unless an operator deletes it manually, it’ll live in the account forever.

It seems like we can fix that by having CloudFormation create the log group:

DemoLogGroup:
  Type: AWS::Logs::LogGroup
  Properties:
    LogGroupName: /aws/lambda/demo-function
    RetentionInDays: 30

But, if the function still has logs:CreateLogGroup, I’ve seen race conditions where the stack deletes the log group before it deletes the lambda function, and the function recreates the group before the function itself is deleted.

Plus, there aren’t any errors if you forget to define the group in CloudFormation. The stack launches. The lambda function runs. We even get logs; they’ll just be orphaned if we ever delete the stack.

That’s why it’s a problem to grant logs:CreateLogGroup. It allows lambda (or EC2 or whatever else is logging) to write to unmanaged groups.

All resources in AWS should be managed by CloudFormation (or terraform or whatever resource manager you use). Including log groups. So, you should never grant logs:CreateLogGroup except to your resource manager. Nothing else should need that permission.

And that’s the other reason: lambda doesn’t need logs:CreateLogGroup because it should be logging to groups that already exist. You shouldn’t grant permissions that aren’t needed.

Here’s the best practice: always manage your CloudWatch Logs groups and never grant permission to create those groups except to your resource manager.
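
If terraform happens to be your resource manager, the managed group and a narrowed policy might look like this (a sketch; the role reference is hypothetical, and the ARN pattern matches the demo function above):

resource "aws_cloudwatch_log_group" "demo_function" {
  name              = "/aws/lambda/demo-function"
  retention_in_days = 30
}

resource "aws_iam_role_policy" "lambda_logs" {
  name = "lambda-logs"
  role = aws_iam_role.demo.id # hypothetical role, not shown here

  # No logs:CreateLogGroup -- the function can only write to the managed group.
  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Effect = "Allow"
        Action = [
          "logs:CreateLogStream",
          "logs:DescribeLogStreams",
          "logs:PutLogEvents"
        ]
        Resource = "${aws_cloudwatch_log_group.demo_function.arn}:*"
      }
    ]
  })
}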

Happy automating!

Adam


Terraform: Get Data with Python

Sometimes I have data I need to assemble during terraform’s apply phase, and I like to use Python helper scripts to do that. Awesomely, terraform supports using Python to populate an external data source:

data "external" "cars_count" {
  program = ["python", "${path.module}/get_cool_data.py"]
 
  query = {
    thing_to_count = "cars"
  }
}
 
output "cars_count" {
  value = "${data.external.cars_count.result.cars}"
}

A slick, easy way to drop out of terraform and use Python to grab what you need (although it can get you into trouble if you abuse it).

The Python script has to follow a protocol that defines formats, error handling, etc. It’s minimal but fiddly, and if you need more than one external data script, it’s better to modularize than copy and paste. So I wrote a pip-installable decorator that implements the protocol for you. The source is also an example you can follow if you’d rather implement it yourself than add a dependency. Here’s how you use it:

from terraform_external_data import terraform_external_data
 
@terraform_external_data
def get_cool_data(query):
    return {query['thing_to_count']: '3'}
 
if __name__ == '__main__':
    get_cool_data()

It’s available on PyPI, just pip install terraform_external_data.

Happy terraforming!

Adam
