How to Upgrade DevOps Code to Python 3

Python 2 is going away! It’s time to upgrade.

You shouldn’t run anything in prod that’s not actively supported. If there are security flaws you won’t have a sure path to remediation. Start the upgrade now so you have time to finish before support ends.

In DevOps you’re not usually writing much raw Python. A helper lambda function. A little boto3 script. If you’re writing lots of code, you’re probably making a mistake and you should be looking for an existing tool that already implements whatever you’re doing (terraform, troposphere, Ansible, Salt, paramiko, whatever).

Because of that, migrating DevOps code to Python 3 is usually easy. There are guides and a conversion tool. I usually just switch my interpreter to 3 and fix errors until there aren’t any more. A few old features have been replaced with new ones that are worth adopting. Here are highlights from the easy migrations I’ve done (keep reading for the one that wasn’t easy):

  • Virtual environments are now in core as venv. You don’t need to install virtualenv anymore.
  • basestring was replaced with str.
  • Use the print() function instead of the print statement. Printing output isn’t usually ideal, and this may be a good opportunity to upgrade to logging.
  • ConfigParser was renamed to configparser (to match the Python convention).
  • mock is now in core as unittest.mock.
  • The new f-strings are awesome. format() and the other string formatters still work, so f-strings aren’t a migration requirement, but they make cleaner code and I recommend switching to them (there’s a quick example just after this list).
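
For example, here’s the same (made-up) greeting in the old style, str.format(), and an f-string:

name = 'Python 3'
print('Hello, %s!' % name)           # old %-formatting
print('Hello, {}!'.format(name))     # str.format(), still works
print(f'Hello, {name}!')             # f-string, my preference in 3.6+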

Like always, lint your code before you run it!

One migration I got into wasn’t simple: I’d bodged together a script from snippets of a library that used Python 2 sockets to implement ping so I could watch the gears turn inside the black boxes of AWS Security Groups. I got into the weeds of unicode and not-unicode strings and then decided to just live with Python 2.

If that story reminded you of any of your own code, I recommend you don’t try to migrate that code. Look for a tool that already implements whatever you’re trying to do, find a way not to need to do whatever you were doing, something. In my case, that script wasn’t part of delivering product. I was just hacking around. I finished my experiments and deleted the script.

Happy upgrading!

Adam

Need more than just this article? We’re available to consult.

You might also want to check out these related articles:

Python DevOps Code Error Checking: Lint with Pyflakes

Hello!

For those unfamiliar with linting (static analysis), read Dan Bader’s introduction.

There are several linters for Python, but when I’m doing DevOps I use Pyflakes. I love the opening sentence of its design principles:

Pyflakes makes a simple promise: it will never complain about style, and it will try very, very hard to never emit false positives.

I’m not generally rigid about style. And, when I enforce it, I use the code review process and not a static analysis tool. The Python interpreter doesn’t care about style. Style is for humans; humans are the best tools to analyze it. Linters turn what should be a human process into something robotic.

Style is especially hard to enforce in DevOps, where you’re often working with a mix of languages and frameworks and tools that all have different style conventions. For example, lots of folks use Chef in AWS. Chef is a Ruby framework. They also need lambda helper functions, but lambda doesn’t support Ruby so they write those functions in Python and now half their code is Ruby and half is Python. And that’s if you ignore all the HCL in their terraform modules… You can go insane trying to configure your linters to keep up with the variation.

More than that, in DevOps you’re not usually writing much code. A helper lambda function. A little boto3 script. If you’re writing lots of code, you’re probably making a mistake and you should be looking for an existing tool that already implements whatever you’re doing (terraform, troposphere, Ansible, Salt, paramiko, whatever).

Pyflakes is great because it catches syntax errors before execution time but won’t suck you into The Bog of Style Sorrow. It’ll quickly tell you if you misplaced a quote mark, and then it exits. So if you do this:

bad_variable = 'Oops I forgot to close the string.

You get an error:

pyflakes test.py
test.py:1:51: EOL while scanning string literal
bad_variable = 'Oops I forgot to close the string.
                                                  ^

You also get some handy stuff like checking for unused imports. So if you do this:

import logging
good_variable = 'Huzzah! I remembered to close the string.'

You also get an error:

pyflakes test.py
test.py:1: 'logging' imported but unused

Crucially, Pyflakes will pass if you do this:

list_style_one = ['a', 'b']
list_style_two = [ 'a', 'b' ]

It’s a little funky to do both those patterns right next to each other, and if I were writing that code myself I’d fix it, but I don’t want my linter to error. The code works fine and I can read it easily. I prefer consistency, but not to the point that I want a robot to generate build failures.

I recommend running Pyflakes on all your Python DevOps code because it’s a quick win. Pretty much anything it errors on you should fix before you try to use the code, and it’s usually faster to run Pyflakes than to deploy a new version of the code and see if it works. I like things that are fast. 😁

Happy automating!

Adam

Need more than just this article? We’re available to consult.

You might also want to check out these related articles:

CodePipeline: Python AWS Lambda Functions Without Timeouts

Hello!

If you’re new to CodePipeline lambda actions check out this complete example first.

There’s a gotcha when writing CodePipeline lambda functions that’s easy to miss and if you miss it your pipeline can get stuck in timeout loops that you can’t cancel. Here’s how to avoid that.

This article assumes you’re familiar with CodePipeline and lambda and that you’ve granted the right IAM permissions to both. You may also want to check out lambda function logging.

This is Python 3. Python 2 is out of support.

CodePipeline uses a callback pattern for running lambda functions: it invokes the function and then waits for that function to call back with either put_job_success_result or put_job_failure_result.

Here’s an empty lambda action:

import json
import logging
import boto3

def lambda_handler(event, context):
    logger = logging.getLogger()
    logger.setLevel(logging.INFO)
    logger.debug(json.dumps(event))

    codepipeline = boto3.client('codepipeline')
    job_id = event['CodePipeline.job']['id']

    logger.info('Doing cool stuff!')
    response = codepipeline.put_job_success_result(jobId=job_id)
    logger.debug(response)

It’s a successful no-op:

[Screenshot: the pipeline action succeeds]

Now let’s add an exception:

import json
import logging
import boto3

def lambda_handler(event, context):
    logger = logging.getLogger()
    logger.setLevel(logging.INFO)
    logger.debug(json.dumps(event))

    codepipeline = boto3.client('codepipeline')
    job_id = event['CodePipeline.job']['id']

    logger.info('Doing cool stuff!')
    raise ValueError('Fake error for testing!')
    response = codepipeline.put_job_success_result(jobId=job_id)
    logger.debug(response)

The log shows the exception, like we’d expect:

[Screenshot: the exception in the function’s log]

But, the pipeline action takes 20 minutes to time out. The CodePipeline limits doc says lambda functions time out after 1 hour, and that used to match what I saw for functions that didn’t send results (I tested it, but sadly didn’t think to keep screenshots back then). In my latest tests the timeout was consistently 20 minutes:

[Screenshot: the action consistently timing out after 20 minutes]

It doesn’t matter what the lambda function’s timeout is. Mine was set to 3 seconds. We’re hitting a timeout that’s internal to CodePipeline.

At least the action’s details link gives an error saying specifically that it didn’t receive a result:

[Screenshot: the action details showing no result was received]

There’s a workaround. You should usually only catch specific errors that you know how to handle. It’s an anti-pattern to use except Exception. But, in this case we need to guarantee that the callback always happens. In this one situation (not in general) we need to catch all exceptions:

import json
import logging
import boto3

def lambda_handler(event, context):
    logger = logging.getLogger()
    logger.setLevel(logging.INFO)
    logger.debug(json.dumps(event))

    codepipeline = boto3.client('codepipeline')
    job_id = event['CodePipeline.job']['id']

    try:
        raise ValueError('This message will appear in the CodePipeline UI.')
        logger.info('Doing cool stuff!')
        response = codepipeline.put_job_success_result(jobId=job_id)
        logger.debug(response)
    except Exception as error:
        logger.exception(error)
        response = codepipeline.put_job_failure_result(
            jobId=job_id,
            failureDetails={
              'type': 'JobFailed',
              'message': f'{error.__class__.__name__}: {str(error)}'
            }
        )
        logger.debug(response)

(logger.exception(error) logs the exception and its stack trace. Even though we’re catching all errors, we shouldn’t let them pass silently.)

Now the failure will be visible to CodePipeline and the action won’t get stuck waiting.

The failureDetails message will appear in the CodePipeline UI. We send the exception message so it’s visible to operators:

[Screenshot: the failure message shown in the CodePipeline UI]

Of course, you’ll want to remove that ValueError. It’s just to demonstrate the handling.

You should use this pattern in every lambda action: catch all exceptions and return a JobFailed result to the pipeline. You can still catch more specific exceptions inside the catchall try/except, ones specific to the feature you’re implementing, but you need that catchall to ensure the result returns when the unexpected happens.
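
For example, here’s a minimal sketch of that nesting (the do_cool_stuff() helper and the ClientError retry are hypothetical, not part of the original example):

import logging

import boto3
import botocore.exceptions

def do_cool_stuff():
    pass  # Hypothetical stand-in for your action's real work.

def lambda_handler(event, context):
    logger = logging.getLogger()
    logger.setLevel(logging.INFO)

    codepipeline = boto3.client('codepipeline')
    job_id = event['CodePipeline.job']['id']

    try:
        try:
            do_cool_stuff()
        except botocore.exceptions.ClientError as error:
            # A specific error we know how to handle: log it and retry once.
            logger.warning('AWS API error, retrying once: %s', error)
            do_cool_stuff()
        codepipeline.put_job_success_result(jobId=job_id)
    except Exception as error:
        # The catchall guarantees CodePipeline always receives a result.
        logger.exception(error)
        codepipeline.put_job_failure_result(
            jobId=job_id,
            failureDetails={
                'type': 'JobFailed',
                'message': f'{error.__class__.__name__}: {str(error)}'
            }
        )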

Happy automating!

Adam

Need more than just this article? We’re available to consult.

You might also want to check out these related articles:

 

How to Paginate in boto3: Use Collections Instead

Hello!

When working with boto3, you’ll often find yourself looping. Like if you wanted to get the names of all the objects in an S3 bucket, you might do this:

import boto3

s3 = boto3.client('s3')

response = s3.list_objects_v2(Bucket='my-bucket')
for object in response['Contents']:
    print(object['Key'])

But, methods like list_objects_v2 have limits on how many objects they’ll return in one call (up to 1000 in this case). If you reach that limit, or if you know you eventually will, the solution used to be pagination. Like this:

import boto3

s3 = boto3.client('s3')

paginator = s3.get_paginator('list_objects_v2')
pages = paginator.paginate(Bucket='my-bucket')

for page in pages:
    for object in page['Contents']:
        print(object['Key'])

I always forget how to do this. I also feel like it clutters up my code with API implementation details that don’t have anything to do with the objects I’m trying to list.

There’s a better way! Boto3 has semi-new things called collections, and they are awesome:

import boto3

s3 = boto3.resource('s3')
bucket = s3.Bucket('my-bucket')
objects = bucket.objects.all()

for object in objects:
    print(object.key)

If they look familiar, it’s probably because they’re modeled after the QuerySets in Django’s ORM. They work like an object-oriented interface to a database. It’s convenient to think about AWS like that when you’re writing code: it’s a database of cloud resources. You query the resources you want to interact with and read their properties (e.g. object.key like we did above) or call their methods.
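
For instance, here’s a quick sketch in the same spirit (mine, not from the original article) that lists EC2 instances and their states:

import boto3

ec2 = boto3.resource('ec2')

# Iterate every instance in the region; the collection handles paging for us.
for instance in ec2.instances.all():
    print(instance.id, instance.state['Name'])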

You can do more than list, too. For example, in S3 you can empty a bucket in one line (this works even if there are pages and pages of objects in the bucket):

import boto3

s3 = boto3.resource('s3')
bucket = s3.Bucket('my-bucket')
bucket.objects.all().delete()

Boom 💥. One line, no loop. Use wisely.

I recommend collections whenever you need to iterate. I’ve found the code is easier to read and their usage is easier to remember than paginators. Some notes:

  • This is just an introduction; collections can do a lot more. Check out filtering (there’s a quick taste just after this list). It’s excellent.
  • Collections aren’t available for every resource (yet). Sometimes you have to fall back to a paginator.
  • There are cases where using a collection can result in more API calls than you expect. Most of the time this isn’t a problem, but if you’re seeing performance problems you might want to dig into the nuances in the doc.
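
Here’s that quick taste of filtering (the bucket name and prefix are made up for the example):

import boto3

s3 = boto3.resource('s3')
bucket = s3.Bucket('my-bucket')

# Only objects under the given prefix; still no manual pagination.
for obj in bucket.objects.filter(Prefix='logs/2019/'):
    print(obj.key)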

Hopefully, this helps simplify your life in the AWS API.

Happy automating!

Adam

Need more than just this article? We’re available to consult.

You might also want to check out these related articles:

Boto3 Best Practices: Assert to Stop Silent Failures

Good morning!

A variation of this article was given as a lightning talk at the San Diego Python Meetup.

This article covers a pattern I use to increase my confidence that my infrastructure code is working. It turns silent errors into loud ones. I’ve handled plenty of code that runs without errors but still ends up doing the wrong thing, so I’m never really sure if it’s safe to go to sleep at night. I don’t like that. I want silence to be a real indicator that everything is fine. Like The Zen of Python says:

Errors should never pass silently.

It’s easy to bake assumptions into boto code that create silent errors. Imagine you have an EBS volume called awesome-stuff and you need to snapshot it for backups. You might write something like this:

import datetime
 
import boto3
 
ec2 = boto3.resource('ec2')
volume_filters = [{'Name': 'tag:Name', 'Values': ['awesome-stuff']}]
volumes = list(ec2.volumes.filter(Filters=volume_filters))
volume = volumes[0]
now = datetime.datetime.now(datetime.timezone.utc).strftime("%Y-%m-%dT%H-%M-%S%Z")
volume.create_snapshot(Description=f'awesome-stuff-backup-{now}')

Simple enough. We know our volume is named awesome-stuff, so we look up volumes with that name. There should only be one, so we snapshot the first item in that list. I’ve seen this pattern all over the boto code I’ve read.

What if there are two volumes called “awesome-stuff”? That could easily happen. Another admin makes a copy and tags it the same way. An unrelated project in the same account creates a volume with the same name because awesome-stuff isn’t super unique. It’s very possible to have two volumes with the same name, and you should assume it’ll happen. When it does, this script will run without errors. It will create a snapshot, too, but only of one volume. There is no luck in operations, so you can be 100% certain it will snapshot the wrong one. You will have zero backups but you won’t know it.

There’s an easy pattern to avoid this. First, let me show you Python’s assert statement:

awesome_list = ['a', 'b']
assert len(awesome_list) == 1

We’re telling Python we expect awesome_list to contain one item. If we run this, it errors:

Traceback (most recent call last):
  File "error.py", line 2, in <module>
    assert len(awesome_list) == 1
AssertionError

This is a sane message. Anyone reading it can see we expected there to be exactly one object in awesome_list but there wasn’t.

Back to boto. Let’s add an assert to our backup script:

import datetime

import boto3

ec2 = boto3.resource('ec2')
volume_filters = [{'Name': 'tag:Name', 'Values': ['awesome-stuff']}]
volumes = list(ec2.volumes.filter(Filters=volume_filters))
assert len(volumes) == 1
volume = volumes[0]
now = datetime.datetime.now(datetime.timezone.utc).strftime("%Y-%m-%dT%H-%M-%S%Z")
volume.create_snapshot(Description=f'awesome-stuff-backup-{now}')

Now, if there are two awesome-stuff volumes, our script will error:

Traceback (most recent call last):
  File "test.py", line 8, in <module>
    assert len(volumes) == 1
AssertionError

Boom. That’s all you have to do. Now the script either does what we expect (backs up our awesome stuff) or it fails with a clear message. We know we don’t have any backups yet and we need to take action. Because we assert that there should be exactly one volume, this even covers us for the cases where that volume has been renamed or there’s a typo in our filters.

Here’s a good practice to follow in all of your code:

If your code assumes something, assert that the assumption is true so you’ll get a clear, early failure message if it isn’t.
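
One small addition I like that isn’t in the script above: assert takes an optional message, so the failure explains itself without a trip back to the source:

volumes = ['vol-awesome', 'vol-imposter']  # pretend the filter matched two volumes
assert len(volumes) == 1, f'Expected exactly 1 awesome-stuff volume, found {len(volumes)}'
# AssertionError: Expected exactly 1 awesome-stuff volume, found 2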

If you’re interested in further reading or more sources for this practice, check out Jim Shore’s Fail Fast article.

In general, these are called logic errors: problems with the way the code thinks (its “logic”). Often they won’t cause errors at all; they’ll just create behavior you didn’t expect, which might be harmful. Writing code that’s resilient to these kinds of flaws will take your infrastructure to the next level. It won’t just seem like it’s working; you’ll have confidence that it’s working.

Happy automating!

Adam

Need more than just this article? We’re available to consult.

You might also want to check out these related articles:

Python boto3 Logging

Hello!

If you’re writing a lambda function, check out this article instead.

The best way to log output from boto3 is with Python’s logging library. The core docs have a nice tutorial.

If you use print() statements for output, all you’ll get from boto is what you capture and print yourself. But, boto does a lot of internal logging that we can capture for free.

Good libraries, like boto, use Python’s logging library internally. If you set up a logger using the same library, it will automatically capture boto’s logs along with your own.

Here’s how I set up logging. This is a demo script, in the real world you’d parameterize the inputs, etc.

import logging
import boto3

if __name__ == '__main__':
    logging.basicConfig(
        level=logging.INFO,
        format='%(asctime)s %(levelname)s %(message)s'
    )
    logger = logging.getLogger()
    logger.debug('The script is starting.')
    logger.info('Connecting to EC2...')
    ec2 = boto3.client('ec2')

That’s it! The basicConfig() function sets up the root logger for you. We’ve told it what amount of output to show (the level) and to show the event time and level on each output line. The logging library docs have more info on what levels and formatting are available.

If you set the level to INFO, it’ll output anything logged with .info() (or higher) by your code and boto’s internal code. You won’t see our 'The script is starting.' line because anything logged at the DEBUG level will be excluded.

2019-08-18 07:59:20,123 INFO Connecting to EC2...
Traceback (most recent call last):
  File "demo.py", line 11, in <module>
    ec2 = boto3.client('ec2')
  File "/Users/adam/opt/env3/lib/python3.7/site-packages/boto3/__init__.py", line 91, in client
    return _get_default_session().client(*args, **kwargs)
  File "/Users/adam/opt/env3/lib/python3.7/site-packages/boto3/session.py", line 263, in client
    aws_session_token=aws_session_token, config=config)
  File "/Users/adam/opt/env3/lib/python3.7/site-packages/botocore/session.py", line 838, in create_client
    client_config=config, api_version=api_version)
  File "/Users/adam/opt/env3/lib/python3.7/site-packages/botocore/client.py", line 86, in create_client
    verify, credentials, scoped_config, client_config, endpoint_bridge)
  File "/Users/adam/opt/env3/lib/python3.7/site-packages/botocore/client.py", line 328, in _get_client_args
    verify, credentials, scoped_config, client_config, endpoint_bridge)
  File "/Users/adam/opt/env3/lib/python3.7/site-packages/botocore/args.py", line 47, in get_client_args
    endpoint_url, is_secure, scoped_config)
  File "/Users/adam/opt/env3/lib/python3.7/site-packages/botocore/args.py", line 117, in compute_client_args
    service_name, region_name, endpoint_url, is_secure)
  File "/Users/adam/opt/env3/lib/python3.7/site-packages/botocore/client.py", line 402, in resolve
    service_name, region_name)
  File "/Users/adam/opt/env3/lib/python3.7/site-packages/botocore/regions.py", line 122, in construct_endpoint
    partition, service_name, region_name)
  File "/Users/adam/opt/env3/lib/python3.7/site-packages/botocore/regions.py", line 135, in _endpoint_for_partition
    raise NoRegionError()
botocore.exceptions.NoRegionError: You must specify a region.

If you change the level to DEBUG, you’ll get everything:

2019-08-18 08:28:06,189 DEBUG The script is starting.
2019-08-18 08:28:06,190 INFO Connecting to EC2...
2019-08-18 08:28:06,190 DEBUG Changing event name from creating-client-class.iot-data to creating-client-class.iot-data-plane
2019-08-18 08:28:06,193 DEBUG Changing event name from before-call.apigateway to before-call.api-gateway
2019-08-18 08:28:06,193 DEBUG Changing event name from request-created.machinelearning.Predict to request-created.machine-learning.Predict
2019-08-18 08:28:06,194 DEBUG Changing event name from before-parameter-build.autoscaling.CreateLaunchConfiguration to before-parameter-build.auto-scaling.CreateLaunchConfiguration
2019-08-18 08:28:06,195 DEBUG Changing event name from before-parameter-build.route53 to before-parameter-build.route-53
2019-08-18 08:28:06,195 DEBUG Changing event name from request-created.cloudsearchdomain.Search to request-created.cloudsearch-domain.Search
2019-08-18 08:28:06,195 DEBUG Changing event name from docs.*.autoscaling.CreateLaunchConfiguration.complete-section to docs.*.auto-scaling.CreateLaunchConfiguration.complete-section
2019-08-18 08:28:06,197 DEBUG Changing event name from before-parameter-build.logs.CreateExportTask to before-parameter-build.cloudwatch-logs.CreateExportTask
2019-08-18 08:28:06,197 DEBUG Changing event name from docs.*.logs.CreateExportTask.complete-section to docs.*.cloudwatch-logs.CreateExportTask.complete-section
2019-08-18 08:28:06,197 DEBUG Changing event name from before-parameter-build.cloudsearchdomain.Search to before-parameter-build.cloudsearch-domain.Search
2019-08-18 08:28:06,197 DEBUG Changing event name from docs.*.cloudsearchdomain.Search.complete-section to docs.*.cloudsearch-domain.Search.complete-section
2019-08-18 08:28:06,211 DEBUG Looking for credentials via: env
2019-08-18 08:28:06,211 DEBUG Looking for credentials via: assume-role
2019-08-18 08:28:06,211 DEBUG Looking for credentials via: shared-credentials-file
2019-08-18 08:28:06,212 DEBUG Looking for credentials via: custom-process
2019-08-18 08:28:06,212 DEBUG Looking for credentials via: config-file
2019-08-18 08:28:06,212 DEBUG Looking for credentials via: ec2-credentials-file
2019-08-18 08:28:06,212 DEBUG Looking for credentials via: boto-config
2019-08-18 08:28:06,212 DEBUG Looking for credentials via: container-role
2019-08-18 08:28:06,212 DEBUG Looking for credentials via: iam-role
2019-08-18 08:28:06,213 DEBUG Converted retries value: False -> Retry(total=False, connect=None, read=None, redirect=0, status=None)
2019-08-18 08:28:06,213 DEBUG Starting new HTTP connection (1): 169.254.169.254:80
2019-08-18 08:28:07,215 DEBUG Caught retryable HTTP exception while making metadata service request to http://169.254.169.254/latest/meta-data/iam/security-credentials/: Connect timeout on endpoint URL: "http://169.254.169.254/latest/meta-data/iam/security-credentials/"
Traceback (most recent call last):
  File "/Users/adam/opt/env3/lib/python3.7/site-packages/urllib3/connection.py", line 159, in _new_conn
    (self._dns_host, self.port), self.timeout, **extra_kw)
  File "/Users/adam/opt/env3/lib/python3.7/site-packages/urllib3/util/connection.py", line 80, in create_connection
    raise err
  File "/Users/adam/opt/env3/lib/python3.7/site-packages/urllib3/util/connection.py", line 70, in create_connection
    sock.connect(sa)
socket.timeout: timed out

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/Users/adam/opt/env3/lib/python3.7/site-packages/botocore/httpsession.py", line 258, in send
    decode_content=False,
  File "/Users/adam/opt/env3/lib/python3.7/site-packages/urllib3/connectionpool.py", line 638, in urlopen
    _stacktrace=sys.exc_info()[2])
  File "/Users/adam/opt/env3/lib/python3.7/site-packages/urllib3/util/retry.py", line 343, in increment
    raise six.reraise(type(error), error, _stacktrace)
  File "/Users/adam/opt/env3/lib/python3.7/site-packages/urllib3/packages/six.py", line 686, in reraise
    raise value
  File "/Users/adam/opt/env3/lib/python3.7/site-packages/urllib3/connectionpool.py", line 600, in urlopen
    chunked=chunked)
  File "/Users/adam/opt/env3/lib/python3.7/site-packages/urllib3/connectionpool.py", line 354, in _make_request
    conn.request(method, url, **httplib_request_kw)
  File "/Users/adam/.pyenv/versions/3.7.2/lib/python3.7/http/client.py", line 1229, in request
    self._send_request(method, url, body, headers, encode_chunked)
  File "/Users/adam/opt/env3/lib/python3.7/site-packages/botocore/awsrequest.py", line 125, in _send_request
    method, url, body, headers, *args, **kwargs)
  File "/Users/adam/.pyenv/versions/3.7.2/lib/python3.7/http/client.py", line 1275, in _send_request
    self.endheaders(body, encode_chunked=encode_chunked)
  File "/Users/adam/.pyenv/versions/3.7.2/lib/python3.7/http/client.py", line 1224, in endheaders
    self._send_output(message_body, encode_chunked=encode_chunked)
  File "/Users/adam/opt/env3/lib/python3.7/site-packages/botocore/awsrequest.py", line 152, in _send_output
    self.send(msg)
  File "/Users/adam/opt/env3/lib/python3.7/site-packages/botocore/awsrequest.py", line 236, in send
    return super(AWSConnection, self).send(str)
  File "/Users/adam/.pyenv/versions/3.7.2/lib/python3.7/http/client.py", line 956, in send
    self.connect()
  File "/Users/adam/opt/env3/lib/python3.7/site-packages/urllib3/connection.py", line 181, in connect
    conn = self._new_conn()
  File "/Users/adam/opt/env3/lib/python3.7/site-packages/urllib3/connection.py", line 164, in _new_conn
    (self.host, self.timeout))
urllib3.exceptions.ConnectTimeoutError: (<botocore.awsrequest.AWSHTTPConnection object at 0x1045a1f98>, 'Connection to 169.254.169.254 timed out. (connect timeout=1)')

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/Users/adam/opt/env3/lib/python3.7/site-packages/botocore/utils.py", line 303, in _get_request
    response = self._session.send(request.prepare())
  File "/Users/adam/opt/env3/lib/python3.7/site-packages/botocore/httpsession.py", line 282, in send
    raise ConnectTimeoutError(endpoint_url=request.url, error=e)
botocore.exceptions.ConnectTimeoutError: Connect timeout on endpoint URL: "http://169.254.169.254/latest/meta-data/iam/security-credentials/"
2019-08-18 08:28:07,219 DEBUG Max number of attempts exceeded (1) when attempting to retrieve data from metadata service.
2019-08-18 08:28:07,219 DEBUG Loading JSON file: /Users/adam/opt/env3/lib/python3.7/site-packages/botocore/data/endpoints.json
2019-08-18 08:28:07,224 DEBUG Event choose-service-name: calling handler <function handle_service_name_alias at 0x1044b29d8>
2019-08-18 08:28:07,235 DEBUG Loading JSON file: /Users/adam/opt/env3/lib/python3.7/site-packages/botocore/data/ec2/2016-11-15/service-2.json
2019-08-18 08:28:07,258 DEBUG Event creating-client-class.ec2: calling handler <function add_generate_presigned_url at 0x104474510>
Traceback (most recent call last):
  File "demo.py", line 12, in <module>
    ec2 = boto3.client('ec2')
  File "/Users/adam/opt/env3/lib/python3.7/site-packages/boto3/__init__.py", line 91, in client
    return _get_default_session().client(*args, **kwargs)
  File "/Users/adam/opt/env3/lib/python3.7/site-packages/boto3/session.py", line 263, in client
    aws_session_token=aws_session_token, config=config)
  File "/Users/adam/opt/env3/lib/python3.7/site-packages/botocore/session.py", line 838, in create_client
    client_config=config, api_version=api_version)
  File "/Users/adam/opt/env3/lib/python3.7/site-packages/botocore/client.py", line 86, in create_client
    verify, credentials, scoped_config, client_config, endpoint_bridge)
  File "/Users/adam/opt/env3/lib/python3.7/site-packages/botocore/client.py", line 328, in _get_client_args
    verify, credentials, scoped_config, client_config, endpoint_bridge)
  File "/Users/adam/opt/env3/lib/python3.7/site-packages/botocore/args.py", line 47, in get_client_args
    endpoint_url, is_secure, scoped_config)
  File "/Users/adam/opt/env3/lib/python3.7/site-packages/botocore/args.py", line 117, in compute_client_args
    service_name, region_name, endpoint_url, is_secure)
  File "/Users/adam/opt/env3/lib/python3.7/site-packages/botocore/client.py", line 402, in resolve
    service_name, region_name)
  File "/Users/adam/opt/env3/lib/python3.7/site-packages/botocore/regions.py", line 122, in construct_endpoint
    partition, service_name, region_name)
  File "/Users/adam/opt/env3/lib/python3.7/site-packages/botocore/regions.py", line 135, in _endpoint_for_partition
    raise NoRegionError()
botocore.exceptions.NoRegionError: You must specify a region.

See how it started saying where it found AWS credentials? Imagine you’re trying to figure out why your script worked locally but didn’t work on an EC2 instance; knowing where it found keys is huge. Maybe there are some hardcoded ones you didn’t know about that it’s picking up instead of the IAM role you attached to the instance. In DEBUG mode that’s easy to figure out. With print you’d have to hack out these details yourself.

This is great for simple scripts, but for something you’re going to run in production I recommend this pattern.

Happy automating!

Adam

Need more than just this article? We’re available to consult.

You might also want to check out these related articles:

CloudFormation Custom Resources: Avoiding the Two Hour Exception Timeout

If you’re new to custom resources check out this complete example first.

There’s a gotcha when writing CloudFormation Custom Resources that’s easy to miss and if you miss it your stack can get stuck, ignoring its timeout setting. It’ll fail on its own after an hour, but if it tries to roll back you have to wait a second hour. If the resource is defined in a nested stack, it’ll retry the rollback three times, adding even more hours to the delay. Here’s how to avoid this.

This post assumes you’re already working with Custom Resources and that yours are backed by lambda.

Here’s an empty custom resource:

import logging
import cfnresponse

def handler(event, context):
    logger = logging.getLogger()
    logger.setLevel(logging.INFO)

    if event['RequestType'] == 'Delete':
        logger.info('Deleted!')
        cfnresponse.send(event, context, cfnresponse.SUCCESS, {})
        return

    logger.info('It worked!')
    cfnresponse.send(event, context, cfnresponse.SUCCESS, {})

It’s a successful no-op:

[Screenshot: the custom resource succeeds]

Now let’s add an exception:

import logging
import cfnresponse

def handler(event, context):
    logger = logging.getLogger()
    logger.setLevel(logging.INFO)

    if event['RequestType'] == 'Delete':
        logger.info('Deleted!')
        cfnresponse.send(event, context, cfnresponse.SUCCESS, {})
        return

    raise Exception
    logger.info('It worked!')
    cfnresponse.send(event, context, cfnresponse.SUCCESS, {})

We can see the exception in the logs:

[Screenshot: the exception and the three retries in the logs]

But, then the stack gets stuck because the cfnresponse callback never happened and CF doesn’t know there was a problem:

[Screenshot: the stack stuck in progress until it times out]

It took exactly an hour to fail, which suggests CF hit some internal, fallback timeout. My stack timeout was set to five minutes. We can see it retry the lambda function once a minute for three minutes, but then it never tries again in the remaining 57 minutes. I got the same delays in reverse when it tried to roll back (which is really just another update to the previous state). And, since the rollback failed, I had to manually edit the lambda function code and remove the exception to get it to finish rolling back.

Maybe this is a bug? Either way, there’s a workaround.

You should usually only catch specific errors that you know how to handle. It’s an anti-pattern to use except Exception. But, in this case we need to guarantee that the callback always happens. In this one situation (not in general) we need to catch all exceptions:

import logging
import cfnresponse

def handler(event, context):
    logger = logging.getLogger()
    logger.setLevel(logging.INFO)

    try:
        if event['RequestType'] == 'Delete':
            logger.info('Deleted!')
            cfnresponse.send(event, context, cfnresponse.SUCCESS, {})
            return

        raise Exception
        logger.info('It worked!')
        cfnresponse.send(event, context, cfnresponse.SUCCESS, {})
    except Exception:
        logger.exception('Signaling failure to CloudFormation.')
        cfnresponse.send(event, context, cfnresponse.FAILED, {})

(logger.exception() logs the message plus the exception and its stack trace. Even though we’re catching all errors, we shouldn’t let them pass silently.)

Now, the failure is visible to CF and it doesn’t wait:

[Screenshot: the stack fails immediately with the exception message]

You should use this pattern in every Custom Resource: catch all exceptions and return a FAILED result to CF. You can still catch more specific exceptions inside the catchall try/except, ones specific to the feature you’re implementing, but you need that catchall to ensure the result returns when the unexpected happens.

Happy automating!

Adam

Need more than just this article? We’re available to consult.

You might also want to check out these related articles:

 

Lambda: boto3 CloudWatch Logs

Good morning!

If you’re writing a regular script (i.e. not a lambda function), check out this article.

This pattern outputs traditional delimited strings. If you want to upgrade that into output structured as JSON objects, check out this article.

For those custom cases that don’t fit into Terraform or CloudFormation, a little bit of Python and some boto3 in a lambda function can save you. Lambda captures the output of both print() and logging.Logger calls into CloudWatch so it’s easy to log information about what your code is doing. When things go wrong, though, I often find that just the output I wrote doesn’t give me enough to diagnose the problem. In those cases, it’s helpful to see the log output both for your code and boto3. Here’s how you do that.

Use the logging library. It’s a Python core library that provides standard features like timestamped prefixes and support for levels (e.g. INFO or DEBUG). For simple deployment helpers this is usually all you need:

import logging

logger = logging.getLogger()
logger.setLevel(logging.INFO)
logger.info('Message at the INFO level.')
logger.debug('Message at the DEBUG level.')

This sets the root logger (which sees all log messages) to the INFO level. Normally you’d have to configure the root logger, but lambda does that automatically (which is actually annoying if you need to change your formatter, but that’s for another post). Now, logger.info() calls will show up in the logs and logger.debug() calls won’t. If you increase the level to DEBUG you’ll see both.
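
If you do need a different format in lambda, here’s a minimal sketch of one way to do it (mine, not from that future post): update the handler lambda already attached instead of calling basicConfig().

import logging

def lambda_handler(event, context):
    logger = logging.getLogger()
    logger.setLevel(logging.INFO)

    # Lambda pre-configures a handler on the root logger, so we change its
    # formatter rather than adding a new handler with basicConfig().
    for handler in logger.handlers:
        handler.setFormatter(logging.Formatter('[%(levelname)s] %(message)s'))

    logger.info('This message uses the new format.')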

Because logging is the standard Python way to handle log output, maintainers of libraries like boto3 use it throughout their code to show what the library is doing (and they’re usually smart about choosing what to log at each level). By setting a level on the root logger, you’re choosing which of your output to capture and which of boto3’s output to capture. Powerful when you’re diagnosing a failure.

Here’s a demo function to show how the output looks. You might notice that it puts the logger setup calls inside the handler even though the AWS docs tell you to put them under the import. Function calls made directly in a module (i.e. not inside functions declared within that module) are import side effects, and import side effects are an anti-pattern. I put the calls in the handler so they only run when the handler is called. This isn’t likely to matter much in a lambda function, but I like to stick to good patterns.

import logging

import boto3

def lambda_handler(event, context):
    logger = logging.getLogger()
    logger.setLevel(logging.INFO)

    client = boto3.client('sts')
    account_id = client.get_caller_identity()['Account']

    logger.info('Getting account ID...')
    logger.debug('Account ID: {}'.format(account_id))
    return account_id

This is the full output when run at the INFO level:

START RequestId: a61471fe-c3fd-11e8-9f43-bdb22e22a203 Version: $LATEST
[INFO]	2018-09-29T15:38:01.882Z	a61471fe-c3fd-11e8-9f43-bdb22e22a203	Found credentials in environment variables.
[INFO]	2018-09-29T15:38:02.83Z	a61471fe-c3fd-11e8-9f43-bdb22e22a203	Starting new HTTPS connection (1): sts.amazonaws.com
[INFO]	2018-09-29T15:38:02.531Z	a61471fe-c3fd-11e8-9f43-bdb22e22a203	Getting account ID...
END RequestId: a61471fe-c3fd-11e8-9f43-bdb22e22a203
REPORT RequestId: a61471fe-c3fd-11e8-9f43-bdb22e22a203	Duration: 734.96 ms	Billed Duration: 800 ms Memory Size: 128 MB	Max Memory Used: 29 MB

When run at the DEBUG level it produces a ton of lines:

START RequestId: 9ea3bbef-c3fe-11e8-8eb1-730a799b5405 Version: $LATEST
[DEBUG]	2018-09-29T15:44:58.850Z	9ea3bbef-c3fe-11e8-8eb1-730a799b5405	Loading variable profile from defaults.
[DEBUG]	2018-09-29T15:44:58.880Z	9ea3bbef-c3fe-11e8-8eb1-730a799b5405	Loading variable config_file from defaults.
[DEBUG]	2018-09-29T15:44:58.881Z	9ea3bbef-c3fe-11e8-8eb1-730a799b5405	Loading variable credentials_file from defaults.
[DEBUG]	2018-09-29T15:44:58.881Z	9ea3bbef-c3fe-11e8-8eb1-730a799b5405	Loading variable data_path from defaults.
[DEBUG]	2018-09-29T15:44:58.881Z	9ea3bbef-c3fe-11e8-8eb1-730a799b5405	Loading variable region from environment with value 'us-west-2'.
[DEBUG]	2018-09-29T15:44:58.900Z	9ea3bbef-c3fe-11e8-8eb1-730a799b5405	Loading variable profile from defaults.
[DEBUG]	2018-09-29T15:44:58.900Z	9ea3bbef-c3fe-11e8-8eb1-730a799b5405	Loading variable ca_bundle from defaults.
[DEBUG]	2018-09-29T15:44:58.900Z	9ea3bbef-c3fe-11e8-8eb1-730a799b5405	Loading variable profile from defaults.
[DEBUG]	2018-09-29T15:44:58.900Z	9ea3bbef-c3fe-11e8-8eb1-730a799b5405	Loading variable api_versions from defaults.
[DEBUG]	2018-09-29T15:44:58.901Z	9ea3bbef-c3fe-11e8-8eb1-730a799b5405	Loading variable profile from defaults.
[DEBUG]	2018-09-29T15:44:58.901Z	9ea3bbef-c3fe-11e8-8eb1-730a799b5405	Loading variable credentials_file from defaults.
[DEBUG]	2018-09-29T15:44:58.901Z	9ea3bbef-c3fe-11e8-8eb1-730a799b5405	Loading variable config_file from defaults.
[DEBUG]	2018-09-29T15:44:58.901Z	9ea3bbef-c3fe-11e8-8eb1-730a799b5405	Loading variable profile from defaults.
[DEBUG]	2018-09-29T15:44:58.901Z	9ea3bbef-c3fe-11e8-8eb1-730a799b5405	Loading variable metadata_service_timeout from defaults.
[DEBUG]	2018-09-29T15:44:58.901Z	9ea3bbef-c3fe-11e8-8eb1-730a799b5405	Loading variable profile from defaults.
[DEBUG]	2018-09-29T15:44:58.901Z	9ea3bbef-c3fe-11e8-8eb1-730a799b5405	Loading variable metadata_service_num_attempts from defaults.
[DEBUG]	2018-09-29T15:44:58.942Z	9ea3bbef-c3fe-11e8-8eb1-730a799b5405	Loading variable profile from defaults.
[DEBUG]	2018-09-29T15:44:58.960Z	9ea3bbef-c3fe-11e8-8eb1-730a799b5405	Looking for credentials via: env
[INFO]	2018-09-29T15:44:58.960Z	9ea3bbef-c3fe-11e8-8eb1-730a799b5405	Found credentials in environment variables.
[DEBUG]	2018-09-29T15:44:58.961Z	9ea3bbef-c3fe-11e8-8eb1-730a799b5405	Loading JSON file: /var/runtime/botocore/data/endpoints.json
[DEBUG]	2018-09-29T15:44:59.1Z	9ea3bbef-c3fe-11e8-8eb1-730a799b5405	Loading variable profile from defaults.
[DEBUG]	2018-09-29T15:44:59.20Z	9ea3bbef-c3fe-11e8-8eb1-730a799b5405	Event choose-service-name: calling handler
[DEBUG]	2018-09-29T15:44:59.60Z	9ea3bbef-c3fe-11e8-8eb1-730a799b5405	Loading JSON file: /var/runtime/botocore/data/sts/2011-06-15/service-2.json
[DEBUG]	2018-09-29T15:44:59.82Z	9ea3bbef-c3fe-11e8-8eb1-730a799b5405	Event creating-client-class.sts: calling handler
[DEBUG]	2018-09-29T15:44:59.100Z	9ea3bbef-c3fe-11e8-8eb1-730a799b5405	The s3 config key is not a dictionary type, ignoring its value of: None
[DEBUG]	2018-09-29T15:44:59.103Z	9ea3bbef-c3fe-11e8-8eb1-730a799b5405	Setting sts timeout as (60, 60)
[DEBUG]	2018-09-29T15:44:59.141Z	9ea3bbef-c3fe-11e8-8eb1-730a799b5405	Loading JSON file: /var/runtime/botocore/data/_retry.json
[DEBUG]	2018-09-29T15:44:59.141Z	9ea3bbef-c3fe-11e8-8eb1-730a799b5405	Registering retry handlers for service: sts
[DEBUG]	2018-09-29T15:44:59.160Z	9ea3bbef-c3fe-11e8-8eb1-730a799b5405	Event before-parameter-build.sts.GetCallerIdentity: calling handler
[DEBUG]	2018-09-29T15:44:59.161Z	9ea3bbef-c3fe-11e8-8eb1-730a799b5405	Making request for OperationModel(name=GetCallerIdentity) (verify_ssl=True) with params: {'url_path': '/', 'query_string': '', 'method': 'POST', 'headers': {'Content-Type': 'application/x-www-form-urlencoded; charset=utf-8', 'User-Agent': 'Boto3/1.7.74 Python/3.6.1 Linux/4.14.62-65.117.amzn1.x86_64 exec-env/AWS_Lambda_python3.6 Botocore/1.10.74'}, 'body': {'Action': 'GetCallerIdentity', 'Version': '2011-06-15'}, 'url': 'https://sts.amazonaws.com/', 'context': {'client_region': 'us-west-2', 'client_config': , 'has_streaming_input': False, 'auth_type': None}}
[DEBUG]	2018-09-29T15:44:59.161Z	9ea3bbef-c3fe-11e8-8eb1-730a799b5405	Event request-created.sts.GetCallerIdentity: calling handler
[DEBUG]	2018-09-29T15:44:59.161Z	9ea3bbef-c3fe-11e8-8eb1-730a799b5405	Event choose-signer.sts.GetCallerIdentity: calling handler
[DEBUG]	2018-09-29T15:44:59.162Z	9ea3bbef-c3fe-11e8-8eb1-730a799b5405	Calculating signature using v4 auth.
[DEBUG]	2018-09-29T15:44:59.180Z	9ea3bbef-c3fe-11e8-8eb1-730a799b5405	CanonicalRequest:
POST
/

content-type:application/x-www-form-urlencoded; charset=utf-8
host:sts.amazonaws.com
x-amz-date:20180929T154459Z
x-amz-security-token:FQoGZXIvYXdzEKn//////////wEaDOOlIItIhtRakeAyfCLrAWPZXQJFkNrDZNa4Bny102eGKJ5KWD0F+ixFqZaW+A9mgadICpLRxBG4JGUzMtPTDeqxPoLT1qnS6bI/jVmXXUxjVPPMRiXdIlP+li0eFyB/xOK+PN/DOiByee0eu6bjQmkjoC3P5MREvxeanPY7hpgXNO52jSBPo8LMIdAcjCJxyRF7GHZjtZGAMARQWng6DJa9RAiIbxOmXpSbNGpABBVg/TUt8XMUT+p9Lm2Txi10P0ueu1n5rcuxJdBV8Jr/PUF3nZY+/k7MzOPCnzZNqVgpDAQbwby+AVIQcvVwaKsXePqubCqBTHxoh/Mo0ay+3QU=

content-type;host;x-amz-date;x-amz-security-token
ab821ae955788b0e33ebd34c208442ccfc2d406e2edc5e7a39bd6458fbb4f843
[DEBUG]	2018-09-29T15:44:59.181Z	9ea3bbef-c3fe-11e8-8eb1-730a799b5405	StringToSign:
AWS4-HMAC-SHA256
20180929T154459Z
20180929/us-east-1/sts/aws4_request
7cf0af0e8f55fb1b9c0009104aa8f141097f00fea428ddf1654321e7054a920d
[DEBUG]	2018-09-29T15:44:59.181Z	9ea3bbef-c3fe-11e8-8eb1-730a799b5405	Signature:
c00de0a12c9ee0fce348df452f2833749b854915db58f8d106e3166545a70c43
[DEBUG]	2018-09-29T15:44:59.183Z	9ea3bbef-c3fe-11e8-8eb1-730a799b5405	Sending http request:
[INFO]	2018-09-29T15:44:59.201Z	9ea3bbef-c3fe-11e8-8eb1-730a799b5405	Starting new HTTPS connection (1): sts.amazonaws.com
[DEBUG]	2018-09-29T15:44:59.628Z	9ea3bbef-c3fe-11e8-8eb1-730a799b5405	"POST / HTTP/1.1" 200 461
[DEBUG]	2018-09-29T15:44:59.628Z	9ea3bbef-c3fe-11e8-8eb1-730a799b5405	Response headers: {'x-amzn-requestid': '9f421e56-c3fe-11e8-b622-2d5da14a8dc9', 'content-type': 'text/xml', 'content-length': '461', 'date': 'Sat, 29 Sep 2018 15:44:58 GMT'}
[DEBUG]	2018-09-29T15:44:59.640Z	9ea3bbef-c3fe-11e8-8eb1-730a799b5405	Response body:
b'\n \n arn:aws:sts::268133297303:assumed-role/demo-boto3-logging/demo-boto3-logging\n AROAITTVSA67NGZPH2QZI:demo-boto3-logging\n 268133297303\n \n \n 9f421e56-c3fe-11e8-b622-2d5da14a8dc9\n \n\n'
[DEBUG]	2018-09-29T15:44:59.640Z	9ea3bbef-c3fe-11e8-8eb1-730a799b5405	Event needs-retry.sts.GetCallerIdentity: calling handler
[DEBUG]	2018-09-29T15:44:59.641Z	9ea3bbef-c3fe-11e8-8eb1-730a799b5405	No retry needed.
[INFO]	2018-09-29T15:44:59.641Z	9ea3bbef-c3fe-11e8-8eb1-730a799b5405	Getting account ID...
[DEBUG]	2018-09-29T15:44:59.641Z	9ea3bbef-c3fe-11e8-8eb1-730a799b5405	Account ID: 268133297303
END RequestId: 9ea3bbef-c3fe-11e8-8eb1-730a799b5405
REPORT RequestId: 9ea3bbef-c3fe-11e8-8eb1-730a799b5405	Duration: 813.73 ms	Billed Duration: 900 ms Memory Size: 128 MB	Max Memory Used: 29 MB

boto3 can be very verbose in DEBUG so I recommend staying at INFO unless you’re actively troubleshooting.
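
If you want DEBUG from your own code without that flood, one option (my sketch, not from this article) is to hold boto3’s named loggers at INFO while the root logger stays at DEBUG:

import logging

import boto3

def lambda_handler(event, context):
    logger = logging.getLogger()
    logger.setLevel(logging.DEBUG)

    # Our own logger.debug() calls still appear, but the AWS libraries
    # are capped at INFO so they don't flood CloudWatch.
    logging.getLogger('boto3').setLevel(logging.INFO)
    logging.getLogger('botocore').setLevel(logging.INFO)

    client = boto3.client('sts')
    account_id = client.get_caller_identity()['Account']
    logger.debug('Account ID: {}'.format(account_id))
    return account_id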

Happy debugging!

Adam

Need more than just this article? We’re available to consult.

You might also want to check out these related articles:

3 Tools to Validate CloudFormation

Hello!

This article is about functional testing in CloudFormation. If you’re looking for security testing, check out this article instead.

I run three tools before applying CF templates. Here they are!

#1 AWS CLI’s validator

This is the native tool. It’s ok. It’s really only a syntax checker; there are plenty of errors you won’t see until you apply a template to a stack. Still, it’s fast and catches some things.

aws cloudformation validate-template --template-body file://./my_template.yaml

Notes:

  • The CLI has to be configured with access keys or it won’t run the validator.
  • If the template is JSON, this will ignore some requirements (e.g. it’ll allow trailing commas). However, the CF service ignores the same things.

#2 cfn-lint

cfn-lint is, like you’d expect, a linter for CloudFormation. I only started using it recently, but so far it’s pretty helpful.

cfn-lint my_template.yaml

Notes:

  • Before cfn-lint came out, I was using cfn-nag. I switched for two reasons:
    • Cfn-nag is a security testing tool, not a validator in general. Check out my article on using it to help write limited-privilege IAM policies.
    • It was a Ruby gem, so you needed a whole extra dependency chain (and ideally a tool like RVM) to install it. cfn-lint is a Python app available on PyPI, like the AWS CLI and its validator. Less tooling to maintain.

#3 Python’s JSON library

In general you should only write CloudFormation templates in YAML, but sometimes I’m stuck with legacy JSON ones that need to be maintained.

Because the AWS CLI validator ignores some JSON requirements, I like to pass JSON templates through Python’s parser to make sure they’re valid. In the past, I’ve had to do things like load and search templates for unused parameters, etc. That’s not ideal but it’s happened a couple times while doing cleanup and refactoring of legacy code. It’s easier if the JSON is valid JSON.
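
As a rough sketch of that kind of search (mine, not the original cleanup code; it’s crude substring matching against a made-up file name):

import json

with open('my_template.json') as f:
    template = json.load(f)

# Serialize everything except the Parameters block, then check whether each
# parameter name appears anywhere else in the template. Crude, but enough
# to flag candidates for manual review during cleanup.
body = json.dumps({key: value for key, value in template.items() if key != 'Parameters'})
for name in template.get('Parameters', {}):
    if name not in body:
        print(f'Possibly unused parameter: {name}')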

It’s fiddly to run the JSON check in a shell script. I do it with a heredoc so I don’t have to write a separate script to the filesystem:

python - <<END
import json
with open('my_template.json') as f:
    json.load(f)
END

Notes:

  • I use Python for this because it’s a dependency of the AWS CLI so I know it’s already installed. You could use jq or another tool, though.
  • I don’t do the YAML equivalent of this because it errors on CF-specific syntax like !Ref.

Happy automating!

Adam

Need more than just this article? We’re available to consult.

You might also want to check out these related articles:

Python on Mac OS X: One of the Good Ways

Good morning!

When I start Python development on a new Apple, I immediately hit two problems:

  1. I need a version of Python that’s not installed.
  2. I need to install a bunch of packages from PyPI for ProjectA and a different bunch for ProjectB.

Virtualenv is not the answer! That’s the first tool you’ll hear about but it only partially solves one of these problems. You need more. There are a ton of tools and a ton of different ways to use them. Here’s how I do it on Apple’s Mac OS X.

If you’re asking questions like, “Why do you need multiple versions installed? Isn’t latest enough?” or “Why not just pip install all the packages for ProjectA and ProjectB?” then this article probably isn’t where you should start. Great answers to those questions have already been written. This is just a disambiguation page that shows you which tools to use for which problems and how to use them.

Installing Python Versions

I use pyenv, which is available in homebrew. It allows me to install arbitrary versions of Python and switch between them without replacing what’s included with the OS.

Note: You can use homebrew to install other versions of Python, but only a single version of Python 2 and a single version of Python 3 at a time. You can’t easily switch between two projects each frozen at 3.4 and 3.6 (for example). There’s also a limited list of versions available.

Install pyenv:

$ brew update
$ brew install pyenv

Ensure pyenv loads when you login by adding this to ~/.profile:

eval "$(pyenv init -)"

Activate pyenv now by either closing and re-opening Terminal or running:

$ source ~/.profile

List which versions are available and install one:

$ pyenv install --list
$ pyenv install 3.6.4

If the version you wanted was missing, update pyenv via homebrew:

$ brew update
$ brew upgrade pyenv

If you get weird errors about missing gcc or zlib, install the XCode Command Line Tools and try again:

$ xcode-select --install

I always set my global (aka default) version to the latest 3:

$ pyenv global 3.6.4

Update 2018-10-23: If I need several versions available, for example to run tests in tox:

$ pyenv global 3.6.4 3.7.0

Setting these makes versioned Python commands available:

$ python3.6 --version
$ python3.7 --version

Pyenv has lots of great features, like support for setting a different version whenever you’re in a specific directory. Check out its commands reference.
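
For example (the project directory is made up), a per-directory version looks like this:

$ cd ~/projects/project-a
$ pyenv local 3.6.4
$ python --version
Python 3.6.4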

Installing PyPI Packages

In the old days, virtualenv was always the right solution. Today, it depends on the version of Python you’re using.

Python <= 3.3 (Including 2)

This is legacy Python, when environment management wasn’t native. In these ancient times, you needed a third party tool called virtualenv.

$ pyenv global 2.7.14
$ pip install virtualenv
$ virtualenv ~/my_env
$ source ~/my_env/bin/activate
(my_env) $ pip install <the packages your project needs>

This installs the virtualenv Python package into the root environment for the legacy version of Python I need, then creates a virtual Python environment where I can install project-specific dependencies.

Python >= 3.3

In PEP 405 an environment manager called venv was added to core. It works pretty much like virtualenv.

Note: Virtualenv works with newer versions of Python, but it’s better to use a core library than to add a dependency. I only use the third party tool when I have to.

$ pyenv global 3.6.4
$ python -m venv ~/my_env
$ source ~/my_env/bin/activate
(my_env) $ pip install <the packages your project needs>

Happy programming!

Adam

Need more than just this article? We’re available to consult.

You might also want to check out these related articles: