Four Guidelines for Valuable Documentation

📃 We’ve written a lot of documentation for a lot of projects. We’ve also read a lot of documentation for a lot of projects and had mixed experiences with what they taught us. Across that work, we’ve found four guidelines that make documentation easy to write and valuable to readers. Hopefully they save you some time and some frustration!

All four come from one principle:

Documentation exists to help users with generic experience learn your specific system.

Generic experience is a prerequisite. Documentation isn’t a substitute for knowing the basics of the tooling your project uses; it’s a quick way for knowledgeable readers to learn the specific ways your project uses those tools.

Don’t Write Click-by-Click Instructions

❌ This is way too much detail:

  1. Go to https://console.aws.amazon.com/cloudwatch/home
  2. Click Log Groups on the left
  3. Type “widgets-dev-async-processor” in the search box
  4. Click the magnifying glass icon
  5. Find the “widgets-dev-async-processor” in the search results
  6. Click “widgets-dev-async-processor”
  7. Click the first stream in the list
  8. Read the log entries

It’s frustratingly tedious for experienced users. Users who are so new that they need this level of detail are unlikely to get much from the logs it helps them find.

This will also go out of date as soon as the CloudWatch UI changes. You won’t always notice when it changes, and even if you do it’s easy to forget to update your docs.

Use simple text directions instead:

Open the widgets-dev-async-processor Log Group in the AWS CloudWatch web console.

That’s easy to read, tells the reader what they need and where to find it, and won’t go out of date until you change how your logs are stored.

Limit Use of Screenshots

🔍 Searches can’t see into images, so anything captured in a screenshot won’t show up in search results. Similarly, readers can’t copy/paste from images.

Also, like click-by-click instructions, screenshots are tedious for experienced readers, they don’t help new users understand the system, and they’re impractical to keep up to date.

Most of the time, simple text directions like the ones given above are more usable.

Link Instead of Duplicating

Duplicated docs always diverge. Here’s a common example:

Infrastructure code and application code live in different repos. Engineers on both teams need to export AWS credentials into their environment variables: infra engineers need them to run terraform, and app engineers need them to query DynamoDB tables. Trying to make it easy for everybody to find what they need, someone documents the steps in each repo. Later, the way users get their credentials changes. The engineer making that change only works on terraform and rarely uses the app repo, so they forget to update its instructions. A new engineer joins the app team, follows those (outdated) instructions, and gets access errors. There’s churn while they diagnose.

It’s better to document the steps in one repo and link 🔗 to those steps from the other. Then, everyone is looking at the same document, not just the same steps. It’s easy to update all docs because there’s only one doc. Readers know they’re looking at the most current doc because there’s only one doc.

This is also true for upstream docs. For example, if it’s already covered in HashiCorp’s excellent terraform documentation, just link to it. A copy will go out of date. Always link to the specific sections of pages that cover the details your readers need. Don’t send readers to the top of a long page and force them to search.

Keep a Small Set of Accurate Documentation

If you write too many docs, they’ll eventually rot. You’ll forget to update some. You won’t have time to update others. Users will read those docs and do the wrong thing. Errors are inevitable. It’s better to have a small set of accurate docs than a large set of questionable ones. Only write as many docs as it’s practical to maintain.

Writing docs can be a lot of work. Sometimes they just cause more errors. Hopefully, these guidelines will make your docs easier to write and more valuable to your readers.

Happy documenting!

Operating Ops

Need more than just this article? We’re available to consult.

You might also want to check out these related articles:

PowerShell: The Programmer’s Shell

A couple years ago, I switched all my workstations to PowerShell. Folks often ask me why I did that. What made it worth the trouble of learning a new syntax? Here are the things about Posh that made me switch:

Mac, Linux, and Windows Support

I usually run PowerShell on Mac OS, but it also supports Linux and Windows. It gives me a standardized interface to all three operating systems.

Object Oriented Model

This is the big reason. It’s the thing that makes PowerShell a programming language like Python or Ruby instead of just a scripting language like Bash.

In Bash everything is a string. Say we’ve found something on the filesystem:

bash-3.2$ ls -l | grep tmp
drwxr-xr-x   4 adam  staff   128 Oct 15 18:10 tmp

If we need that Oct 15 date, we’d parse it out with something like awk:

bash-3.2$ ls -l | grep tmp | awk '{print $6, $7}'
Oct 15

That splits the line on whitespace and prints the 6th and 7th fields. If the whitespace in that output changes (like if you run this on someone else’s workstation and they’ve tweaked their terminal), this will silently break. It won’t error, it just won’t parse out the right data. You’ll get downstream failures in code that expected a date but got something different.
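The failure mode is easy to demonstrate. Here’s a sketch: the same awk parse against two ls-style lines whose column layout differs (the second fakes an extra column, like the security-context marker some ls builds print):

```shell
# The same awk parse against two slightly different ls-style lines.
# The second one yields the wrong fields with no error at all.
line1='drwxr-xr-x   4 adam  staff   128 Oct 15 18:10 tmp'
echo "$line1" | awk '{print $6, $7}'    # → Oct 15

# Same data, one extra column: the field numbers have shifted.
line2='drwxr-xr-x.  4 adam  staff  ctx  128 Oct 15 18:10 tmp'
echo "$line2" | awk '{print $6, $7}'    # → 128 Oct  (silently wrong)
```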

PowerShell is object oriented, so it doesn’t rely on parsing strings. If we find the same directory on the filesystem:

PS /Users/adam> Get-ChildItem | Where-Object Name -Match tmp

    Directory: /Users/adam

Mode                 LastWriteTime         Length Name
----                 -------------         ------ ----
d----          10/15/2020  6:10 PM                tmp

It displays similarly, but that’s just formatting goodness. Underneath, it found an object that represents the directory. That object has properties (Mode, LastWriteTime, Length, Name). We can get them by reference:

PS /Users/adam> Get-ChildItem | Where-Object Name -Match tmp | Select-Object LastWriteTime

LastWriteTime
-------------
10/15/2020 6:10:55 PM

We tell the shell we want the LastWriteTime property and it gets the value. It’ll get the same value no matter how it was displayed. We’re referencing a property, not parsing output strings.

This makes Posh less fragile, but also gives us access to the standard toolbox of programming techniques. Its functions and variable scopes and arrays and dictionaries and conditions and loops and comparisons and everything else work similarly to languages like Python and Ruby. There’s less Weird Stuff. Ever have to set and unset $IFS in Bash? You don’t have to do that in PowerShell.
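If you haven’t hit it: $IFS controls Bash’s word splitting, and any code that changes it (even a sourced file doing a one-off parse) silently changes how unquoted expansions behave downstream. A minimal sketch of that trap:

```shell
# IFS controls word splitting. Changing it silently changes how
# unquoted variables expand everywhere after the change.
line='Oct 15 tmp'

set -- $line            # default IFS: splits on whitespace
echo "$# words"         # → 3 words

IFS=:                   # someone sets IFS for a one-off parse...
set -- $line            # ...and now the same expansion doesn't split
echo "$# words"         # → 1 words
unset IFS               # easy to forget, and then the breakage spreads
```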

Streams

Streams are a huge feature of PowerShell, and there are already great articles that cover the details. I’m only going to highlight one thing that makes me love them: they let me add informative output similar to a DEBUG log line in Python and other programming languages. Let’s convert our search for tmp into a super-simple script:

[CmdletBinding()]
param()

function Get-Thing {
    [CmdletBinding()]
    param()
    $AllItems = Get-ChildItem
    Write-Verbose "Filtering for 'tmp'."
    return $AllItems | Where-Object Name -Match 'tmp'
}

Get-Thing

If we run this normally we just get the tmp directory:

PS /Users/adam> ./streams.ps1

    Directory: /Users/adam

Mode                 LastWriteTime         Length Name
----                 -------------         ------ ----
d----          10/15/2020  6:10 PM                tmp

If we run it with -Verbose, we also see our message:

PS /Users/adam> ./streams.ps1 -Verbose
VERBOSE: Filtering for 'tmp'.

    Directory: /Users/adam

Mode                 LastWriteTime         Length Name
----                 -------------         ------ ----
d----          10/15/2020  6:10 PM                tmp

We can still pipe to the same command to get the LastWriteTime:

PS /Users/adam> ./streams.ps1 -Verbose | Select-Object LastWriteTime
VERBOSE: Filtering for 'tmp'.

LastWriteTime
-------------
10/15/2020 6:10:55 PM

The pipeline reads objects from a different stream, so we can send whatever we want to the verbose stream without impacting what the user may pipe to later. More on this in a future article. For today, I’m just showing that scripts can present information to the user without making it harder for them to use the rest of the output.

The closest you can get to this in Bash is stderr, but that stream is used for more than just information, and realistically you can’t guess the impact of sending messages to it. Having a dedicated stream for verbose messages makes it trivial to provide information without disrupting behavior.
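For comparison, here’s what that Bash workaround looks like (a sketch; the function and message are illustrative):

```shell
# Send progress messages to stderr (fd 2) so they don't pollute stdout,
# which the caller may capture or pipe.
get_thing() {
    echo "Filtering for 'tmp'." >&2   # informative message, not data
    echo "tmp"                        # the actual result, on stdout
}

result=$(get_thing 2>/dev/null)       # capture stdout, silence messages
echo "captured: $result"              # → captured: tmp
```

But errors and warnings from anything else you call land in that same stream, which is exactly the ambiguity PowerShell’s dedicated verbose stream avoids.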

PowerShell is a big language and there’s a lot more to it than what I’ve covered here. These are just the things that I get daily value from. To me, they more than compensate for the (minimal) overhead of learning a new syntax.

Happy scripting!

Adam
Operating Ops


Separate Work and Personal Email

Good morning!

Recent security incidents reminded me of an important rule that often doesn’t make it on to security checklists:

Separate work and personal email.

In these incidents, workers used forwarding rules to send work email to personal accounts. Attackers used those rules to collect sensitive information. This is an example of exfiltration. Company security teams can do a lot to protect the email accounts they administer, but there’s not much they can do when data is forwarded from those accounts to outside services.

Here are (just a few) common examples of sensitive information attackers might get from email:

  • Password reset links. Most accounts that aren’t protected by MFA can be accessed by a password reset process that only requires you to click a link in an email. Inboxes are the gateway to many other systems.
  • Bug reports. Information sent between engineers, project managers, or other team members about flaws in your products can help attackers craft exploits.
  • Upgrade notifications. If you get an upgrade notification about any tool your company uses, that tells attackers you’re still using an old version of that tool. They can look for known vulnerabilities in that version and use them in attacks.
  • Personal information about workers who have privileged access. Phishing and other forms of social engineering are still common. Phishing was used in the incidents that prompted this post. The more attackers know about you, the more real they can pretend to be. They only need to fool one person who has access to production.
  • Personally identifying information (PII). Customer error reports, for example. They might contain names, email addresses, real addresses, IP addresses, etc. All it takes is a copy/paste of one database entry by an engineer trying to track down the root cause of a problem with the product and you can have PII in your inbox. PII can be valuable to attackers (e.g. for scams) but it’s also subject to regulation. Sending it outside the company can cause big problems.

This applies to everyone, not just engineers. Project managers get bug reports. Customer service staff get customer error reports and any PII they contain. Upgrade notifications are often blasted out to distribution lists that include half the company. Even if you don’t have an engineering role, it’s still important to keep company email within the company.

Stay safe!

Adam


Passing Parameters to Docker Builds

Hello!

When I’m building Docker images, sometimes I need to pass data from the build agent (e.g. my CI pipeline) into the build process. Often, I also want to echo that data into the logs so I can use it for troubleshooting or validation later. Docker supports this!

These examples were all tested in Docker for Mac:

docker --version
Docker version 19.03.13, build 4484c46d9d

First, declare your build-time data as an ARG in your Dockerfile:

FROM alpine:3.7

ARG USEFUL_INFORMATION
ENV USEFUL_INFORMATION=$USEFUL_INFORMATION
RUN echo "Useful information: $USEFUL_INFORMATION"

In this example, I’ve also set an ENV variable so I can RUN a command to print out the new ARG.

Now, just build like usual:

docker build --tag test_build_args --build-arg USEFUL_INFORMATION=1337 .
Sending build context to Docker daemon  10.24kB
Step 1/4 : FROM alpine:3.7
 ---> 6d1ef012b567
Step 2/4 : ARG USEFUL_INFORMATION
 ---> Using cache
 ---> 18d20c437445
Step 3/4 : ENV USEFUL_INFORMATION=$USEFUL_INFORMATION
 ---> Using cache
 ---> b8bbdd03a1d1
Step 4/4 : RUN echo "Useful information: $USEFUL_INFORMATION"
 ---> Running in a2161bfb75cd
Useful information: 1337
Removing intermediate container a2161bfb75cd
 ---> 9ca56256cc19
Successfully built 9ca56256cc19
Successfully tagged test_build_args:latest

If you don’t pass in a value for the new ARG, it resolves to an empty string:

docker build --tag test_build_args .
Sending build context to Docker daemon  10.24kB
Step 1/4 : FROM alpine:3.7
 ---> 6d1ef012b567
Step 2/4 : ARG USEFUL_INFORMATION
 ---> Using cache
 ---> 18d20c437445
Step 3/4 : ENV USEFUL_INFORMATION=$USEFUL_INFORMATION
 ---> Running in 63e4b0ce1fb7
Removing intermediate container 63e4b0ce1fb7
 ---> 919769a93b7d
Step 4/4 : RUN echo "Useful information: $USEFUL_INFORMATION"
 ---> Running in 73e158d1bfa6
Useful information:
Removing intermediate container 73e158d1bfa6
 ---> f928fc025270
Successfully built f928fc025270
Successfully tagged test_build_args:latest
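If an empty string isn’t what you want, ARG also accepts a default value that’s used when no --build-arg is passed. A sketch (the Dockerfile is written via heredoc just to keep the example self-contained; the filename is illustrative):

```shell
# Give the ARG a default so an omitted --build-arg resolves to it
# instead of an empty string.
cat > Dockerfile.defaults <<'EOF'
FROM alpine:3.7

# Used when no --build-arg USEFUL_INFORMATION=... is given.
ARG USEFUL_INFORMATION=not-set
ENV USEFUL_INFORMATION=$USEFUL_INFORMATION
RUN echo "Useful information: $USEFUL_INFORMATION"
EOF
cat Dockerfile.defaults
```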


That’s it! Happy building,

Adam


PowerShell Scripts with Arguments

Hello!

I write a lot of utility scripts. Little helpers to automate repetitive work. Like going through all those YAML files and updating that one config item, or reading through all those database entries and finding the two that are messed up because of that weird bug I just found.

These scripts are usually small. I often don’t keep them very long. I also usually have to run them against multiple environments, and sometimes I have to hand them to other engineers. They need to behave predictably everywhere, and they need to be easy to read and run. They can’t be hacks that only I can use.

In my work, that means a script that takes arguments and passes them to internal functions that implement whatever I’m trying to do. Let’s say I need to find a thing with a known index, then reset it. Here’s the pattern I use in PowerShell:

[CmdletBinding()]
param(
    [int]$Index
)

function Get-Thing {
    [CmdletBinding()]
    param(
        [int]$Index
    )
    return "Thing$Index"
}

function Reset-Thing {
    [CmdletBinding()]
    param(
        [string]$Thing
    )
    # We'd do the reset here if this were a real script.
    Write-Verbose "Reset $Thing"
}

$Thing = Get-Thing -Index $Index
Reset-Thing -Thing $Thing

We can run that from a prompt with the Index argument:

./Reset-Thing.ps1 -Index 12 -Verbose
VERBOSE: Reset Thing12

Some details:

  • The param() call for the script has to be at the top. Posh throws errors if you put it down where the functions are invoked.
  • CmdletBinding() makes the script and its functions handle standard arguments like -Verbose. More details here.
  • This uses Write-Verbose to send informative output to the verbose “stream”. This is similar to setting the log level of a Python script to INFO. It allows the operator to select how much output they want to see. More details here.
  • As always, use verbs from Get-Verb when you’re naming things.
  • I could have written this with just straight commands instead of splitting them into Get and Reset functions, especially for an example this small, but it’s almost always better to separate out distinct pieces of logic. It’ll be easier to read if I have to hand it to someone else who’s not familiar with the operation. Same if I have to put it aside for a while and come back to it after I’ve forgotten how it works.

This is my starting point when I’m writing a helper script. It’s usually enough to let me sanely automate a one-off without getting derailed into full-scale application development.

Happy scripting,

Adam


PowerShell on OS X: Git Hooks

Hello!

PowerShell works great on Mac OS X. It’s my default shell. I usually only do things the Posh way, but sometimes the underlying system bubbles back up. Like when I’m writing git hooks.

In Posh, git hooks live in the same place and still have to be executable on your platform. That doesn’t change. But, the scripts themselves can be different. You have two options.

Option 1: Don’t Use PowerShell

Your existing hooks written in bash or zsh or whatever Linux-ey shell you were using will still work. That’s great if you already have a bunch and you don’t want to port them all.

If you’re writing anything new, though, use PowerShell. When I get into a mess on my Posh Apple, it’s usually because I mixed PowerShell with the legacy shell. You’re better off using just one.

Option 2: Update the Shebang

The shebang (#!) is the first line of executable scripts on Unix-like systems. It sets the program that’s used to run the script. We just need to write one in our hook script that points at pwsh (the PowerShell executable):

#!/usr/local/microsoft/powershell/7/pwsh

Write-Verbose -Verbose "We're about to commit!"

If you don’t have the path to your pwsh, you can find it with Get-Command pwsh.
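Putting it together, installing the hook from a repo root looks something like this (the pwsh path is from my machine; substitute yours):

```shell
# Write the PowerShell pre-commit hook and mark it executable.
mkdir -p .git/hooks                    # already exists in a real repo
cat > .git/hooks/pre-commit <<'EOF'
#!/usr/local/microsoft/powershell/7/pwsh
Write-Verbose -Verbose "We're about to commit!"
EOF
chmod +x .git/hooks/pre-commit         # hooks must be executable
```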

After that, our hook works like normal:

git commit --allow-empty -m "Example commit."
VERBOSE: We're about to commit!
[master a905079] Example commit.

If you don’t set the shebang at all (leaving nothing but the Write-Verbose command in our example), your hook will run but OS X won’t treat it like PowerShell. You get “not found” errors:

git commit --allow-empty -m "Example commit."
.git/hooks/pre-commit: line 2: Write-Verbose: command not found
[master 1b2ebac] Example commit.

That’s actually good. If you have old hook scripts without shebang lines, they won’t break. Just make sure any new Posh scripts do have a shebang and everything should work.

Enjoy the Posh life!

Adam


A Checklist for Submitting Pull Requests

Hello!

Reviewing code is hard, especially because reviewers tend to inherit some responsibility for problems the code causes later. That can lead to churn while they try to develop confidence that new submissions are ready to merge.

I submit a lot of code for review, so I’ve been through a lot of that churn. Over the years I’ve found a few things that help make it easier for my reviewers to develop confidence in my submissions, so I decided to write a checklist. ✔️

The code I write lives in diverse repos governed by diverse requirements. A lot of the items in my checklist are there to help make sure I don’t mix up the issues I’m working on or the requirements of the repos I’m working in.

This isn’t a guide on writing good code. You can spend a lifetime on that topic. This is a quick checklist I use to avoid common mistakes.

This is written for Pull Requests submitted in git repos hosted on GitHub, but most of its steps are portable to other platforms (e.g. Perforce). It assumes common project features, like a contributing guide. Adjust as needed.

The Checklist

Immediately before submitting:

  1. Reread the issue.
  2. Merge the latest changes from the target branch (e.g. master).
  3. Reread the diff line by line.
  4. Rerun all tests. If the project doesn’t have automated tests, you can still:
    • Run static analysis tools on every file you changed.
    • Manually exercise new functionality.
    • Manually exercise existing functionality to make sure it hasn’t changed.
  5. Check if any documentation needs to be updated to reflect your changes.
  6. Check the rendering of any markup files (e.g. README.md) in the GitHub UI.
    • There are remarkable differences in how markup files render on different platforms, so it’s important to check them in the UI where they’ll live.
  7. Reread the project’s contributing guide.
  8. Write a description that:
    1. Links to the issue it addresses.
    2. Gives a plain English summary of the change.
    3. Explains decisions you had to make. Like:
      • Why you didn’t clean up that one piece of messy code.
      • How you chose the libraries you used.
      • Why you expanded an existing module instead of writing a new one.
      • How you chose the directory and file names you did.
      • Why you put your changes in this repo, instead of that other one.
    4. Lists all the tests you ran. Include relevant output or screenshots from manual tests.

There’s no perfect way to submit code for review. That’s why we still need humans to do it. The creativity and diligence of the engineer doing the work are more important than this checklist. Still, I’ve found that these reminders help me get code through review more easily.

Happy contributing!

Adam


How to Grep in PowerShell

Hello!

In oldschool Linux shells, you search files for a string with grep. You’re probably used to commands like this (example results from an OSS repo):

grep -r things .
./terraform.tfstate.backup:              "./count_things.py"
./count_things.py:def count_things(query):
./count_things.py:    count_things()
./terraform.tf:  program = ["python", "${path.module}/count_things.py"]

It outputs strings that concatenate the filename and the matching line. You can pipe those into awk or whatever other command to process them. Standard stuff.
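For example, keeping just the matched lines by cutting off the filename prefix (a sketch against a file created inline):

```shell
# Create a sample file, search it, then strip the "filename:" prefix.
mkdir -p demo
echo 'def count_things(query):' > demo/count_things.py

grep -r things demo | cut -d: -f2-    # → def count_things(query):
```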

You can achieve the same results in PowerShell, but it’s pretty different. Here’s the basic command:

Get-ChildItem -Recurse | Select-String 'things'

count_things.py:7:def count_things(query):
count_things.py:17:    count_things()
terraform.tf:6:  program = ["python", "${path.module}/count_things.py"]
terraform.tfstate.backup:25:              "./count_things.py"

This part is similar. Get-ChildItem recurses through the filesystem and passes the results to Select-String, which searches those files for the string things. The output looks the same. File on the left, matching line on the right. That’s just friendly formatting, though. Really what you’re getting is an array of objects that each represent one match. Posh summarizes that array with formatting that’s familiar, but actually processing these results is completely different.

We could parse out details the Linux way by piping into Out-String to convert the results into strings, splitting on :, and so on, but that’s not idiomatic PowerShell. Posh is object-oriented, so instead of manipulating strings we can just process whichever properties contain the information we’re searching for.

First, we need to know what properties are available:

Get-ChildItem -Recurse | Select-String 'things' | Get-Member

   TypeName: Microsoft.PowerShell.Commands.MatchInfo

Name               MemberType Definition
----               ---------- ----------
Equals             Method     bool Equals(System.Object obj)
GetHashCode        Method     int GetHashCode()
GetType            Method     type GetType()
RelativePath       Method     string RelativePath(string directory)
ToEmphasizedString Method     string ToEmphasizedString(string directory)
ToString           Method     string ToString(), string ToString(string directory)
Context            Property   Microsoft.PowerShell.Commands.MatchInfoContext Context {get;set;}
Filename           Property   string Filename {get;}
IgnoreCase         Property   bool IgnoreCase {get;set;}
Line               Property   string Line {get;set;}
LineNumber         Property   int LineNumber {get;set;}
Matches            Property   System.Text.RegularExpressions.Match[] Matches {get;set;}
Path               Property   string Path {get;set;}
Pattern            Property   string Pattern {get;set;}

Get-Member tells us the properties of the MatchInfo objects we piped into it. Now we can process them however we need.

Select One Property

If we only want the matched lines, not all the other info, we can select just the Line property with Select-Object.

Get-ChildItem -Recurse | Select-String 'things' | Select-Object 'Line'

Line
----
def count_things(query):
    count_things()
  program = ["python", "${path.module}/count_things.py"]
              "./count_things.py"

Sort Results

We can sort results by the content of a property with Sort-Object.

Get-ChildItem -Recurse | Select-String 'things' | Sort-Object -Property 'Line'

terraform.tfstate.backup:25:              "./count_things.py"
count_things.py:17:    count_things()
terraform.tf:6:  program = ["python", "${path.module}/count_things.py"]
count_things.py:7:def count_things(query):

Add More Filters

Often, I search for a basic pattern like ‘things’ and then chain in Where-Object to filter down to more specific results. It can be easier to chain matches as I go than to write a complex match pattern at the start.

Get-ChildItem -Recurse | Select-String 'things' | Where-Object 'Line' -Match 'def'

count_things.py:7:def count_things(query):

We’re not limited to filters on the matched text, either:

Get-ChildItem -Recurse | Select-String 'things' | Where-Object 'Filename' -Match 'terraform'

terraform.tf:6:  program = ["python", "${path.module}/count_things.py"]
terraform.tfstate.backup:25:              "./count_things.py"

There are tons of things you can do. The main detail to remember is that you need Get-Member to tell you what properties are available, then you can use any Posh command to process those properties.

Enjoy freedom from strings!

Adam


Tox: Testing Multiple Python Versions with Pyenv

Hello!

I use Python’s tox to orchestrate a lot of my tests. It lets you set a list of versions in a tox.ini file (in the same directory as your setup.py), like this:

[tox]
envlist = py37, py38

[testenv]
allowlist_externals = echo
commands = echo "success"

Then you can run the tox command; it’ll create a venv for each version and run your tests in each of those environments. It’s an easy way to ensure your code works across all the versions of Python you want to support.

But, if I install tox into a 3.8 environment and run the tox command in the directory with the tox.ini above, I get this:

tox
GLOB sdist-make: /Users/adam/Local/fiddle/setup.py
py37 create: /Users/adam/Local/fiddle/.tox/py37
ERROR: InterpreterNotFound: python3.7
py38 create: /Users/adam/Local/fiddle/.tox/py38
py38 inst: /Users/adam/Local/fiddle/.tox/.tmp/package/1/example-0.0.0.zip
py38 installed: example @ file:///Users/adam/Local/fiddle/.tox/.tmp/package/1/example-0.0.0.zip
py38 run-test-pre: PYTHONHASHSEED='2325607949'
py38 run-test: commands[0] | echo success
success
___________________________________________________________________________ summary ____________________________________________________________________________
ERROR:  py37: InterpreterNotFound: python3.7
  py38: commands succeeded

It found the 3.8 interpreter I ran it with, but it couldn’t find 3.7.

pyenv can get you past this. It’s a utility for installing and switching between multiple Python versions. I use it on OS X (the pyenv docs have setup instructions, if you’re not already using it). Here’s how it looks when I have Python 3.6, 3.7, and 3.8 installed, and I’m using 3.8:

pyenv versions
  system
  3.6.11
  3.7.9
* 3.8.5 (set by /Users/adam/.pyenv/version)

Just having those versions installed isn’t enough, though. You still get the error from tox about missing versions. You have to specifically enable each version:

pyenv local 3.8.5 3.7.9
pyenv versions
  system
  3.6.11
* 3.7.9 (set by /Users/adam/Local/fiddle/.python-version)
* 3.8.5 (set by /Users/adam/Local/fiddle/.python-version)

This will create a .python-version file in the current directory that sets your Python versions. pyenv will read that file whenever you’re in that directory. You can also set versions that’ll be picked up in any folder with the pyenv global command.
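For reference, .python-version is plain text: one version per line, in the order you gave them to pyenv local. A sketch of what the file above contains (recreated by hand here):

```shell
# Recreate the file pyenv local writes: one version per line.
printf '3.8.5\n3.7.9\n' > .python-version
cat .python-version
```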

Now, tox will pick up both versions:

tox
GLOB sdist-make: /Users/adam/Local/fiddle/setup.py
py37 inst-nodeps: /Users/adam/Local/fiddle/.tox/.tmp/package/1/example-0.0.0.zip
py37 installed: example @ file:///Users/adam/Local/fiddle/.tox/.tmp/package/1/example-0.0.0.zip
py37 run-test-pre: PYTHONHASHSEED='1664367937'
py37 run-test: commands[0] | echo success
success
py38 inst-nodeps: /Users/adam/Local/fiddle/.tox/.tmp/package/1/example-0.0.0.zip
py38 installed: example @ file:///Users/adam/Local/fiddle/.tox/.tmp/package/1/example-0.0.0.zip
py38 run-test-pre: PYTHONHASHSEED='1664367937'
py38 run-test: commands[0] | echo success
success
___________________________________________________________________________ summary ____________________________________________________________________________
  py37: commands succeeded
  py38: commands succeeded
  congratulations :)

That’s it! Now you can run your tests in as many versions of Python as you need.

Happy testing,

Adam


PowerShell on OS X: Setting Your Path Variable

Hello!

There are two tricky little problems when setting your path variable in PowerShell. Here’s how to get past them.

First, lots of guides show things like this:

$Env:Path += "PATH_STRING"

Which works on Windows but won’t work on OS X. The variable name has to be all-caps:

$Env:PATH += "PATH_STRING"

Next, the separator between path elements on Windows is ;, but on OS X it’s :. Swap them and you should be good to go:

# Windows-only, won't work:
# $Env:PATH += ";/Users/adam/opt/bin"

# Works on OS X:
$Env:PATH += ":/Users/adam/opt/bin"

Small details, but they were remarkably fiddly to figure out the first time I ran into them. Lots of people use Posh on Windows, so lots of guides and docs won’t work on Mac. You may find similar compatibility problems in scripts, too. Hopefully this saves you from some frustration.

Happy scripting,

Adam
