Separate Work and Personal Email

Good morning!

Recent security incidents reminded me of an important rule that often doesn’t make it onto security checklists:

Separate work and personal email.

In these incidents, workers used forwarding rules to send work email to personal accounts. Attackers used those rules to collect sensitive information. This is an example of exfiltration. Company security teams can do a lot to protect the email accounts they administer, but there’s not much they can do when data is forwarded from those accounts to outside services.

Here are (just a few) common examples of sensitive information attackers might get from email:

  • Password reset links. Most accounts that aren’t protected by MFA can be accessed by a password reset process that only requires you to click a link in an email. Inboxes are the gateway to many other systems.
  • Bug reports. Information sent between engineers, project managers, or other team members about flaws in your products can help attackers craft exploits.
  • Upgrade notifications. If you get an upgrade notification about any tool your company uses, that tells attackers you’re still using an old version of that tool. They can look for known vulnerabilities in that version and use them in attacks.
  • Personal information about workers who have privileged access. Phishing and other forms of social engineering are still common. Phishing was used in the incidents that prompted this post. The more attackers know about you, the more real they can pretend to be. They only need to fool one person who has access to production.
  • Personally identifying information (PII). Customer error reports, for example. They might contain names, email addresses, physical addresses, IP addresses, etc. All it takes is a copy/paste of one database entry by an engineer trying to track down the root cause of a problem with the product and you can have PII in your inbox. PII can be valuable to attackers (e.g. for scams), but it’s also subject to regulation. Sending it outside the company can cause big problems.

This applies to everyone, not just engineers. Project managers get bug reports. Customer service staff get customer error reports and any PII they contain. Upgrade notifications are often blasted out to distribution lists that include half the company. Even if you don’t have an engineering role, it’s still important to keep company email within the company.

Stay safe!


Need more than just this article? We’re available to consult.

You might also want to check out these related articles:

PowerShell DSC: Self Signed SSL Certs


First, this isn’t a best practices guide for SSL certificates; it’s a how-to for creating functional ones. As always, only use self-signed certs when you’ve specifically validated that they’re a sufficiently secure solution.

When I do need self-signed certs and I’m working in Windows, I generate them with PowerShell DSC and its Script Resource. It works great for my cases. There are more robust ways that may also be worth looking at, like Custom Resources.

I always need certs with no password to make it easy to start apps unattended, so that’s what these instructions create.

This assumes you’ve already installed OpenSSL. I use Chocolatey.

Prerequisites out of the way. Now, the code:

Script SelfSignedCert {
    GetScript = {@{Result = ''}}
    SetScript = {
        New-Item -ItemType Directory -Force -Path 'C:\Tmp\Ssl\'

        # Generate PFX with a temporary password
        $Cert = New-SelfSignedCertificate `
            -CertStoreLocation 'cert:\localmachine\my' `
            -DnsName 'localhost'
        $Password = ConvertTo-SecureString `
            -String 'temppass' `
            -AsPlainText `
            -Force
        Export-PfxCertificate `
            -Cert "cert:\localmachine\my\$($Cert.Thumbprint)" `
            -FilePath 'C:\Tmp\Ssl\Cert.pfx' `
            -Password $Password

        # Convert PFX to Key/PEM with no password
        C:\Program` Files\OpenSSL-Win64\bin\openssl.exe pkcs12 `
            -in 'C:\Tmp\Ssl\Cert.pfx' `
            -nocerts `
            -nodes `
            -out 'C:\Tmp\Ssl\Pkcs12.pem' `
            -passin 'pass:temppass'
        # 'openssl.exe rsa' sends 'writing RSA key' to the error stream
        # on success. We have to redirect that output or the Script
        # resource errors.
        C:\Program` Files\OpenSSL-Win64\bin\openssl.exe rsa `
            -in 'C:\Tmp\Ssl\Pkcs12.pem' `
            -out 'C:\Tmp\Ssl\Rsa.key' `
            2>&1 | Out-Null
        C:\Program` Files\OpenSSL-Win64\bin\openssl.exe pkcs12 `
            -clcerts `
            -in 'C:\Tmp\Ssl\Cert.pfx' `
            -nokeys `
            -out 'C:\Tmp\Ssl\Cert.pem' `
            -passin 'pass:temppass'

        # Clean up leftovers from the conversion
        Remove-Item 'C:\Tmp\Ssl\Cert.pfx'
        Remove-Item 'C:\Tmp\Ssl\Pkcs12.pem'
    }
    TestScript = {Test-Path 'C:\Tmp\Ssl'}
}

The openssl.exe rsa call is the tricky one. The Script resource fails when its code outputs to the error stream. OpenSSL’s rsa command sends the string "writing RSA key" to the error stream when it succeeds. So we get failures like these:

Stderr from the command:

powershell.exe : writing RSA key
    + CategoryInfo          : NotSpecified: (writing RSA key:String) [], RemoteException
    + FullyQualifiedErrorId : NativeCommandError
    + CategoryInfo          : NotSpecified: (writing RSA key:) [], CimException
    + FullyQualifiedErrorId : NativeCommandError
    + PSComputerName        : localhost

I bet the reason is this: OpenSSL was designed for Linux. In Linux, it’s common to send informational output to stderr. That keeps it out of stdout and therefore keeps it from passing to other apps via pipes (|). In PowerShell, there are many streams, including a dedicated one for informative output (the “verbose” stream). That makes it an anti-pattern to send informative output to PowerShell’s error stream; you should use the verbose stream. So it makes sense for DSC to assume that anything on the error stream is a real error, unlike in Linux. The person who ported OpenSSL didn’t account for this.
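You can watch this convention in miniature from any language that can capture streams. In the Python sketch below, the child process stands in for openssl.exe rsa: it exits successfully while writing its status message to stderr. (This is a simulated example, not OpenSSL itself.)

```python
import subprocess
import sys

# Simulate a tool that, like 'openssl.exe rsa', reports success on stderr.
# The child process exits 0 (success) but its status message lands on the
# error stream, which is exactly what trips up the DSC Script resource.
proc = subprocess.run(
    [sys.executable, "-c", "import sys; sys.stderr.write('writing RSA key\\n')"],
    capture_output=True,
    text=True,
)
```

A tool that only checks the exit code sees success; a tool that treats any stderr output as failure, like the Script resource, sees an error.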

The only workaround I could find was to redirect the error stream to Out-Null. The ErrorAction flag was no help; it’s a common parameter for PowerShell cmdlets, and OpenSSL is a native executable.

I couldn’t reproduce this behavior in raw PowerShell, only in the DSC script resource. If you know the details on why that is, I’d love to hear from you.

Hope this helps! Happy automating,



Cloud Infrastructure: Automating For Security


The United States National Security Agency (NSA) just published guidance for mitigating cloud vulnerabilities. It reached my inbox via the United States Department of Homeland Security’s Cybersecurity and Infrastructure Security Agency (CISA) mailing list.

The document covers a bunch of topics and I recommend reading the whole thing, but its “misconfiguration” section contains a guideline that’s extra-relevant to DevOps:

Use [Cloud Service Provider] tools or techniques, such as Infrastructure as Code, to reduce the risk of misconfiguration

I’ve seen misconfigurations behind major security vulnerabilities: once I found the private key of an SSL cert in an Apache web server’s DocumentRoot. It was readable by the entire Internet. This was an accident; an administrator had run the wrong copy command. Anyone in the world could have used that key to execute man-in-the-middle attacks. The NSA doc has more examples that are equally scary.

One of the biggest reasons to automate is that it makes this kind of human error harder. Humans will inevitably make errors, but when your infrastructure is deployed by code there are two new things that help protect you from them:

  1. You get more opportunities to spot the problem. A developer reading their own code will have a chance to see the bad path and fix it. When they submit that code for review by their team, the team will get another chance. Anytime anyone reads that part of the code for any reason they’ll get another chance. When deploying by hand, one person makes one mistake in a second of distraction and suddenly your private keys are public.
  2. You often import existing libraries and tools that implement their own security good practices. Instead of deploying certs and keys yourself, you are more likely to pass them as arguments to tools that are already written to deploy them safely.
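As a toy illustration of point 1, here’s the kind of guardrail a deployment script could run before publishing files, the sort of check that would have caught the DocumentRoot key leak above. The helper and the demo are a sketch, not a complete security check:

```python
import os
import stat
import tempfile

def is_world_readable(path):
    """Return True if 'other' users can read the file."""
    return bool(os.stat(path).st_mode & stat.S_IROTH)

# Demo: a private key accidentally deployed with loose permissions
# would be caught before it ever reached a public DocumentRoot.
fd, key_path = tempfile.mkstemp()
os.close(fd)

os.chmod(key_path, 0o644)                 # the "wrong copy command" outcome
leaky = is_world_readable(key_path)       # guardrail flags it

os.chmod(key_path, 0o600)                 # owner-only, as a key should be
safe = not is_world_readable(key_path)    # guardrail passes

os.remove(key_path)
```

A human eyeballing `ls -l` output might miss this; a check in the deployment code runs every single time.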

I recommend automating everything. Even experiments. It’s better to bodge together low-quality automation to build your test environment than to hack it together by hand. Even the exercise of writing low-quality automation will give you more chances to spot problems. I’ve also found that even dodgy automation encourages you to import well-written libraries, simply because it’s easier to import than to write your own code. Robots forever! 🤖

Happy automating,



Securing AWS Security Groups: Restricting Egress Rules

Good afternoon!

Today’s article demonstrates a surprisingly easy way to tighten the network-layer permissions in an AWS VPC. (If you’re in AWS but you’re not in a VPC: 😡)

Security Groups have ingress and egress rules (also called inbound and outbound rules). In most SGs, the egress rules allow all traffic to everywhere. You’ve probably seen this:


That’s a problem because, someday, you will get hacked. Breaches are inevitable; perfect security doesn’t exist. Someone or some bot will get access to what that SG was protecting (EC2 instance, Fargate ENI, whatever). When that happens, they can send out anything they want. Instead, you want them to have the most limited capabilities possible. They should find walls everywhere they turn. It’s not just about what’s coming in, it’s also about what’s going out.

I like AWS’s directive from the third bullet of the Security Pillar in their Well-Architected Framework: “Apply security at all layers”. Incoming and outgoing.

Fortunately, SG outbound rules are easy to tighten!

Imagine an Autoscaling Group of Linux instances. They handle background jobs and sometimes engineers SSH into them to run diagnostics (for this example we’re imagining you haven’t set up SSM Session Manager). At boot time they yum install some packages.

As usual, you need an incoming rule to allow SSH:


Now, here’s how to determine the outgoing rules you need: if the resource protected by the SG starts the connection, you need an outgoing rule. If it only replies to connections started by someone else, you don’t need an outgoing rule. Details farther down.

For anyone who’s forgotten, yum uses HTTP/S (ports 80/443) and FTP (ports 20/21).

The instance receives and then replies to SSH requests. That means it didn’t start those connections, so we don’t need an outgoing rule. But, it sends HTTP/S and FTP requests because it has to request to download packages from the yum servers. That means it starts those connections, so we need outgoing rules:


That’s it. If you attach this SG to the instances, engineers will still be able to SSH into them and they will still be able to yum install packages. But, if an instance tries to do something else, like maybe connect to a MySQL database (port 3306), the SG will block that traffic and the connection will time out. When something Evil breaks into this instance, it won’t be able to access that database.

These three rules are enough because Security Groups are stateful. To dramatically oversimplify, statefulness means that SGs know whether traffic passing through them is part of a connection the instance has already agreed to. If it is, they pass the traffic whether or not a rule is present.
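Here’s that logic as a toy model, just the reasoning above in executable form. Real Security Groups are not implemented anything like this, and the host names are made up:

```python
# Toy model of a stateful egress filter. Purely illustrative; real SGs
# work at the packet level and track much more than this.
class StatefulEgressFilter:
    def __init__(self, egress_rules):
        self.egress_rules = set(egress_rules)  # allowed (host, port) pairs
        self.established = set()               # connections already agreed to

    def outbound(self, host, port):
        """The instance starts a connection: an egress rule is required."""
        if (host, port) in self.egress_rules:
            self.established.add((host, port))
            return True
        return False

    def reply(self, host, port):
        """Return traffic passes if the connection is established, rule or not."""
        return (host, port) in self.established

sg = StatefulEgressFilter({("mirror.example.com", 443)})   # hypothetical yum mirror
allowed_request = sg.outbound("mirror.example.com", 443)   # yum download: allowed
allowed_reply = sg.reply("mirror.example.com", 443)        # reply: no ingress rule needed
blocked = not sg.outbound("db.example.com", 3306)          # MySQL egress: blocked
```

Note that the reply passes with no incoming rule at all, which is why the three rules above are all this instance needs.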

I’m skipping a ton of details. This is meant to help you quickly do one round of tightening on your network. There are advantages and disadvantages to stateful filtering, and the details can take you deep into the weeds, but most of the time it’s enough to know what rules you need. If you want to go farther with this on your own, check out this project for a demonstration environment where you can experiment with variations.

Two details:

  • We have to allow yum traffic to any destination because we don’t know the IP addresses of the upstream yum servers.
  • VPC subnet ACLs are not stateful, so you need different rules for those. I’ll cover that in another article.

In this example we weren’t able to stop whatever Evil Thing had broken into your instance from sending your Super Secret Stuff to some Evil Webserver somewhere, but we did take away some of their other tools. We’ve put up one more wall they might run into, and every wall they hit might stop them. Walls at every turn.

Stay safe!



Beating AWS Security Groups


Today I’ll show you how to pass traffic through an AWS Security Group that’s configured not to allow that traffic.

This isn’t esoteric hacking, it’s a detail in the difference between config and state that’s easy to miss when you’re operating an infrastructure.

Like I showed in a previous post, AWS Security Groups are stateful. They know the difference between the first packet of a new connection and packets that are part of connections that are already established.

This statefulness is why you can let host A SSH to host B just by allowing outgoing SSH on A’s SG and incoming SSH on B’s SG. B doesn’t need to allow outgoing SSH because it knows the return traffic is part of a connection that was already allowed. Similarly for A and incoming SSH.

Here’s the detail of today’s post: if a Security Group sees traffic as part of an established connection, it’ll allow that traffic even if its rules say not to. OK, now let’s break a Security Group.

The Lab

Two hosts, testa and testb. One SG for each, both allowing all outgoing traffic. Testb’s SG allows incoming TCP on port 4321 (an arbitrary port I chose for this test):


To test traffic flow, I’m going to use nc. It’s a common Linux utility that sends and receives TCP traffic:

  • Listen: nc -l [port]
  • Send: nc [host] [port]

Test Steps:

(screenshots of shell output below)

  1. Listen on port 4321 on testb.
  2. Start a connection from testa to port 4321 on testb.
  3. Send a message. It’s delivered, as expected.
  4. Remove testb’s SG rule allowing port 4321.
  5. Send another message through the connection. It will get through! There’s no rule to allow it, but it still gets through.


To show nothing else was going on, let’s redo the test with the security group as it is now (no rule allowing 4321).

  1. Quit nc on testa to close the connection. You’ll see it also close on testb.
  2. Listen on port 4321 on testb.
  3. Start a connection from testa to port 4321 on testb.
  4. Send a message. Not delivered. This time there was no established connection, so the traffic was compared to the SG’s rules. There was no rule to allow it, so it was denied.

Testb Output

(where we listened)


Only two messages got through.

Testa Output

(where we sent)


We sent three messages. The second and third were sent while the SG had the same rules, yet the second got through and the third was denied.


The rules in Security Groups don’t apply to open (established) TCP connections. If you need to ensure traffic isn’t flowing between two instances you can’t just remove rules from your SGs. You have to close all open connections.
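The whole experiment fits in a few lines of toy modeling. Again, this is a sketch of the observed behavior, not how Security Groups are actually implemented:

```python
# Toy model of the experiment above: established connections outlive
# the rules that allowed them. Illustrative only.
class StatefulIngressFilter:
    def __init__(self):
        self.ingress_rules = set()   # allowed ports
        self.established = set()     # live connections

    def new_connection(self, port):
        """A new connection is only allowed if a rule permits it."""
        if port in self.ingress_rules:
            self.established.add(port)
            return True
        return False

    def packet(self, port):
        """Established traffic passes regardless of the current rules."""
        return port in self.established or port in self.ingress_rules

sg = StatefulIngressFilter()
sg.ingress_rules.add(4321)
opened = sg.new_connection(4321)       # step 2: connection allowed by the rule
sg.ingress_rules.remove(4321)          # step 4: remove the SG rule
still_flows = sg.packet(4321)          # step 5: established traffic still flows
blocked = not sg.new_connection(4321)  # redo: new connections are denied
```

The takeaway matches the experiment: removing the rule changed nothing for the open connection, only for new ones.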

Happy securing,



AWS Security Groups: Stateful Statelessness



Recently, I rediscovered a fiddly networking detail: although ICMP’s ping is stateless, AWS security groups will pass return ping traffic even when only one direction is defined in their rules. I wanted to see this in action, so I built a lab.

If you just asked, “Wat❓”, keep reading. Skip to the next section if you just want the code.


Background

Network hosts using stateful protocols (like TCP) distinguish between packets that are part of an established connection and packets that are new. For example, when I SSH (which runs on TCP) from A to B:

  1. A asks B to start a new connection.
  2. B agrees.
  3. A and B exchange bunches of packets that are part of the connection they agreed to.
  4. A and B agree to close the connection.

There’s a difference between a new packet and a packet that’s part of an ongoing connection. That means the connection, and its packets, have state (e.g. new vs established). Stateful firewalls (vs stateless) are aware of this:

  1. A asks B to start a new connection.
  2. Firewalls in between allow these packets if there is an explicit rule allowing traffic from A to B.
  3. A and B exchange bunches of packets.
  4. Firewalls in between allow the packets from A to B because of the explicit rule above. However, they allow the return traffic from B to A even if there is no explicit rule to allow it. Since B agreed to the connection the firewall assumes that packets in that connection should be allowed.

This is why you only need an outgoing rule on A’s Security Group (SG) and an incoming rule on B’s Security Group to SSH from A to B. AWS SGs are stateful, and allow the return traffic implicitly.

Ok, here’s the gnarly bit. ICMP (the protocol behind ping) is stateless. Hosts don’t have a negotiation phase where they agree to establish a connection. They just send packets and hope. So, doesn’t that mean I need to write explicit firewall rules in the SGs to allow the return traffic? If the firewall can’t see the state of the connection, it won’t be able to implicitly figure out to allow that traffic, right?

Nope, they infer state based on timeouts and packet types. ICMP pings are ECHO requests answered by ECHO replies. If the SG has seen a request within the timeout, it makes the educated guess that replies are essentially part of “established” connections and allows them. This is what I wanted to see in action.
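That “educated guess” can be mimicked by reading the ICMP type byte, the same field the lab script below packs into its headers. Here’s a small sketch; the classifier is hypothetical, not AWS’s implementation:

```python
import struct

# ICMP echo types. A stateful filter can classify a packet by reading this
# single byte, which is how an SG can treat replies as "established" traffic.
ICMP_ECHO_REQUEST = 8
ICMP_ECHO_REPLY = 0

def build_header(icmp_type, checksum=1, packet_id=1, sequence=1):
    # Header is type (8), code (8), checksum (16), id (16), sequence (16),
    # the same layout the lab script uses.
    return struct.pack("bbHHh", icmp_type, 0, checksum, packet_id, sequence)

def classify(header):
    icmp_type = struct.unpack("bbHHh", header)[0]
    return {ICMP_ECHO_REQUEST: "request", ICMP_ECHO_REPLY: "reply"}[icmp_type]

req = classify(build_header(ICMP_ECHO_REQUEST))
rep = classify(build_header(ICMP_ECHO_REPLY))
```

If the filter has seen a "request" recently, it can allow a "reply" going the other way, even though ICMP itself has no connections.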

The Lab

I set up a VPC with two hosts, A and B. They’re in different subnets, but the ACLs allow all traffic so they don’t influence the test. Here are the SG rules for A (the CIDR in the rules covers the entire VPC):


And the rules for B:


A allows outgoing ICMP to B, and B allows incoming ICMP from A. The return traffic is not allowed by any rules.

The Test Script

I didn’t find a way to send just replies without requests in Linux, so I bodged together a Python script:

"""
This is a stripped-down version of ping that allows you to send a reply without responding to a request. This was
needed to test the details of how security groups handle state with ICMP traffic. You shouldn't use this for normal
pings.

The ping implementation was based on Samuel Stauffer's python-ping (which only works with Python 2).

This must be run as root.
You must tell the Linux kernel to ignore ICMP before you run this or it'll eat some of the traffic:
    echo 1 > /proc/sys/net/ipv4/icmp_echo_ignore_all
"""

import argparse, socket, struct, time

def get_arguments():
    parser = argparse.ArgumentParser(formatter_class=argparse.ArgumentDefaultsHelpFormatter)
    parser.add_argument('--send-request', metavar='IP_ADDRESS', type=str, help='IP address to send ECHO request.')
    parser.add_argument('--receive', action='store_true', help='Wait for a reply.')
    parser.add_argument('--send-reply', metavar='IP_ADDRESS', type=str, help='IP address to send ECHO reply.')
    return parser.parse_args()

def receive(my_socket):
    while True:
        recPacket, addr = my_socket.recvfrom(1024)
        icmpHeader = recPacket[20:28]
        icmp_type, code, checksum, packetID, sequence = struct.unpack("bbHHh", icmpHeader)
        print('Received type {}.'.format(icmp_type))

def ping(my_socket, dest_addr, icmp_type):
    dest_addr = socket.gethostbyname(dest_addr)
    bytesInDouble = struct.calcsize("d")
    data = (192 - bytesInDouble) * "Q"
    data = struct.pack("d", time.time()) + data
    dummy_checksum = 1 & 0xffff
    dummy_id = 1 & 0xFFFF
    # Header is type (8), code (8), checksum (16), id (16), sequence (16)
    header = struct.pack("bbHHh", icmp_type, 0, socket.htons(dummy_checksum), dummy_id, 1)
    packet = header + data
    my_socket.sendto(packet, (dest_addr, 1))

if __name__ == '__main__':
    args = get_arguments()
    icmp = socket.getprotobyname("icmp")
    my_socket = socket.socket(socket.AF_INET, socket.SOCK_RAW, icmp)
    if args.send_request:
        ping(my_socket, args.send_request, icmp_type=8)  # Type 8 is ECHO request.
    if args.receive:
        receive(my_socket)
    if args.send_reply:
        ping(my_socket, args.send_reply, icmp_type=0)  # Type 0 is ECHO reply.

You can skip reading the code; the important thing is that we can individually choose to listen for packets, send ECHO requests, or send ECHO replies:

python --help
usage: [-h] [--send-request IP_ADDRESS] [--receive]
               [--send-reply IP_ADDRESS]

optional arguments:
  -h, --help            show this help message and exit
  --send-request IP_ADDRESS
                        IP address to send ECHO request. (default: None)
  --receive             Wait for a reply. (default: False)
  --send-reply IP_ADDRESS
                        IP address to send ECHO reply. (default: None)

The Experiment

SSH to each host and tell Linux to ignore ICMP traffic so I can use the script to capture it (see docstring in the script above):

sudo su -
echo 1 > /proc/sys/net/ipv4/icmp_echo_ignore_all

Normal Ping

I send a request from A to B and expect the reply from B to A to be allowed. Here’s what happened:



Ok, nothing surprising. I sent a request from A to B and started listening on A; a little later I sent a reply from B to A and it was allowed. You can do this test with the normal Linux ping command (but not until you tell the kernel to stop ignoring ICMP traffic). This test just validates that my bodged Python actually works.

Reply Only

First we wait a bit. The previous test sent a request from A to B, which started a timer in the SG. Until that timer expires, reply traffic will be allowed. We need to wait for that expiration before this next test is valid.


Boom! I start listening on A, without sending a request. On B I send a reply to A but it never arrives. The Security Group didn’t allow it. This demonstrates that Security Groups are inferring the state of ICMP pings by reading their type.

Other Tests

I also tried a couple other things that I’ll leave to you to reproduce in your own lab if you want to see them.

  • Start out like the normal test. Send a request from A to B and start listening on A. Then send several replies from B to A. They’re all allowed. This shows that the SG isn’t counting to ensure it only allows one reply for each request; if it has seen just one request within the timeout it allows replies even if there are multiple.
  • Edit the script above to set the hardcoded ID to be different on A than it is on B. Then nothing works at all. I’m not actually sure what causes this. Could be that the SG is looking at more than just the type, but it could also be in the kernel or the network drivers or somewhere else I haven’t thought of. If you figure it out, message me!


I had free time over the holidays this year! Realistically, understanding this demo isn’t a priority for doing good work on AWS. I just enjoy unwrapping black boxes to see how the parts move.

Be well!

