I am not a fan of assigning DNS names to EC2 instances as I feel it goes against the ethos of
treating instances as temporary, ephemeral units of processing power. Cattle, rather than pets.
In the world of dynamic autoscaling groups, why would an instance need a domain name?
I do not need to route traffic to it – it sits behind an ELB.
I do not need to identify it by hostname in my monitoring and alerting platforms – EC2 tags are well-entrenched
in the modern DevOps ecosystem, and provide a lot more flexibility when it comes to grouping and aggregating data.
There is only one downside to this approach – sometimes a member of the herd requires some individual TLC. Just
as one of the cattle might require a vet’s personal attention to administer medicine, an
EC2 instance might require the attention of a sysadmin because its response times are spiking in comparison
to the rest of the cluster.
No matter how good your deployment procedure, or how immutable your infrastructure, sometimes you just need to
hop on to an instance and do some good old-fashioned troubleshooting.
But if your instances do not have predictable hostnames, you need a way to find the generated public DNS name before you
can open your SSH session.
The most low-tech – if anything relating to AWS can be described as low-tech – method is to open up the web console,
filter the list of instances, copy the instance’s IP or public DNS name, and paste it into the terminal. This involves
something I strongly dislike – taking my hands off the keyboard to use the trackpad or mouse.
Another option is to use your configuration management tool to discover the instance’s IP or execute a command
on the remote instance. Most tools provide some method of addressing EC2 instances via their tags:
salt -G 'ec2_tags:environment:production' test.ping
This Salt command would cause any instances with an ‘environment=production’ tag to respond to the test ping. I could
use this to discover the public DNS name of my target instance without leaving the comfort of the terminal, but it
still involves too much typing and relies on there being a configuration management system in place.
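Even without configuration management, the plain AWS CLI can do the same tag-based discovery. A sketch, wrapped in a small helper function – the function name is my own, but the filter keys are standard EC2 API filters:

```shell
# Hypothetical helper: print the public DNS names of running
# instances carrying a given tag (key, value). The filter names
# are standard EC2 API filters; dns_by_tag is an illustrative name.
dns_by_tag() {
  aws ec2 describe-instances \
    --filters "Name=tag:$1,Values=$2" \
              "Name=instance-state-name,Values=running" \
    --query 'Reservations[].Instances[].PublicDnsName' \
    --output text
}

# Usage: ssh "$(dns_by_tag role web)"
```

This works, but it is still a lot to type for a quick troubleshooting session.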
As an ardent follower of the xkcd #1319 protocol, I decided to write
a little script to help save some future typing.
ssh2ec2 is a wrapper around ssh which allows you to
use EC2 tags and metadata to select the destination hostname.
Install it with pip (pip install ssh2ec2) and give it a try:
# Connect to an instance named nginx
ssh2ec2 --tag Name=nginx

# Connect to any web instance in production
ssh2ec2 --tag role=web --tag environment=production

# Connect to any t2.micro instance in us-east-1a
ssh2ec2 --availability-zone us-east-1a --instance-type t2.micro
To see a list of supported filters, check the tool’s built-in help (ssh2ec2 --help).
If multiple instances match, you will be connected to one of them at random.
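The random pick is easy to sketch in shell – the host names below are placeholders standing in for whatever the EC2 query returned:

```shell
# Sketch: choose one host at random from a set of matches,
# mirroring ssh2ec2's behaviour when several instances qualify.
# The host names are placeholders, not real instances.
matches="ec2-host-a ec2-host-b ec2-host-c"
target=$(printf '%s\n' $matches | shuf -n 1)
echo "$target"
# ssh "$target" would then open the session
```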
Any additional arguments are passed to ssh. You can run a command on the instance with:
ssh2ec2 --tag role=web uname -a
Or on all matching instances with:
ssh2ec2 --all-matching-instances --tag role=web uname -a
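Conceptually, --all-matching-instances just loops where the default path picks a single host. A rough shell equivalent – run_on_all and the host list are illustrative, not part of the tool:

```shell
# Sketch: run a command on every matching host in turn, roughly
# what --all-matching-instances does. Function name is illustrative.
run_on_all() {
  hosts=$1
  shift
  for h in $hosts; do
    ssh "$h" "$@"
  done
}

# Usage: run_on_all "host-a host-b" uname -a
```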
This is not supposed to be a replacement for a proper configuration management system!
Due to the way Python’s argparse module handles arguments, you need to use a slightly different
syntax if you want to pass command-line options to ssh itself. The following two commands are functionally
equivalent:
ssh2ec2 --tag role=web "-i /path/to/key echo hello"
ssh2ec2 --tag role=web --ssh-args "-i /path/to/key" echo hello
I hope this helps save you some typing.