
Use SSH Tunnels for Remote Administration


Over the last week, I spent a lot of time redesigning the infrastructure for my server environment in Amazon Web Services. Part of the architecture included building a management backbone for the servers. The design looks similar to the following diagram:

Network Diagram

There were a few important concepts to this design:

  • The Management Server is only turned on when remote administration work needs to be done. This is handled through the AWS Console.
  • Having the Management Server keeps you from having to expose remote administration services (e.g., SSH) on production servers, where they could be exploited.
  • The Management Server does not have a static public IP address; one is dynamically assigned at each startup.

The Internet's Solution

All of the servers are Linux servers, so my remote administration is done through SSH. I researched how others work with this design, and here is what I found.

When you set up Linux servers in AWS, they are configured to authenticate with key pairs (the .pem "certificate" files that AWS generates for you). With this setup, here is how you connect in and remote to the Web Server:

Step 1 – Use your certificate file to SSH in to the Management Server.

Step 2 – Make sure you have a copy of your certificate file on the Management Server so that you can then SSH from the Management Server in to the Web Server.

Here’s the problem . . . doing this requires you to store your private certificate files on a server out in the cloud. If that server is ever compromised, the attacker has your keys, and therefore has access to all of your servers.

My Solution

A while back, I had explored the concept of SSH tunnels. It had been a while since I last used them, but a little searching quickly reminded me of the concepts. Here is the basic process:

Step 1 – Set up an SSH tunnel from your computer to the Management Server using your certificate file. The tunnel creates a listening port on your local machine that forwards traffic through the Management Server.

Step 2 – Connect through that local port to reach the remote server, still using the certificate file on your local machine.

With this process, the certificate keys never leave your computer. They are never stored out on the cloud.

Specific Steps

I work with Mac computers, so I use the Terminal program and the shell based SSH utility for my administration. The following steps should work for any Mac or Linux/Unix based environment.

Step 1

The first step is to establish a tunnel from your computer to the Management Server. Using the diagram above, the command looks like this (replace the placeholders with your own addresses):

ssh -i MySSHCertificate.pem -L 9500:<WebServerPrivateIP>:22 ubuntu@<ManagementServerPublicIP>

Let’s break down the various parts of the command:

-i MySSHCertificate.pem

This should look familiar if you have done any administration with Linux servers in AWS. This simply directs the SSH client to use the MySSHCertificate.pem certificate file when authenticating to the Management Server.

-L 9500:<WebServerPrivateIP>:22

This sequence does a lot of work. The general form is -L <local-port>:<destination-host>:<destination-port>. First, it establishes a local tunnel endpoint on port 9500 of your computer. That number is arbitrary; you can pick any free TCP port you wish. Next, the tunnel is pointed at <WebServerPrivateIP>, which is the internal management IP address of the Web Server that we want to manage. Finally, we target port 22, the SSH port on the Web Server.

ubuntu@<ManagementServerPublicIP>

Finally, we connect to the Management Server at its dynamic public IP address (whatever address was assigned at this startup) as the ubuntu user.

If you are successful, you will see a normal SSH prompt on the Management Server. For now, minimize this terminal session and let it run in the background. (Tip: adding the -N flag tells SSH not to open a remote shell at all, and -f sends the tunnel straight to the background.)

Step 2

Now we need to connect to the Web Server. We are going to tunnel through the locally established TCP port of 9500 to get to the remote server. To do this, we will start up a new terminal session and use the following command:

ssh -i MySSHCertificate.pem -p 9500 ubuntu@localhost

Let’s break down this command in more detail:

-i MySSHCertificate.pem

Again, we are using our locally stored certificate file to authenticate to the remote server. In this example, we are using one certificate for both servers. In a more secure environment, you might have a different certificate file for each server you connect to. Either scenario will work with this technique.

-p 9500

The -p option tells SSH which TCP port to use for the connection. In our example, we set up a local listener on port 9500, so we tell SSH to use that port.

ubuntu@localhost

This part may seem a little strange. Remember that the tunnel is established at a port on your local computer. If you run a tool like netstat, you will see a listening port on your computer at 9500. We use that port as our tunnel to the remote server. So this SSH connection is technically made to your local computer, but the 9500 port carries it through to the remote system.


Again, if everything worked as planned, you should now have an SSH connection to the remote Web Server. You should be able to do everything that you need.

When you are finished, simply exit out of your SSH session to the Web Server. Then go back to the session that you minimized in step 1 and exit out of that session as well. That shuts down your tunnel to the environment. Lastly, don’t forget to go into the AWS Console and shut down your Management Server when it is not in use.

Need SFTP?

At some point, you will probably need to use SFTP to move files back and forth to the Web Server. Not a problem . . . the technique works basically the same way with one minor change.

Step 1 – Set up your tunnel the same way we did above.

Step 2 – Use the following command to start an SFTP session to the Web Server:

sftp -i MySSHCertificate.pem -P 9500 ubuntu@localhost

One quirk I ran into in testing: the sftp client uses a capital “P” for the port option, because the lower-case “-p” is already used by sftp to preserve file timestamps.
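As a closing tip: if you would rather not juggle two terminal windows, newer versions of OpenSSH (7.3 and later) include a ProxyJump option that accomplishes the same two hops in one step, with the keys still never leaving your computer. Here is a sketch of the ~/.ssh/config entries; the host aliases and addresses are placeholders you would replace with your own:

```
# Hypothetical ~/.ssh/config entries -- the addresses are placeholders.
# The Management Server's address is dynamic, so update it after each startup.
Host mgmt
    HostName <ManagementServerPublicIP>
    User ubuntu
    IdentityFile ~/.ssh/MySSHCertificate.pem

# The Web Server is reached by hopping through the Management Server.
Host webserver
    HostName <WebServerPrivateIP>
    User ubuntu
    IdentityFile ~/.ssh/MySSHCertificate.pem
    ProxyJump mgmt
```

With this in place, the single command "ssh webserver" (or "sftp webserver") rides through the Management Server automatically, and authentication to both hops still uses the key files on your local machine.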

Secure Backups with Amazon S3 Storage

Amazon Web Services has revolutionized cloud computing in so many ways. The services they provide are numerous, and learning all of their capabilities can be mind-numbing. There are two services in particular where I spend most of my time – EC2 and S3. EC2 is the virtual computing section of AWS; this is where you can build and use virtual servers of all shapes, sizes and configurations. S3 is the AWS cloud storage solution. Together, these two services give you a lot of flexibility to build servers and back up their data easily to cloud storage.

A Wake Up Call

I began learning how to set up a methodology for backing up virtual server data to S3 using Amazon’s command line interface, or CLI. The process was actually simpler than I expected. It only took a few steps, and I was off and running. Then a business contact of mine forwarded me an article that sent chills down my spine. The article, Hacker puts ‘full redundancy’ code-hosting firm out of business (NetworkWorld – June 20, 2014), describes how a company was hacked and how the hackers used the AWS tools to completely destroy the company’s servers and their backups. After reading it, I understood how that was possible. It’s easy to find articles explaining a “quick and dirty” way to set up access to S3 with open access to make integration easy. It wasn’t easy, however, to find a way to lock it down so that incidents like this one don’t occur. It was time to rethink my strategy.

My Security Requirements

Based on this article, I sat down and architected what I wanted my backup solution to do. Out of that design came the following requirements:

  • I want to keep 30 days of backup files in S3 and purge anything older than that.
  • The user account for uploading these files should only be able to upload and download files within a specific S3 bucket.
  • In case the server is ever compromised, the user account should not be able to delete files from the backup bucket. (If a hacker could compromise the server AND delete all of the backups, then having a backup solution is a useless exercise.)
  • I want this user to have the ability to list the files that are in the S3 bucket. This simplifies the process for restoring a backup file to the server. (NOTE: This is a personal preference and many would argue that this is itself a security violation. If a hacker has compromised my system, then they already have the data. Seeing into the backup repository isn’t going to gain them much more, particularly in the context of my web servers. In a financial application, I might think differently about this approach.)
  • The user account should not have access to any other AWS functionality.

Armed with these requirements, I set out to research the numerous security mechanisms in AWS and build the secure backup solution I needed. The solution isn’t nearly as “quick and dirty” as the articles I had previously read. The remainder of this post outlines all of the steps needed to make the solution work as designed.

Creating an S3 Bucket for Backup Storage

The first step in this process was to define an S3 bucket specifically for holding my backup files.

  1. From the AWS console, click the S3 service icon.
  2. Once you are in the S3 Management Console, click the Create Bucket button at the top of the screen.
  3. Enter a bucket name for your backups. For the purpose of this exercise, we will name our bucket tkreiner-com-web-backups.
  4. Select a region to host this storage and then click Create.
  5. Back on the S3 Management Console, you should see the new bucket that you just created. If it isn’t already selected, click on the bucket name to select it.
  6. On the right side of the screen, you will see the properties for your bucket. Expand the section labeled Lifecycle. This is where we will define our 30-day retention policy.
  7. Click the Add Rule button to add a new lifecycle.
  8. Step 1 of the Lifecycle wizard asks what this rule will be applied to. Keep the default option to apply this to the entire bucket and click the Configure Rule button.
  9. In Step 2, we are asked what action we will take. In the dropdown list, select Permanently Delete Only and then set the number of days box below to 30. Click the Review button.
  10. In Step 3, you are asked to give a name for your rule and verify all of the details for the rule. After entering a name, click the Create and Activate Rule button.

We now have a bucket to store our backup files in, and a retention policy that keeps only 30 days’ worth of backups.
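For reference, the lifecycle rule built by the console wizard corresponds to a JSON document roughly like the sketch below (the rule ID is an arbitrary name I made up, and the exact shape may vary with the API version). The same document can also be applied from the command line with aws s3api put-bucket-lifecycle-configuration, though the console steps above are all you strictly need:

```json
{
  "Rules": [
    {
      "ID": "purge-after-30-days",
      "Status": "Enabled",
      "Filter": { "Prefix": "" },
      "Expiration": { "Days": 30 }
    }
  ]
}
```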

Create a Backup User in AWS

Here is the critical point in this process. Many people probably started out using AWS the same way I had. You register as a new user for AWS and set up your username and password. First you create some new computers in EC2. Then you start exploring S3. You learn how to set up CLI credentials so that you can communicate directly from your EC2 servers to your S3 storage. You start copying files back and forth to S3, and life is wonderful!

Here’s the problem – the first login that you create in AWS is the root-level user of your environment. This is the super user of all super users. When you created your CLI credentials, you most likely associated them with your root account. With those credentials you can do ANYTHING and EVERYTHING to your AWS environment through the CLI commands. I mean it . . . EVERYTHING!!! Picture this – imagine someone using the CLI to create and start up hundreds of new virtual servers in EC2. It has happened! Another article I read told of a small company that didn’t know it had been hacked until the following month’s AWS invoice arrived for approximately $30,000! This is scary stuff!

Back to the process. We need to set up a user in AWS that has no rights at all, and then grant that user only the rights it needs to conduct our backups. Here’s how we create that user:

  1. In the AWS console, click on the drop down menu in the top right corner of the screen where your username is.
  2. Select the Security Credentials option from this menu.
  3. You will likely receive a prompt asking you how you want to proceed. Your options will be Continue to Security Credentials or Get Started with IAM Users. The first option pertains to the security of your root level account. We want to setup a non-root user, so click the Get Started with IAM Users button to go to the IAM Users configuration.
  4. In the IAM Management Console, click the Create New Users button.
  5. In the screen that appears, you will see that you can create a couple of users in one step. We only need to create the one user, so enter a username in the first field. For our example, we will create a user called mybackup.
  6. Below the user names, leave the box checked that is labeled Generate an access key for each user. The access key is needed to setup the CLI environment.
  7. Click the Create button at the bottom of the screen to create your new user.
  8. You will see a screen where you can download the user’s security credentials. This information will be needed later to setup the CLI environment. Click the Show User Security Credentials link and copy the Access Key ID and Secret Access Key for later use. When you are finished, click Close at the bottom of the screen.

Our new user is created, and by default this user has no privileges in AWS. We need to grant it the privileges necessary to conduct a backup. The next section explains how to define that security.

Define a New Security Policy

In AWS, all security is handled through the use of security policies. These policies can be written in a number of different ways. We will define a simple policy that allows a user to read and write files in our S3 bucket.

  1. From the IAM Management Console, click on the Policies link from the menu on the left side of the screen.
  2. Click the Create Policy button at the top of the policy list screen.
  3. We are going to use Amazon’s Policy Generator to help build our new policy. Click the Select button next to the Policy Generator option.
  4. We are first going to create the rule that allows us to list the S3 bucket contents. To start, set the Effect option to Allow.
  5. In the AWS Service dropdown list, select Amazon S3.
  6. In the Actions field, place a check next to the ListBucket action.
  7. In the Amazon Resource Name (ARN) field, we would enter arn:aws:s3:::tkreiner-com-web-backups (NOTE: This is using the example name that we provided above. Please be sure to replace tkreiner-com-web-backups with the name of your S3 bucket.)
  8. Click the Add Statement button to add this security to our new policy.
  9. Next, we are going to create the rule that allows the user to read and write files to our bucket. To start, set the Effect option to Allow.
  10. In the AWS Service dropdown list, select Amazon S3.
  11. In the Actions field, place a check next to the GetObject (read a file) and PutObject (write a file) actions.
  12. In the Amazon Resource Name (ARN) field, we would enter arn:aws:s3:::tkreiner-com-web-backups/* (NOTE: Be sure to add the final “/*” to the end of your bucket name. This tells AWS that the policy applies for any file inside of the S3 bucket.)
  13. Click the Add Statement button to add this security to our new policy.
  14. With all of our rules defined, click the Next Step button at the bottom of the screen.
  15. At the Review Policy screen, you are asked to provide a name and a description for your policy. For this example, we will call our policy AllowS3Backup. Give your policy a name and description and click the Create Policy button.
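The Policy Generator steps above should produce a policy document similar to this sketch (substitute your own bucket name for tkreiner-com-web-backups):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:ListBucket",
      "Resource": "arn:aws:s3:::tkreiner-com-web-backups"
    },
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:PutObject"],
      "Resource": "arn:aws:s3:::tkreiner-com-web-backups/*"
    }
  ]
}
```

Notice that s3:DeleteObject does not appear anywhere. That omission is what enforces our requirement that the backup user cannot delete files.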

Grant Backup Policy to Backup User

Our security setup is almost complete. When we created our backup user, I noted that the user does not yet have permission to do anything. We need to attach this policy to the user account so that it has the rights to conduct the backup.

  1. While you are still in the list of policies, search for the policy you just created in the previous section and click on the policy name.
  2. In the Policy Detail screen, scroll down to the section titled Attached Entities.
  3. Click the Attach button.
  4. Place a checkmark next to your backup user and click the Attach Policy button.

Where Are We At?

I said this process wasn’t easy. We have taken a lot of steps to get here, but where is “here”? Here’s a quick recap:

  • We created a new storage area in S3.
  • We set a retention policy on that S3 storage to keep contents for only 30 days.
  • We defined a new user whose credentials will be used to write the backup files to S3.
  • We created a security policy to allow the user access to a specific S3 bucket and to list, read and write to that bucket.
  • We added this security policy to our backup user.

From a security setup standpoint, we are done! The rest of this article is a brief introduction to setting up the CLI interface and copying the files to S3.

Installing and Using AWS CLI

With all of the security work done, it is now time to set up our command line interface. If you are using an Amazon-imaged server, the CLI tools may already be installed as part of the image. If the software is missing, see the Installing the AWS Command Line Interface page on Amazon’s site for installation details.

With the software installed, we need to configure it to use the credentials of our new backup user. In both Windows and Linux, from a command prompt, enter the following command:

aws configure

You will first be prompted to enter an Access Key ID and Secret Access Key. Enter the information that you captured in the last step of setting up your new user. Next, you will be asked for a default region and an output format. Simply press Enter at both of these prompts.
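Behind the scenes, aws configure simply writes those values to plain-text files under your home directory. They end up looking roughly like this (the key values shown are placeholders, not real credentials):

```
# ~/.aws/credentials
[default]
aws_access_key_id = <AccessKeyID>
aws_secret_access_key = <SecretAccessKey>
```

Since the secret key sits on disk in plain text, make sure this file is readable only by your own user account.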

Your CLI environment should now be ready. Let’s run through some tests.

List S3 Bucket

Let’s first see if we can see the contents of our S3 bucket. From the command prompt, enter the following:

aws s3 ls s3://tkreiner-com-web-backups

Again, remember to replace tkreiner-com-web-backups with the name of the S3 bucket that you created. When you run this command, you shouldn’t see any files, but you also shouldn’t receive any errors. So far . . . so good.

Copy Backup File to S3 Bucket

Now we should try to copy a file to our new S3 bucket. Let’s assume that you have your backup data written to a TAR or ZIP file. In this example, I will use a file called mybackup.tar. To copy the file to your S3 repository, you will use a command like the following:

aws s3 cp mybackup.tar s3://tkreiner-com-web-backups

You should see the file get copied to your backup bucket. Once the upload is complete, use the command above to list the contents of the bucket and verify that your backup copied correctly.

Retrieve Backup File from S3 Bucket

Let’s try to pull that same backup file back down to our computer. We will use a command similar to the one for uploading. It will look something like:

aws s3 cp s3://tkreiner-com-web-backups/mybackup.tar .

Again, you should be able to see the command download the file to your current directory. When the command completes, review the files in your directory and you should see your file.

Try to Delete a File from the S3 Bucket

Remember that one of our requirements was that the backup user can’t delete files from the backup bucket. We should test and make sure that is the case. Let’s try to delete the mybackup.tar file with the following command:

aws s3 rm s3://tkreiner-com-web-backups/mybackup.tar

You should receive an error telling you that you don’t have sufficient permission to delete files.


If all of the commands above ran without an issue, then all of your configuration efforts have been a success! You can now begin setting up your backup scripts and jobs and start securely copying your files to Amazon’s S3 storage.
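To tie it together, a nightly backup job can be as simple as the shell sketch below. The source directory, bucket name, and file names are assumptions for illustration; the aws s3 cp call is the same one we tested above. The script creates a small stand-in data directory so it runs end to end; point SRC_DIR at your real data instead.

```shell
#!/bin/sh
# Sketch of a nightly backup job. Paths and the bucket name are examples.
BUCKET="s3://tkreiner-com-web-backups"
SRC_DIR="${SRC_DIR:-/tmp/demo-www}"
STAMP="$(date +%Y%m%d)"
TARBALL="/tmp/mybackup-$STAMP.tar"

# Stand-in data so the sketch runs end to end; your real data replaces this.
mkdir -p "$SRC_DIR"
echo "demo content" > "$SRC_DIR/index.html"

# Bundle the data into a dated TAR file.
tar -cf "$TARBALL" -C "$(dirname "$SRC_DIR")" "$(basename "$SRC_DIR")"

# Upload using the restricted backup user's credentials. Skipped when the
# AWS CLI is not installed; a failed upload is reported rather than fatal.
if command -v aws >/dev/null 2>&1; then
    aws s3 cp "$TARBALL" "$BUCKET/" || echo "Upload failed - check credentials"
fi
```

From there, a cron entry runs the script on whatever schedule you like, and the 30-day lifecycle rule takes care of purging the old copies.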

Going Further

This article serves as a guideline for setting up security for transferring files back and forth to S3. There are many ways to configure security policies; for example, you can restrict which IP addresses requests are allowed from. The possibilities are endless. If you want to learn more, the AWS Documentation pages offer extensive material with many examples to learn from.