All posts by Tom Kreiner

Strengthen Your IDS/IPS With Known WordPress Plugins

I maintain a dashboard in my Splunk environment to monitor for potential hacking attempts against my web servers. One way that I do this is to watch for sources that generate repeated 404 errors while looking for exploitable web pages on the sites. During a recent review, I became aware of a number of new WordPress plugins that hackers are attempting to exploit. The URLs are easy to spot: they all begin with /wp-content/plugins/ followed by the name of the plugin. The plugins that I saw are not even installed on my server, so these are obviously hackers looking for a way in.

This got me thinking about how I can use this information to strengthen my security with my intrusion detection and prevention systems.

Controlled WordPress Environment

For my particular installation, I am the only administrator for all of the hosted WordPress sites. This means that I have complete control over which plugins are installed. Because of this, I can use the solution that follows.

Identify Installed Plugins

The first step in my solution is to identify the list of currently installed plugins across all of my sites. From the directory where all of my sites are stored, I was able to use the following Linux command to get a list of plugins:

find . -type d -name plugins -exec ls {} \; | sort -u | grep -v "\.php$"

This command does the following:

  • find – Looks for all directories with the name of “plugins” and lists the contents of each directory.
  • sort – The sort command takes all of the plugin directory names, sorts them and removes duplicates.
  • grep – The WordPress plugins directory may also contain a few loose PHP files that we can ignore. This grep statement removes anything ending with a .php extension.

Configure a Fail2Ban Filter

In order to secure my environment from these malicious invaders, I decided to once again use Fail2Ban. If someone attempts to access a plugin that isn’t in my list of known plugins, then I want to immediately block any further access from that IP address. Fail2Ban makes this process very easy.

Assuming that my list of installed plugins was pluginA, pluginB and pluginC, my filter file would look like the following:

[Definition]
failregex = ^<HOST> -.* \[.*\] "(GET|POST) \/wp-content\/plugins\/
ignoreregex = ^<HOST> .* "(GET|POST) \/wp-content\/plugins\/pluginA\/
  ^<HOST> .* "(GET|POST) \/wp-content\/plugins\/pluginB\/
  ^<HOST> .* "(GET|POST) \/wp-content\/plugins\/pluginC\/

This filter starts by triggering on any attempt made against the wp-content/plugins URL, but then provides exclusions for each plugin that is allowed.
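Rather than typing each exclusion by hand, the ignoreregex lines can be generated from the plugin list gathered earlier. Here is a minimal sketch, assuming the sites live under /var/www (adjust the path for your layout); it simply prints lines that can be pasted into the filter file:

cd /var/www
for plugin in $(find . -type d -name plugins -exec ls {} \; | sort -u | grep -v "\.php$"); do
    printf '  ^<HOST> .* "(GET|POST) \\/wp-content\\/plugins\\/%s\\/\n' "$plugin"
done

Plugin directories with spaces in their names would need extra handling, but standard WordPress plugin folders do not use them.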

I store this in a filter file that I named wp-plugin.conf in /etc/fail2ban/filter.d. I then add the following to my jail.local in /etc/fail2ban:

[wp-plugin]
enabled  = true
port     = http,https
filter   = wp-plugin
logpath  = /var/log/web/*access.log
maxretry = 1
bantime  = 2592000

This creates a jail with the wp-plugin filter. It monitors all of my web access logs and blocks a sender after a single malicious attempt. Once triggered, the offending IP address is blocked for 2592000 seconds (30 days).

With this in place, restart the Fail2Ban service and a new layer of protection will be added to your security defenses.
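On a systemd-based distribution, the restart and a quick sanity check of the new jail might look like the following (the jail name matches the example above):

sudo systemctl restart fail2ban
sudo fail2ban-client status wp-plugin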

Use SSH Tunnels for Remote Administration

Background

Over the last week, I spent a lot of time redesigning the infrastructure for my server environment in Amazon Web Services. Part of the architecture included building a management backbone for the servers. The design looks similar to the following diagram:

Network Diagram

There were a few important concepts to this design:

  • The Management Server would only be turned on when remote administration work needed to be accomplished. This is done through the AWS Console.
  • Having the Management Server available keeps me from having to open remote administration services (i.e., SSH) on the production servers, where they could be exploited.
  • The Management Server would not have a static public IP address; one would be dynamically assigned on each startup.

Internet Solution

All of the servers are Linux servers, so my remote administration is done through SSH. I started to research how others work with this design and here is what I found.

When you set up Linux servers in AWS, they are configured to use certificate files to authenticate. With this design, here is how you would normally work your way in to the Web Server:

Step 1 – Use your certificate file to SSH in to the Management Server.

Step 2 – Make sure you have a copy of your certificate file on the Management Server so that you can then SSH from the Management Server in to the Web Server.

Here’s the problem . . . doing this requires you to store your private certificate files on a server out in the cloud. If that server should ever get compromised, then the hacker has access to the keys and therefore has access to all of your servers.

My Solution

A while back, I had explored the concept of SSH tunnels. It had been a while since I used them, but a little searching on the Internet quickly refreshed my memory of the concepts. Here is the basic process:

Step 1 – Setup an SSH tunnel from your computer to the Management Server using your certificate file. This tunnel creates an endpoint on your local machine that acts like an interface on the Management Server.

Step 2 – Use that local interface to tunnel through your connection and access the remote server, still using the certificate file on your local machine.

With this process, the certificate keys never leave your computer. They are never stored out on the cloud.

Specific Steps

I work with Mac computers, so I use the Terminal program and the shell-based SSH utility for my administration. The following steps should work for any Mac or Linux/Unix-based environment.

Step 1

The first step is to establish a tunnel from your computer to the Management Server. Using the diagram from above, here is how that would look.

ssh -i MySSHCertificate.pem -L 9500:10.1.0.15:22 ubuntu@51.72.1.15

Let’s break down the various parts of the command:

-i MySSHCertificate.pem

This should look familiar if you have done any administration with Linux servers in AWS. This simply directs the SSH client to use the MySSHCertificate.pem certificate file when authenticating to the Management Server.

-L 9500:10.1.0.15:22

This option does a lot of work. First, it establishes a local tunnel endpoint on port 9500 of your local computer. This is an arbitrary number, and you have a lot of flexibility to pick any valid TCP port that you wish. Next, the tunnel forwards connections made to port 9500 over to 10.1.0.15, which is the internal management IP address of the Web Server that we want to manage. Finally, to manage the Web Server, we will be connecting to port 22, which is the SSH port on the Web Server.

ubuntu@51.72.1.15

Finally, we are connecting to our Management Server at its dynamic public IP address of 51.72.1.15, and we are connecting as the ubuntu user.

If you are successful, you will see a normal SSH prompt to the Management Server. For now, minimize this terminal session and let it run in the background.

Step 2

Now we need to connect to the Web Server. We are going to tunnel through the locally established TCP port of 9500 to get to the remote server. To do this, we will start up a new terminal session and use the following command:

ssh -i MySSHCertificate.pem -p 9500 ubuntu@localhost

Let’s break down this command in more detail:

-i MySSHCertificate.pem

Again, we are using our locally stored certificate file to authenticate to the remote server. In this example, we are using one certificate for both servers. In a more secure environment, you might have a different certificate file for each server you connect to. Either scenario will work with this technique.

-p 9500

This -p option tells SSH which TCP port to use when setting up the connection. In our example, we set up a local listener on port 9500, so we need to tell SSH to use that port.

ubuntu@localhost

This part may seem a little strange. Remember that we are using a tunnel that has been established at a port on our local computer. If you use a tool like netstat, you would see that there is a listening port on your computer at 9500. We use that port as our tunnel to the remote server. So when setting up this SSH connection, we are basically connecting to our local computer, but using the 9500 port to get to the remote system.
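If you find yourself doing this often, both steps can be captured in your local ~/.ssh/config file so that each one becomes a short alias. This is only a sketch using the example addresses from the diagram; because the Management Server’s public IP changes on each startup, the first HostName value has to be updated (or scripted) each time:

Host mgmt
    HostName 51.72.1.15
    User ubuntu
    IdentityFile ~/MySSHCertificate.pem
    LocalForward 9500 10.1.0.15:22

Host webserver
    HostName localhost
    Port 9500
    User ubuntu
    IdentityFile ~/MySSHCertificate.pem

With these entries in place, ssh mgmt in one terminal establishes the tunnel and ssh webserver in a second terminal connects to the Web Server.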

Voila!

Again, if everything worked as planned, you should now have an SSH connection to the remote Web Server. You should be able to do everything that you need.

When you are finished, simply exit out of your SSH session to the Web Server. Then go back to the session that you minimized from Step 1 and exit out of that session. That will shut down your tunnel to the environment. Lastly, don’t forget to go into the AWS Console and shut down your Management Server when it is not in use.

Need SFTP?

At some point, you will probably need to use SFTP to move files back and forth to the Web Server. Not a problem . . . the technique works basically the same way with one minor change.

Step 1 – Set up your tunnel the same way we did above.

Step 2 – Use the following command to setup the SFTP to the Web Server:

sftp -i MySSHCertificate.pem -P 9500 ubuntu@localhost

I noticed in my testing that the sftp client uses a capital “P” instead of a lowercase one to define the port to connect to; the lowercase -p is already reserved for preserving file timestamps and permissions.
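The same applies to scp if you just need to push a single file through the tunnel; it also uses an uppercase “P” for the port. A quick sketch using the same certificate and an example file name:

scp -i MySSHCertificate.pem -P 9500 mybackup.tar ubuntu@localhost:/home/ubuntu/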

Timthumb.php Remote Execution Vulnerability in WordPress

This morning, while going through my e-mails, I saw that my IDS system was seeing a lot of attempts against a timthumb.php file on my web sites. This seemed a little suspicious, so I headed out to Google to see what was going on. I started searching on “timthumb.php” and very quickly, Google gave me a suggestion of “timthumb.php exploit”. Yep, my suspicions were warranted.

Vulnerability Explained

This apparently isn’t a new vulnerability. It was a zero-day attack identified back in 2014. However, the fact that hackers are still trying to exploit it probably means that there are still sites out there that haven’t been patched.

The timthumb.php script is used in a number of different WordPress themes. The hacker exploits a bug in the script that allows it to source an “image” from a remote web site, and that behavior can be abused to pull malicious code onto the server and execute it.

Am I At Risk?

My first task was to see if this vulnerability affects me. I use Linux for all of my web servers, so I was able to use either of the following approaches to see if this code exists on any of my sites…

# Option 1 – use the locate database
updatedb
locate timthumb.php

# Option 2 – search the web root directly
cd /PATH_TO_WEB_SERVER_ROOT (fill in with the actual path to your web server)
find . -name timthumb.php

Luckily, I did not find this file in use in any of our themes. If you do find it in your environment, you might want to check out the post on the Sucuri Blog with tips for protecting your environment.

https://blog.sucuri.net/2014/06/timthumb-webshot-code-execution-exploit-0-day.html

Block Them Anyway!

I am a strong advocate of blocking these attempts even when the exploit doesn’t exist on your system. Someone is obviously making a malicious attempt to attack your site, and this is a clear piece of evidence of how they do it. This attempt failed, but you can be sure they will keep trying until they find a way in.

Another reason to do this is to guard against a site administrator loading a theme with this vulnerability in the future without your knowledge. By putting the protection in place now, you are already covered if that ever happens.

With that in mind, I decided to again use my Fail2Ban system to block individuals attempting to access this URL. The filter configuration file was very simple.

[Definition]
failregex = ^<HOST> - .*\/timthumb\.php
ignoreregex =

I added this filter to my jail configuration and restarted Fail2Ban. Now if anyone else attempts to access this URL that we don’t use, they will be automatically blocked from accessing our sites any further!
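For reference, the jail entry for this filter follows the same shape as the wp-plugin jail shown above. A sketch, assuming the filter was saved as timthumb.conf in /etc/fail2ban/filter.d and the same log location is used:

[timthumb]
enabled  = true
port     = http,https
filter   = timthumb
logpath  = /var/log/web/*access.log
maxretry = 1
bantime  = 2592000

Before restarting, running fail2ban-regex against one of your access logs and the new filter file is a handy way to confirm that the regex matches the attempts already sitting in your logs, for example:

fail2ban-regex /var/log/web/example-access.log /etc/fail2ban/filter.d/timthumb.conf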

Intelligence in 404 Errors

I recently found myself in a conversation about Splunk. During the conversation, I was asked which types of logs I found easiest and most useful to ingest into the Splunk environment. Without giving it much thought, I immediately responded that web access logs were very easy to ingest and that there is a lot of data that can be seen if you know where to look. Well, of course I set myself up for the next question . . . “can you give me an example?”

Understanding Access Logs

First, for those that aren’t familiar with web access logs, let’s take a moment to look at one. Below is a sample log entry from an Apache web server log:

103.249.31.189 - - [15/May/2016:21:43:02 -0400] "GET /wpfoot.php HTTP/1.1" 404 14975 "http://www.googlebot.com/bot.html" "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"

At a first glance, this can look a little intimidating. But let’s break down the various parts:

  • 103.249.31.189 – This is the IP address of the computer that is making the request to your web site.
  • 15/May/2016:21:43:02 -0400 – This is the date, time and timezone of when the request was made.
  • GET – When communicating with a web site, there are a number of different actions that can be requested against that page. For most standard web traffic, those requests are for either a GET or a POST. A GET can be thought of as a request to get data from a site. When you are simply clicking around a web site, you are most likely using GET requests to retrieve that data.  A POST is used when you are trying to submit data to a site. When you are filling out a contact form or logging into a site, you are most likely sending a POST request with that data.
  • /wpfoot.php – This is the page on your site that the user was trying to access.
  • HTTP/1.1 – This is simply telling you what HTTP protocol was used by the client when requesting the page.
  • 404 – We now come to the status code for this particular request. In this example, we see that the server responded with a status code of 404. This will be important to our conversation because a status code of 404 means that the server could not find the page that was requested.
  • 14975 – This number gives you the size of the response in bytes that was sent back to the requestor.
  • http://www.googlebot.com/bot.html – Often, we see a web page here and this is the address of the referring web page. This tells us that someone tried to get to /wpfoot.php from the bot.html site.
  • Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html) – Finally, we see information about the browser that was used to make the connection.

That’s A Lot Of Data!

In just that one request, we identified 9 pieces of information. Imagine trying to review a web site’s access log with hundreds, thousands, even millions of log entries! How will you ever find anything useful in this data?

This is where one of my favorite pieces of software comes into the picture – Splunk. Splunk does a phenomenal job of ingesting all types of log and data sources and giving you simple yet powerful tools to analyze that data. Start sending your web server logs to your Splunk server and let’s begin to analyze.

A Simple Query

Once Splunk has started reading your data, we can begin to develop some searches against that data. For this topic, I decided to talk about gaining intelligence based on 404 status codes. Our Splunk search is very simple:

sourcetype=access_combined status=404 | top limit=10 uri

Simple, right? This search says, “Grab all of my access_combined log data (Apache access logs) and look for any record with a status of 404. Then show me the top 10 most requested web pages that received that status code.”

When you run this search, you will see something that might look like the following:

Splunk 404 Search

You will see the top 10 requested pages, the number of times each was requested, and the percentage of the total 404 count that each page represents.

Now that you have this list . . . what intelligence can we gather from it? Let me give you two scenarios to consider.

Scenario #1 – Web Coding Issue

Most people are familiar with the concept of broken links. This occurs when a web site directs you to a page that doesn’t exist. Nothing is more frustrating than trying to find a resource on the web to answer a question that you have, only to be taken to that “Page Cannot Be Found” message. If your site has any broken links, they will quickly show up in a search like this and you can begin to find and correct them.

For example, I recently came across a result that looked something like:

/www.domain.com/page.html

At first glance, this looks like a perfectly legitimate URL. But you have to remember that what you are seeing in these logs is the part of the URL that comes after the web site address. Therefore, this was actually a link for:

http://www.domain.com/www.domain.com/page.html

We quickly found the pages in our site that were coded incorrectly.

Scenario #2 – Hackers Knocking At Your Front Door

Our second scenario is actually the more important of the two. Analyzing your 404 errors can give you a huge amount of insight into the activity of hackers on the Internet.

There are a large number of sites on the Internet that publish vulnerabilities found in software (the CVE database is a well-known example). The intent of these sites is to make you aware of a vulnerability and urge you to upgrade the software to remedy the issue, or to provide workarounds until a patch is released. These are great tools for admins to use to monitor issues in the software they administer. But admins aren’t the only individuals using these sites. The hackers know about them too!

Oftentimes, you will see patterns in your logs where hackers are probing your site to see what software you have installed. Maybe they are looking for certain pieces of software, or for specific components within that software. Regardless, this is a trial-and-error effort, and the good news is that we can see it in our logs.

An Example

I recently came across this exact URL in one of my searches:

/magazine/js/mage/cookies.js

This struck me as odd because there is nothing in any of my web sites about a magazine. So my suspicion level was already pretty high. I grabbed this URL and pasted it into a Google search. It didn’t take me long to discover that this is a component of the open source Magento e-commerce system. I took this knowledge and looked to see if there were any recent vulnerabilities discovered in the software. Sure enough, there is a bulletin on the Magento site asking users to upgrade because of vulnerabilities recently found in their software.
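When a suspicious URI like this shows up, a natural next step in Splunk is to pivot on it and see who is probing for it. A minimal sketch, assuming the standard clientip field extraction that comes with the access_combined sourcetype:

sourcetype=access_combined uri="/magazine/js/mage/cookies.js" | stats count by clientip | sort -count

This quickly shows whether the requests are coming from a single source or from many different addresses.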

Safe, Right?

Luckily for me, I wasn’t running the Magento software on my system, so I was safe from being hacked. Or was I?

Let me give you something to think about. You are home with your family and a stranger comes to your front door. They jiggle the door knob to see if it’s unlocked and they find that it isn’t. So they walk away. The next night, you notice this same person come to your house and try to open the front window. Again, it was locked, so they leave. The third night, you find them snooping around your back door. Lucky for you, that was locked too. How many times are you going to let this happen before you take action?

This example is no different. You have concrete evidence of someone “jiggling the door knob” and “trying the front window” of your web site. Obviously, this wasn’t going to work because you don’t use the software. But they tried anyway and left evidence of themselves doing something they shouldn’t be doing. What if other hackers attempt the same thing? You now know what to look for so that you can park that 100 lb. German Shepherd at the window and door to keep them away.

This is valuable information that you should now use to protect your network. If a hacker was willing to find an exploit this way, then you can be sure they will try other ways as well. As soon as we have a good way of knowing that someone is up to no good, we should be blocking them immediately for any further access to our sites.

Fail2Ban

One good resource that I personally work with is the open source Fail2Ban project. This is an extremely simple and yet very powerful piece of software. One of the many things this software can do is look for patterns in a web log and then alter the firewall of the server in real time to block further attacks from the source IP address. I created a new filter rule:

[Definition]
failregex = ^<HOST> - .*\/magazine\/js\/mage\/cookies\.js
ignoreregex =

With this rule, I can now monitor for future attempts against this specific URL and block the offender from making any other attempts against our systems.

Conclusion

This is just one of the millions of ways that Splunk can bring valuable intelligence into your environment with very little effort. Once you start identifying sources for this data and building out the searches to aggregate that data, you will find that the data mining options are endless.


Finding High CPU Sources in Linux

The Problem

I recently came across a situation with one of my Linux servers. A web application on the server was getting very slow to respond. Web pages were taking 25-30 seconds to load when they usually load in less than 4 seconds. Something was obviously wrong.

Looking to Splunk for Answers

I turned to my trusty logging software, Splunk. The more I use this software, the more I love the insight it gives me into my systems. I started off with my Operational Monitoring dashboard that I have built over time. This dashboard gives me a number of graphical views into the key server management indicators. I scrolled down the page to my CPU utilization view and I saw the following:

High CPU Usage Graph

My web server, shown here as the red line, was clearly running at higher than usual CPU utilization, and that explained a lot. But it didn’t yet tell me why this was occurring.

I started a new search in Splunk for the period around when the CPU utilization started increasing. My Splunk environment captures a lot of data, so at a first glance, there was a lot to look at.

As I was sifting through the data, I remembered that through the Splunk Add-On for *nix, the software is periodically capturing the output of the Linux ps command. This command line utility reports on what applications are currently running at that moment on the server. In particular, it also shows the amount of CPU utilization that is being used by that program. Knowing this, I crafted the following search and visualization:

host=HOSTNAME source=ps | timechart span=1h sum(pctCPU) by COMMAND

Increasing Application CPU Usage

This search gave me a graphical view of each running program, summing the pctCPU value across every record in each hour. I chose the sum function because even a subtle increase in CPU usage is compounded in a sum, whereas an average could mask it.

The green line in this graph clearly showed me the program that was beginning to utilize more and more CPU at the same time that the graph above showed the higher overall utilization.
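You can also spot-check the same thing directly on the server while the problem is happening. A quick sketch using standard Linux ps options to list the top CPU consumers:

ps -eo pid,pcpu,comm --sort=-pcpu | head -n 10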

Wrapping Up

Now that I knew which application was causing the problem, I started doing some research. I did find other users reporting similar CPU utilization issues with the application. I felt better knowing it wasn’t an issue specific to my environment. I am still continuing to learn about this issue and I’m sure an overall fix will soon follow.

 

Gone, But Not Forgotten

The Call

The other day, I got “the call” from a prospective client. Their web site had been hacked and the hacker changed their home page. They had made both text and graphic changes to the page to promote their own personal cause. They needed to fix this fast!

Naturally, what else occurs in an emergency? We learned that the backups hadn’t been running for almost 2 months. It never fails; you always learn these things at the worst possible time.

An Unlikely Solution

While working through this issue for this company, I remembered something that I had seen in one of my Certified Ethical Hacker (CEH) study guides. There is a site called The Internet Archive that periodically takes snapshots of web sites over time and stores them. You can go on the site, look up a given web page, and see all of the snapshots that were taken. They call it the Wayback Machine.

Wayback Machine Screenshot

I looked and was surprised to see captures of my old consulting company web site. It’s amazing the amount of information out there.

A Happy Ending

Through this tool and other backup resources, we were able to get this customer back on the right path to recovering their home page. The Wayback Machine is not a replacement for proper backups, but it can help in a tough situation.

To learn more about the Internet Archive, go to http://www.archive.org

Secure Backups with Amazon S3 Storage

Amazon Web Services has revolutionized cloud computing in so many ways. The services they provide are numerous, and learning all of their capabilities can be mind-numbing. There are two services in particular where I spend most of my time – EC2 and S3. EC2 is the virtual computing side of AWS; this is where you can build and run virtual servers of all shapes, sizes and configurations. S3 is the AWS cloud storage solution. Together, these two services give you a lot of flexibility to build servers and back up their data easily to cloud storage.

A Wake Up Call

I began learning how you could set up a methodology for backing up virtual server data to S3 using Amazon’s command line interface, or CLI. The process was actually simpler than I thought. It only took a few steps, and I was off and running. Then a business contact of mine sent me an article that sent chills running down my spine. The article, Hacker puts ‘full redundancy’ code-hosting firm out of business (NetworkWorld – June 20, 2014), talks about how a company was hacked and how the hackers used the AWS tools to completely destroy the company’s servers and their backups. After reading the article, I understood how it was possible. It’s easy to find articles explaining a “quick and dirty” way to set up S3 access with wide-open permissions to make integration easy. However, it wasn’t easy finding a way to lock it down so that incidents like the one described in this article don’t occur. It was time to rethink my strategy.

My Security Requirements

Based on this article, I sat down and decided to architect what I wanted my backup solution to do. Out of that design, the following requirements were identified:

  • I want to keep 30 days of backup files in S3 and then purge anything older than that.
  • The user account for uploading these files should only be able to upload and download files with a specific S3 storage bucket.
  • In case the server is ever compromised, the user account should not be able to delete files from the backup storage. (If a hacker could compromise the server AND delete all of the backups, then having a backup solution is a useless exercise.)
  • I want this user to have the ability to list the files that are in the S3 bucket. This simplifies the process for restoring a backup file to the server. (NOTE: This is a personal preference and many would argue that this is itself a security violation. If a hacker has compromised my system, then they already have the data. Seeing into the backup repository isn’t going to gain them much more, particularly in the context of my web servers. In a financial application, I might think differently about this approach.)
  • The user account should not have access to any other AWS functionality.

Armed with this information, I set out to research the numerous security mechanisms on AWS and build the secure backup solution that I needed. The solution isn’t nearly as “quick and dirty” as the articles I had previously read. The remainder of this blog aims to outline all of the steps that were needed to make the solution work as I designed it.

Creating an S3 Bucket for Backup Storage

The first step in this process was to define an S3 bucket specifically for holding my backup files.

  1. From the AWS console, click the S3 service icon.
  2. Once you are in the S3 Management Console, click the Create Bucket button at the top of the screen.
  3. Enter a bucket name for your backups. For the purpose of this exercise, we will name our bucket tkreiner-com-web-backups.
  4. Select a region to host this storage and then click Create.
  5. Back on the S3 Management Console, you should see the new bucket that you just created. If it isn’t already selected, click on the bucket name to select it.
  6. On the right side of the screen, you will see the properties for your bucket. Expand the section labeled as Lifecycle. This is where we will define our 30 days retention policy.
  7. Click the Add Rule button to add a new lifecycle.
  8. Step 1 of the Lifecycle wizard asks what this rule will be applied to. Keep the default option to apply this to the entire bucket and click the Configure Rule button.
  9. In Step 2, we are asked what action we will take. In the dropdown list, select Permanently Delete Only and then set the number of days box below to 30. Click the Review button.
  10. In Step 3, you are asked to give a name for your rule and verify all of the details for the rule. After entering a name, click the Create and Activate Rule button.

We now have a bucket to store our backup files in and we have a retention policy defined that only keeps 30 days worth of backups.
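If you prefer the command line over the console, the same bucket and lifecycle rule can be created with the AWS CLI. This is only a sketch run with administrator credentials (not the backup user created below), using the example bucket name and an example region; regions other than us-east-1 also need a LocationConstraint on the create-bucket call:

aws s3api create-bucket --bucket tkreiner-com-web-backups --region us-east-1
aws s3api put-bucket-lifecycle-configuration --bucket tkreiner-com-web-backups --lifecycle-configuration '{"Rules":[{"ID":"expire-after-30-days","Status":"Enabled","Filter":{"Prefix":""},"Expiration":{"Days":30}}]}'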

Create a Backup User in AWS

Here is the critical point in this process. Many people probably started out using AWS the same way I had. You register as a new user for AWS and you set up your username and password. First you create some new computers in EC2. You even start exploring S3. You learn how to set up CLI credentials so that you can communicate directly from your EC2 servers to your S3 storage. You start copying files back and forth to S3 and life is wonderful!

Here’s the problem – the first login that you create in AWS is the root-level user for your environment. This is the super user of all super users. When you created your CLI credentials, you most likely created them associated with your root account. With those credentials you can do ANYTHING and EVERYTHING to your AWS environment through the CLI commands. I mean it . . . EVERYTHING!!! Picture this – imagine someone using the CLI to create and start up hundreds of new virtual servers in EC2. It happened! Another article that I read talked about a small company that didn’t know they were hacked until they received the following month’s AWS invoice for approximately $30,000! This is scary stuff!

Back to the process. We need to set up a user in AWS that starts with no rights at all and then grant that user only the rights it needs to conduct our backups. Here’s how we create that user:

  1. In the AWS console, click on the drop down menu in the top right corner of the screen where your username is.
  2. Select the Security Credentials option from this menu.
  3. You will likely receive a prompt asking you how you want to proceed. Your options will be Continue to Security Credentials or Get Started with IAM Users. The first option pertains to the security of your root level account. We want to setup a non-root user, so click the Get Started with IAM Users button to go to the IAM Users configuration.
  4. In the IAM Management Console, click the Create New Users button.
  5. In the screen that appears, you will see that you can create a couple of users in one step. We only need to create the one user, so enter a username in the first field. For our example, we will create a user called mybackup.
  6. Below the user names, leave the box checked that is labeled Generate an access key for each user. The access key is needed to setup the CLI environment.
  7. Click the Create button at the bottom of the screen to create your new user.
  8. You will see a screen where you can download the user’s security credentials. This information will be needed later to setup the CLI environment. Click the Show User Security Credentials link and copy the Access Key ID and Secret Access Key for later use. When you are finished, click Close at the bottom of the screen.
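For completeness, the same user and access key can be created from the CLI with administrator credentials; a sketch using the example user name:

aws iam create-user --user-name mybackup
aws iam create-access-key --user-name mybackup

The second command prints the Access Key ID and Secret Access Key, which you should record just as you would from the console.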

Our new user is created and by default, this user has no privileges in AWS. We will need to grant this user privileges necessary to conduct a backup. This next section will explain how to define that security.

Define a New Security Policy

In AWS, all security is handled through the use of security policies. These policies can be written in a number of different ways. We will define a simple policy that allows a user to read and write files with our S3 bucket.

  1. From the IAM Management Console, click on the Policies link from the menu on the left side of the screen.
  2. Click the Create Policy button at the top of the policy list screen.
  3. We are going to use Amazon’s Policy Generator to help aid in building our new policy. Click the Select button next to the Policy Generator option.
  4. We are first going to create the rule that allows us to list the S3 bucket contents. To start, set the Effect option to Allow.
  5. In the AWS Service dropdown list, select Amazon S3.
  6. In the Actions field, place a check next to the ListBucket action.
  7. In the Amazon Resource Name (ARN) field, we would enter arn:aws:s3:::tkreiner-com-web-backups (NOTE: This is using the example name that we provided above. Please be sure to replace tkreiner-com-web-backups with the name of your S3 bucket.)
  8. Click the Add Statement button to add this security to our new policy.
  9. Next, we are going to create the rule that allows the user to read and write files to our bucket. To start, set the Effect option to Allow.
  10. In the AWS Service dropdown list, select Amazon S3.
  11. In the Actions field, place a check next to the GetObject (read a file) and PutObject (write a file) actions.
  12. In the Amazon Resource Name (ARN) field, we would enter arn:aws:s3:::tkreiner-com-web-backups/* (NOTE: Be sure to add the final “/*” to the end of your bucket name. This tells AWS that the policy applies for any file inside of the S3 bucket.)
  13. Click the Add Statement button to add this security to our new policy.
  14. With all of our rules defined, click the Next Step button at the bottom of the screen.
  15. At the Review Policy screen, you are asked to provide a name and a description for your policy. For this example, we will call our policy AllowS3Backup. Give your policy a name and description and click the Create Policy button.
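Behind the scenes, the Policy Generator is simply building a JSON document. The finished policy should look roughly like the following (a sketch using the example bucket name), which you can verify on the review screen:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:ListBucket",
      "Resource": "arn:aws:s3:::tkreiner-com-web-backups"
    },
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:PutObject"],
      "Resource": "arn:aws:s3:::tkreiner-com-web-backups/*"
    }
  ]
}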

Grant Backup Policy to Backup User

Our security setup is almost complete. When we created our backup user, I said that the user does not have permission to do anything yet. We need to add this policy to our user account so that they then have the rights to conduct the backup.

  1. While you are still in the list of policies, search for the policy you just created in the previous section and click on the policy name.
  2. In the Policy Detail screen, scroll down to the section titled Attached Entities.
  3. Click the Attach button.
  4. Place a checkmark next to your backup user and click the Attach Policy button.
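The CLI equivalent is a single command; a sketch, with ACCOUNT_ID standing in for your AWS account number:

aws iam attach-user-policy --user-name mybackup --policy-arn arn:aws:iam::ACCOUNT_ID:policy/AllowS3Backup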

Where Are We At?

I said this process wasn’t easy. We have taken a lot of steps to get here, but where is “here”? Here’s a quick recap:

  • We created a new storage area in S3.
  • We set a retention policy on that S3 storage to keep contents for only 30 days.
  • We defined a new user whose credentials will be used to write the backup files to S3.
  • We created a security policy to allow the user access to a specific S3 bucket and to list, read and write to that bucket.
  • We added this security policy to our backup user.

From a security setup standpoint, we are done! The rest of this article is a brief introduction to setting up the CLI interface and copying the files to S3.

Installing and Using AWS CLI

With all of the security work done, it is now time to set up our command line interface. If you are using an Amazon-imaged server, you may already have the CLI tools installed as part of the image. However, if the software is missing, see the Installing the AWS Command Line Interface page on Amazon’s site for installation details.

With the software installed, we need to configure it to use the credentials of our new backup user. In both Windows and Linux, from a command prompt, enter the following command:

aws configure

You will first be prompted to enter an Access Key ID and Secret Access Key. Enter the information that you captured in the last step of setting up your new user. Next, you will be asked for a default region and an output format. Simply press Enter at both of these prompts.

Your CLI environment should now be ready. Let’s run through some tests.

List S3 Bucket

Let’s first see if we can see the contents of our S3 bucket. From the command prompt, enter the following:

aws s3 ls s3://tkreiner-com-web-backups

Again, remember to replace tkreiner-com-web-backups with the name of the S3 bucket that you created. When you run this command, you shouldn’t see any files, but you also shouldn’t receive any errors. So far . . . so good.

Copy Backup File to S3 Bucket

Now we should try to copy a file to our new S3 bucket. Let’s assume that you have your backup data written to a TAR or ZIP file. In this example, I will use a file called mybackup.tar. To copy the file to your S3 repository, you will use a command like the following:

aws s3 cp mybackup.tar s3://tkreiner-com-web-backups

You should see the file get copied to your backup bucket. Once the upload is complete, use the command above to list the contents of the bucket and verify that your backup copied correctly.

Retrieve Backup File from S3 Bucket

Let’s try and pull that same backup file back down to our computer. We will use a command that is similar to the command for uploading a file. The command will look something like:

aws s3 cp s3://tkreiner-com-web-backups/mybackup.tar .

Again, you should be able to see the command download the file to your current directory. When the command completes, review the files in your directory and you should see your file.

Delete???

Remember that one of our requirements was that the backup user can’t delete files from the backup bucket. We should test and make sure that is the case. Let’s try to delete the mybackup.tar file using the following command:

aws s3 rm s3://tkreiner-com-web-backups/mybackup.tar

You should receive an error telling you that you don’t have sufficient permission to delete files.

Success!!!

If all of the commands above ran without an issue, then all of your configuration efforts have been a success! You can now begin setting up your backup scripts and jobs and start securely copying your files to Amazon’s S3 storage.
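As a starting point, a nightly backup job can be as simple as the following sketch (the site path, staging directory, and bucket name are examples; adjust them for your environment and schedule the script with cron):

#!/bin/bash
# Archive the web root with a date-stamped name and push it to S3
DATE=$(date +%Y-%m-%d)
ARCHIVE=/tmp/web-backup-$DATE.tar.gz
tar -czf "$ARCHIVE" /var/www
aws s3 cp "$ARCHIVE" s3://tkreiner-com-web-backups/
rm -f "$ARCHIVE"

With the 30-day lifecycle rule on the bucket, old archives clean themselves up automatically.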

Going Further

This article serves as a guideline for setting up security for transferring files back and forth to S3. There are lots of ways to configure security policies. For example, through policies, you can limit which IP addresses requests are allowed to come from. The possibilities are endless. If you want to learn more, there is extensive documentation on the AWS Documentation pages with many examples to learn from.

Photographing Horse Shows – Part 1

Over the last 3 years, I have had the opportunity to photograph horses and their riders competing in shows at the Prince George’s Equestrian Center in Upper Marlboro, Maryland. Much of that experience has been in the covered outdoor arena that is pictured above. Photographing fast horses, in a shaded environment, with a bright background, is not an easy task.

I have learned a lot about photography through this experience and I want to share some of what I have learned with others. I decided to start a multi-part series on this subject. Each post will present a different technique or lesson that I have learned at these shows.

The Problem

As I mentioned, trying to photograph a horse in shade with a brightly lit background is a challenge for a camera. If you let the camera try to meter the picture, you are likely to get a picture like this:

Under Exposed Rider
Nikon D750, f/7.1, 1/800s, ISO 1250

In photography terminology, this is an issue of dynamic range. Although your eye can see the trees and grass in the distance as well as the horse and rider in the foreground without any problem, the camera has a hard time handling such a broad range of light. When it meters, it meters for the brighter light that fills the majority of the frame, which ends up leaving the rider and horse in darkness.

Meter The Ground

One technique that I have used to help in this situation is to meter the ground just in front of the place where you expect the rider to be. Let’s say that you want to get a picture of the horse jumping over a fence. Here is what you do:

  1. Frame up your picture with the jump fence filling the frame as you need. With a zoom lens, this means getting your zoom set to frame the picture as you need.
  2. Tilt your camera down towards the ground. Try to fill the center of your camera with the ground in front of the jump. By doing this, you are taking the bright background out of the frame so that the camera can focus on the lighting that is around the jump. Your frame will look something like the following:
    Meter Ground Near Fence
    Nikon D750, f/4, 1/800s, ISO 2200

     

  3. While the camera is pointed at the ground, use your Exposure Lock button to lock the exposure settings into your camera. On my Nikon D750, I simply press the AE-L button on the back of my camera. Check out your camera’s manual for information about how to set this with your camera.
  4. With the exposure settings now locked into the camera, reframe your shot and wait for the moment that the horse jumps over the fence. Shoot the best picture ever!

    Proper Exposed Rider
    Nikon D750, f/4, 1/800s, ISO 2500
  5. Add a little post production and you get…
    ProcessedRider

Learn More About Exposure Lock

Want to learn more about how exposure lock works on a camera? There is a great article on Photography Life’s web site titled Nikon AE-L / AF-L Button.

Simple Protection for WordPress From Malicious PHP Files

While recently auditing my web server logs for 404 errors, I came across a new pattern of hacker attempts. There were a number of attempts at opening PHP files with a specific name.  A quick search of the internet turned up an article, WordPress Security – Arbitrary File Upload in Gravity Forms by Rodrigo Escobar. The article does a fantastic job of explaining this particular exploit.

To summarize how the attack works, the hacker attempts to exploit a vulnerability identified in the Gravity Forms plugin in order to upload a malicious PHP file to the web site. The file is loaded into the wp-content/uploads path of WordPress. Upon successful upload, they can call the PHP file directly and the Apache server will execute the code. Luckily for me, I don’t have this plugin installed and the attempts on my site were all failed attempts.

The Pattern

As I researched this issue further, it dawned on me that a number of exploits against WordPress work in a similar fashion. They identify a means of exploiting a file upload mechanism within WordPress. These mechanisms are usually designed to place the uploaded file somewhere within the wp-content/uploads directory. The hacker then tries to call the file directly.

As far as I can tell, there should NEVER be a legitimate case where a PHP file needs to be uploaded to this directory and run directly from this directory. The directory was designed to be a place for storing images, videos and documents (Word, PowerPoint, PDFs, etc.). Therefore, if a PHP file does exist in this directory structure, a successful hacker attack is probably already underway.

Solution #1 – Fail2Ban

My first reaction was to setup a new rule in my Fail2Ban system. This software does a great job of monitoring system logs for recurring patterns and then taking automatic action to remedy the problem. In the case of my web applications, it directly manipulates the iptables (software firewall) to block the offender.

I wrote a new rule to identify the access of PHP files in the uploads directory. The regex expression was fairly straightforward:

^<HOST> -.*"(GET|POST).*(?i)\/wp-content\/uploads\/[^"+]*php.*".*"$

I configured the service to block a source IP after just one attempt of this URL pattern. I restarted the Fail2Ban service and did some testing. The filter worked as designed.

The problem with this solution is that if the hacker is successful the first time, they will still be able to perform the hack before the software blocks them for good. It’s unlikely that the hacker will be successful in just one shot, but I don’t want to take that risk.

Solution #2 – Deny Access in Apache Configuration

I decided that I wanted to configure Apache to deny access to any attempt at a PHP file in that directory. A quick search of the internet gave me a couple of good resources. I could make use of the FilesMatch directive in the Apache configuration file to identify a file pattern and block access to that file. I did some testing and I developed the following solution:

        <Directory "/path/to/wordpress/htdocs/wp-content/uploads">
                .............
                <FilesMatch "\.php$">
                        Order allow,deny
                        Deny from all
                </FilesMatch>
        </Directory>

This information goes directly within the VirtualHost configuration for the WordPress web site in the Apache configuration files. After making the configuration change and restarting Apache, I dropped a test PHP file into various directory levels in wp-content/uploads. I tried accessing the PHP file and Apache always gave me a Forbidden access message.
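One note on the syntax: the Order/Deny directives come from Apache 2.2. On Apache 2.4, unless the mod_access_compat module is loaded, the equivalent block uses the newer authorization syntax; a sketch with the same placeholder path:

        <Directory "/path/to/wordpress/htdocs/wp-content/uploads">
                <FilesMatch "\.php$">
                        Require all denied
                </FilesMatch>
        </Directory>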

Solution #3 – All of the Above!

The second solution is obviously the better approach. Even if the hacker is successful at uploading a file through an exploited upload mechanism, Apache will make sure that they never actually run the uploaded script. However, security works best in layers, so even though solution #2 is the stronger control, keep solution #1 in place too. The hacker can no longer succeed in running their script, but Fail2Ban will still recognize the attempt and block them from any further exploit attempts on your site. They are obviously up to no good, so keep them from attempting any further harm.

Security Training Should Start in IT

This past week, I attended a seminar titled A Road Map to Security Risk Management through System Source. The presentation was centered around building out a comprehensive plan for understanding your security risks and putting plans and procedures in place to mitigate those risks. There are many factors to consider when building such a plan and this seminar did a great job of getting you thinking about those factors. 

There was one theme in particular that kept coming up in our discussions…training! The attendees of this event all felt very strongly that one of their greatest security vulnerabilities was lack of education and understanding of security matters in their own staff. It is great that we have grown up in a society where we want to be trusting of our fellow citizens. However, there still are those individuals in dark and shady corners of the world who wish to deceive you and we need to take precautions. Giving our staff a better understanding of how these deceptions occur will go a long way to improving security. 

Later in the presentation, we talked about ways we as IT professionals can work to remove security vulnerabilities. For example, developers should plan to do code reviews with their peers. Another example is that network admins can periodically review their network configurations. Then it hit me…do the developers and the network admins talk and work together?

I have had the pleasure of being in environments with large IT departments and seeing what these individuals can do on a daily basis. I have also seen how the individuals in these environments very quickly fall into separate and distinct roles. Unfortunately, this separation of duties often also fosters a separation of communications. We all recognize that our fellow business employees can benefit from additional training, but what about the cross-training of information within the IT department?

Let me provide a good example. Recently, I had my network admin hat on and was reviewing my web server logs for any unusual number of 404 errors. One particular source IP address had an unusually high count, so I drilled into the detail. When I did, I quickly saw that this source was looking for the FCKEditor at any and all URL patterns imaginable. This was obviously the work of someone up to no good. I did a search on “FCKEditor vulnerability” and found a document that outlined the exploit. (Exploiting PHP Upload Module of FCKEditor by SecurEyes) It had been identified that a hacker could make use of a null character in a file upload path in order to trick the system into uploading a malicious PHP program. Armed with this knowledge, and knowing that none of our sites made use of the FCKEditor, I took two actions. First, I blocked this particular IP from all access to our servers. Second, I coded our IDS system to recognize this pattern in the future so it could block future attempts in real time. Another duty of the network admin completed, I moved on to my next project.

But wait, shouldn’t this information have been shared within our team? My background is largely in programming and as a programmer, I did learn something from this. Maybe this information should be shared with other programmers on our team so that they can continue to improve their own code. What about our support staff and sales team? When they talk to end users about security, the more they understand the vulnerabilities, the easier it is to talk about the overall subject. The IT manager? Are you looking to buy that latest and greatest IDS appliance for your network? Your boss should understand why they need to approve the costs for such appliances. This also helps them communicate better with the senior managers when discussing budgets and corporate strategies.

As IT professionals, we know that the learning process never ends. But let’s not forget to support each other in our learning too. The more we share this knowledge among ourselves, the better prepared we all are for any security situation.