The cloud has made it dead simple to quickly spin up a new server without waiting for IT.
But the ease of deploying new servers — and the democratic nature of cloud management — can be a security nightmare, as a simple configuration error or administrative mistake can compromise the security of your organization’s entire cloud environment.
With sensitive data increasingly heading to the cloud, how your organization secures its instances and overall cloud infrastructure is of paramount importance.
Cloud providers like Amazon secure the hardware and facilities your instances run on, but the security of everything your organization builds on top of that foundation is on you.
A broad array of built-in security services and third-party tools are available to secure practically any workload, but you have to know how to use them.
And it all starts with proper configuration.
Analysis of real-world Amazon Web Services usage doesn’t paint a pretty picture.
Cloud security company Saviynt recently found among its customers an average of 1,150 misconfigurations in Elastic Compute Cloud (EC2) instances per AWS account.
It’s clear that the ease of spinning up EC2 instances for development and testing is coming at the expense of security controls that would otherwise be in place to protect on-premises servers.
AWS admins need to use available tools properly to ensure the security of their environments.
Here we survey some of the most common configuration mistakes administrators make with AWS.
Mistake 1: Not knowing who is in charge of security
When working with a cloud provider, security is a shared responsibility. Unfortunately, many admins don’t always know what AWS takes care of and which security controls they themselves have to apply. When working with AWS, you can’t assume that default configurations are appropriate for your workloads, so you have to actively check and manage those settings.
“It’s a straightforward concept, but nuanced in execution,” says Mark Nunnikhoven, vice president of cloud research at Trend Micro. “The trick is figuring out which responsibility is which.”
More important, AWS offers a variety of services, each of which requires distinct levels of responsibility; know the differences when picking your service.
For example, EC2 puts the onus of security on you, leaving you responsible for configuring the operating system, managing applications, and protecting data. “It’s quite a lot,” Nunnikhoven says.
In contrast, with AWS Simple Storage Service (S3), customers focus only on protecting the data going in and out, as Amazon retains control of the operating system and application.
“If you don’t understand how this model works, you’re leaving yourself open to unnecessary risks,” Nunnikhoven says.
Mistake 2: Forgetting about logs
Too many admins create AWS instances without turning on AWS CloudTrail, a web service that records API calls from AWS Management Console, AWS SDKs, command-line tools, and higher-level services such as AWS CloudFormation.
CloudTrail provides invaluable log data, maintaining a history of all AWS API calls, including the identity of the API caller, the time of the call, the caller’s source IP address, the request parameters, and the response elements returned by the AWS service.
As such, CloudTrail can be used for security analysis, resource management, change tracking, and compliance audits.
Saviynt’s analysis found that CloudTrail was often deleted, and log validation was often disabled from individual instances.
Administrators cannot retroactively turn on CloudTrail.
If you don’t turn it on, you’ll be blind to the activity of your virtual instances during the course of any future investigations.
Some decisions need to be made in order to enable CloudTrail, such as where and how to store logs, but the time spent to make sure CloudTrail is set up correctly will be well worth it.
“Do it first before you need it,” says John Robel, a principal solutions architect for Evident.io.
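CloudTrail delivers its records as JSON. Here is a minimal Python sketch of pulling the who, what, when, and where out of a single API call; the record below is trimmed and made up for illustration, while real records carry many more fields.

```python
import json

# A trimmed, illustrative CloudTrail record; real records carry more fields.
sample_record = json.loads("""
{
  "eventTime": "2017-03-01T19:05:46Z",
  "eventSource": "ec2.amazonaws.com",
  "eventName": "RunInstances",
  "sourceIPAddress": "203.0.113.10",
  "userIdentity": {"type": "IAMUser", "userName": "dev-alice"},
  "requestParameters": {"instanceType": "t2.micro"},
  "responseElements": null
}
""")

def summarize(record):
    """Reduce one CloudTrail record to the who/what/when/where of an API call."""
    return {
        "who": record["userIdentity"].get("userName", record["userIdentity"]["type"]),
        "what": f'{record["eventSource"]}:{record["eventName"]}',
        "when": record["eventTime"],
        "where": record["sourceIPAddress"],
    }

print(summarize(sample_record))
```

A few lines like these, run over a day's worth of log records, are often all it takes to answer "who spun up that instance?" during an investigation.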
Mistake 3: Giving away too many privileges
Access keys and user access control are integral to AWS security.
It may be tempting to give developers administrator rights to handle certain tasks, but you shouldn’t. Not everyone needs to be an admin, and there’s no reason why policies can’t handle most situations.
Saviynt’s research found that 35 percent of privileged users in AWS have full access to a wide variety of services, including the ability to bring down the whole customer AWS environment.
Another common mistake is leaving high-privilege AWS accounts turned on for terminated users, Saviynt found.
Administrators often fail to set up thorough policies for a variety of user scenarios, instead choosing to make them so broad that they lose their effectiveness.
Applying policies and roles to restrict access reduces your attack surface, as it eliminates the possibility of the entire AWS environment being compromised because a key was exposed, account credentials were stolen, or someone on your team made a configuration error.
“If you find yourself giving complete access to a service to someone, stop,” says Nunnikhoven. “Policies should include the least amount of permissions to get a job done.”
Mistake 4: Having powerful users and broad roles
AWS Identity and Access Management (IAM) is critical for securing AWS deployments, says Nunnikhoven.
The service — which is free — makes it fairly straightforward to set up new identities, users, and roles, and to assign premade policies or to customize granular permissions. You should use the service to assign a role to an EC2 instance, then a policy to that role.
This grants the EC2 instance all of the permissions in the policy with no need to store credentials locally on the instance. Users with lower levels of access are able to execute specific (and approved!) tasks in the EC2 instance without needing to be granted higher levels of access.
A common misconfiguration is to assign access to the complete set of permissions for each AWS item.
If the application needs the ability to write files to Amazon S3 and it has full access to S3, it can read, write, and delete every single file in S3 for that account.
If the script’s job is to run a quarterly cleanup of unused files, there is no need to have any read permissions, for example.
Instead, use the IAM service to give the application write access to one specific bucket in S3.
With permissions scoped that narrowly, the application cannot read or delete any files, inside or outside that bucket.
“In the event of a breach, the worst that can happen is that more files are written to your account. No data will be lost,” says Nunnikhoven.
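That advice translates directly into an IAM policy document. Below is a sketch, built in Python for easy checking, of a write-only policy scoped to a single bucket; the bucket name is hypothetical, not a real resource.

```python
import json

# A least-privilege IAM policy: write-only access to one hypothetical bucket.
# "quarterly-cleanup-reports" is an illustrative name, not a real resource.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "WriteOnlyToOneBucket",
            "Effect": "Allow",
            "Action": ["s3:PutObject"],
            "Resource": ["arn:aws:s3:::quarterly-cleanup-reports/*"],
        }
    ],
}

# With no s3:GetObject or s3:DeleteObject allowed (and no wildcard actions),
# a leaked credential can at worst add files to this one bucket.
actions = {a for s in policy["Statement"] for a in s["Action"]}
assert "s3:GetObject" not in actions and "s3:DeleteObject" not in actions
print(json.dumps(policy, indent=2))
```

Compare that to `"Action": "s3:*", "Resource": "*"` — the difference is the blast radius of a stolen key.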
Mistake 5: Relying heavily on passwords
The recent wave of data breaches, and the follow-up attacks in which criminals use harvested login credentials to break into other accounts, should have made it clear by now: Usernames and passwords aren’t enough.
Enforce strong passwords, and turn on multifactor authentication for anyone managing AWS instances as well as for applications.
AWS provides tools for adding multifactor tokens, whether a physical device such as a card or key fob, or a virtual one such as a smartphone app.
“Your data and applications are the lifeblood of your business,” Evident.io’s Robel warns.
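For the curious, virtual MFA smartphone apps implement the standard HOTP and TOTP algorithms (RFC 4226 and RFC 6238). A self-contained Python sketch of how those one-time codes are derived, using the RFC's published test secret rather than a real MFA seed:

```python
import hashlib
import hmac
import struct
import time

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """RFC 4226 HMAC-based one-time password."""
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F  # dynamic truncation: low 4 bits pick the slice
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(secret: bytes, at=None, step: int = 30) -> str:
    """RFC 6238 time-based one-time password: HOTP over a 30-second counter."""
    at = time.time() if at is None else at
    return hotp(secret, int(at // step))

# RFC 4226's test secret; a real MFA seed is provisioned at enrollment.
print(totp(b"12345678901234567890"))
```

The point of the exercise: the six-digit code proves possession of a second secret that never travels with the password, which is exactly what a phished or reused password lacks.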
Mistake 6: Exposed secrets and keys
It shouldn’t happen as often as it does, but credentials turn up hard-coded in application source code, and configuration files containing keys and passwords get stored in publicly accessible locations.
AWS keys have been exposed in public repositories over the years.
GitHub now regularly scans public repositories to alert developers about exposed AWS credentials.
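You can run a similar check yourself before code ever leaves your laptop. A minimal sketch that scans text for strings shaped like AWS access key IDs — IAM user key IDs are 20 characters starting with "AKIA", and the key below is AWS's own documentation example, not a live credential:

```python
import re

# IAM user access key IDs are 20 characters and start with "AKIA".
AWS_KEY_ID = re.compile(r"\bAKIA[0-9A-Z]{16}\b")

def find_key_ids(text: str) -> list:
    """Return any strings that look like AWS access key IDs."""
    return AWS_KEY_ID.findall(text)

# AWS's own documentation example key, safe to use in tests.
snippet = 'aws_access_key_id = "AKIAIOSFODNN7EXAMPLE"'
print(find_key_ids(snippet))
```

Wiring a check like this into a pre-commit hook catches the mistake before the key ever reaches a repository.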
Keys should be regularly rotated.
Don’t be the administrator who lets too much time pass between rotations.
IAM is powerful, but many of its features are frequently ignored.
All credentials, passwords, and API access keys should be rotated frequently, so that in the event of a compromise any stolen keys are valid only for a short, fixed window, limiting an attacker’s access to your instances.
Administrators should set up policies to regularly expire passwords and prevent password reuse across instances.
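A sketch of the age check behind such a policy, assuming a 90-day rotation window; the window and the key IDs here are illustrative.

```python
from datetime import datetime, timedelta, timezone

MAX_KEY_AGE = timedelta(days=90)  # an illustrative rotation window

def keys_due_for_rotation(keys, now=None):
    """Given (key_id, created_at) pairs, return key IDs older than MAX_KEY_AGE."""
    now = now or datetime.now(timezone.utc)
    return [key_id for key_id, created in keys if now - created > MAX_KEY_AGE]

# Made-up inventory: one recent key, one long overdue.
inventory = [
    ("AKIA...FRESH", datetime(2017, 2, 1, tzinfo=timezone.utc)),
    ("AKIA...STALE", datetime(2016, 9, 1, tzinfo=timezone.utc)),
]
print(keys_due_for_rotation(inventory, now=datetime(2017, 3, 1, tzinfo=timezone.utc)))
```

Run on a schedule against your real key inventory, a check like this turns "rotate frequently" from a slogan into an alert.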
“If an attacker is able to steal your keys, they can then access the resources in your account as if they were you. Use roles whenever possible,” Nunnikhoven says.
Mistake 7: Not taking root seriously
It pops up time and again.
Admins forget to disable Root API access — a highly risky practice. No one should be using the AWS root account and associated keys, let alone sharing them across users and applications. Keys to access AWS resources directly should be used sparingly, as the keys need to be tracked, managed, and protected.
In cases where root is absolutely necessary, Saviynt found that those accounts often have multifactor authentication disabled.
The root account deserves better protection than that.
Mistake 8: Putting everything in one VPC or account
The more teams and workloads you add to an account or Virtual Private Cloud (VPC), the more likely it is that security settles at the lowest common denominator.
AWS has very generous limits on VPCs and accounts.
There’s no reason not to isolate workloads and teams into different regions, VPCs, or even accounts.
The simplest way to start is to make sure that development, testing, and production are in different accounts.
Mistake 9: Leaving wide open connections
Too many admins enable global permissions to instances. When you use 0.0.0.0/0, you are giving every machine everywhere the ability to connect to your AWS resources. “You wouldn’t leave the front door to your house open, why do you use 0.0.0.0/0?” Robel asks.
AWS Security Groups wrap around EC2 instances to permit or deny inbound and outbound traffic.
It’s tempting — and expedient! — to add broad access rules to security groups.
Fight the urge.
Give your security groups the narrowest focus possible. Use different AWS security groups as a source or destination to ensure only instances and load balancers in a specific group can communicate with another group.
One-third of the top 30 common AWS configuration mistakes identified by Saviynt involve open ports: workloads with RDP, MySQL, FTP, or Telnet ports left open via security groups, security groups with RDP and SSH open, and some instances left wide open to the internet entirely.
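Auditing for rules like these is mechanical. A sketch over a simplified stand-in for security group data — real rules come from the EC2 API and carry more fields — that flags risky ports open to the whole internet:

```python
import ipaddress

# A simplified view of security group ingress rules, for illustration only.
rules = [
    {"port": 443, "cidr": "0.0.0.0/0"},     # public HTTPS: often intentional
    {"port": 22, "cidr": "0.0.0.0/0"},      # SSH open to the world: flag it
    {"port": 3306, "cidr": "10.0.0.0/16"},  # MySQL restricted to the VPC: fine
]

RISKY_PORTS = {21, 22, 23, 3306, 3389}  # FTP, SSH, Telnet, MySQL, RDP

def wide_open(rule) -> bool:
    """True when a risky port accepts traffic from every address on the internet."""
    net = ipaddress.ip_network(rule["cidr"])
    return rule["port"] in RISKY_PORTS and net.num_addresses == 2 ** 32

flagged = [r for r in rules if wide_open(r)]
print(flagged)
```

Note that 0.0.0.0/0 on port 443 isn't flagged — a public web endpoint is a deliberate choice — while the same CIDR on SSH is exactly the open front door Robel describes.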
Thanks to high-quality automation tools such as OpsWorks, Chef, Ansible, and Puppet, there’s no reason to allow remote access — such as SSH or RDP — to EC2 instances.
If an application or OS needs to be patched, it’s better to create a new image and spin up a brand-new instance with the patch applied than to connect to the running instance and patch it in place.
If remote access is necessary, a “bastion host,” where users connect to an intermediary EC2 instance, is a safer option.
It is easier to manage remote access when all connections go through a single host, and then to restrict which connections are allowed between individual instances.
It’s also possible to lock down the bastion host so that only pre-approved systems are allowed access.
Control all remote access in order to reduce your overall risk.
Mistake 10: Skimping on encryption
Many organizations don’t enable encryption in their AWS infrastructure, for reasons ranging from a belief that it’s too hard to a failure to realize it’s important.
Saviynt found that Relational Database Service (RDS) instances were being created with encryption disabled — a potential data breach waiting to happen.
In EC2, there were workloads with unencrypted Elastic Block Storage (EBS).
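Auditing for this is straightforward. A sketch over data shaped like the EC2 DescribeVolumes response — the volume IDs here are made up — that lists every EBS volume created without encryption:

```python
# Audit EBS volumes for encryption, over data shaped like the EC2
# DescribeVolumes response; the volume IDs below are made up.
volumes = [
    {"VolumeId": "vol-0a1b2c3d", "Encrypted": True},
    {"VolumeId": "vol-4e5f6a7b", "Encrypted": False},
]

unencrypted = [v["VolumeId"] for v in volumes if not v["Encrypted"]]
print(unencrypted)
```

The same shape of check applies to RDS instances and S3 buckets: pull the inventory, filter on the encryption flag, and treat a non-empty result as a finding.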
Data in S3 should be protected, and traffic between EC2 instances should be secured.
Implementing encryption incorrectly can be as bad as, if not worse than, not encrypting at all, but Amazon offers tools to ease the challenges.
Administrators reluctant to enable encryption over concerns of managing keys should let AWS manage those keys.
It’s always possible to migrate to the organization’s own public key infrastructure afterward.
Mistakes, not vulnerabilities
The fact that privileged users can bring down a whole AWS environment, with critical applications and sensitive information, isn’t the fault of the cloud.
It highlights the fact that for many organizations the security implementation is weak.
Administrators need to apply the same rigorous controls they have had in their datacenters to their cloud infrastructures.
Many of these configuration mistakes are not difficult to fix, and they mitigate a large range of potential issues, freeing up administrators to handle more in-depth tasks, such as running a vulnerability scanner like Amazon Inspector or Tenable Network Security’s Nessus.
But first things first, and that means bringing security hygiene to the cloud.