Tips on Cloud Security Vol. 2: Establishing an Amazon Web Services Cloud Security Strategy

Nearly ten years have passed since Amazon officially announced that it would sell computing time and storage capacity, and more than a decade since Simple Queue Service (SQS) launched in 2004. Since then, Amazon Web Services (AWS) has developed hundreds of new services comprising Infrastructure-as-a-Service (IaaS) and Platform-as-a-Service (PaaS) solutions, and has become the go-to Cloud Service Provider (CSP) for early adopters such as startups, small and medium-sized businesses and other nimble organizations. Organizations that took to the cloud soon after its inception have had time to mature their cloud operating methodologies and to build business cultures around agile and continuous delivery practices.

Despite the explosive growth of AWS’ market share, the enterprise market has only recently begun considering public CSPs as a viable infrastructure hosting option. Extending the enterprise data center to a public CSP is not only an architectural challenge; it also raises risk and compliance considerations that must be evaluated to avoid weakening the organization’s security posture. Moreover, security architects must understand how to translate existing controls, maintain visibility and adapt to a new technology while ensuring that speed of delivery and agility are not compromised.

Cloud Security Considerations for Early Adopters and New Customers

Despite years of cloud usage, many early adopters are asking the same questions as enterprise customers who are just beginning their journey into the public cloud. Organizations with a great deal of AWS experience are asking “What do we secure?” while enterprises new to public cloud services are asking “How do we secure?” Below are a few prevalent challenges and considerations for both groups seeking to secure public cloud environments:

Common Challenges and Considerations

[Table: Common challenges and considerations for early adopters and enterprise customers]

While there isn’t a complete out-of-the-box cloud security solution that serves everyone equally, AWS has made significant progress in making resources available to assist with implementing a cloud security strategy. Less-regulated cloud service customers who have designed products for the cloud may find native AWS tools convenient and easy to integrate with their current cloud infrastructure. Building an adequate security strategy can be more complicated and challenging for an enterprise customer, however, given that AWS tools alone may not be sufficient for a mature security program. Still, as enterprise applications and infrastructure are architected and engineered for AWS, the enterprise cloud service customer will be able to incorporate cloud-native solutions into their security strategies.

Cloud Security is a Shared Responsibility

Both the early adopters and enterprise organizations must have a strong understanding of the Shared Responsibility Model. The Shared Responsibility Model helps identify the boundaries between cloud service provider and customer security responsibilities. As organizations begin to develop cloud security strategies, identifying obligations is a critical success factor.

[Figure: The AWS Shared Responsibility Model]

*Source: https://aws.amazon.com/compliance/shared-responsibility-model/

Responsibility boundaries shift when moving between IaaS, PaaS and SaaS. While a CSP may be responsible for certain layers of the cloud platform, cloud service customers must remain aware of where their own responsibilities lie. Before moving to AWS, early adopters may not have had to consider infrastructure requirements below the application and data layers; now, however, they are responsible for the security of additional layers. Conversely, the enterprise organization is accustomed to owning security at all layers, but can be relieved of managing layers such as physical security and the core network.

Conclusion

Cloud security today is a challenge similar to traditional on-premises security when data centers were first being built: proper security practices were often an afterthought. An additional complication to cloud security is the elasticity of the cloud. A cloud environment can become difficult to manage very quickly, and success will also depend on an organization’s ability to maintain visibility within such a dynamic environment.

Designing a comprehensive cloud security strategy within AWS will require adapting controls and risk management methodologies to an agile operations model, as well as an understanding of how to utilize the resources available for maintaining visibility. Lastly, it is imperative for those new to the public cloud to understand that responsibility boundaries shift as organizations leverage IaaS, PaaS or SaaS solutions.

Stay tuned for an upcoming post where we’ll review and discuss 7 Core Requirements for a Solid Cloud Security Strategy.

About GuidePoint Security

GuidePoint Security LLC provides customized, innovative and valuable information security solutions and proven cyber security expertise that enable commercial and federal organizations to successfully achieve their security and business goals. By embracing new technologies, GuidePoint Security helps clients recognize the threats, understand the solutions, and mitigate the risks present in their evolving IT environments. Headquartered in Herndon, Virginia, GuidePoint Security is a small business, and its classification can be found in the System for Award Management (SAM). Learn more at: www.guidepointsecurity.com.

Time Inc. Highlights GuidePoint Security in the WSJ CIO Journal

Time Inc. (Time) recently mentioned GuidePoint Security (GuidePoint) in an article in the Wall Street Journal CIO Journal. Time leverages GuidePoint’s Amazon Web Services (AWS) and Payment Card Industry (PCI) expertise to guide its migration of applications into AWS. Specifically, GuidePoint provides expertise in implementing architectures and control frameworks that provide not only security, but also PCI compliance.

“We appreciate GuidePoint Security’s advice through this process. Their specific working knowledge of security and PCI compliance in AWS has been a great asset to us,” said Keith O’Sullivan, VP – Global Information Security for Time Inc.

Organizations are rapidly increasing their cloud adoption; however, Information Security and compliance considerations present both a challenge and an opportunity when moving to the cloud. Organizations must include Information Security and compliance experts on their project teams, or risk jeopardizing their cloud applications’ security and compliance.

GuidePoint provides this expertise through our Cloud Solutions and Compliance practices. We’ve worked with numerous clients to develop secure architectures, control frameworks, and policies and procedures, and to implement security technologies across IaaS, PaaS, and SaaS platforms, enabling our clients to leverage the benefits of the cloud while maintaining or improving their Information Security and compliance posture.

Contact sales@guidepointsecurity.com or visit www.guidepointsecurity.com to learn more about our Cloud Solutions and Compliance practices.

About GuidePoint Security

GuidePoint Security, LLC provides customized, innovative and valuable information security solutions and proven cyber security expertise that enable commercial and federal organizations to successfully achieve their security and business goals. By embracing new technologies, GuidePoint Security helps clients recognize the threats, understand the solutions, and mitigate the risks present in their evolving IT environments. Headquartered in Reston, Va., with offices in Michigan, New Hampshire, Florida and North Carolina, GuidePoint Security is a small business, and its classification can be found in the System for Award Management (SAM).

Security Visibility in the Cloud – Logging and Monitoring in AWS

By now we’re all well aware that there is a virtually limitless number of logging and monitoring solutions available on the market. Visit the Amazon Web Services (“AWS”) Marketplace and you’ll find plenty of options. In fact, it gets really crazy when you start examining security monitoring versus application performance monitoring, with solutions often performing one role better than the other, or performing only one of the roles altogether. What’s interesting to me is the lack of common enterprise logging and monitoring solutions available in the AWS Marketplace. Obviously you can deploy instances to handle implementations of solutions like ArcSight, McAfee, LogRhythm, or NetIQ, but Splunk is the only well-known commercial provider with solutions available in the marketplace.

Now, that’s just the commercial side… what about open source?  Let’s cover a few terms first, for those new to centralized logging.

Shipper – a system agent that collects and forwards, or ships, system and application logs to a centralized server.

Collector / Broker – a message broker is a system that collects and queues logs as an intermediary step to indexing the logs centrally for analysis, monitoring, and alerting. Its primary purpose is to ensure you don’t lose messages when or if your indexer falls behind, crashes, or otherwise becomes unavailable to receive logs.

Collector / Indexer – a system used to collect, parse, and store logs for searching, analysis, monitoring, and alerting.

Dashboard / Visualizer – the dashboard aids in log analysis by providing a search interface and, in some solutions, alerting.
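
To make these roles concrete, below is a minimal, hypothetical shipper written in Python: it tails a log file and pushes each line onto a Redis list acting as the broker queue, from which an indexer would later pull. The broker host name and the "logstash" queue key are illustrative, not from any particular deployment.

```python
# Hypothetical shipper: tail a log file and push each line onto a
# Redis list acting as the broker queue; the indexer pops from the
# same list. Broker host and queue key are illustrative only.
import json
import socket
import time

import redis  # pip install redis

broker = redis.Redis(host="broker.internal", port=6379)

def ship(path):
    """Follow a log file (like tail -f) and ship new lines."""
    with open(path) as f:
        f.seek(0, 2)  # start at the end of the file
        while True:
            line = f.readline()
            if not line:
                time.sleep(0.5)  # nothing new yet; wait and retry
                continue
            event = {
                "host": socket.gethostname(),
                "path": path,
                "message": line.rstrip("\n"),
            }
            broker.rpush("logstash", json.dumps(event))

ship("/var/log/syslog")
```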

Open source logging and monitoring solutions abound and, like the well-known solutions missing from the AWS Marketplace, are typically implemented on purpose-built instances within your AWS Virtual Private Cloud (you are using a VPC, right?). So what comprises an open source, centralized logging and monitoring solution?

Log shippers like Nxlog, Logstash, Lumberjack, and Fluentd. Brokers like Redis, RabbitMQ, and ZeroMQ. Indexers like Elasticsearch and… well, Elasticsearch seems to be the industry-standard as far as open source goes, but there are also a lot of folks using centralized syslog-ng, or Rsyslog.  Dashboards, such as Graylog2, Kibana (for Elasticsearch visibility, I like ElasticHQ), and security agents like OSSEC complete the architecture.

So, with all of these solutions available, why do I run into so many clients already in AWS, or moving to AWS, that have insufficient logging and monitoring, or worse, no logging and monitoring at all in their cloud environment? Because Logging and Monitoring is Hard. Don’t get me wrong, it doesn’t require a rocket scientist on staff to get one or more of these commercial or open source solutions deployed, but there are preparation, communication, research, and other steps that have to be taken to properly implement logging and monitoring. I spent over a week researching available solutions and building out proofs-of-concept in my virtualized lab to determine which solutions met my needs. And that is the most critical point to take away from this article: there is no right or wrong way to implement logging and monitoring in your AWS cloud. As with all things IT, there is more than one way to accomplish your technical and business objectives. The trick is to find the right way for your organization.

Let’s look at some of the decision criteria that will come into play; this is not an exhaustive list:

People

  1. What expertise is available from my current staff – network engineering, development (and which languages?), information security, incident handling, etc.?
  2. Do we have experience with a particular commercial solution?  A particular open source solution?
  3. Should I train existing staff, or hire staff with the relevant experience?
  4. Should I forget about managing this myself altogether and go with a Managed Services Provider?

Process

  1. Have we defined and documented the metrics we care about, and established a policy and process around ensuring this data is available and utilized?
  2. Have we defined and documented our business objectives behind logging and monitoring?
  3. Have we defined and documented regulatory mandates related to logging and monitoring? How do we keep our requirements and this documentation current?
  4. Have we determined roles and responsibilities involved in supporting the logging and monitoring initiative?

Technology

  1. Have we defined and documented technical requirements for our logging and monitoring solution? How do we architect our solution?
  2. Have we researched available options, and documented their strengths and weaknesses with regard to operating in our environment or culture?
  3. How do we facilitate a demonstration, proof-of-concept, or evaluation of the targeted solution?
  4. What do we log?  Where do we store logs?
  5. How do we alert appropriate personnel a problem has been detected?


After extensive research and comparison of features and functionality, I decided upon a hybrid ELK stack for this case study. The ELK stack is comprised of Elasticsearch, Logstash and Kibana. I also added Graylog2 to support alerting, and OSSEC for file integrity monitoring and host intrusion prevention. There are numerous guides on the Interwebs to assist with deploying these solutions, so I will not go into installation and configuration in this post. I may write another article later to cover installation and configuration, but I’ve included links to all of the resources I used to get up and running at the bottom of this post. Note that, although this entire process covered a full week, the bulk of the final deployment was completed in about 12 hours. I built the final environment on AWS’ Free Tier, but didn’t even finish rolling out the dashboards before the Logstash shipper/Logstash collector/Elasticsearch indexer combination on the central server decimated the t1.micro instance (Ubuntu 12.04) I deployed it on (Java consumed all available memory). Rather than tune the overwhelmed box in an attempt to stabilize it, I took advantage of being in AWS and scaled up to an m1.small instance – problem solved. In total, I spent less than five bucks on my, admittedly limited, proof-of-concept.


Figure 1: Kibana 3… Dead Sexy!

Take a look at the components I selected:

  1. Log Shipper – Logstash on Linux servers, Nxlog on Windows servers. Although Logstash is cross-platform and perfectly capable of shipping Windows Event Logs, IIS and MSSQL logs, the author of Nxlog convinced me with “Why Nxlog is better for Windows.”
  2. Broker – This case study doesn’t incorporate the use of a Broker. I was originally going to include RabbitMQ in the architecture, but version dependencies led me down a path that was in danger of kludging up the whole study. In a production environment, you definitely need to use a broker to provide scalability and resiliency, but I pushed onward without including it.
  3. Indexer – Elasticsearch. Ridiculously easy decision for me, since Windows servers are in my test environment, and I was interested in testing something other than syslog.
  4. Dashboard / Visualization – Kibana 3 is dead sexy, and I’m an eye-candy kind of guy. I’d gone into this planning to just use Graylog2, since it is a great visualization tool itself and includes alerting capability, but after seeing screenshots of the new and improved Kibana 3.x, I couldn’t help deploying it, too. Regarding alerting, Nagios is often used in concert with Graylog2 for its ability to “roll up” alerts. If you’re interested in configuring email alerting/alarms for your Graylog2 deployment, Larry Smith has a great blog post to get you started here. Lastly, I installed the ElasticHQ plugin to monitor the health of my one-node cluster.
  5. As an aside, I also deployed OSSEC to the Linux and Windows servers for file integrity monitoring and intrusion prevention.

Figure 2: ElasticHQ… Elasticsearch cluster health, and a whole lot more!
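
Under the hood, ElasticHQ reads the same documented cluster-health endpoint you can query yourself. Here’s a minimal sketch; the localhost URL assumes the single-box PoC described above.

```python
# A minimal sketch of the cluster-health check ElasticHQ surfaces,
# using Elasticsearch's documented _cluster/health REST endpoint.
import json
import urllib.request

def cluster_health(base_url="http://localhost:9200"):
    with urllib.request.urlopen(base_url + "/_cluster/health") as resp:
        return json.load(resp)

health = cluster_health()
# "green" = all shards allocated, "yellow" = replicas unassigned
# (normal for a one-node cluster), "red" = primary shards missing.
print(health["status"], health["number_of_nodes"], health["active_shards"])
```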

A note about the final deployment: ultimately, the redesigned, recently released Graylog2 v0.20.1 didn’t work out like I’d hoped. Everything was running smoothly, and based on configuration guidance and the absence of error output, it seemed I was set up properly, but I never saw the data from Elasticsearch in Graylog2. I spent the last few moments I had allocated to this project experimenting with some alternate configurations, and finally strayed so far from my working example that I had to give up. So, after a week of research and implementation time, a diagram of the resulting architecture can be seen in Figure 3.


Figure 3: AWS Logging and Monitoring PoC Architecture

This was a trivial setup – I’m using a single box for a local Logstash shipper, an Elasticsearch indexer, MongoDB for Graylog2, and three different web interfaces. In a production system, ensure you use a more appropriate architecture: separate each component, utilize multiple availability zones, insert a broker to receive messages from log shippers, utilize SSL, and so on.

Although I didn’t have enough time to sort out Graylog2 and get some alerting configured, I’m pleased with the overall outcome of my security visibility experiment. I found OSSEC to be an excellent “partner” in my quest for visibility, despite utilizing and documenting only the file integrity portion of its functionality.


Figure 4: OSSEC Web UI

Nxlog works perfectly for shipping Windows event logs, and of course, the lovely Kibana ties everything together and puts a nice bow on the concept of visualization.


Figure 5: Analyzing events with Kibana 3
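
Behind Kibana’s panels are ordinary searches against the daily logstash-* indices, so you can issue the same kind of query yourself. A hypothetical example follows; the field names are Logstash defaults, and the host pattern is made up.

```python
# Hypothetical example of the kind of query Kibana issues on your
# behalf: a Lucene query-string search against the Logstash indices.
import json
import urllib.parse
import urllib.request

def search(query, base_url="http://localhost:9200"):
    params = urllib.parse.urlencode({"q": query, "size": 10})
    url = "%s/logstash-*/_search?%s" % (base_url, params)
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)

# Find recent failed SSH logins shipped from the Linux servers.
results = search('message:"Failed password" AND host:web*')
for hit in results["hits"]["hits"]:
    print(hit["_source"]["@timestamp"], hit["_source"]["message"])
```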


Although this was not a terribly difficult experiment from a technical perspective, I still wondered, “Is there another | quicker | better way to gain security visibility in AWS?” Well, yes and no. Yes, there’s an easy way to get security visibility, plus AWS automation to boot; no, because despite this gem of AWS security visibility, I will still recommend a centralized logging and monitoring platform in AWS. So, what’s this solution, you ask? CloudPassage Halo. But wait, there’s more! Halo has an API that’s made it possible for several SIEM solutions to integrate with it, sharing the Halo security visibility love in a centralized way within your existing, or planned, logging and monitoring deployment.

Halo has enough features and functionality to warrant its own blog post, so I won’t go into those here. Suffice it to say, anyone looking for security visibility, automation, or both in AWS should definitely have a look at what CloudPassage has to offer.


Figure 6: Windows security events captured by Halo

Conclusion

Logging and monitoring is hard, but there are more than enough commercial and open source tools available to fit any size organization with any size budget. Attaining security visibility and appropriate incident handling isn’t just the right thing to do from a best-practice perspective; many standards, regulations, and laws mandate them. So, regardless of the type of solution or solutions you select, choose and implement something, and gain insight into security incidents you may otherwise have no idea are happening. After all, inadequate visibility is better than no visibility at all.

For additional information on this subject and the opportunity to ask questions, please click here to register for our Webinar titled Security Visibility in the Cloud – Logging and Monitoring in AWS, occurring on May 1st at 2 p.m. EST.

 

Egress Controls in Amazon’s AWS Virtual Private Cloud (VPC)

I recently had an in-depth conversation with a client discussing security best practices in Amazon Web Services’ (AWS) Infrastructure-as-a-Service (IaaS) offering. Specifically, the client was interested in applying egress controls to their web, application, and database tiers. Given the sensitivity of the data contained within their AWS application, my client’s largest concern was containing a potential breach to prevent a successful attacker from exfiltrating the application’s data.

Before diving into my recommendations, it’s important to understand two key security controls provided by AWS. Those who’ve worked with AWS EC2 instances should be familiar with Security Groups. For those of you who aren’t, Security Groups equate to firewall rules applied to a specific EC2 instance (or group of instances). What some of you may not know is that Security Groups actually perform stateful inspection (this is important to those of you with PCI implications). When your application is architected directly in EC2 (not within a Virtual Private Cloud, or VPC), Security Groups can only be applied to inbound traffic. Obviously, this doesn’t help with my client’s objective of implementing egress controls.

AWS Security Group Inbound Rules
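
For illustration, here’s how an inbound rule might look in code. This sketch uses boto3, which postdates this post, and a hypothetical group ID; it simply expresses the same rule you would click together in the console.

```python
# A sketch of an inbound Security Group rule using boto3; the group
# ID is hypothetical. Because Security Groups are stateful, the
# response traffic is allowed back out automatically.
import boto3

ec2 = boto3.client("ec2")

ec2.authorize_security_group_ingress(
    GroupId="sg-0123456789abcdef0",  # hypothetical group
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 443,
        "ToPort": 443,
        "IpRanges": [{"CidrIp": "0.0.0.0/0"}],  # HTTPS from anywhere
    }],
)
```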

The second security control provided by AWS is Network Access Control Lists, or Network ACLs. Network ACLs differ from Security Groups in that they are only available within VPCs and are generally intended to be applied to networks rather than individual EC2 instances within a VPC. For example, with Network ACLs it is common to allow only 1433/tcp (MS SQL) from your public subnet into your private subnet. While utilizing a /32 netmask will allow you to implement Network ACLs for specific hosts, you should note that Network ACLs are NOT stateful (again, remember that Security Groups are). This requires you to implement matching inbound and outbound Network ACL rules.

AWS Network ACLs
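
Because of that statefulness gap, the 1433/tcp example needs two entries: one inbound, and one outbound for the return traffic on ephemeral ports. A sketch with boto3 (which postdates this post) and hypothetical IDs and CIDRs:

```python
# Network ACLs are NOT stateful, so the SQL Server rule needs a
# matching outbound entry for return traffic. IDs and CIDRs are
# hypothetical; the calls map one-to-one to the console fields.
import boto3

ec2 = boto3.client("ec2")
ACL_ID = "acl-0123456789abcdef0"  # the private subnet's ACL

# Inbound: allow 1433/tcp from the public subnet.
ec2.create_network_acl_entry(
    NetworkAclId=ACL_ID, RuleNumber=100, Protocol="6",  # 6 = TCP
    RuleAction="allow", Egress=False, CidrBlock="10.0.0.0/24",
    PortRange={"From": 1433, "To": 1433},
)

# Outbound: allow the return traffic on ephemeral ports.
ec2.create_network_acl_entry(
    NetworkAclId=ACL_ID, RuleNumber=100, Protocol="6",
    RuleAction="allow", Egress=True, CidrBlock="10.0.0.0/24",
    PortRange={"From": 1024, "To": 65535},
)
```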

So back to egress controls. Regardless of your application’s architecture within AWS (just EC2 instances or utilizing a VPC), you can apply egress controls directly on your EC2 instances (on the OS itself). However, this often increases the overhead of the EC2 instances to levels unacceptable to development teams. So what other options do we have? A lesser-known feature of VPCs is the ability to apply outbound rules to your Security Groups. For example, you can say that your MS SQL server is not allowed to communicate directly with the Internet, but is only allowed access to 80/tcp and 443/tcp for Windows Updates through a NAT server in your public subnet. Such a setup accomplishes the goal of implementing egress controls on your EC2 instances while not increasing their overhead.

AWS Security Group Outbound Rules
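
As a sketch of what that could look like (boto3 again, with a hypothetical group ID and NAT address): revoke the default allow-all egress rule, then permit only 80/tcp and 443/tcp to the NAT server.

```python
# A sketch of the egress controls described above: strip the default
# allow-all outbound rule, then permit only web traffic out through
# the NAT server. Group ID and NAT address are hypothetical.
import boto3

ec2 = boto3.client("ec2")
SG_ID = "sg-0123456789abcdef0"
NAT_CIDR = "10.0.0.5/32"  # the NAT instance in the public subnet

# VPC security groups ship with an allow-all egress rule; revoke it.
ec2.revoke_security_group_egress(
    GroupId=SG_ID,
    IpPermissions=[{"IpProtocol": "-1",
                    "IpRanges": [{"CidrIp": "0.0.0.0/0"}]}],
)

# Permit only 80/tcp and 443/tcp out, and only to the NAT server.
for port in (80, 443):
    ec2.authorize_security_group_egress(
        GroupId=SG_ID,
        IpPermissions=[{"IpProtocol": "tcp", "FromPort": port,
                        "ToPort": port,
                        "IpRanges": [{"CidrIp": NAT_CIDR}]}],
    )
```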

After explaining the enhanced security features of an AWS VPC, my client made a case to his development team in support of re-architecting the application inside of a VPC. Fortunately for my client, the security team was engaged during the design phase of their organization’s AWS application and implementing such a change was a lot less painful than re-designing an existing application. That isn’t to say that such a redesign can’t be successfully performed on an established application, but we all know it’s a lot easier to do earlier in the game.

To recap what we’ve discussed…

  • Security Groups are analogous to firewall rules and can be applied to specific EC2 instances (or groups of instances)
  • Security Groups provide stateful inspection
  • Standard EC2 instances (those not part of a VPC) allow only inbound Security Group rules
  • Network ACLs can only be applied to entire networks (subnets)
  • Network ACLs do NOT provide stateful inspection
  • Network ACLs are only available within VPCs
  • VPCs enable outbound rules to be added to Security Groups and can be applied directly to individual or groups of EC2 instances that are part of a VPC
  • Inbound and outbound Security Group rules do NOT add overhead to the EC2 instances they are applied to

Hopefully you found this information helpful, and it prompts you to investigate VPCs further when looking at how to apply egress controls to your AWS applications.