Tips on Cloud Security Vol. 4: Is Your Data Safe in the Cloud?

Is your data safe in the cloud? Security practitioners often have very strong opinions on the security of data in the cloud. In fact, many believe that hosting data in a remote data center simply cannot be more secure than hosting data locally.

I spent 14 years working for the US Army and other government agencies. As such, I fully understand the reluctance that many feel about shifting data and operations to the cloud. The simple notion of hosting any content outside of a government-controlled enclave was, for most of my tenure, heresy.

I’d like to begin by scoping this article to data hosted in Infrastructure as a Service (IaaS) and Platform as a Service (PaaS) offerings. Controlling and securing data in Software as a Service (SaaS) environments, like Google Drive or Salesforce, will be discussed in another article.

The Shared Responsibility Model

In an earlier article, Jonathan Villa discussed the concept of the Shared Responsibility Model. The premise of that model is that the cloud service provider (CSP), by necessity, assumes responsibility for some of your security burden. Depending on the service category (IaaS or PaaS), you and your CSP share a different set of responsibilities for the security of your data.

In an IaaS model (e.g. compute VMs, virtual networks), the customer is given the most flexibility in services, but also bears greater responsibility for securing the data. Inversely, with PaaS services (e.g. hosted databases, queue systems), the consumer may be somewhat restricted by the service offerings, but they also don’t (and usually can’t) control the underlying operating systems or security infrastructure.

But Not-So-Shared Responsibility?

The Shared Responsibility Model provides a convenient framework for delineating your responsibilities from those of your CSP. However, at the end of the day, your data is your responsibility. If you suffer a data breach, the chances are that Amazon or Microsoft isn’t going out of business; but you might be.

Like any other IT and security initiative, you need to look at the cloud from a risk standpoint. There will most certainly be new risk areas to consider when putting your data in the cloud. However, you must weigh those risks against your current risks, something many fail to do.

For example, one concern that many cloud adopters have is administrative access to the hypervisor (i.e., the management plane). In the case of AWS, access is granted to administrators when justified, logged, and audited, and the credentials are revoked when the specific work is completed. This is in sharp contrast to what happens in most traditional data centers, where administrative users generally have access to all data, all the time, regardless of specific need-to-know. In this one area, access to your data is likely controlled better inside AWS than in a traditional data center. On the other hand, this presumption requires an implicit trust of the AWS personnel and key management systems. This is just a single example, however.

Compliance is another issue that AWS and Azure can help with, without much effort at your end. In many cases, moving data to the appropriate cloud services (in the right way) can satisfy a vast majority of your compliance requirements.

In the End, CSPs Are Safer Than You Think

If you’re reading this article, the chances are that you’re a security practitioner and likely very skeptical of the cloud; that’s a good thing! But, anecdotally speaking, it’s exceedingly rare to see a data center secured to standards comparable to those of a major CSP. While certain risks are inherent to cloud operations (or remote hosting in general), there are other benefits and risk-reduction mechanisms that must be weighed into the decision. In the end, it comes down to the larger security strategy, sound architecture and risk-based prioritization.

About GuidePoint Security

GuidePoint Security LLC provides customized, innovative and valuable information security solutions and proven cyber security expertise that enable commercial and federal organizations to successfully achieve their security and business goals. By embracing new technologies, GuidePoint Security helps clients recognize the threats, understand the solutions, and mitigate the risks present in their evolving IT environments. Headquartered in Herndon, Virginia, GuidePoint Security is a small business, and its classification can be found in the System for Award Management (SAM). Learn more at: www.guidepointsecurity.com.

Tips on Cloud Security Vol. 3: 7 Core Requirements for an AWS Cloud Security Strategy

In the recent post on “Establishing an Amazon Web Services (AWS) Cloud Security Strategy,” I introduced some of the adoption challenges facing Cloud Service Customers looking to strengthen their cloud security posture. While designing and maturing a Cloud Security Program can be complicated and challenging, I can recommend a baseline for beginning. The following high-level infrastructure and operations capabilities serve as 7 core requirements when designing a cloud security strategy.

1)    Account Management

All organizations using AWS should evaluate whether their monthly AWS spend can be identified and described. The inability to do so may be indicative of loose provisioning controls within AWS account(s) and/or poor governance around IT infrastructure expenditures. AWS has simplified cost controls through CloudWatch billing alerts, a price list API, daily usage reports, consolidated billing and the support of cost allocation tags which allow for full programmatic integration with other SaaS technologies.
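As a minimal sketch of the billing-alert piece, the alarm behind a CloudWatch billing alert can be expressed as a set of parameters for the CloudWatch `put_metric_alarm` API. The dollar threshold and SNS topic ARN below are hypothetical examples, not recommendations:

```python
# Sketch: build the parameters for a CloudWatch billing alarm.
# The threshold and SNS topic ARN are hypothetical placeholders.
def billing_alarm_params(threshold_usd, topic_arn):
    """Return keyword arguments suitable for CloudWatch put_metric_alarm()."""
    return {
        "AlarmName": "billing-over-{}-usd".format(threshold_usd),
        "Namespace": "AWS/Billing",          # billing metrics live here
        "MetricName": "EstimatedCharges",    # month-to-date estimated spend
        "Dimensions": [{"Name": "Currency", "Value": "USD"}],
        "Statistic": "Maximum",
        "Period": 21600,                     # billing data updates every few hours
        "EvaluationPeriods": 1,
        "Threshold": float(threshold_usd),
        "ComparisonOperator": "GreaterThanThreshold",
        "AlarmActions": [topic_arn],         # notify this SNS topic on breach
    }

params = billing_alarm_params(500, "arn:aws:sns:us-east-1:111122223333:billing-alerts")
print(params["AlarmName"])  # -> billing-over-500-usd
```

Passing a dict like this to a CloudWatch client (e.g. boto3's `put_metric_alarm(**params)`) creates an alert that fires when estimated month-to-date charges cross the threshold.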

How many AWS accounts does an organization need?

With the support of Consolidated Billing and AWS’ decision to include the AWS account number as an identifying factor within the Amazon Resource Name (ARN), an organization can simplify security controls and policies and gain greater flexibility by using several distinct AWS accounts. However, owning multiple AWS accounts is not a security requirement and a strong security posture can be established through a single AWS account.
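Because the account number is embedded in every ARN, policies and tooling can key off it directly. A small illustration of pulling the account out of an ARN (the account number shown is a placeholder):

```python
def arn_account(arn):
    """Extract the AWS account number embedded in an ARN.

    ARN format: arn:partition:service:region:account-id:resource
    """
    parts = arn.split(":", 5)
    if len(parts) < 6 or parts[0] != "arn":
        raise ValueError("not an ARN: {}".format(arn))
    return parts[4]  # the account-id field

print(arn_account("arn:aws:iam::123456789012:user/alice"))  # -> 123456789012
```

This is what makes multi-account strategies workable: an IAM policy or audit script can distinguish resources owned by a dev account from those owned by a prod account just by inspecting the ARN.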

Takeaway: Identify the number of AWS accounts needed and their owners, and integrate using consolidated billing. Create CloudWatch Billing Alerts.

2)    Identity Federation

Any organization using AWS almost certainly has multiple other services in use as well: GitHub, Google Apps, Office 365, Salesforce, Asana, Mavenlink, and so on. Managing user accounts in every service is not only an administrative challenge, but also presents a vulnerability as users are added to and removed from the organization. AWS supports centralized user administration through authentication using a SAML 2.0 compliant identity provider.

For smaller organizations, such as early adopters, services such as OneLogin and Okta provide online identity management services with centralized user administration capabilities. For the enterprise organization running Microsoft Active Directory, AWS supports AD FS as an identity provider using SAML 2.0. Additionally, AWS supports multiple options such as synchronization with an on-premises Active Directory, an Active Directory compatible instance hosted in AWS, and most recently, a managed Active Directory service.

Takeaway: Identify an identity provider and configure identity federation.

3)    Tagging Strategy

Tagging in AWS is critical to cloud security because it provides resource identification in an agile environment where resources can be scaled automatically at any time. Additionally, tagging supports asset classification and can be leveraged within Identity and Access Management (IAM) policies to provide appropriate controls defined by organizational data classification policies. Tagging is also a critical component of auditing capabilities within AWS. As resources are provisioned, terminated and modified, tagging supplements the infrastructure inventory collection process.

Takeaway: Define an organizational Tagging Strategy to facilitate resource identification and inventory based on data classification policies and other data access controls defined by the organization.
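The enforcement side of a tagging strategy can be sketched in a few lines: given the tag keys an organization requires (the keys below are illustrative, not prescriptive), flag any resource missing them.

```python
# Illustrative check that provisioned resources carry the tags an
# organizational tagging strategy requires. The required keys here
# (Owner, Environment, DataClassification) are example choices.
REQUIRED_TAGS = {"Owner", "Environment", "DataClassification"}

def missing_tags(resource_tags):
    """Return the set of required tag keys absent from a resource's tags."""
    return REQUIRED_TAGS - set(resource_tags)

tags = {"Owner": "platform-team", "Environment": "prod"}
print(missing_tags(tags))  # -> {'DataClassification'}
```

Run against the output of a resource inventory (e.g. the tag sets returned by the EC2 describe APIs), a check like this turns the tagging policy from a document into something auditable.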

4)    Identity and Access Management Policies

At the root of AWS security are IAM policies. IAM policies enable organizations to control access to AWS services and resources using either AWS provided policies or custom policies written and owned by the organization. IAM policies will allow or deny access to users, groups and roles based on the requirements defined by the organization.
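Because IAM denies by default, a least-privilege policy only needs to state what is allowed; everything else stays implicitly denied. A minimal example document, granting read-only access to a single hypothetical S3 bucket (the bucket name is a placeholder):

```python
import json

# Illustrative least-privilege IAM policy document. IAM's default-deny
# model means only the listed actions on the listed resources are
# permitted; the bucket name "example-reports" is a placeholder.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:ListBucket"],
            "Resource": [
                "arn:aws:s3:::example-reports",    # for ListBucket
                "arn:aws:s3:::example-reports/*",  # for GetObject
            ],
        }
    ],
}

print(json.dumps(policy, indent=2))
```

A document like this would be attached to an IAM group or role; users inherit exactly these permissions and nothing more.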

Takeaway: Define a least privilege and default-deny security model for the organization and create IAM policies to coincide with organization policies. Assign IAM policies to IAM users, groups and roles.

5)    Event Logging and Alerting

Accountability and auditing is critical to a security strategy in that it provides visibility to the organization. One method that introduces visibility to the organization is event logging and alerting. AWS CloudWatch and CloudTrail are available for many AWS services and monitor various levels of the infrastructure in order to track changes and events occurring within the network stack, resource provisioning and de-provisioning and calls to the AWS API. The AWS Simple Notification Service (SNS) facilitates forwarding these events to responsible teams and integrates with other services such as Splunk. Enabling AWS Config also provides configuration history and relationships between AWS resources. Modifications to AWS resources such as changes to available ingress and egress controls are logged through AWS Config.
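To make the logging concrete: each CloudTrail record is a JSON document identifying who called what, when. A sketch of pulling an alert-worthy summary out of one record (the record below is a trimmed, synthetic example, not real CloudTrail output):

```python
import json

# Sketch: extract the who/what/when from a CloudTrail record, e.g.
# before forwarding it to an SNS topic. The record is synthetic and
# trimmed to a few of the fields CloudTrail actually emits.
record = json.loads("""{
  "eventTime": "2016-01-07T13:01:43Z",
  "eventName": "AuthorizeSecurityGroupIngress",
  "eventSource": "ec2.amazonaws.com",
  "userIdentity": {"type": "IAMUser", "userName": "alice"}
}""")

summary = "{} called {} at {}".format(
    record["userIdentity"]["userName"],
    record["eventName"],
    record["eventTime"],
)
print(summary)
```

Filtering on event names like `AuthorizeSecurityGroupIngress` is exactly the kind of change-to-ingress-controls visibility the paragraph above describes.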

Takeaway: Enable CloudTrail, Config, VPC Flow Logs and CloudWatch Logs. Create and subscribe to notification topics to alert organization of changes within the AWS infrastructure. To go one step further, integrate logging with other services such as Splunk or SumoLogic.

6)    Remote Access

Securing remote access into AWS may be one of the first controls identified by enterprise cloud service customers for communication between on-premise services and cloud resources. Additionally, this may be one of the first controls an enterprise is ready to implement by integrating with an on-premise VPN solution. However, smaller organizations may not have an on-premise solution. Nevertheless, there are cloud-ready VPN solutions available that are fully supported within AWS. Many of these services can be found in the AWS Marketplace.

Takeaway: Disallow direct access to VPC resources and require VPN technology to access AWS resources configured within a VPC.

7)    Identify a Trusted Advisor

The cloud infrastructure will grow as the needs of the business evolve. Additionally, as AWS continues to add cloud services, organizations will need to ensure that cloud security strategies grow to cover the AWS footprint. AWS Trusted Advisor identifies (at a high-level) gaps against best practices in cost optimization, security, fault tolerance and performance improvement. AWS Trusted Advisor identifies over a dozen best practice security configurations and serves as a basic baseline recommendation tool; however, it cannot provide an in-depth analysis of a customer’s AWS environment. In order to strengthen the overall security posture, organizations should consider partnering with a Cloud Security company with proven expertise in the customer’s chosen CSP cloud infrastructure.

Note: AWS Trusted Advisor requires a premium support plan with AWS.

Takeaway: Identify a trusted advisor that understands your AWS environment and business goals.

Conclusion

There are many fundamentals to a Cloud Security Strategy that include encryption, compliance, risk management, application delivery, disaster recovery and more. Additionally, the core requirements identified in this Tip on Cloud Security will have a greater likelihood of success when they include existing organizational security strategies and input from their respective teams. Nevertheless, as organizations begin to deploy in the cloud (and specifically within AWS), having a core set of requirements to begin the discussion will help introduce cloud security early in the project’s lifecycle. 

Stay tuned for an upcoming post where we’ll review and discuss “Cloud Security Platforms.”


Tips on Cloud Security Vol. 2: Establishing an Amazon Web Services Cloud Security Strategy

Nearly ten years ago, Amazon officially announced that it would sell computing time and storage capacity, and it has been over a decade since Simple Queue Service (SQS) launched in 2004. Since then, Amazon Web Services (AWS) has developed hundreds of new services comprising Infrastructure-as-a-Service (IaaS) and Platform-as-a-Service (PaaS) solutions, and has become the go-to Cloud Service Provider (CSP) for early adopters such as startups, small and medium-sized businesses and nimble organizations. Organizations that took to the cloud soon after its inception have had time to mature cloud operation methodologies and form business culture around agile and continuous delivery solutions.

Despite the explosive growth of AWS’ market share, the enterprise market has only recently begun considering public CSPs as a viable infrastructure hosting option. Extending the enterprise data center to a public CSP is not only an architectural challenge; it also requires risk and compliance considerations that must be evaluated in order to avoid weakening the security posture of the organization. Moreover, security architects must understand how to translate existing controls, maintain visibility and adapt to a new technology while ensuring that speed of delivery and agility are not compromised.

Cloud Security Considerations for Early Adopters and New Customers

Despite continued cloud usage, many early adopters are asking the same questions as enterprise customers who are just beginning their journey into the public cloud. Organizations with a great deal of experience with AWS are asking “What do we secure?” while enterprises new to public cloud services are asking “How do we secure?” Below are a few prevalent challenges and considerations of both groups seeking to secure public cloud environments:

Common Challenges and Considerations

[Table image: common challenges and considerations for early adopters and enterprise customers seeking to secure public cloud environments]

While there isn’t a complete out-of-the-box cloud security solution that serves everyone equally, AWS has made significant progress in making resources available to assist with implementing a cloud security strategy. Less-regulated cloud service customers who have designed products for the cloud may find native AWS tools convenient and easy to integrate with current cloud infrastructure. Building an adequate security strategy can be complicated and challenging for an enterprise customer, however, given that AWS tools may not be sufficient for a mature security program. That said, as enterprise applications and infrastructure are architected and engineered for AWS, the enterprise cloud service customer will be able to use cloud-native solutions within their security strategies.

Cloud Security is a Shared Responsibility

Both the early adopters and enterprise organizations must have a strong understanding of the Shared Responsibility Model. The Shared Responsibility Model helps identify the boundaries between cloud service provider and customer security responsibilities. As organizations begin to develop cloud security strategies, identifying obligations is a critical success factor.

[Figure: the AWS Shared Responsibility Model. Source: https://aws.amazon.com/compliance/shared-responsibility-model/]

Responsibility boundaries shift when moving between IaaS, PaaS and SaaS. While a CSP may be responsible for certain layers of the cloud platform, cloud service customers must remain knowledgeable of where their own responsibility lies. Before moving to AWS, early adopters may not have had to consider infrastructure requirements below the application and data layers; however, they are now responsible for the security of additional layers. Conversely, the enterprise organization is accustomed to owning security at all layers, but can be relieved of managing layers such as physical security and the core network.

Conclusion

Cloud security today is a challenge similar to on-premises security when data centers were first being built: proper security practices were often an afterthought. An additional complication is the elasticity of the cloud. A cloud environment can become difficult to manage very quickly, and success will also depend on an organization’s ability to maintain visibility within such a dynamic environment.

Designing a comprehensive cloud security strategy within AWS will require adapting controls and risk management methodologies to an agile operations model, as well as an understanding of how to utilize the resources available for maintaining visibility. Lastly, it is imperative for those new to the public cloud to understand that responsibility boundaries may shift as organizations leverage IaaS, PaaS or SaaS solutions.

Stay tuned for an upcoming post where we’ll review and discuss 7 Core Requirements for a Solid Cloud Security Strategy.


Cloud Border Visibility

Maintaining network visibility is one of the biggest concerns in moving to the cloud. Fortunately, many traditional tools and techniques still work in a cloud environment. Network visibility is a broad topic. However, in this post, we will discuss maintaining network visibility at your cloud border.

Virtual Private Clouds

Amazon Web Services (AWS) and Microsoft Azure offer the capability of segmenting your infrastructure services into a private “virtual” network. In AWS this is called a Virtual Private Cloud (VPC), while in Azure it’s called a Virtual Network (VNet). In each platform, the capabilities are virtually identical.

Private Networks (as we’ll refer to them) allow you to segment your assets into a “virtual network.” These Private Networks allow you to create subnets, access control lists (ACL), route tables, and more. The Private Network itself can also have its own private IP space (RFC 1918) and a VPN gateway. This allows for – among other things – large hybrid cloud configurations.
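Before carving up a Private Network, it's worth verifying the planned address space actually sits in one of the RFC 1918 private ranges. Python's standard `ipaddress` module makes this a one-liner check (the sample CIDRs are arbitrary):

```python
import ipaddress

# The three RFC 1918 private address blocks.
PRIVATE_BLOCKS = [ipaddress.ip_network(n)
                  for n in ("10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16")]

def is_rfc1918(cidr):
    """True if the given CIDR block lies entirely inside an RFC 1918 range."""
    net = ipaddress.ip_network(cidr)
    return any(net.subnet_of(block) for block in PRIVATE_BLOCKS)

print(is_rfc1918("10.20.0.0/16"))    # True  -- inside 10.0.0.0/8
print(is_rfc1918("203.0.113.0/24"))  # False -- public (documentation) range
```

A check like this is handy in provisioning scripts, catching a public range pasted into a VPC or VNet definition before anything gets built on it.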

At the border of your Private Network, you can place a simple cloud-provided Network Address Translation (NAT) gateway instance and route your Internet traffic to and from your network. To summarize, Private Networks give the network engineer the appearance of a traditional network infrastructure.

Border Visibility

The problem with this configuration is in how access is controlled and reported at the border of the Private Network. Both AWS and Azure offer quick solutions to route traffic in and out of the network. These solutions act like stateful firewalls, simply brokering access to your network based on simple ACL rules.

AWS and Azure both have the ability to log Private Network firewall events. Using VPC Flow Logs (AWS) and Azure Diagnostics, it’s possible to pull firewall logs, as well as other security and operational metrics, into an existing log collection platform. However, there are a few capabilities still missing in this configuration.
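To show what "pulling firewall logs" looks like in practice, here is a sketch of parsing a VPC Flow Log record (the default version-2 format is space-delimited) into named fields. The sample line is synthetic, and the field names are adapted to Python identifiers:

```python
# Sketch: parse a VPC Flow Log record (default v2 format) into fields
# worth forwarding to a log platform. The sample line is synthetic.
FIELDS = ("version account_id interface_id srcaddr dstaddr srcport dstport "
          "protocol packets bytes start end action log_status").split()

def parse_flow_log(line):
    """Map one space-delimited flow log line to a field-name -> value dict."""
    return dict(zip(FIELDS, line.split()))

line = ("2 123456789012 eni-abc123 10.0.1.5 10.0.2.9 "
        "443 49152 6 10 8400 1454000000 1454000060 ACCEPT OK")
rec = parse_flow_log(line)
print(rec["action"], rec["srcaddr"])  # -> ACCEPT 10.0.1.5
```

Once parsed, records can be filtered (e.g. for `REJECT` actions at the border) before being shipped to the collection platform.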

First, the Cloud Service Providers’ gateway solutions (or simple public IPs, in the case of Azure) don’t provide the ability to inspect ingress or egress traffic using modern technologies. Unfortunately, promiscuous packet capture doesn’t work within these cloud environments. Therefore, activities such as layer-7 inspection (e.g. Next-Generation Firewall), network intrusion detection/prevention (IDS/IPS), and user behavior analytics are not possible unless you’re in-line with the communications channel.

Additionally, Cloud Service Providers’ NAT gateway solutions are proprietary and don’t fit in with the usual on-premises firewall solutions. For example, if your organization uses Palo Alto firewalls and manages them with Panorama, the cloud firewall device would not be able to be managed in the same interface. This makes configuration management and control more difficult for both the security ops and compliance teams.

In short, native Cloud Service Provider gateway solutions aren’t cut out for modern enterprise deployments. However, we routinely see these virtual gateways deployed in enterprise configurations.

Closing the Gap

Fortunately, there are other options available. Vendors like Palo Alto, Fortinet, Sophos, and Check Point have released their own virtual Unified Threat Management (UTM) appliances. The first step in closing this gap is – of course – using one of these enterprise appliances. If possible, you should choose one that matches your on-premises firewalls to help with management continuity.

But that’s not the end of the story.

Deploying a virtual UTM appliance is easy. Unfortunately, properly configuring the Private Network is a step that many skip. Each Private Network subnet (in both AWS and Azure) will need to be properly routed through this new UTM. Complicating this further, subnets in both AWS and Azure are all locally routable by default. That means, without overriding those default routes between subnets, your new UTM can’t segment your networks. In AWS, that’s rather easy; but in Azure, it requires some PowerShell work. The effects of misconfigured routes can range from broken connectivity to traffic bypassing the UTM entirely; not something we want after all this work.

In summary, the subnets within the Private Networks must still be isolated from one another with ACLs or NSGs, and route tables must specifically route traffic through the UTM. In a future post we’ll go over specifically how to properly configure a VPC in AWS and a VNet in Azure using a UTM appliance.
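The routing override described above boils down to data: for each subnet, a default route pointing at the UTM appliance instead of the provider's internet gateway. A sketch of generating those entries (the subnet CIDRs and UTM interface ID are placeholders):

```python
# Illustration of the routing override described above: each subnet's
# default route is directed through the UTM appliance rather than the
# provider's internet gateway. All identifiers are placeholders.
def utm_route_entries(subnet_cidrs, utm_target):
    """One default route per subnet, sending egress traffic via the UTM."""
    return [
        {"subnet": cidr, "destination": "0.0.0.0/0", "target": utm_target}
        for cidr in subnet_cidrs
    ]

routes = utm_route_entries(["10.0.1.0/24", "10.0.2.0/24"], "eni-utm0001")
print(len(routes), routes[0]["target"])  # -> 2 eni-utm0001
```

In AWS these entries would become route-table rows targeting the UTM's network interface; in Azure, the equivalent user-defined routes created via PowerShell.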

Summary

The native cloud infrastructure solutions do not provide the expected level of visibility needed for enterprise analysis. Furthermore, achieving that level of visibility is not as straightforward as we would like it to be. It’s important that security and network engineers take their time to architect the infrastructure, create (and analyze!) threat models, and to thoroughly test the cloud infrastructure.


Tips on Cloud Security: New Blog Series

Cloud service providers like Amazon Web Services and Rackspace are expanding so rapidly that it can be difficult to keep up with the pace of change and how IT security is impacted as a whole. The power of automating development, test and production at-scale has changed the way software is developed, as clearly demonstrated within the DevOps community. The ability to use low-cost, on-demand compute resources to scale and grow business operations is compelling, to say the least, and organizations are increasingly moving their IT environments to the cloud.

With the accelerated rate at which technology is evolving, there are more opportunities for security breaches—but here’s a refreshing development: security is finally becoming cool in the IT world! However, it’s often still an afterthought in the planning process of implementing a new technology, and taking an ad-hoc approach to security is typically a complex, frustrating and almost always expensive undertaking. As a result, the engineering team at GuidePoint has been diligent in looking for ways to help customers assess the technical challenges they may not realize they’re facing.

In response to the great cloud migration and the ever-changing tides of potential threats to security, we’ll be publishing a series of cloud security blogs over the coming months to help organizations understand how to better secure and operate their cloud environments. Topics will range from new cloud service reviews and architectural advice to hands-on technology integration how-tos.

We hope you’ll find this information helpful and join us in the conversation.


Time Inc. Highlights GuidePoint Security in the WSJ CIO Journal

Time Inc. (Time) recently mentioned GuidePoint Security (GuidePoint) in an article in the Wall Street Journal CIO Journal. Time leverages GuidePoint’s Amazon Web Services (AWS) and Payment Card Industry (PCI) expertise to guide them through the migration of applications into AWS. Specifically, GuidePoint provides expertise in implementing architectures and control frameworks that not only provide security, but also PCI compliance.

“We appreciate GuidePoint Security’s advice through this process. Their specific working knowledge of security and PCI compliance in AWS has been a great asset to us,” said Keith O’Sullivan, VP – Global Information Security for Time Inc.

Organizations are rapidly increasing their cloud adoption; however, Information Security and compliance considerations present both a challenge and an opportunity when moving to the cloud. Organizations must include Information Security and compliance experts on their project teams, or risk jeopardizing their cloud applications’ security and compliance.

GuidePoint provides this expertise through our Cloud Solutions and Compliance practices. We’ve worked with numerous clients, developing secure architectures, control frameworks, policies and procedures, and implementing security technologies across IaaS, PaaS, and SaaS platforms, enabling our clients to leverage the benefits of the cloud while maintaining or improving their Information Security and compliance posture.

Contact sales@guidepointsecurity.com or visit www.guidepointsecurity.com to learn more about our Cloud Solutions and Compliance practices.


Security Visibility in the Cloud – Logging and Monitoring in AWS

By now we’re all well aware that there is a virtually limitless number of logging and monitoring solutions available on the market. Visit the Amazon Web Services (“AWS”) Marketplace, and you’ll find plenty of options. In fact, it gets really crazy when you start examining security monitoring versus application performance monitoring, often with solutions performing one role better than the other, or even just one of the roles altogether. What’s interesting to me is the lack of common Enterprise logging and monitoring solutions available in the AWS Marketplace. Obviously you can deploy instances to handle implementations of solutions like ArcSight, McAfee, LogRhythm, or NetIQ, but Splunk is the only well-known commercial provider with solutions available in the marketplace.

Now, that’s just the commercial side… what about open source?  Let’s cover a few terms first, for those new to centralized logging.

Shipper – a system agent that collects and forwards, or ships, system and application logs to a centralized server.

Collector / Broker – a message broker is a system that collects and queues logs as an intermediary step to indexing the logs centrally for analysis, monitoring, and alerting. Its primary purpose is to ensure you don’t lose messages when or if your indexer falls behind, crashes, or otherwise becomes unavailable to receive logs.

Collector / Indexer – a system used to collect, parse, and store logs for searching, analysis, monitoring, and alerting.

Dashboard / Visualizer – the dashboard is used to aid in log analysis by providing a search interface, and in some solutions alerting.

Open source logging and monitoring solutions abound and, like the well-known solutions missing from the AWS Marketplace, are typically implemented on purpose-built instances within your AWS Virtual Private Cloud (you are using a VPC, right?). So what comprises an open source, centralized logging and monitoring solution?

Log shippers like Nxlog, Logstash, Lumberjack, and Fluentd. Brokers like Redis, RabbitMQ, and ZeroMQ. Indexers like Elasticsearch and… well, Elasticsearch seems to be the industry standard as far as open source goes, but there are also a lot of folks using centralized syslog-ng or Rsyslog. Dashboards, such as Graylog2 and Kibana (for Elasticsearch visibility, I like ElasticHQ), and security agents like OSSEC complete the architecture.
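As a toy model of how the shipper, broker, and indexer fit together (with Python collections standing in for Redis and Elasticsearch), the key property is the one noted in the definitions above: the broker queues messages so a slow or unavailable indexer doesn't lose them.

```python
from collections import deque

# Toy pipeline: a shipper emits log lines, a broker queues them, and an
# indexer drains the queue into a searchable store. Here a deque stands
# in for Redis/RabbitMQ and a plain list for Elasticsearch.
broker = deque()   # message broker: buffers between shipper and indexer
index = []         # indexer's store

def ship(line):
    broker.append(line)            # shipper -> broker

def index_pending():
    while broker:                  # indexer drains the broker at its own
        index.append(broker.popleft())  # pace; nothing queued is lost

ship("sshd[1042]: Failed password for root from 203.0.113.7")
ship("sudo: alice : TTY=pts/0 ; COMMAND=/bin/ls")
index_pending()
print(len(index))  # -> 2
```

If `index_pending()` stops running (the indexer crashes), shipped lines simply accumulate in `broker` until it recovers, which is exactly the durability argument for putting a broker in the middle.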

So, with all of these solutions available, why do I run into so many clients already in AWS, or moving to AWS, that have insufficient logging and monitoring, or worse, no logging and monitoring at all in their Cloud environment? Because Logging and Monitoring is Hard. Don’t get me wrong, it doesn’t require a rocket scientist on staff to get one or more of these commercial or open source solutions deployed. There’s preparation, communication, research, and other steps that have to be taken to properly implement logging and monitoring. I spent over a week researching available solutions, and building out proofs-of-concept in my virtualized lab to determine which solutions met my needs. That is the most critical point one should take away from this article; there is no right or wrong way to implement logging and monitoring in your AWS Cloud. As with all things IT, there is more than one way to accomplish your technical and business objectives. The trick is to find the right way for your organization.

Let’s look at some of the decision criteria that will come into play; this is not an exhaustive list:

People

  1. What expertise is available from my current staff – network engineering, development (if so, which languages), information security, incident handling, etc.?
  2. Do we have experience with a particular commercial solution?  A particular open source solution?
  3. Should I train existing staff, or hire staff with the relevant experience?
  4. Should I forget about managing this myself altogether and go with a Managed Services Provider?

Process

  1. Have we defined and documented the metrics we care about, and established a policy and process around ensuring this data is available and utilized?
  2. Have we defined and documented our business objectives behind logging and monitoring?
  3. Have we defined and documented regulatory mandates related to logging and monitoring? How do we keep our requirements and this documentation current?
  4. Have we determined roles and responsibilities involved in supporting the logging and monitoring initiative?

Technology

  1. Have we defined and documented technical requirements for our logging and monitoring solution? How do we architect our solution?
  2. Have we researched available options, and documented their strengths and weaknesses with regard to operating in our environment or culture?
  3. How do we facilitate a demonstration, proof-of-concept, or evaluation of the targeted solution?
  4. What do we log?  Where do we store logs?
  5. How do we alert appropriate personnel a problem has been detected?


After extensive research and comparison of features and functionality, I decided upon a hybrid ELK Stack for this case study. The ELK Stack comprises Elasticsearch, Logstash, and Kibana. I also added Graylog2 to support alerting, and OSSEC for file integrity and host intrusion prevention. There are numerous guides on the Interwebs to assist with deploying these solutions, so I will not go into installation and configuration in this post. I may write another article later to cover installation and configuration, but I’ve included links to all of the resources I used to get up and running at the bottom of this post. Note that, although this entire process covered a full week, the bulk of the final deployment was completed in about 12 hours. I built the final environment on AWS’ Free Tier, but didn’t even complete rolling out the dashboards before the Logstash Shipper/Logstash Collector/Elasticsearch Indexer combination on the central server decimated the t1.micro instance (Ubuntu 12.04) I deployed it on (Java consumed all available memory). Rather than tune the overwhelmed box in an attempt to stabilize it, I took advantage of being in AWS and scaled up to an m1.small instance; problem solved. In total, I spent less than $5 on my, admittedly limited, proof-of-concept.
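
For anyone retracing this on a memory-constrained instance, the Debian/Ubuntu Elasticsearch packages of that era read a JVM heap cap from /etc/default/elasticsearch. The value below is purely illustrative (not a tuning recommendation), and scaling up the instance, as I did, remains the simpler fix:

```conf
# /etc/default/elasticsearch (Debian/Ubuntu package layout)
# Cap the Elasticsearch JVM heap so it leaves memory for Logstash and the
# OS on a small instance. 256m is an illustrative value for a tiny PoC box.
ES_HEAP_SIZE=256m
```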


Figure 1: Kibana 3… Dead Sexy!

Take a look at the components I selected:

  1. Log Shipper – Logstash on Linux servers, Nxlog on Windows servers. Although Logstash is cross-platform, and is perfectly capable of shipping Windows Event Logs, IIS, and MSSQL logs, the author of Nxlog convinced me in his post “Why Nxlog is better for Windows.”
  2. Broker – This case study doesn’t incorporate the use of a Broker. I was originally going to include RabbitMQ in the architecture, but version dependencies led me down a path that was in danger of kludging up the whole study. In a production environment, you definitely need to use a broker to provide scalability and resiliency, but I pushed onward without including it.
  3. Indexer – Elasticsearch. Ridiculously easy decision for me, since Windows servers are in my test environment, and I was interested in testing something other than syslog.
  4. Dashboard / Visualization – Kibana 3 is dead sexy, and I’m an eye-candy kind of guy. I’d gone into this planning to use just Graylog2, since it is a great visualization tool itself and includes alerting capability, but after seeing screenshots of the new and improved Kibana 3.x, I couldn’t help deploying it, too. Regarding alerting, Nagios is often used in concert with Graylog2 for its ability to “roll up” alerts. If you’re interested in configuring email alerting/alarms for your Graylog2 deployment, Larry Smith has a great blog post to get you started. Last, I also installed the ElasticHQ plugin to monitor the health of my cluster of one.
  5. As an aside, I also deployed OSSEC to the Linux and Windows servers for file integrity monitoring and intrusion prevention.

Figure 2: ElasticHQ… Elasticsearch cluster health, and a whole lot more!
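
To show what the Windows side of the shipping looks like, here is a sketch of an nxlog.conf that forwards Windows Event Log entries as JSON to a Logstash TCP input. The collector host and port are placeholders of my own; adjust them to your environment.

```conf
## nxlog.conf fragment (illustrative host/port).
## Reads the Windows Event Log and ships each record as JSON over TCP
## to a Logstash tcp input on the central collector.
<Extension json>
    Module  xm_json
</Extension>

<Input eventlog>
    Module  im_msvistalog
</Input>

<Output logstash>
    Module  om_tcp
    Host    collector.internal.example.com
    Port    3515
    Exec    to_json();
</Output>

<Route eventlog_to_logstash>
    Path    eventlog => logstash
</Route>
```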

A note about the final deployment: ultimately, the redesigned, recently released Graylog2 v0.20.1 didn’t work out like I’d hoped. Everything was running smoothly, and based on configuration guidance and the absence of error output, it seemed I was set up properly, but I never saw the data from Elasticsearch in Graylog2. I spent the last few moments I had allocated to this project experimenting with alternate configurations, and finally strayed so far from my working example that I had to give up. So, after a week of research and implementation time, a diagram of what we have can be seen in Figure 3.


Figure 3: AWS Logging and Monitoring PoC Architecture

This was a trivial setup – I’m using a single box for a local Logstash shipper, an Elasticsearch index, MongoDB for Graylog2, and three different web interfaces. In a production system, ensure you use a more appropriate architecture: separate each component, utilize multiple Availability Zones, insert a broker to receive messages from log shippers, utilize SSL, etc.

Although I didn’t have enough time to sort out Graylog2, and get some alerting configured, I’m pleased with the overall outcome of my Security Visibility experiment. I found OSSEC to be an excellent “partner” in my quest for visibility, despite only utilizing and documenting the file integrity portion of its functionality.


Figure 4: OSSEC Web UI

Nxlog works perfectly for shipping Windows event logs, and of course, the lovely Kibana ties everything together and puts a nice bow on the concept of visualization.


Figure 5: Analyzing events with Kibana 3


 

Although this was not a terribly difficult experiment from a technical perspective, I still wondered, “Is there another | quicker | better way to gain security visibility in AWS?” Well, yes and no. Yes, there’s an easy way to get security visibility, plus AWS automation to boot; no, because despite this gem of AWS security visibility, I will still recommend a centralized logging and monitoring platform in AWS. So, what’s this solution, you ask? CloudPassage Halo. But wait, there’s more! Halo has an API that’s made it possible for several SIEM solutions to integrate with it, sharing the Halo security visibility love in a centralized way within your existing, or planned, logging and monitoring deployment.

Halo has enough features and functionality to warrant its own blog post, so I won’t go into those here. Suffice it to say, anyone looking for security visibility, automation, or both in AWS should definitely have a look at what CloudPassage has to offer.

 


Figure 6: Windows security events captured by Halo

Conclusion

Logging and monitoring is hard, but there are more than enough commercial and open source tools available to fit any size organization, with any size of budget. Attaining security visibility and appropriate incident handling isn’t just the right thing to do from a best practice perspective; many standards, regulations, and laws mandate them. So, regardless of the type of solution or solutions you select, choose and implement something, and gain insight into security incidents you may not have any idea are happening. After all, inadequate visibility is better than no visibility at all.

For additional information on this subject and the opportunity to ask questions, please register for our webinar, “Security Visibility in the Cloud – Logging and Monitoring in AWS,” occurring on May 1st at 2pm (EST).

 

GuidePoint Welcomes Joey Peloquin as Director of Professional Services

RESTON, Va., January 7, 2014 – GuidePoint Security LLC, a leading provider of innovative information security solutions, today announced that industry veteran Joey Peloquin has joined the company’s growing professional services team as Director of Professional Services.  GuidePoint Security’s customized, innovative information security solutions enable commercial and federal organizations to more successfully secure IT resources. The company will leverage Peloquin’s experience to further mature its world-class Information Assurance and Technology Integration services, including application, cloud and mobile security offerings.

“Joey brings a wealth of real-world expertise in the dynamic fields of application, cloud, and mobile security,” said Bryan Orme, Principal at GuidePoint Security. “This expertise, coupled with his proven record of building elite technical teams, furthers our momentum of providing innovative security solutions for our clients’ most complicated information security challenges.”

As commercial and federal organizations further embrace today’s data-centric technologies, including mobile and cloud computing, the need to implement effective information security controls becomes paramount. Traditional thinking and controls no longer appropriately safeguard data and assets against emerging threats. GuidePoint Security provides customized innovative solutions to address the real-world information security threats that its customers face.

“I joined GuidePoint because they have managed to attract and retain a team of brilliant consultants of varying backgrounds, in addition to the founders and leadership that are veterans in the information security industry. In a nutshell, GuidePoint provides the support required to build a successful consulting practice, and the openness and attitude of sharing that will help make sure the journey together is a fun and successful one,” said Peloquin.

Peloquin’s 13-plus years of experience in the information technology industry include a specialization in all areas of information security. Prior to joining the GuidePoint Security team, Joey served as Worldwide Security Architect for F5 Networks, focusing on mobile and application security, and authentication and access security. His previous experience also includes managing application and mobile security consulting teams at national security consulting firms, and establishing HP Software’s professional security services division after the acquisition of SPI Dynamics.

About GuidePoint Security

GuidePoint Security LLC provides customized, innovative and valuable information security solutions that enable commercial and federal organizations to more successfully achieve their security and business goals. By embracing new technologies, GuidePoint Security helps our clients recognize the threats, understand the solutions, and mitigate the risks present in their evolving IT environments. For more information, visit www.guidepointsecurity.com.

Egress Controls in Amazon’s AWS Virtual Private Cloud (VPC)

I recently had an in-depth conversation with a client discussing security best practices in Amazon’s Web Services (AWS) Infrastructure-as-a-Service (IaaS). Specifically, the client was interested in applying egress controls to their web, application, and database tiers. Given the sensitivity of the data contained within their AWS application, my client’s largest concern was limiting a potential breach to prevent a successful attacker from exfiltrating their application’s data.

Before diving into my recommendations, it’s important to understand two key security controls provided by AWS. Those who’ve worked with AWS EC2 instances should be familiar with Security Groups. For those of you who aren’t, Security Groups equate to firewall rules that are applied to a specific EC2 instance (or group of instances). What some of you may not know is that Security Groups actually perform stateful inspection (this is important to those of you with PCI implications). When your application is architected directly in EC2 (not within a Virtual Private Cloud, or VPC), Security Groups can only be applied to inbound traffic. Obviously, this doesn’t help with my client’s objective of implementing egress controls.

AWS Security Group Inbound Rules

The second security control provided by AWS is Network Access Control Lists, or Network ACLs. Network ACLs differ from Security Groups in that they are only available within VPCs and are generally intended to be applied to networks rather than to individual EC2 instances within a VPC. For example, a common Network ACL rule would allow only 1433/tcp (MS SQL) from your public subnet to your private subnet. While utilizing a /32 netmask will allow you to implement Network ACLs for specific hosts, you should note that Network ACLs are NOT stateful (again, remember that Security Groups are). This requires you to implement matching inbound and outbound Network ACL rules.
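
Because the rules must be mirrored by hand, it can help to generate them in pairs. The sketch below builds the inbound entry for a service and the matching outbound entry for return traffic to client ephemeral ports, shaped like the entries the EC2 `CreateNetworkAclEntry` API expects. The helper name, rule numbers, and CIDR are my own illustration, not an official API.

```python
# Build the matching inbound/outbound Network ACL entries that a stateless
# ACL requires for a single TCP service. Illustrative helper, not an AWS API.

def nacl_entry_pair(rule_number, port, cidr):
    """Return (inbound, outbound) entry dicts for one allowed TCP service."""
    inbound = {
        "RuleNumber": rule_number,
        "Protocol": "6",              # 6 = TCP
        "RuleAction": "allow",
        "Egress": False,              # inbound (ingress) rule
        "CidrBlock": cidr,
        "PortRange": {"From": port, "To": port},
    }
    # Return traffic leaves from the service port toward the client's
    # ephemeral port range, and needs its own explicit egress rule.
    outbound = {
        "RuleNumber": rule_number,
        "Protocol": "6",
        "RuleAction": "allow",
        "Egress": True,
        "CidrBlock": cidr,
        "PortRange": {"From": 1024, "To": 65535},
    }
    return inbound, outbound

# Example: allow MS SQL (1433/tcp) from the public subnet.
inbound, outbound = nacl_entry_pair(100, 1433, "10.0.1.0/24")
```

Generating both halves from one call makes it harder to forget the outbound rule, which is the usual mistake when people treat Network ACLs as if they were stateful.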

AWS Network ACLs

So, back to egress controls. Regardless of your application’s architecture within AWS (just EC2 instances or utilizing a VPC), you can apply egress controls directly on your EC2 instances (on the OS itself). However, this often increases the overhead of the EC2 instances to levels unacceptable to development teams. So what other options do we have? A lesser-known feature of VPCs is the ability to apply outbound rules to your Security Groups. For example, you can specify that your MS SQL server is not allowed to communicate directly with the Internet, but is only allowed access to 80/tcp and 443/tcp for Windows Updates through a NAT server in your public subnet. Such a setup accomplishes the goal of implementing egress controls on your EC2 instances without increasing their overhead.
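
Expressed programmatically, the outbound policy described above is just a pair of egress rules. The helper below is an illustrative sketch (the function name, group ID, and CIDR are mine) that builds them in the IpPermissions shape used by AWS SDKs such as boto3:

```python
# Build outbound (egress) Security Group rules allowing only 80/tcp and
# 443/tcp to a NAT subnet. Illustrative sketch; values are placeholders.

def web_egress_rules(nat_cidr):
    """Return IpPermissions entries allowing HTTP/HTTPS egress to nat_cidr."""
    return [
        {
            "IpProtocol": "tcp",
            "FromPort": port,
            "ToPort": port,
            "IpRanges": [{"CidrIp": nat_cidr}],
        }
        for port in (80, 443)
    ]

rules = web_egress_rules("10.0.0.0/24")
# With boto3, this would be applied roughly as:
# import boto3
# boto3.client("ec2").authorize_security_group_egress(
#     GroupId="sg-0123456789abcdef0", IpPermissions=rules)
```

Because Security Groups are stateful, no matching inbound rule is needed for the return traffic, unlike the Network ACL case.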

AWS Security Group Outbound Rules

After explaining the enhanced security features of an AWS VPC, my client made a case to his development team in support of re-architecting the application inside of a VPC. Fortunately for my client, the security team was engaged during the design phase of their organization’s AWS application and implementing such a change was a lot less painful than re-designing an existing application. That isn’t to say that such a redesign can’t be successfully performed on an established application, but we all know it’s a lot easier to do earlier in the game.

To recap what we’ve discussed…

  • Security Groups are analogous to firewall rules and can be applied to specific EC2 instances (or groups of instances)
  • Security Groups provide stateful inspection
  • Standard EC2 instances (those not part of a VPC) allow only inbound Security Group rules
  • Network ACLs can only be applied to entire networks (subnets)
  • Network ACLs do NOT provide stateful inspection
  • Network ACLs are only available within VPCs
  • VPCs enable outbound rules to be added to Security Groups and can be applied directly to individual or groups of EC2 instances that are part of a VPC
  • Inbound and outbound Security Group rules do NOT add overhead to the EC2 instances they are applied to

Hopefully you found this information helpful, and it prompts further investigation into VPCs when you are looking at how to apply egress controls to your AWS applications.