Enabling Public Cloud Application Performance and Security

There has been a lot of talk about cloud security and how to monitor SaaS and IaaS access and usage, both sanctioned and unsanctioned. However, one thing that needs to be talked about more is how applications that are known, tracked and managed are being deployed in the cloud, via IaaS.

When deploying applications on premise, whether in a datacenter or in a DMZ, there are firewalls, network monitoring and various security controls that are known and already in place before an application even enters the discussion. However, when moving an application to the cloud via IaaS, none of those security controls exist by default, despite what customers might believe. This specifically applies to application hosting front ends such as application delivery controllers and web application firewalls (ADC/WAFs).

Unfortunately, many cloud hosting deployments are being managed by development teams, not network or security teams. And while development teams are professionals who know what they are doing, they are often unaware of the controls that network and security teams have put in place before their applications were ever deployed. An example of this is how many development teams deploy the default application delivery controllers offered by IaaS providers. These ADCs appear to be point-and-click and cheap. And they are.

The problem is that they lack the performance and security features that typical enterprise ADC/WAF appliances, virtual or otherwise, offer. One of the clearest examples is DAST integration, which allows an application to be scanned and the resulting vulnerabilities to be virtually patched at the WAF. Another is the ability to automate security controls and requirements through industry-standard DevOps tools like Ansible, Puppet and Chef, as well as classic scripting languages like Python and PowerShell. Further, a product like F5 ASM leverages broad industry support: application templates can be deployed with little or no customization, and for custom applications, a custom security policy can be created with little or no user interaction through a Rapid Deployment Policy interface.
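
As a rough illustration of that automation point, a pipeline step can push a WAF policy through the ADC's REST management interface. The sketch below uses Python's requests library; the host, credentials, endpoint path and template name are placeholders and assumptions, so consult your vendor's documentation (for F5, the iControl REST guides) before relying on it.

```python
# Hypothetical sketch: creating a WAF policy from a template via an ADC/WAF REST
# management API as part of a deployment pipeline. Host, credentials, endpoint path
# and template name are assumptions, not a documented recipe.
import requests

ADC_HOST = "https://adc.example.com"            # assumed management address
session = requests.Session()
session.auth = ("admin", "change-me")           # pull from a vault in real pipelines
session.verify = "/etc/ssl/certs/adc-ca.pem"    # pin the management-plane CA

def deploy_waf_policy(policy_name: str, template: str) -> None:
    """Create a WAF policy from a vendor-supplied template."""
    resp = session.post(
        f"{ADC_HOST}/mgmt/tm/asm/policies",     # assumed iControl REST-style path
        json={"name": policy_name, "templateReference": {"name": template}},
        timeout=30,
    )
    resp.raise_for_status()
    print(f"Created policy {policy_name}: {resp.json().get('id')}")

if __name__ == "__main__":
    deploy_waf_policy("corp-web-app", "POLICY_TEMPLATE_RAPID_DEPLOYMENT")
```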

The final value, and probably the most critical, is a must-have for any government agency: a true enterprise virtual ADC/WAF offers FIPS-level encryption for application data in flight. Without integration with physical FIPS-hardened appliances, the private keys needed to secure SSL traffic in transit cannot be stored properly. The default ADC/WAFs supplied by the major IaaS providers cannot do this, so an enterprise software version is required.

Besides the added functionality, using a software enterprise ADC/WAF like F5 also provides consistency across on premise physical, on premise virtual and cloud application hosting. First and foremost, no new learning is required to ensure that the ADC/WAFs in the cloud meet security policy and are configured correctly. Any security issue can be resolved in the same manner currently used for on premise applications, which matters because most agencies will remain hybrid computing environments for some time. A single management plane can be used for all deployments, so no additional training or risk of misconfiguration is added to the application life-cycle.

This consistency can be the difference between resolving a security issue with a few clicks at the proxy of an enterprise solution and scrambling to figure out how to patch or fix code in a production application that now has a major vulnerability. A common example is Heartbleed. When it hit, enterprises with F5 in front of their applications were able to protect all of them, in some cases hundreds, simply by pushing out a mitigation at the proxy and then mapping out the patching and code fixes with more time and planning.

For a deeper dive into the differences between default IaaS ADC/WAFs, HSM integration to secure application traffic in flight, and how to securely move applications to the cloud, join GuidePoint Security, F5 and Thales Security on Feb 27th for our live webinar. Click here to register.

About the Author

Jean-Paul Bergeaux, Federal CTO, GuidePoint Security

With more than 18 years of experience in the Federal technology industry, Jean-Paul Bergeaux is currently the Federal CTO for GuidePoint Security. JP’s career has been marked by success in technical leadership roles with ADIC (now Quantum), NetApp, Commvault and SwishData. Jean-Paul focuses on identifying customers’ challenges and architecting innovative solutions to solve their complex problems. He is also a thought leader on topics that are top of mind for Federal IT Managers, such as Cyber Security, VDI, Big Data, and Backup & Recovery.

GuidePoint Security Achieves AWS Security Competency Status

HERNDON, VA – June 8, 2017 – GuidePoint Security announced today that it has achieved Security Competency Partner status with Amazon Web Services (AWS). This designation recognizes that GuidePoint has demonstrated deep expertise that helps its clients achieve their cloud security goals.

Becoming a Security Competency Partner differentiates GuidePoint as an AWS Partner Network (APN) member that provides specialized consulting services designed to help enterprises adopt, develop and deploy complex security projects on AWS. To receive the designation, APN Partners must possess deep expertise and experience on AWS.

AWS Competencies are only awarded to APN partners, like GuidePoint Security, that have demonstrated technical proficiency and proven customer success in specialized solution areas. GuidePoint Security is also an Authorized Government Partner and became an APN Advanced Consulting Partner in 2017.

“GuidePoint is proud to be one of the first APN partners to achieve Security Competency Partner status,” said Bryan Orme, Principal, Information Assurance. “As a security-focused consultancy, our team is dedicated to helping companies develop cloud security strategies and delivering cloud security solutions by combining our proven security expertise with the range of AWS security tools.”

The AWS Cloud enables scalable, flexible, and cost-effective solutions for organizations ranging from startups to global enterprises. To support the seamless integration and deployment of these solutions, AWS established the Security Competency Program to help customers identify Consulting and Technology APN Partners with deep industry experience and expertise. In addition to general Cloud Security Architecture and Strategy services, GuidePoint provides architectural reviews specifically focused on AWS environments. GuidePoint’s cloud security architects and engineers work with our clients to understand their operational needs, assess their current security posture, and provide relevant, prioritized, and actionable remediation guidance and recommendations for further improvement.

About GuidePoint Security

GuidePoint Security LLC provides innovative and valuable cybersecurity solutions and expertise that enable organizations to successfully achieve their missions. By embracing new technologies, GuidePoint Security helps clients recognize the threats, understand the solutions, and mitigate the risks present in their evolving IT environments. Headquartered in Herndon, Virginia, GuidePoint Security is a small business, and its classification can be found in the System for Award Management (SAM). Learn more at: www.guidepointsecurity.com.

Security as Added Value When Planning a Cloud Migration Strategy

The benefits organizations derive from adopting a cloud migration strategy are driven by several compelling factors that are as diverse as the business’ motivation for moving into the cloud. Organizations are well-aware of cloud computing’s value proposition, such as reduced disaster recovery costs, improved architectural flexibility, zero capital expenses to build out a new data center, and much more. However, one of the less obvious and less understood factors, yet one that cloud customers benefit greatly from, is improved infrastructure security.

Cloud Service Providers, such as Amazon Web Services (AWS), have demonstrated their commitment to security by achieving compliance with numerous external compliance programs. Additionally, in an effort to ensure that AWS customers are well-protected under the Shared Responsibility Model, AWS has proactively published best practices, such as Security by Design (SbD) and AWS Security Best Practices, and has continued to deliver native AWS services that improve security operations (e.g. WAF, Inspector, Config and CloudTrail).

When Deltek acquired HRsmart (now Deltek Talent Management), they began to plan the migration of the application to AWS, where Deltek has been offering SaaS solutions for more than six years. Deltek’s cloud architects designed a cloud architecture that leveraged AWS Security Best Practices and ensured that their cloud infrastructure was compliant with their own internal security standards. Deltek then engaged GuidePoint Security’s Cloud Security Practice to provide third-party assurance for the secure design of their AWS architecture.

GuidePoint leveraged a custom solution consisting of automation, the AWS SDK, and AWS services to deliver a Cloud Security Health Check for Deltek’s AWS environment. The evaluation criteria for Deltek’s Cloud Security Health Check were based upon information security industry benchmarks, AWS Security Best Practices, and GuidePoint’s Cloud Security Framework. The GuidePoint Cloud Security Framework is used by GuidePoint to evaluate AWS environments against cloud security best practices defined by industry standards, including the Cloud Security Alliance Cloud Controls Framework, the CIS AWS Foundations Benchmark, and more general standards such as the PCI Data Security Standard.
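
To give a feel for what automated checks of that kind look like (this is a minimal sketch, not GuidePoint’s actual tooling), the snippet below uses the AWS SDK for Python (boto3) to test two items in the spirit of the CIS AWS Foundations Benchmark: whether CloudTrail is multi-region and whether any security group leaves SSH open to the world. It assumes read-only credentials are already configured.

```python
# A minimal sketch of automated cloud security checks using boto3. Assumes
# read-only AWS credentials are configured in the environment.
import boto3

def cloudtrail_is_multi_region() -> bool:
    """Is at least one CloudTrail trail configured as multi-region?"""
    trails = boto3.client("cloudtrail").describe_trails()["trailList"]
    return any(t.get("IsMultiRegionTrail") for t in trails)

def world_open_ssh_groups() -> list:
    """Return security group IDs that allow SSH (port 22) from 0.0.0.0/0.
    Simplified: only checks rules that name port 22 explicitly."""
    ec2 = boto3.client("ec2")
    findings = []
    for sg in ec2.describe_security_groups()["SecurityGroups"]:
        for perm in sg.get("IpPermissions", []):
            if perm.get("FromPort") == 22 and any(
                r.get("CidrIp") == "0.0.0.0/0" for r in perm.get("IpRanges", [])
            ):
                findings.append(sg["GroupId"])
    return findings

if __name__ == "__main__":
    print("CloudTrail multi-region:", cloudtrail_is_multi_region())
    print("Security groups with SSH open to the world:", world_open_ssh_groups())
```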

By leveraging infrastructure security provided by AWS, utilizing the combination of GuidePoint’s expert security knowledge and cloud operations experience, and being armed with an understanding of the Shared Responsibility Model, organizations like Deltek are able to deploy to AWS with confidence.

Tips on Cloud Security Vol. 4: Is Your Data Safe in the Cloud?

Is your data safe in the cloud? Security practitioners often have very strong opinions on the security of data in the cloud. In fact, many believe that hosting data in a remote data center simply cannot be more secure than hosting data locally.

I spent 14 years working for the US Army and other government agencies. As such, I fully understand the reluctance that many feel about shifting data and operations to the cloud. The simple notion of hosting any content outside of a government-controlled enclave was, for most of my tenure, heresy.

I’d like to begin by scoping this article to include only data that falls into the categories of Infrastructure as a Service (IaaS) and Platform as a Service (PaaS). Controlling and securing data in Software as a Service (SaaS) environments, like Google Drive or Salesforce, will be discussed in another article.

The Shared Responsibility Model

In an earlier article, Jonathan Villa discussed the concept of the Shared Responsibility Model. The premise of that model is that the cloud service provider (CSP), by necessity, assumes responsibility for some of your security burden. Depending on the service category (IaaS or PaaS), you and your CSP share a different set of responsibilities for the security of your data.

In an IaaS model (e.g. compute VMs, virtual networks), the customer is given the most flexibility in services, but also has a greater responsibility for securing the data. Inversely, with PaaS services (e.g. hosted databases, queue systems), the consumer may be somewhat restricted by the service offerings, but they also don’t (and usually can’t) control any of the underlying operating system or security infrastructure.

But Not-So-Shared Responsibility?

The Shared Responsibility Model provides a convenient framework in which to delineate between the responsibilities of you and your CSP. However, at the end of the day, your data is your responsibility. If you suffer a data breach, the chances are that Amazon or Microsoft aren’t going out of business; but you might.

Like any other IT and security initiative, you need to look at the cloud from a risk standpoint. There will most certainly be new risk areas to consider when putting your data in the cloud. However, you must weigh those risks against your current risks, something that many fail to consider.

For example, one concern that many cloud adopters have is administrative access to the hypervisor (i.e. the management plane). In the case of AWS, access is granted to administrators when justified, logged, and audited, and the credentials are revoked when the specific work is completed. This is in sharp contrast to what happens in most traditional data centers, where administrative users generally have access to all data, all the time, regardless of specific need-to-know. In this one area, access to your data is likely controlled better inside AWS than in a traditional data center. On the other hand, this presumption requires an implicit trust of the AWS personnel and key management systems. This is just a single example, however.

Compliance is another issue that AWS and Azure can help with, without much effort at your end. In many cases, moving data to the appropriate cloud services (in the right way) can satisfy a vast majority of your compliance requirements.

In the End, CSPs Are Safer Than You Think

If you’re reading this article, the chances are that you’re a security practitioner and likely very skeptical of the cloud; that’s a good thing! But, anecdotally speaking, it’s exceedingly rare to see a data center secured to standards comparable to those of a major CSP. While certain risks will be inherent to cloud operations (or remote hosting in general), there are other benefits and risk reduction mechanisms that must be weighed into the decision. In the end, it comes down to the larger security strategy, sound architecture and risk-based prioritization.

About GuidePoint Security

GuidePoint Security LLC provides customized, innovative and valuable information security solutions and proven cyber security expertise that enable commercial and federal organizations to successfully achieve their security and business goals. By embracing new technologies, GuidePoint Security helps clients recognize the threats, understand the solutions, and mitigate the risks present in their evolving IT environments. Headquartered in Herndon, Virginia, GuidePoint Security is a small business, and classification can be found with the System for Award Management (SAM). Learn more at: www.guidepointsecurity.com.

Tips on Cloud Security Vol. 3: 7 Core Requirements for an AWS Cloud Security Strategy

In the recent post on “Establishing an Amazon Web Services (AWS) Cloud Security Strategy,” I introduced some of the adoption challenges facing Cloud Service Customers looking to strengthen their cloud security posture. While designing and maturing a Cloud Security Program can be complicated and challenging, I can recommend a baseline to begin with. The following high-level infrastructure and operations capabilities serve as 7 core requirements when designing a cloud security strategy.

1)    Account Management

All organizations using AWS should evaluate whether their monthly AWS spend can be identified and described. The inability to do so may be indicative of loose provisioning controls within AWS account(s) and/or poor governance around IT infrastructure expenditures. AWS has simplified cost controls through CloudWatch billing alerts, a price list API, daily usage reports, consolidated billing and support for cost allocation tags, which allow for full programmatic integration with other SaaS technologies.

How many AWS accounts does an organization need?

With the support of Consolidated Billing and AWS’ decision to include the AWS account number as an identifying factor within the Amazon Resource Name (ARN), an organization can simplify security controls and policies and gain greater flexibility by using several distinct AWS accounts. However, owning multiple AWS accounts is not a security requirement and a strong security posture can be established through a single AWS account.

Takeaway: Identify the number of AWS accounts needed and their owners, and integrate using consolidated billing. Create CloudWatch Billing Alerts.
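
As a minimal sketch of that takeaway (assuming billing alerts are already enabled in the account’s billing preferences, and that the SNS topic ARN below is replaced with a real one), a CloudWatch billing alarm can be created with a single boto3 call. Note that billing metrics are only published in the us-east-1 region.

```python
# A minimal sketch of a CloudWatch billing alarm. The SNS topic ARN is a
# placeholder; billing metrics live only in us-east-1.
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

cloudwatch.put_metric_alarm(
    AlarmName="monthly-spend-over-1000-usd",
    Namespace="AWS/Billing",
    MetricName="EstimatedCharges",
    Dimensions=[{"Name": "Currency", "Value": "USD"}],
    Statistic="Maximum",
    Period=21600,                      # evaluate every six hours
    EvaluationPeriods=1,
    Threshold=1000.0,                  # alert once estimated charges exceed $1,000
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:billing-alerts"],  # placeholder
)
```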

2)    Identity Federation

Any organization using AWS is guaranteed to have multiple services in use. For example, GitHub, Google Apps, Office 365, Salesforce, Asana, Mavenlink, etc. Managing user accounts in every service is not only an administrative challenge, but also presents a vulnerability as users are added and removed from the organization. AWS supports centralized user administration through authentication using a SAML 2.0 compliant identity provider.

For smaller organizations, such as early adopters, providers such as OneLogin and Okta offer online identity management services with centralized user administration capabilities. For the enterprise organization running Microsoft Active Directory, AWS supports AD FS as an identity provider using SAML 2.0. Additionally, AWS supports multiple options such as synchronization with an on-premise Active Directory, an Active Directory compatible instance hosted in AWS, and most recently, a managed Active Directory service.

Takeaway: Identify an identity provider and configure identity federation.
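
A minimal sketch of that configuration, assuming your identity provider (AD FS, Okta, OneLogin, etc.) has already exported its SAML metadata and that the file path and role name below are placeholders, looks roughly like this:

```python
# A minimal sketch of identity federation setup: register the IdP's SAML metadata
# with IAM, then create a role that federated users can assume.
import json
import boto3

iam = boto3.client("iam")

with open("idp-metadata.xml") as f:            # exported from your identity provider
    metadata = f.read()

provider_arn = iam.create_saml_provider(
    SAMLMetadataDocument=metadata, Name="corp-idp"
)["SAMLProviderArn"]

trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Federated": provider_arn},
        "Action": "sts:AssumeRoleWithSAML",
        "Condition": {"StringEquals": {"SAML:aud": "https://signin.aws.amazon.com/saml"}},
    }],
}

iam.create_role(RoleName="federated-readonly",
                AssumeRolePolicyDocument=json.dumps(trust_policy))
```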

3)    Tagging Strategy

Tagging in AWS is critical to cloud security because it provides resource identification in an agile environment where resources can be scaled automatically at any time. Additionally, tagging supports asset classification and can be leveraged within Identity and Access Management (IAM) policies to provide appropriate controls defined by organizational data classification policies. Tagging is also a critical component of auditing capabilities within AWS. As resources are provisioned, terminated and modified, tagging supplements the infrastructure inventory collection process.

Takeaway: Define an organizational Tagging Strategy to facilitate resource identification and inventory based on data classification policies and other data access controls defined by the organization.
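
The sketch below shows one way a tagging standard might be enforced programmatically; the tag keys and values are illustrative placeholders, not a prescribed taxonomy.

```python
# A minimal sketch of enforcing a tagging standard: apply required tags to an
# instance and report instances missing them. Tag keys/values are placeholders.
import boto3

REQUIRED_TAGS = {"Owner", "CostCenter", "DataClassification"}
ec2 = boto3.client("ec2")

def tag_instance(instance_id: str, owner: str, cost_center: str, classification: str):
    ec2.create_tags(
        Resources=[instance_id],
        Tags=[
            {"Key": "Owner", "Value": owner},
            {"Key": "CostCenter", "Value": cost_center},
            {"Key": "DataClassification", "Value": classification},
        ],
    )

def untagged_instances():
    """Return instance IDs missing any required tag key.
    (A paginator would be used for large fleets.)"""
    missing = []
    for reservation in ec2.describe_instances()["Reservations"]:
        for inst in reservation["Instances"]:
            keys = {t["Key"] for t in inst.get("Tags", [])}
            if not REQUIRED_TAGS.issubset(keys):
                missing.append(inst["InstanceId"])
    return missing
```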

4)    Identity and Access Management Policies

At the root of AWS security are IAM policies. IAM policies enable organizations to control access to AWS services and resources using either AWS provided policies or custom policies written and owned by the organization. IAM policies will allow or deny access to users, groups and roles based on the requirements defined by the organization.

Takeaway: Define a least privilege and default-deny security model for the organization and create IAM policies to coincide with organization policies. Assign IAM policies to IAM users, groups and roles.
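
As a minimal sketch of that takeaway (bucket and group names are placeholders), a least-privilege, customer-managed policy can be created and attached with a few boto3 calls; anything not explicitly allowed remains denied by IAM’s default-deny behavior.

```python
# A minimal sketch: create a least-privilege policy scoped to one bucket and attach
# it to a group. Bucket and group names are placeholders.
import json
import boto3

iam = boto3.client("iam")

policy_document = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "ReadOnlyReportsBucket",
        "Effect": "Allow",
        "Action": ["s3:GetObject", "s3:ListBucket"],
        "Resource": [
            "arn:aws:s3:::example-reports-bucket",
            "arn:aws:s3:::example-reports-bucket/*",
        ],
    }],
}

policy_arn = iam.create_policy(
    PolicyName="reports-read-only",
    PolicyDocument=json.dumps(policy_document),
)["Policy"]["Arn"]

iam.attach_group_policy(GroupName="analysts", PolicyArn=policy_arn)
```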

5)    Event Logging and Alerting

Accountability and auditing are critical to a security strategy because they provide visibility to the organization. One method that introduces this visibility is event logging and alerting. AWS CloudWatch and CloudTrail are available for many AWS services and monitor various levels of the infrastructure in order to track changes and events occurring within the network stack, resource provisioning and de-provisioning, and calls to the AWS API. The AWS Simple Notification Service (SNS) facilitates forwarding these events to responsible teams and integrates with other services such as Splunk. Enabling AWS Config also provides configuration history and relationships between AWS resources; modifications to AWS resources, such as changes to ingress and egress controls, are logged through AWS Config.

Takeaway: Enable CloudTrail, Config, VPC Flow Logs and CloudWatch Logs. Create and subscribe to notification topics to alert the organization to changes within the AWS infrastructure. To go one step further, integrate logging with other services such as Splunk or SumoLogic.
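
A minimal sketch of the CloudTrail and SNS portion of that takeaway follows; it assumes the S3 bucket already exists with a bucket policy that permits CloudTrail to write to it (and that the SNS topic policy allows CloudTrail to publish). The bucket, trail and e-mail address are placeholders.

```python
# A minimal sketch: multi-region CloudTrail trail plus an SNS topic for alerts.
# Bucket/topic policies allowing CloudTrail access are assumed to be in place.
import boto3

cloudtrail = boto3.client("cloudtrail")
sns = boto3.client("sns")

topic_arn = sns.create_topic(Name="infrastructure-change-alerts")["TopicArn"]
sns.subscribe(TopicArn=topic_arn, Protocol="email", Endpoint="secops@example.com")

cloudtrail.create_trail(
    Name="org-audit-trail",
    S3BucketName="example-cloudtrail-logs",   # pre-existing bucket with CloudTrail policy
    IsMultiRegionTrail=True,
    IncludeGlobalServiceEvents=True,
    SnsTopicName="infrastructure-change-alerts",
)
cloudtrail.start_logging(Name="org-audit-trail")
```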

6)    Remote Access

Securing remote access into AWS may be one of the first controls identified by enterprise cloud service customers for communication between on-premise services and cloud resources. Additionally, this may be one of the first controls an enterprise is ready to implement by integrating with an on-premise VPN solution. However, smaller organizations may not have an on-premise solution. Nevertheless, there are cloud-ready VPN solutions available that are fully supported within AWS. Many of these services can be found in the AWS Marketplace.

Takeaway: Disallow direct access to VPC resources and require VPN technology to access AWS resources configured within a VPC.

7)    Identify a Trusted Advisor

The cloud infrastructure will grow as the needs of the business evolve. Additionally, as AWS continues to add cloud services, organizations will need to ensure that cloud security strategies grow to cover the AWS footprint. AWS Trusted Advisor identifies (at a high-level) gaps against best practices in cost optimization, security, fault tolerance and performance improvement. AWS Trusted Advisor identifies over a dozen best practice security configurations and serves as a basic baseline recommendation tool; however, it cannot provide an in-depth analysis of a customer’s AWS environment. In order to strengthen the overall security posture, organizations should consider partnering with a Cloud Security company with proven expertise in the customer’s chosen CSP cloud infrastructure.

Note: AWS Trusted Advisor requires a premium support plan with AWS.

Takeaway: Identify a trusted advisor that understands your AWS environment and business goals.
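
For reference, Trusted Advisor’s security checks can also be pulled programmatically through the AWS Support API (which, per the note above, requires a Business or Enterprise support plan). A minimal sketch:

```python
# A minimal sketch of listing Trusted Advisor security check results via the AWS
# Support API. The Support API endpoint lives in us-east-1.
import boto3

support = boto3.client("support", region_name="us-east-1")

checks = support.describe_trusted_advisor_checks(language="en")["checks"]
for check in checks:
    if check["category"] != "security":
        continue
    result = support.describe_trusted_advisor_check_result(
        checkId=check["id"], language="en"
    )["result"]
    print(f'{check["name"]}: {result["status"]} '
          f'({len(result.get("flaggedResources", []))} flagged resources)')
```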


There are many fundamentals to a Cloud Security Strategy, including encryption, compliance, risk management, application delivery, disaster recovery and more. Additionally, the core requirements identified in this Tip on Cloud Security will have a greater likelihood of success when they incorporate existing organizational security strategies and input from the respective teams. Nevertheless, as organizations begin to deploy in the cloud (and specifically within AWS), having a core set of requirements to begin the discussion will help introduce cloud security early in the project’s lifecycle.

Stay tuned for an upcoming post where we’ll review and discuss “Cloud Security Platforms.”

About GuidePoint Security

GuidePoint Security LLC provides customized, innovative and valuable information security solutions and proven cyber security expertise that enable commercial and federal organizations to successfully achieve their security and business goals. By embracing new technologies, GuidePoint Security helps clients recognize the threats, understand the solutions, and mitigate the risks present in their evolving IT environments. Headquartered in Herndon, Virginia, GuidePoint Security is a small business, and classification can be found with the System for Award Management (SAM). Learn more at: www.guidepointsecurity.com.

Tips on Cloud Security Vol. 2: Establishing an Amazon Web Services Cloud Security Strategy

Nearly ten years ago, Amazon officially announced that it would be selling computing time and storage capacity, and it has been more than a decade since Simple Queue Service (SQS) launched in 2004. Since then, Amazon Web Services (AWS) has developed hundreds of new services comprising Infrastructure-as-a-Service (IaaS) and Platform-as-a-Service (PaaS) solutions, and has become the go-to Cloud Service Provider (CSP) for early adopters such as startups, small and medium-sized businesses and nimble organizations. Organizations that took to the cloud soon after its inception have had time to mature cloud operations methodologies and to form business cultures around agile and continuous delivery solutions.

Despite the explosive growth of AWS’ market share, the enterprise market has only recently begun considering public CSPs as a viable infrastructure hosting option. Extending the enterprise data center to a public CSP is not only an architectural challenge; it also requires risk and compliance considerations that must be evaluated in order to avoid weakening the security posture of the organization. Moreover, security architects must understand how to translate existing controls, maintain visibility and adapt to a new technology while ensuring that speed of delivery and agility are not compromised.

Cloud Security Considerations for Early Adopters and New Customers

Despite continued cloud usage, many early adopters are asking the same questions as enterprise customers who are just beginning their journey into the public cloud. Organizations with a great deal of experience with AWS are asking “What do we secure?” while enterprises new to public cloud services are asking “How do we secure?” Below are a few prevalent challenges and considerations of both groups seeking to secure public cloud environments:

Common Challenges and Considerations

[Table: common challenges and considerations for early adopters and enterprise customers]

While there isn’t a complete out-of-the-box cloud security solution that serves everyone equally, AWS has made significant progress in making resources available to assist with implementing a cloud security strategy. Less-regulated cloud service customers who have designed products for the cloud may find native AWS tools convenient and easy to integrate with their current cloud infrastructure. Building an adequate security strategy can be more complicated and challenging for an enterprise customer, however, given that AWS tools alone may not be sufficient for a mature security program. As enterprise applications and infrastructure are architected and engineered for AWS, though, the enterprise cloud service customer will be able to use cloud-native solutions within its security strategies.

Cloud Security is a Shared Responsibility

Both early adopters and enterprise organizations must have a strong understanding of the Shared Responsibility Model. The Shared Responsibility Model helps identify the boundaries between cloud service provider and customer security responsibilities. As organizations begin to develop cloud security strategies, identifying obligations is a critical success factor.

[Figure: AWS Shared Responsibility Model]

*Source: https://aws.amazon.com/compliance/shared-responsibility-model/

Responsibility boundaries shift when moving between IaaS, PaaS and SaaS. While a CSP may be responsible for certain layers of the cloud platform, cloud service customers must remain knowledgeable of where their own responsibility lies. Before moving to AWS, early adopters may not have had to consider infrastructure requirements below the application and data layers; however, they are now responsible for the security of additional layers. Conversely, the enterprise organization is accustomed to owning security at all layers, but can be relieved of managing layers such as physical security and the core network.


Cloud security poses a challenge similar to the one traditional on-premise security faced when data centers were first being built: proper security practices were often an afterthought. An additional complication for cloud security is the elasticity of the cloud. A cloud environment can become difficult to manage very quickly, and success will also depend on an organization’s ability to maintain visibility within such a dynamic environment.

Designing a comprehensive cloud security strategy within AWS will require adapting controls and risk management methodologies to an agile operations model, as well as an understanding of how to utilize the resources available for maintaining visibility. Lastly, it is imperative for those new to the public cloud to understand that responsibility boundaries may shift as organizations leverage IaaS, PaaS or SaaS solutions.

Stay tuned for an upcoming post where we’ll review and discuss 7 Core Requirements for a Solid Cloud Security Strategy.

About GuidePoint Security

GuidePoint Security LLC provides customized, innovative and valuable information security solutions and proven cyber security expertise that enable commercial and federal organizations to successfully achieve their security and business goals. By embracing new technologies, GuidePoint Security helps clients recognize the threats, understand the solutions, and mitigate the risks present in their evolving IT environments. Headquartered in Herndon, Virginia, GuidePoint Security is a small business, and classification can be found with the System for Award Management (SAM). Learn more at: www.guidepointsecurity.com.

Cloud Border Visibility

Maintaining network visibility is one of the biggest concerns in moving to the cloud. Fortunately, many traditional tools and techniques still work in a cloud environment. Network visibility is a broad topic. However, in this post, we will discuss maintaining network visibility at your cloud border.

Virtual Private Clouds

Amazon Web Services (AWS) and Microsoft Azure offer the capability of segmenting your infrastructure services into a private “virtual” network. In AWS this is called a Virtual Private Cloud (VPC), while in Azure it’s called a Virtual Network (VNet). In each platform, the capabilities are virtually identical.

Private Networks (as we’ll refer to them) allow you to segment your assets into a “virtual network.” These Private Networks allow you to create subnets, access control lists (ACL), route tables, and more. The Private Network itself can also have its own private IP space (RFC 1918) and a VPN gateway. This allows for – among other things – large hybrid cloud configurations.

At the border of your Private Network, you can place a simple cloud-provided Network Address Translation (NAT) gateway instance and route your Internet traffic to and from your network. To summarize, VPCs give the network engineer the appearance of a traditional network infrastructure.

Border Visibility

The problem with this configuration is in how access is controlled and reported at the border of the Private Network. Both AWS and Azure offer quick solutions to route traffic in and out of the network. These solutions act like stateful firewalls, simply brokering access to your network based on simple ACL rules.

AWS and Azure both have the ability to log Private Network firewall events. Using VPC Flow Logs (AWS) and Azure Diagnostics, it’s possible to pull firewall logs, as well as other security and operational metrics, into an existing log collection platform. However, there are a few capabilities still missing in this configuration.
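
Turning on the flow-log half of that story takes only a single API call; the sketch below assumes the CloudWatch Logs group and the IAM role that permits delivery already exist, and the IDs are placeholders. The missing capabilities are about inspection rather than logging, as described next.

```python
# A minimal sketch of enabling VPC Flow Logs for a VPC, delivering to CloudWatch
# Logs. The VPC ID, log group name and delivery-role ARN are placeholders and are
# assumed to exist already.
import boto3

ec2 = boto3.client("ec2")

ec2.create_flow_logs(
    ResourceIds=["vpc-0abc1234"],
    ResourceType="VPC",
    TrafficType="ALL",                                  # ACCEPT, REJECT or ALL
    LogGroupName="vpc-border-flow-logs",
    DeliverLogsPermissionArn="arn:aws:iam::123456789012:role/flow-logs-delivery",
)
```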

First, the Cloud Service Providers’ gateway solutions (or simple public IPs, in the case of Azure) don’t provide the ability to inspect ingress or egress traffic using modern technologies. Unfortunately, promiscuous packet capture doesn’t work within these cloud environments. Therefore, activities such as layer-7 inspection (e.g. Next-Generation Firewall), network intrusion detection/prevention (IDS/IPS), and user behavior analytics are not possible unless you’re in-line with the communications channel.

Additionally, Cloud Service Providers’ NAT gateway solutions are proprietary and don’t fit in with the usual on-premises firewall solutions. For example, if your organization uses Palo Alto firewalls and manages them with Panorama, the cloud firewall device would not be able to be managed in the same interface. This makes configuration management and control more difficult for both the security ops and compliance teams.

In short, native Cloud Service Provider gateway solutions aren’t cut out for modern enterprise deployments. However, we routinely see these virtual gateways deployed in enterprise configurations.

Closing the Gap

Fortunately, there are other options available. Vendors like Palo Alto, Fortinet, Sophos, and Check Point have released their own virtual Unified Threat Management (UTM) appliances. The first step in closing this gap is – of course – using one of these enterprise appliances. If possible, you should choose one that matches your on-premise firewalls to help with management continuity.

But that’s not the end of the story.

Deploying a virtual UTM appliance is easy. Unfortunately, properly configuring the Private Network is a step that many skip. Each Private Network subnet (in both AWS and Azure) will need to be properly routed through this new UTM. Complicating this further, subnets in both AWS and Azure are all locally routable by default. That means that without overriding those default routes between subnets, your new UTM can’t segment your networks. In AWS, that’s rather easy; in Azure, it requires some PowerShell work. The effects of not configuring your routes properly can range from traffic not flowing at all to traffic bypassing the UTM entirely; not something we want after all this work.

In summary, the subnets within the Private Networks must still be isolated from one another with ACLs or NSGs, and route tables must specifically route traffic through the UTM. In a future post we’ll go over specifically how to properly configure a VPC in AWS and a VNet in Azure using a UTM appliance.
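
As a rough illustration of the AWS side (the future post mentioned above would cover this properly), pointing a subnet’s default route at the UTM’s network interface looks something like the sketch below; all IDs are placeholders, and the appliance’s source/destination check must be disabled or it will drop forwarded traffic.

```python
# A minimal sketch (IDs are placeholders) of steering a private subnet's outbound
# traffic through a virtual UTM appliance in AWS.
import boto3

ec2 = boto3.client("ec2")

UTM_INSTANCE_ID = "i-0abc1234def567890"
UTM_ENI_ID = "eni-0abc1234def567890"          # the UTM's interface in the DMZ subnet
PRIVATE_ROUTE_TABLE = "rtb-0abc1234def5678"   # route table for the protected subnet

# The appliance forwards traffic on behalf of other hosts, so disable the
# source/destination check on its instance.
ec2.modify_instance_attribute(
    InstanceId=UTM_INSTANCE_ID, SourceDestCheck={"Value": False}
)

# Send the subnet's default route through the UTM's network interface.
# (Use replace_route instead if a 0.0.0.0/0 route already exists in this table.)
ec2.create_route(
    RouteTableId=PRIVATE_ROUTE_TABLE,
    DestinationCidrBlock="0.0.0.0/0",
    NetworkInterfaceId=UTM_ENI_ID,
)
```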


The native cloud infrastructure solutions do not provide the expected level of visibility needed for enterprise analysis. Furthermore, achieving that level of visibility is not as straightforward as we would like it to be. It’s important that security and network engineers take their time to architect the infrastructure, create (and analyze!) threat models, and to thoroughly test the cloud infrastructure.

About GuidePoint Security

GuidePoint Security LLC provides customized, innovative and valuable information security solutions and proven cyber security expertise that enable commercial and federal organizations to successfully achieve their security and business goals. By embracing new technologies, GuidePoint Security helps clients recognize the threats, understand the solutions, and mitigate the risks present in their evolving IT environments. Headquartered in Herndon, Virginia, GuidePoint Security is a small business, and classification can be found with the System for Award Management (SAM). Learn more at: www.guidepointsecurity.com.

Tips on Cloud Security: New Blog Series

Cloud service providers like Amazon Web Services and Rackspace are expanding so rapidly that it can be difficult to keep up with the pace of change and how IT security is impacted as a whole. The power of automating development, test and production at scale has changed the way software is developed, as clearly demonstrated within the DevOps community. The ability to use low-cost, on-demand compute resources to scale and grow business operations is compelling, to say the least, and organizations are increasingly moving their IT environments to the cloud.

With the accelerated rate at which technology is evolving, there are more opportunities for security breaches—but here’s a refreshing development: security is finally becoming cool in the IT world! However, it’s often still an afterthought in the planning process of implementing a new technology, and taking an ad-hoc approach to security is typically a complex, frustrating and almost always expensive undertaking. As a result, the engineering team at GuidePoint has been diligent in looking for ways to help customers assess the technical challenges they may not realize they’re facing.

In response to the great cloud migration and the ever-changing tides of potential threats to security, we’ll be publishing a series of cloud security blogs over the coming months to help organizations understand how to better secure and operate their cloud environments. Topics will range from new cloud service reviews and architectural advice to hands-on technology integration how-tos.

We hope you’ll find this information helpful and join us in the conversation.

About GuidePoint Security

GuidePoint Security LLC provides customized, innovative and valuable information security solutions and proven cyber security expertise that enable commercial and federal organizations to successfully achieve their security and business goals. By embracing new technologies, GuidePoint Security helps clients recognize the threats, understand the solutions, and mitigate the risks present in their evolving IT environments. Headquartered in Herndon, Virginia, GuidePoint Security is a small business, and classification can be found with the System for Award Management (SAM). Learn more at: www.guidepointsecurity.com.

Time Inc. Highlights GuidePoint Security in the WSJ CIO Journal

Time Inc. (Time) recently mentioned GuidePoint Security (GuidePoint) in an article in the Wall Street Journal CIO Journal. Time leverages GuidePoint’s Amazon Web Services (AWS) and Payment Card Industry (PCI) expertise to guide the company through the migration of applications into AWS. Specifically, GuidePoint provides expertise in implementing architectures and control frameworks that provide not only security, but also PCI compliance.

“We appreciate GuidePoint Security’s advice through this process. Their specific working knowledge of security and PCI compliance in AWS has been a great asset to us,” said Keith O’Sullivan, VP – Global Information Security for Time Inc.

Organizations are rapidly increasing their cloud adoption; however, Information Security and compliance considerations present both a challenge and an opportunity when moving to the cloud. Organizations must include Information Security and compliance experts on their project teams, or risk jeopardizing their cloud applications’ security and compliance.

GuidePoint provides this expertise through our Cloud Solutions and Compliance practices. We’ve worked with numerous clients developing secure architectures, control frameworks, policies and procedures, and implementing security technologies across IaaS, PaaS, and SaaS platforms, enabling our clients to leverage the benefits of the cloud while maintaining or improving their Information Security and compliance posture.

Contact sales@guidepointsecurity.com or visit www.guidepointsecurity.com to learn more about our Cloud Solutions and Compliance practices.

About GuidePoint Security

GuidePoint Security, LLC provides customized, innovative and valuable information security solutions and proven cyber security expertise that enable commercial and federal organizations to successfully achieve their security and business goals. By embracing new technologies, GuidePoint Security helps clients recognize the threats, understand the solutions, and mitigate the risks present in their evolving IT environments. The company is headquartered in Reston, Va., with offices in Michigan, New Hampshire, Florida and North Carolina. GuidePoint Security is a small business, and classification can be found with the System for Award Management (SAM).

Security Visibility in the Cloud – Logging and Monitoring in AWS

By now we’re all well aware that there is a virtually limitless number of logging and monitoring solutions available on the market. Visit the Amazon Web Services (“AWS”) Marketplace, and you’ll find plenty of options. In fact, it gets really crazy when you start examining security monitoring versus application performance monitoring, often with solutions performing one role better than the other, or even just one of the roles altogether. What’s interesting to me is the lack of common Enterprise logging and monitoring solutions available in the AWS Marketplace. Obviously you can deploy instances to handle implementations of solutions like ArcSight, McAfee, LogRhythm, or NetIQ, but Splunk is the only well-known commercial provider with solutions available in the marketplace.

Now, that’s just the commercial side… what about open source?  Let’s cover a few terms first, for those new to centralized logging.

Shipper – a system agent that collects and forwards, or ships, system and application logs to a centralized server.

Collector / Broker – a message broker is a system that collects and queues logs as an intermediary step to indexing the logs centrally for analysis, monitoring, and alerting. Its primary purpose is to ensure you don’t lose messages when or if your indexer falls behind, crashes, or otherwise becomes unavailable to receive logs.

Collector / Indexer – a system used to collect, parse, and store logs for searching, analysis, monitoring, and alerting.

Dashboard / Visualizer – the dashboard is used to aid in log analysis by providing a search interface, and in some solutions alerting.

Open source logging and monitoring solutions abound, and like the well-known solutions missing from the AWS Marketplace, they are typically implemented on purpose-built instances within your AWS Virtual Private Cloud (you are using a VPC, right?). So what comprises an open source, centralized logging and monitoring solution?

Log shippers like Nxlog, Logstash, Lumberjack, and Fluentd. Brokers like Redis, RabbitMQ, and ZeroMQ. Indexers like Elasticsearch and… well, Elasticsearch seems to be the industry-standard as far as open source goes, but there are also a lot of folks using centralized syslog-ng, or Rsyslog.  Dashboards, such as Graylog2, Kibana (for Elasticsearch visibility, I like ElasticHQ), and security agents like OSSEC complete the architecture.

So, with all of these solutions available, why do I run into so many clients already in AWS, or moving to AWS, that have insufficient logging and monitoring, or worse, no logging and monitoring at all in their Cloud environment? Because Logging and Monitoring is Hard. Don’t get me wrong, it doesn’t require a rocket scientist on staff to get one or more of these commercial or open source solutions deployed. There’s preparation, communication, research, and other steps that have to be taken to properly implement logging and monitoring. I spent over a week researching available solutions, and building out proofs-of-concept in my virtualized lab to determine which solutions met my needs. That is the most critical point one should take away from this article; there is no right or wrong way to implement logging and monitoring in your AWS Cloud. As with all things IT, there is more than one way to accomplish your technical and business objectives. The trick is to find the right way for your organization.

Let’s look at some of the decision criteria that will come into play; this is not an exhaustive list:


  1. What expertise is available from my current staff – network engineering, development (if so, which languages), information security, incident handling, etc.?
  2. Do we have experience with a particular commercial solution?  A particular open source solution?
  3. Should I train existing staff, or hire staff with the relevant experience?
  4. Should I forget about managing this myself altogether and go with a Managed Services Provider?


  1. Have we defined and documented the metrics we care about, and established a policy and process around ensuring this data is available and utilized?
  2. Have we defined and documented our business objectives behind logging and monitoring?
  3. Have we defined and documented regulatory mandates related to logging and monitoring? How do we keep our requirements and this documentation current?
  4. Have we determined roles and responsibilities involved in supporting the logging and monitoring initiative?


  1. Have we defined and documented technical requirements for our logging and monitoring solution? How do we architect our solution?
  2. Have we researched available options, and documented their strengths and weaknesses with regard to operating in our environment or culture?
  3. How do we facilitate a demonstration, proof-of-concept, or evaluation of the targeted solution?
  4. What do we log?  Where do we store logs?
  5. How do we alert appropriate personnel a problem has been detected?


After extensive research, and comparison of features and functionality, I decided upon a Hybrid ELK Stack for this case study. The ELK Stack is comprised of Elasticsearch, Logstash and Kibana. I also added Graylog2 to support alerting, and OSSEC for file integrity and host intrusion prevention. There are numerous guides on the Interwebs to assist with deploying these solutions, so I will not go into installation and configuration in this post. I may write another article later to cover installation and configuration, but I’ve included links to all of the resources I used to get up and running at the bottom of this post. Note that, although this entire process covered a full week, the bulk of the final deployment was completed in about 12 hours. I built the final environment on AWS’ Free Tier, but didn’t even complete rolling out the dashboards before the Logstash Shipper/Logstash Collector/Elasticsearch Indexer combination on the central server decimated the t1.micro instance (Ubuntu 12.04) I deployed it on (Java consumed all available memory). Rather than tune the overwhelmed box in an attempt to stabilize it, I took advantage of being in AWS and scaled up to an m1.small instance – problem solved. In total, I spent less than $5 on my admittedly limited proof-of-concept.


Figure 1: Kibana 3… Dead Sexy!

Take a look at the components I selected:

  1. Log Shipper – Logstash on Linux servers, Nxlog on Windows servers. Although Logstash is cross-platform, and is perfectly capable of shipping Windows Event Logs, IIS and MSSQL logs, the Nxlog author’s case for why Nxlog is better for Windows convinced me.
  2. Broker – This case study doesn’t incorporate the use of a Broker. I was originally going to include RabbitMQ in the architecture, but version dependencies led me down a path that was in danger of kludging up the whole study. In a production environment, you definitely need to use a broker to provide scalability and resiliency, but I pushed onward without including it.
  3. Indexer – Elasticsearch. Ridiculously easy decision for me, since Windows servers are in my test environment, and I was interested in testing something other than syslog.
  4. Dashboard / Visualization – Kibana 3 is dead sexy, and I’m an eye-candy kind of guy. I’d gone into this planning to just use Graylog2, since it is a great visualization tool itself, plus includes alerting capability, but after seeing screenshots of the new and improved Kibana 3.x, I couldn’t help deploying it, too. Regarding alerting, Nagios is often used in concert with Graylog2 for its ability to “roll up” alerts. If you’re interested in configuring email alerting/alarms for your Graylog2 deployment, Larry Smith has a great blog post to get you started here. Last, I also installed the ElasticHQ plugin to monitor the health of my one-node cluster.
  5. As an aside, I also deployed OSSEC to the Linux and Windows servers for file integrity monitoring and intrusion prevention.

Figure 2: ElasticHQ… Elasticsearch cluster health, and a whole lot more!

A note about the final deployment: ultimately, the redesigned, recently released Graylog2 v0.20.1 didn’t work out as I’d hoped. Everything was running smoothly, and based on configuration guidance and the absence of error output, it seemed I was set up properly, but I never saw the data from Elasticsearch in Graylog2. I spent the last few moments I had allocated to this project experimenting with some alternate configurations, and finally strayed so far from my working example that I had to give up. So, after a week of research and implementation time, a diagram of what we have can be seen in Figure 3.


Figure 3: AWS Logging and Monitoring PoC Architecture

This was a trivial setup – I’m using a single box for a local Logstash shipper, an Elasticsearch index, MongoDB for Graylog2, and three different web interfaces. In a production system, ensure you use a more appropriate architecture: separate each component, utilize multiple availability zones, insert a broker to receive messages from log shippers, utilize SSL, and so on.
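
As a quick sanity check that shipped events are actually landing in the indexer, Elasticsearch can be queried directly over its REST API. This is a minimal sketch; the endpoint, index pattern and field names depend entirely on your Logstash/Nxlog configuration and are assumptions here.

```python
# A minimal sketch: query Elasticsearch for the five most recent failed Windows
# logons (event ID 4625) shipped by Nxlog/Logstash. The endpoint, index pattern and
# field names are assumptions that depend on your shipper configuration.
import requests

query = {
    "size": 5,
    "sort": [{"@timestamp": {"order": "desc"}}],
    "query": {"query_string": {"query": "EventID:4625"}},
}

resp = requests.post("http://localhost:9200/logstash-*/_search", json=query, timeout=10)
resp.raise_for_status()

for hit in resp.json()["hits"]["hits"]:
    src = hit["_source"]
    print(src.get("@timestamp"), src.get("Hostname"), src.get("Message", "")[:80])
```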

Although I didn’t have enough time to sort out Graylog2, and get some alerting configured, I’m pleased with the overall outcome of my Security Visibility experiment. I found OSSEC to be an excellent “partner” in my quest for visibility, despite only utilizing and documenting the file integrity portion of its functionality.


Figure 4: OSSEC Web UI

Nxlog works perfectly for shipping Windows event logs, and of course, the lovely Kibana ties everything together and puts a nice bow on the concept of visualization.


Figure 5: Analyzing events with Kibana 3



Although this was not a terribly difficult experiment from a technical perspective, I still wondered, “Is there another | quicker | better way to gain security visibility in AWS?” Well, yes, and no. Yes, there’s an easy way to get security visibility, plus AWS automation to boot; no, because despite this gem of AWS security visibility, I will still recommend a centralized logging and monitoring platform in AWS. So, what’s this solution, you ask? CloudPassage Halo. But wait, there’s more! Halo has an API that has made it possible for several SIEM solutions to integrate with it, sharing the Halo security visibility love in a centralized way within your existing, or planned, logging and monitoring deployment.

Halo has enough features and functionality to warrant its own blog post, so I won’t go into those here. Suffice it to say, anyone looking for security visibility, automation, or both in AWS should definitely have a look at what CloudPassage has to offer.



Figure 6: Windows security events captured by Halo


Logging and monitoring is hard, but there are more than enough commercial and open source tools available to fit any size of organization, with any size of budget. Attaining security visibility and appropriate incident handling aren’t just the right things to do from a best practice perspective; many standards, regulations, and laws mandate them. So, regardless of the type of solution or solutions you select, choose and implement something, and gain insight into security incidents you may not have any idea are happening. After all, inadequate visibility is better than no visibility at all.

For additional information on this subject and the opportunity to ask questions, please click here to register for our webinar, “Security Visibility in the Cloud – Logging and Monitoring in AWS,” occurring on May 1st at 2 p.m. EST.