GuidePoint Security Recognized for Excellence in Managed IT Services

CRN®, a brand of The Channel Company, has named GuidePoint Security to its 2018 Managed Service Provider (MSP) 500 list in the Managed Security 100 category. This annual list recognizes North American solution providers with cutting-edge approaches to delivering managed services. Their offerings help companies navigate the complex and ever-changing landscape of IT, improve operational efficiencies, and maximize their return on IT investments.

In today’s fast-paced business environments, MSPs play an important role in helping companies leverage new technologies without straining their budgets or losing focus on their core business. CRN’s MSP 500 list shines a light on the most forward-thinking and innovative of these key organizations.

The list is divided into three categories: the MSP Pioneer 250, recognizing companies with business models weighted toward managed services and largely focused on the SMB market; the MSP Elite 150, recognizing large, data center-focused MSPs with a strong mix of on-premises and off-premises services; and the Managed Security 100, recognizing MSPs focused primarily on off-premise, cloud-based security services.

GuidePoint Security invested in a specialized team that developed our Virtual Security Operations Center (vSOC) to address flaws commonly found with other Managed Security Service Providers (MSSPs). As a result, GuidePoint’s vSOC provides differentiated, customer-centric managed security services.

GuidePoint’s vSOC combines advanced detection and response capabilities, threat hunting powered by proprietary machine learning, and experienced security personnel, all provided as a service.

“Managed service providers have become integral to the success of businesses everywhere, both large and small,” said Bob Skelley, CEO of The Channel Company. “Capable MSPs enable companies to take their cloud computing to the next level, streamline spending, effectively allocate limited resources and navigate the vast field of available technologies. The companies on CRN’s 2018 MSP 500 list stand out for their innovative services, excellence in adapting to customers’ changing needs and demonstrated ability to help businesses get the most out of their IT investments.”

“Significant enhancements to our service offerings and processes, as well as the expansion of our vSOC team over the last year, enabled GuidePoint to respond to the increased demand for our offerings,” explained GuidePoint’s Director of vSOC Product Development, Robert Vaile. “Our passion around continued innovation, key technology partnerships and world-class customer satisfaction are powerful differentiators for us and will continue to fuel our success.”

The MSP 500 list will be featured in the February 2018 issue of CRN and online at

About GuidePoint Security

GuidePoint Security LLC provides innovative and valuable cybersecurity solutions and expertise that enable organizations to successfully achieve their missions. By embracing new technologies, GuidePoint Security helps clients recognize the threats, understand the solutions, and mitigate the risks present in their evolving IT environments. Headquartered in Herndon, Virginia, GuidePoint Security is a small business classified with the System for Award Management (SAM). Learn more at:

No Cookie Cutters

Many organizations trying to mature their Application Security Programs are buying SAST (Static Application Security Testing) and DAST (Dynamic Application Security Testing) solutions. For those unfamiliar, SAST tools are used for binary, byte, or source code analysis, and look for flaws at the code level, whereas DAST tools are meant to test an application at run time. These tool sets can add a lot of value to an organization, but how they are implemented into the SDLC will determine the true return on investment. Some organizations create a budget, then buy some tools…but beyond that, they still need help figuring out next steps. While there may not be a cookie cutter solution for this, there are common factors that will help you determine the most effective strategy for implementation.

Before we talk about implementing SAST and DAST tools into the SDLC, organizations should first gain an understanding of the size of their application portfolio, how many licenses they can reasonably budget for, and the amount of resources required to implement, tune, support and run these tools. Once those factors are understood, one must look ahead and ask how the results from these tests will be reviewed, who will review them, and how they will be tracked and prioritized for remediation.

Smaller development shops tend to have tighter budgets and a more tactical approach, given that they may only have one or two application security resources. In environments like this, the development leads are often asked to help and are trained to run the tools themselves so that the application security resources can focus their time on reviewing, validating, analyzing, and tracking the results. Organizations should try to avoid implementing tools that are licensed per user. Why should you have to choose which developer should be able to proactively find issues in the code being developed? The whole purpose of driving automated tools into the SDLC is to encourage all developers to develop based on secure coding principles and to test their code as early in the SDLC as possible. When everyone on the development team has the same chance at secure development, a formalized secure coding standard starts to take shape.

Having developers leverage these tools is a very good thing for an organization, but this activity should never replace the more formal review performed by application security professionals. Frequency of testing factors in several other considerations that are a bit off topic for this blog, but may be revisited in a future article.

For issue tracking, the organization may leverage its ticketing, bug tracking, or GRC systems, but it also needs to consider what kind of detail is contained within the tickets. In other words, not everyone who can access the tickets should be able to access vulnerability details or application specifics. The ticket should be as generic as possible, with details tracked in a system that can be limited to least privilege. Even a developer of one application shouldn’t necessarily have access to the vulnerabilities of another application they don’t work on. It’s important to keep insider threats in mind when deciding how much detail to reveal within an environment. If the application security issues are visible to everyone, and an attack is executed before remediation is in place, this could introduce a great deal of complexity into an internal investigation.
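As an illustrative sketch of that separation (the field names and role labels here are hypothetical, not tied to any particular ticketing product), a single finding can be split into a generic ticket for the broadly visible tracker and a detailed record for an access-restricted vulnerability store:

```python
# Hypothetical sketch: split one finding into a generic ticket and a
# restricted detail record, so vulnerability specifics stay behind
# least-privilege access controls.

def split_finding(finding):
    # Generic ticket: safe for the broadly visible bug/ticket system.
    ticket = {
        "id": finding["id"],
        "title": "Security fix required",   # no vulnerability details
        "application": finding["app"],
        "severity": finding["severity"],
        "due_date": finding["due_date"],
    }
    # Detail record: stored in a system restricted to the security team
    # and the developers of this specific application.
    detail = {
        "ticket_id": finding["id"],
        "vulnerability": finding["vulnerability"],
        "location": finding["location"],
        "evidence": finding["evidence"],
        "allowed_roles": ["appsec", f"dev-{finding['app']}"],
    }
    return ticket, detail

finding = {
    "id": "SEC-1042",
    "app": "payments",
    "severity": "high",
    "due_date": "2018-03-01",
    "vulnerability": "SQL injection",
    "location": "POST /api/v1/charge, parameter 'memo'",
    "evidence": "time-based blind injection confirmed",
}
ticket, detail = split_finding(finding)
```

Anyone who can read the generic ticket learns only that a fix is due and when, not what the flaw is or where it lives.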

Another important part of the process is aligning the findings that come out of the tools with the security policies/standards that may already be in place. Each tool assigns default levels of severity for each finding. These are typically configurable and should be reviewed, as some organizations may want to change some of these levels based on their own unique environments or controls. It is common for our clients to have a policy or standard in place (whether it be formal or informal) that requires the remediation of all high or medium severity findings prior to code being implemented to production. Ensuring the findings in the tools are configured to help meet this standard also aligns the business and security with the process. It should be noted that if developers can access and run these tools, they should not be able to reconfigure the severity levels themselves and should not deem anything a false positive without a formal review by the security team. Checks and balances are important to maintain, even in a large development shop or organization.
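As a minimal sketch of aligning tool output with such a policy (the severity names and the mapping itself are hypothetical and would come from your own standard, reviewed by the security team), findings can be normalized to policy levels and used to gate a production push:

```python
# Hypothetical sketch: map tool-assigned severities onto the
# organization's policy levels, then gate a production release on the
# policy rule "no open high or medium severity findings".

TOOL_TO_POLICY = {          # reviewed and approved by the security team
    "critical": "high",
    "high": "high",
    "medium": "medium",
    "low": "low",
    "informational": "low",
}

BLOCKING_LEVELS = {"high", "medium"}

def release_allowed(findings):
    """Return True if no open finding maps to a blocking policy level.

    Findings marked false_positive are excluded, but only the security
    team should be able to set that flag.
    """
    for f in findings:
        if f.get("false_positive"):
            continue
        if TOOL_TO_POLICY[f["tool_severity"]] in BLOCKING_LEVELS:
            return False
    return True

findings = [
    {"tool_severity": "informational"},
    {"tool_severity": "critical", "false_positive": True},  # security-reviewed
    {"tool_severity": "low"},
]
print(release_allowed(findings))
```

Note that the false-positive flag is honored by the gate but set outside it, which is exactly the checks-and-balances point above: developers run the tools, the security team adjudicates the results.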

Overall, automated tools are an important part of a Secure SDLC program and provide a lot of value to any development organization. They can help increase the coverage for testing, help identify “low hanging fruit”, and are a great first step to help kick start a new Application Security Program within an existing SDLC. However, organizations must also implement usage plans and develop processes to improve the quality and security of the code and deliver a much more significant return on investment. Just remember, the solution is as unique as your development environment and overall business. There are no cookie cutter solutions to implementing tools, but GuidePoint is here to help you, and we might even have cookies!

About the Author

Kristen Bell, Managing Security Consultant – Application Security

Kristen is a Managing Security Consultant at GuidePoint Security who started in Application Security in 2005. Prior to joining GuidePoint, Kristen consulted for numerous companies performing application security services. Kristen has a background in the government sector, building application security programs and providing guidance in secure application design.

Kristen’s experience includes conducting application security assessments and database security reviews, secure SDLC consulting, as well as working with clients to improve their enterprise vulnerability management. Kristen’s ability to bridge the gap between technical and non-technical people, coupled with her strong interpersonal skills, has made her a strong champion for application security frameworks and controls for her customers. Kristen earned a Bachelor of Science degree in Computer Science from Kentucky State University.

GuidePoint Security recognized as recipient of 2018 Splunk Partner+ Awards

GuidePoint Security Named Global Partner of the Year and Americas Partner of the Year for Outstanding Performance

HERNDON, VA – March 5, 2018 – GuidePoint Security, a cybersecurity company that provides world-class solutions, today announced it has received the Splunk 2018 Global Partner of the Year award as well as the Americas Partner of the Year award, for exceptional performance and commitment to the Splunk® Partner+ Program. The prestigious Global Partner of the Year and Americas Partner of the Year awards recognize Splunk partners who have demonstrated the ability to find and drive incremental business with a continued commitment to their partnership with Splunk. Learn more about the Splunk Partner+ Program here.

The Splunk Partner+ Awards are designed to recognize members of the Splunk ecosystem for industry-leading business practices and dedication to constant collaboration. Areas of consideration for an award include commitment to customer success, innovative program execution, investment in Splunk capabilities, technology integrations and extensions, and creative sales techniques.

“We’re honored to receive such prestigious awards,” GuidePoint Security Co-Founder and Principal Justin Morehouse noted. “It’s a testament to the strong partnership our two organizations developed over several years. Beyond our capabilities to provide Splunk certified professional services, our strategic partnership is supported by GuidePoint’s vSOC Managed Security Services, which continues to disrupt the MSS industry,” Morehouse added.

“As a vital partner to Splunk, we applaud GuidePoint Security for being recognized as the Global Partner of the Year and the Americas Partner of the Year,” said Cheryln Chin, vice president of Global Partners, Splunk. “The Splunk Partner+ Awards recognize partners like GuidePoint Security who exemplify the core values of the Partner+ Program coupled with a strong commitment to growth, innovation and customer success.”

Winners of the Splunk Partner+ Awards reflect the top-performing partners globally and regionally. All award recipients were selected by a group of Splunk executives and the global partner organization. Read more about the Splunk Partner+ Program.


When user behavioral analytics isn’t the right name

There is a lot of talk about “machine learning” and “behavioral analytics” in the cybersecurity world. Some products and companies are doing a great job designing big data based solutions that use higher math and analytics to find and alert on unusual or malicious activities. Some products are simply a higher order of signatures hiding behind a shiny veneer to make them look like math and analytics.

But sometimes there is a way of doing things that’s simply, well, more than that. There are user behavioral products out there that I think really should be named something different. I’m not sure what that marketing name should be, but let me explain what they do and maybe someone can create a cool shiny name for it.

These products do in fact use math and analytics to baseline activities and alert on deviations, but more importantly, they collect up the activities around those deviations, create timelines of total activity, and then score them. This is higher order incident response. If you walk into any SOC when a major alert is being investigated, the first thing a SOC analyst will do is collect evidence and create a timeline of activity around it. Once all this information about “what just happened” is plotted together, they make a decision about whether it was a user who hit something, an application that hiccupped, or the possibility of something much more sinister.

At least one of the user behavioral analytics products does most of that heavy lifting, and does it fast and automatically. It hands over the timelines and evidence for a human to then validate the “risk score” or invalidate it and throw it in the trash. Who wouldn’t like to have more time back for their SOC analysts to go proactively hunting instead of reacting? It could be a game changer for many cash and talent strapped agency SOCs.

So, what should these products be called? They aren’t classic automation and orchestration products. They aren’t an IR tool for forensics. They are doing rock star user behavioral analytics, that’s true. Oh alright, I’ll keep calling them user behavioral analytics for now… until someone smarter than me figures out that cool shiny marketing term.

Join GuidePoint Security and Exabeam on March 21st for a live webinar to learn more about how they aren’t, well, maybe are, the best User Behavioral Analytics product on the market. Click here for more information.

About the Author

Jean-Paul Bergeaux, Federal CTO, GuidePoint Security

With more than 18 years of experience in the Federal technology industry, Jean-Paul Bergeaux is currently the Federal CTO for GuidePoint Security. JP’s career has been marked by success in technical leadership roles with ADIC (now Quantum), NetApp, Commvault, and SwishData. Jean-Paul focuses on identifying customers’ challenges and architecting innovative solutions to solve their complex problems. He is also a thought leader on topics that are top of mind for Federal IT Managers like Cyber Security, VDI, Big Data, and Backup & Recovery.

Enabling Public Cloud Application Performance and Security

There has been a lot of talk about cloud security and how to monitor SaaS and IaaS access and usage, both sanctioned and unsanctioned. However, one thing that needs to be talked about more is how applications that are known, tracked and managed are being deployed in the cloud, via IaaS.

When deploying applications on premises, either in a datacenter or in a DMZ, there are firewalls, network monitoring, and various security controls that are known and already in place before an application even enters the discussion. However, when moving an application to the cloud via IaaS, none of those security controls exist by default, despite what customers might believe. This specifically applies to application hosting front ends such as ADC/WAFs.

Unfortunately, many cloud hosting deployments are being managed by development teams, not network or security teams. And while development teams are professionals who know what they are doing, they are often not aware of the controls that network and security teams have put in place before they deploy their applications. An example of this is how many development teams are deploying the default application delivery controllers offered up by IaaS providers. These ADCs appear to be point-and-click and cheap. And they are.

The problem is that they lack the performance and security that typical enterprise ADC/WAF appliances, virtual or otherwise, offer. One of the clearest examples is DAST integration, which allows an application to be scanned and the resulting vulnerabilities to be virtually patched at the WAF. Another example is the ability to automate security controls and requirements through industry standard DevOps tools like Ansible, Puppet, and Chef, as well as classic scripting languages like Python and PowerShell. Further, using a product like F5 ASM, which leverages broad industry support, application templates can be deployed with little or no customization; for custom applications, a custom security policy can be created with little or no user interaction through a Rapid Deployment Policy interface.
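To illustrate the automation idea (the endpoint, payload fields, and policy template below are hypothetical, not a real F5 or IaaS API; a real pipeline would use the vendor’s documented API or an Ansible/Puppet/Chef module), a deployment script might push a templated WAF policy alongside the application:

```python
# Hypothetical sketch: build a templated WAF security policy payload as
# part of an application deployment. The payload shape and endpoint in
# the comment are illustrative only.
import json

def build_policy_payload(app_name, template="rapid-deployment"):
    return {
        "name": f"{app_name}-waf-policy",
        "template": template,           # e.g. a Rapid Deployment-style base
        "enforcementMode": "blocking",
        "virtualServer": f"/apps/{app_name}",
    }

payload = build_policy_payload("payments-portal")
print(json.dumps(payload, indent=2))

# In a real pipeline the payload would then be pushed to the WAF's
# management API, e.g. (hypothetical endpoint):
# requests.post(f"https://{waf_host}/mgmt/policies", json=payload,
#               auth=(user, password), verify=True)
```

The point is less the specific call than the workflow: security policy travels with the application deployment instead of being bolted on afterward.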

The final value, and probably the most critical, is a must-have for any government agency. A true enterprise virtual ADC/WAF offers FIPS-level encryption for application data in flight. Without integration with physical FIPS-hardened appliances, the private keys necessary to secure SSL data in transit cannot be stored properly. Default ADC/WAFs supplied by the major IaaS providers do not have the ability to do this; therefore, an enterprise software version is required.

Besides the added functionality, using a software enterprise ADC/WAF like F5 also provides consistency across on-premises physical, on-premises virtual, and cloud application hosting. First and foremost, no new learning is required to ensure that the ADC/WAFs in the cloud meet security policy and are configured correctly. Any security issue can be resolved in the same manner currently used for on-premises applications in most agencies, which will remain hybrid computing environments for some time. A single management interface can be used for all, and no additional training or risk of misconfiguration is added into the application life cycle.

This consistency can be the difference between resolving a security issue with a few clicks in the proxy of an enterprise solution, and scrambling to figure out how to patch or fix code in an application that now has a major vulnerability and is in production. A common example is Heartbleed. When it hit, enterprises with F5-fronted applications were able to mitigate all of them, in some cases hundreds, by simply pushing out a mitigation at the proxy and then mapping out the patching and code fixes with more time and planning.

For a deeper dive into the differences between default IaaS ADC/WAFs, HSM integration to secure application traffic in flight, and how to securely move applications to the cloud, join GuidePoint Security, F5 and Thales Security on Feb 27th for our live webinar. Click here to register.



GuidePoint Security Managed Services and Splunk providing value together

Recently, mainstream industry surveying and analyst firms have echoed what security leaders have known for some time: there are not enough skilled security professionals to meet the demand for in-house cybersecurity expertise. This is driving security leaders from all industry segments to consider capable external security services providers to deliver needed expertise. Even organizations that have traditionally preferred or mandated that security resources be staffed internally have begun to explore outsourcing security capabilities. Federal government agencies, which have strict control requirements and historically internal security teams, are increasingly looking externally for capable managed security service providers (MSSPs).

One of the hottest areas of need is Splunk expertise. Agencies need people to install, configure, and run the platform, as well as “eyes-on-glass” SOC analysts who use it to keep agencies secure. While Splunk is an incredibly powerful platform that is taking the Federal government by storm, this demand has made it predictably difficult to find qualified “Splunkers” at a cost government agencies can afford.

The challenge and opportunity for MSSPs like GuidePoint Security is to deliver highly mature services that are compatible with the requirements of government organizations. For example, GuidePoint employs only US citizens based in the United States to manage security services for our customers. GuidePoint vSOC managed services, based on Splunk technology, can be deployed to FedRAMP environments and support FedRAMP controls. These types of capabilities will be key to supporting an increasing government client base.

But government clients do not simply require checkbox compliance; they also expect sophisticated operational capabilities and high levels of service. Agencies expect to maximize the value delivered by the MSSP and to minimize the time and effort of scarce internal security resources. GuidePoint prides itself on delivering white-glove service to its customers by managing SIEM to a higher level than is typical of MSSPs. For example, vSOC analysts validate every Splunk event with the intent of eliminating false positives before providing an alert to clients. GuidePoint has augmented its core service (vSOC Detect) with advanced technologies and processes that integrate natively with Splunk, including extensive threat intelligence enrichment, darkweb threat monitoring, security automation and orchestration, active threat hunting, and managed endpoint detection & response. These capabilities allow GuidePoint to deliver advanced security operations that can significantly augment a client’s internal security capabilities. These service features also offer the level of capability and sophistication required by government clients.

Join us on Thursday, Feb 22nd, for a live webinar to hear more about how GuidePoint’s vSOC managed security services leverage Splunk to provide differentiated SOC-as-a-service to federal agencies. Register now.

Security Tool Consolidation to fight “Tool Sprawl”

I’ve been talking about the problem of “Tool Sprawl” for over four years. I may have made up the term, or acquired it from somewhere else. I don’t remember. But the core idea is that buying a ton of security tools to fill in compliance gaps and spit out alerts doesn’t equate to security.  Even the coolest cyber security technology can be rendered useless if it is part of an avalanche of technology that an enterprise is trying to manage and respond to.

The clearest example of this is the constant problem of misconfigured firewalls, both traditional and next-gen, that have created a whole new category of products centered around validating FW rules and configurations or “Rule Clean Up.”  I’ll start by saying I think that those products are worth it, and I have proposed them to customers and would advocate they be used by any enterprise looking to protect their perimeters.

The problem is that this only addresses one category of product. What about your WAF/ADC, IPS/IDS, AV, EDR, Active Directory, PAM, vulnerability scanners, route/switch, or, *gasp*, something else? Shall I go on? How do we know anything in our network, endpoint, and security tool environments is set up and configured right? Adding more tools to check our tools only compounds the problem of tool sprawl mentioned above.

As a recovering Data Center enterprise architect, and present cybersecurity enterprise architect, my desire is to keep things simple, yet effective. I am drawn to products and services that provide both Security ROI and Financial ROI. Most assume correctly what a Financial ROI is, but what is “Security ROI”? I look at it as quantifiably moving an enterprise’s security posture forward versus the dollars spent. Some quick-hit products in the security field offer high bang for the buck, and Security ROI lets me rank one tool against another. Believe it or not, there are some security tools out there that actually offer a true Financial ROI as well. The best reduce both CAPEX and OPEX costs, as well as the labor overhead needed to manage everything.
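To make that ranking idea concrete (the tools, posture scores, and costs below are invented for illustration; in practice the posture estimate would come from your own assessment), Security ROI can be treated as posture gain per dollar and used to sort candidate products:

```python
# Hypothetical sketch: rank candidate tools by "Security ROI", treated
# here as estimated security-posture improvement per dollar spent.
# Posture scores and costs are made-up illustrative numbers.

candidates = [
    {"tool": "Tool A", "posture_gain": 8.0, "annual_cost": 200_000},
    {"tool": "Tool B", "posture_gain": 5.0, "annual_cost": 50_000},
    {"tool": "Tool C", "posture_gain": 9.0, "annual_cost": 400_000},
]

def security_roi(c):
    # Posture points gained per $100k spent.
    return c["posture_gain"] / (c["annual_cost"] / 100_000)

ranked = sorted(candidates, key=security_roi, reverse=True)
print([c["tool"] for c in ranked])
```

In this made-up example the cheapest tool wins despite the smallest absolute posture gain, which is exactly the kind of result that justifies consolidation decisions.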

The absolute home runs have both Security ROI and Financial ROI.  These are rare of course.  Keep an eye out for our soon to be released Federal whitepaper that will detail more about enterprise architectures and some go-to solutions that do have both. One of those solutions in our whitepaper is called security efficacy testing and automation. Sometimes referred to as “Security Instrumentation”, this software exposes misconfigured security tools and overlapping security products, confirms that security teams are correctly responding to incidents, and allows an agency to continuously validate and improve layered defenses.  Often, deploying a Security Instrumentation platform can immediately improve the security posture of an agency, as well as improve SOC incident-handling processes, with simple changes and little capital expenditure.

This is exactly what enterprise security teams need to battle tool sprawl.  Once you are able to identify what is and what is not working, you can justify consolidation and possible removal of ineffective tools, opening up CAPEX and OPEX for new tools that can fill in the gaps.

Join GuidePoint Security and Verodin on Feb 8th to hear more about security tool consolidation and how government agencies can move their security posture forward with less funds.


Click here to Register for the Feb 8th, 2018 Webinar.



Managing Spectre and Meltdown at Enterprise Scale

1/05/2018 Update:  Apple announced late in the day on 1/4 that its products are vulnerable. Its most recent versions, iOS 11.2.1 and macOS 10.13.2, released before this vulnerability went public, included some fixes. Apple is still working on further updates and will release them at an unspecified time in the future.

The dawn of the new year brings with it a pair of new designer vulnerabilities, Meltdown and Spectre, which affect virtually any CPU made after Intel’s original Pentium CPU, regardless of what operating system it runs.

What is Meltdown and Spectre?

Modern CPUs use a trick called speculative execution to speed up processing. When there is a branch in program code, the CPU runs both possibilities at once, then discards the one it didn’t need. Meltdown and Spectre use different techniques to recover data from those discarded results and access memory that they normally wouldn’t be able to read.

An attacker could use this to steal passwords or credit card numbers, or in the case of cloud infrastructure, steal data from virtual machines belonging to other customers. In cloud environments, it is possible to read data belonging to the hypervisor or other virtual machines.

The biggest problems occur on Intel CPUs. CPUs from AMD and ARM are susceptible to a smaller number of more complex attacks, but still must be considered vulnerable. In enterprise environments, Intel CPUs are far more common than AMD or ARM.

Why should you care?

Almost any computer made in the last 22 years is vulnerable to one degree or another. These vulnerabilities have received a tremendous amount of coverage, even bleeding into the mainstream press, so everyone from customers to board members has likely heard about them and is concerned.

What can you do?

First, don’t panic. So far there are no reports of reliable exploits circulating in the wild. Operating system vendors are releasing patches as we speak. Spectre is difficult to mitigate at the CPU or operating system level, so browser makers are attempting to mitigate it at the browser level, since browsers are both an effective attack vector and an attractive target.

Scan your network with a proven vulnerability scanning solution. Check your results for CVE-2017-5753, CVE-2017-5715, and CVE-2017-5754, and check your web browser versions to build an inventory of patches that will need to be deployed and where. For best results, ensure you are scanning your entire network with authenticated scans. Vendors will be releasing updates through the end of January, so keep in mind, this is a moving target.
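As a small sketch of building that inventory (the export format here is a generic CSV, not any particular scanner’s output), scan results can be filtered down to the three Meltdown/Spectre CVEs by host:

```python
# Hypothetical sketch: filter a generic CSV scan export down to hosts
# affected by the Meltdown/Spectre CVEs, to build a patch inventory.
import csv
import io

MELTDOWN_SPECTRE = {"CVE-2017-5753", "CVE-2017-5715", "CVE-2017-5754"}

# Example export; a real one would come from your scanner.
export = """host,cve,title
db01,CVE-2017-5754,Meltdown
web01,CVE-2017-0144,EternalBlue
web02,CVE-2017-5715,Spectre v2
"""

affected = {}
for row in csv.DictReader(io.StringIO(export)):
    if row["cve"] in MELTDOWN_SPECTRE:
        affected.setdefault(row["host"], []).append(row["cve"])

print(affected)
```

Re-running the same filter after each scan gives a simple, repeatable view of which hosts still need patches as vendors release updates through January.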

Chrome, Edge, Firefox, and Internet Explorer all received updates this week. Chrome will receive another update by January 23. Safari, Opera and Vivaldi will receive updates on or before January 31. Additionally, Google recommends enabling site isolation in Chrome. Opera and Vivaldi have the same feature. This setting is in chrome://flags/#enable-site-per-process.

If your vulnerability management platform is capable of scanning your mobile device management solution, scan your MDM solution as well to ensure your Android devices are running the January 2018 update from Google, and your iOS devices are running iOS 11.2.1 from Apple.

Microsoft released out-of-band updates for this, but its patch has issues with many third-party antivirus solutions. Unless you have other information direct from your antivirus vendor, GuidePoint Security recommends waiting until Monday for your antivirus vendor to catch up. On Monday, push the update to your antivirus client, then start pushing Microsoft’s update.

Patch in a controlled, prioritized fashion. Workstations and cloud infrastructure are the most critical, as they are most susceptible to attacks. Servers running on hardware you control are much more difficult to exploit, so they can be in your later round of patching. If possible, patch a test environment first so you can monitor for performance impact, as servers that do large amounts of I/O, such as database and web servers, can experience performance degradation of 20 or even 30 percent. Google and Intel have experimental mitigations to help with these degradations in the long term. However, these fixes will require recompiling code so these changes will take time to appear.

After patching, be sure to follow up with subsequent vulnerability scans. GuidePoint engineers have observed Microsoft’s patch giving false error messages that suggest the patch failed when, in fact, it had succeeded. Your vulnerability management solution has more thorough checks that can validate the patch actually succeeded. Microsoft is working on an update for this patch to fix the error messages.

If you cannot update all of your browsers, consider updating one browser and limiting general web access to that browser at your proxy server until you can update the rest. Please note that technologies like Microsoft EMET and Malwarebytes Anti-Exploit, while very useful against certain types of exploits, cannot protect your browser against Spectre and Meltdown.
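As a rough illustration of the single-browser approach, a proxy rule can gate general web access on the User-Agent string of the one browser you have patched. The sketch below is hypothetical Python, not a real proxy configuration (Squid, Zscaler, and others each have their own rule syntax), and the minimum version number is an assumption for illustration only.

```python
import re

# Permit general web access only from the one browser you have patched.
# MIN_MAJOR is an illustrative threshold, not an authoritative version number.
ALLOWED = re.compile(r"Chrome/(\d+)")
MIN_MAJOR = 63

def allow_request(user_agent: str) -> bool:
    """Allow only the designated browser at or above the patched version."""
    m = ALLOWED.search(user_agent)
    return bool(m) and int(m.group(1)) >= MIN_MAJOR
```

Note that real rules need more care than this: some other browsers (e.g., EdgeHTML-era Edge) also embed "Chrome/" in their User-Agent strings, and User-Agent values can be spoofed, so this is a convenience control rather than a security boundary.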

GuidePoint Security is here to help

GuidePoint’s cybersecurity advisors have years of experience managing vulnerabilities in enterprise environments. We can help you ensure your vulnerability management solution is correctly sized for your environment, and our Virtual Security Operations Center (vSOC) Identify Team can even run your vulnerability management program for you. Learn more at


Dave Farquhar, vSOC Analyst at GuidePoint Security, is a cybersecurity professional who has worked in the field for eight years, focusing on vulnerability management, policy compliance, and incident handling. Dave most recently managed accounts for 30 large customers at a major vulnerability management vendor, where he helped his most successful clients reduce their vulnerability counts by 50 percent. Prior to moving into security, Dave specialized in remediation management on the infrastructure side of IT. Dave holds a Bachelor's degree in Journalism from the University of Missouri as well as CISSP and Security+ certifications.

An Incident Responder's Take on 2018's Cybersecurity Predictions

In his article The Top 18 Security Predictions for 2018, Dan Lohrmann rounded up the cybersecurity industry's top predictions from major vendors, including Trend Micro, McAfee, Symantec, Check Point, and others.

As with any set of predictions, readers will agree, agree in part, or disagree entirely. I place myself in the second category, though there are a few salient points that I believe should be added.

Additionally, I am going to add a bit of fidelity to these predictions based on my market visibility and experience. You will see some similarities and some differences of view, but remember: they are based on my exposure to the industry, GuidePoint Security's customer base, independent research I have performed, and input I have received from other valued Digital Forensics and Incident Response (DFIR) professionals.

Without further delay, here are my thoughts on the Top Security Predictions for 2018.

1) IoT devices will be the key victims of ransomware.

a. Many IoT device manufacturers have implemented minimal security safeguards, making these connected devices low-hanging fruit for attackers.
b. Moreover, these devices are relatively easy to target, have a highly visible public impact, and ransomware continues to provide a healthy profit margin for attackers. I expect the combination of these factors to lead to a significant uptick in successful ransomware attacks against IoT devices in 2018.

2) Most companies will take definitive action on the General Data Protection Regulation (GDPR) but only after the first set of high-profile fines or lawsuits are filed.

a. GDPR is the latest set of requirements that has companies scrambling to meet a compliance deadline, but few companies have invested the time and resources required to be properly prepared by May.
b. Also, with the power the EU wields, the European assets of American companies can be seized.

3) Malspam will increase and will focus on account compromises for Outlook Web Access (OWA) and Office 365 (O365) email/account access. Additionally, unsecured AWS and Azure environments could lead to large-scale compromises.

a. A large number of companies are moving their email and Office environments to OWA and O365, and their workloads to Azure and AWS. As is often the case, security requirements are left out of these migrations in the haste to reach the new environment. (Remember, 2017 already saw an uptick in the number of publicly accessible S3 buckets discovered, and there is nothing to suggest this will not continue well into 2018.)
b. Overall, malspam attacks are easy to execute and require only gullible end-users to succeed.
c. Malspam success is based on the Human Element (HE), and you can never remove HE from cybersecurity; hence it will remain the weakest link in the chain.
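On the unsecured-cloud-storage point, a publicly readable S3 bucket can be spotted from its ACL grants. The sketch below mirrors the grant structure returned by boto3's get_bucket_acl(), but the data is hand-written for illustration; in practice you would enumerate real buckets with AWS credentials.

```python
# Grantee URIs that expose a bucket to everyone (or every AWS account).
PUBLIC_GRANTEES = {
    "http://acs.amazonaws.com/groups/global/AllUsers",
    "http://acs.amazonaws.com/groups/global/AuthenticatedUsers",
}

def is_public(grants):
    """Return True if any ACL grant exposes the bucket publicly."""
    return any(
        g.get("Grantee", {}).get("URI") in PUBLIC_GRANTEES for g in grants
    )

grants = [
    {"Grantee": {"Type": "Group",
                 "URI": "http://acs.amazonaws.com/groups/global/AllUsers"},
     "Permission": "READ"},
]
print(is_public(grants))  # prints True: this bucket is world-readable
```

A check like this, run periodically across all buckets, catches the accidental exposures that made headlines in 2017.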

4) Companies in the cryptocurrency business will see the most attacks in 2018, with one or more declaring bankruptcy from the losses suffered in the attacks.

a. 2017 was a banner year for attacks on cryptocurrency businesses, with at least one cryptocurrency exchange being hacked twice and then filing for bankruptcy (e.g., Youbit[1]).

5) Non-malware and fileless malware attacks will dominate the tech industry.

a. These types of attacks were dominant and profitable in 2017 and I see them gaining strength in 2018. Many companies are ill-prepared to deal with these types of attacks, and the attackers are well aware of this weakness.

6) The corporate cyber insurance industry will suffer large financial losses in 2018. The losses will not set an industry record, but claims will reach record levels.

a. I think the cyber insurance industry has a significant amount of maturing and change to accomplish in 2018.
b. The cyber insurance market will continue to explode. However, the common underwriting frameworks and processes for measuring policy risk have lagged behind the policy writing.
c. I also believe current cryptocurrency businesses are improperly designed and are too high a risk for the cyber insurance market.

7) New POS malware variants will emerge in 2018 focusing on EMV/Chip-and-PIN technologies, along with an increase in ransomware targeting POS devices.

a. This is a bit of a reach for me, but I refuse to believe the crime syndicates are not testing or trying to target Chip and PIN.
b. Ransomware on a POS device is simple, easy, cheap, and effective, and we will see it deployed effectively against retailers in 2018.

8) Online gaming agents will be used as bots in a DDoS attack. It is only a matter of time before this "innocent" avenue is exploited, and with the wide distribution of online gaming, these bots will be a force to reckon with in 2018.

a. This attack vector isn't new, but it is often overlooked. I have been waiting five years for this to happen, and I think we are at the point in cyber history where we will witness this type of massive, globally distributed attack.

9) Increase in malware that targets PLC-type devices. Much as we saw with the Trisis malware, PLC device manufacturers are unaware of how exposed their devices are to exploitation, and this type of targeting, especially for a ransomware attack, can be extremely profitable.

a. With PLC devices connected to the internet and/or internal networks, most are unprotected, and the large industrial corporations that use them have deep pockets. A well-planned ransomware attack could therefore produce a massive payout.

10) NIST 800-171/DFARS standard violations will outpace the US Government's ability to contract, and waivers will be issued to lessen the impact.

a. Many of the companies that claim compliance with 800-171 have scrambled to put a basic compliance program in place just to meet the assessment criteria.
b. The DoD will need to make contracting adjustments to its FAR supplement in order to keep up with DoD contracting demands.

Well, I guess it is a matter of record now, so we will have to revisit my prognostications in 2019 and see how close I was with each one.

Happy New Year everyone!




Bill Corbitt, National Practice Director for Digital Forensics Incident Response & Forensic Intelligence at GuidePoint Security, is a seasoned, results-oriented leader with extensive corporate, federal, and international experience dealing with cybersecurity, forensic, and incident response dilemmas. In addition to his demonstrated success in aligning security results with business requirements, Bill is recognized for his ability to implement accurate cyber-countermeasures that protect intellectual property and reduce cybersecurity risk on a global scale. A respected strategist within the forensic and incident response communities, Bill holds a Bachelor of Science degree in Criminal Justice from Valdosta State University.

2017: The Year of Non-Malware Attacks

What is a “non-malware” attack?


A non-malware attack is an attack that does not use malware. Simple.

More realistically, a non-malware attack is one in which an attacker uses existing software, allowed (remote access) applications, and authorized protocols (e.g., RDP, SSH) to carry out malicious activities on your network.

In a non-malware attack, the threat actor uses this accessible software to gain entry into the targeted network, take control of the victimized computers, and from there perform all manner of nefarious actions, all within "full view" of existing security safeguards.

These native tools grant users extensive rights and privileges to run basic commands across a network, commands that will eventually lead to your valuable data. With a non-malware attack, the victim has built into its traditional business model all the tools and access the threat actor needs to be successful. Yes, you may have made the bad guy successful.

Without proper monitoring, the victim has, with legitimate business software (e.g., PowerShell, UltraVNC, TeamViewer, DesktopNow)[1], opened the front door to the kingdom and welcomed the threat actor with a big, warm hug and a hot cup of coffee.

In a recent Carbon Black report[2], the researchers note that "virtually every organization included in this research was targeted by a non-malware attack in 2016." The same report states that non-malware-based attacks increased by 92 percent in 2016.

The Carbon Black report also lists the most common types of non-malware attacks researchers reported seeing, and the percentage who saw them: remote logins (55%), WMI-based attacks (41%), in-memory attacks (39%), PowerShell-based attacks (34%), and attacks leveraging Office macros (31%).[3]

Remember, I am not saying these remote access utilities have no legitimate use. What I am pointing out is that non-malware remote access utilities, properly managed and not used in an ad-hoc fashion, can be very useful. However, once you add in the hubris of the Human Element (HE), that is hardly ever the case, and security professionals are left scrambling to distinguish authorized from unauthorized use and access, which is quite time-consuming.
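One way to start separating authorized from unauthorized use of these utilities is to compare process-creation telemetry against a pre-approved user list. The tool names, accounts, and event shape below are hypothetical; real data would come from your EDR or Windows process-creation logs.

```python
# Illustrative triage helper: flag remote-access tool executions by users
# who are not on the pre-approved list. All names here are placeholders.
REMOTE_TOOLS = {"teamviewer.exe", "uvnc.exe", "mstsc.exe", "powershell.exe"}
AUTHORIZED_USERS = {"it-remote-support", "sre-oncall"}

def flag_events(process_events):
    """Return events where a remote-access tool ran under an unapproved account."""
    return [
        e for e in process_events
        if e["image"].lower() in REMOTE_TOOLS
        and e["user"].lower() not in AUTHORIZED_USERS
    ]

events = [
    {"image": "TeamViewer.exe", "user": "it-remote-support"},
    {"image": "powershell.exe", "user": "jdoe"},
]
print(flag_events(events))  # only jdoe's PowerShell launch is flagged
```

A simple allowlist like this will not catch a threat actor who compromises an authorized account, but it turns the "who is allowed to use what" question into an automated check rather than a manual scramble.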

What makes a non-malware attack work?

What makes a non-malware attack so successful? The answer is simple: we give the threat actor all the tools they need to be successful. We (the royal "we") fully equip the threat actor with all the necessary tools and access simply by going about our normal daily activity and business.

Some of the more famous non-malware attacks and attack trends include the attack against the Democratic National Committee (DNC) and the "PowerWare"[4] campaign tracked by the Carbon Black teams.

Remember, the basis of a non-malware attack is to gain a toe-hold with little threat of detection. From there, the threat actor determines how to propagate the attack internally.

Why are non-malware attacks so hard to prevent and detect?

Traditional security approaches will likely prove ineffective at detecting non-malware (malicious) attacks, because traditional security platforms, and most modern ones, were not designed with non-malware attacks in mind.

In addition to GuidePoint's IR experience, Carbon Black[5] has performed extensive research on non-malware-based attacks and published its findings. Unfortunately, traditional antivirus (A/V) is ineffective at detecting these attacks, and security professionals should consider technologies that incorporate Artificial Intelligence (AI), Machine Learning (ML), and User and Entity Behavior Analytics (UEBA) to thwart them effectively.

Traditional A/V was never designed to detect non-malware attacks. It is essentially a signature-based threat detection platform that typically triggers only when a known malware signature is written to disk, and non-malware-based attacks are not identified as malware.


“AI and ML’s roles in preventing cyberattacks have been met with both hope and skepticism. They have been marketed as game-changing technologies though doubts still persist, especially when used in siloes. Their emergence is due largely to the climbing number of breaches, increased prevalence of non-malware attacks, and the waning efficacy of legacy antivirus (AV)”.[6]

Real-World Example

In one real-world example of a non-malware attack, the GPS/DFIR team responded to a customer request to analyze anomalous network activity that the customer's security team had been witnessing for a couple of months (yes, months).

The incident responders began by monitoring a select set of endpoints and network segments. The GuidePoint Security Digital Forensics & Incident Response (GPS/DFIR) team soon determined that no remote-access malware was present and that network and system access had been gained through compromised accounts via a non-malware attack.

This was a complex DFIR investigation involving multiple security and forensic disciplines, 24/7 monitoring of all network segments, and an enterprise-wide deployment of high-fidelity endpoint sensors. Customized onsite databases also had to be designed so that all sensor data could be aggregated and analyzed in near real time.

The end result was a lengthy engagement with multiple forensic responders chasing and tracking the threat actor inside a global network.  The threat actor was using non-malware techniques, system administration tools and a variety of security tools to compromise user accounts, escalate privileges, access systems and exfiltrate data for profit.

Defense for non-malware based attacks

Remember, non-malware attacks use legitimate software to perform malicious activity. Defending against them requires a proper, holistic security strategy, one that pairs enterprise-level endpoint visibility with advanced UEBA analysis to enable your overall investigative, cyber-hunt, and security efforts.

GPS/DFIR has a track record of investigating and analyzing non-malware-based attacks, and with the combined strategic arm of GuidePoint's security experts and our knowledge of the available security platforms, we can help define the best short- and long-term security roadmap for your organization.

As a basic defense, there are some "snapshot" remedies that can be implemented easily:

  • Allow only a few justified remote access applications (e.g., Windows RDP, TeamViewer) in your environment, and ensure all remote access requires multi-factor authentication.
  • Because some applications can be manipulated or replaced, identify forensically hashed versions of the authorized binaries:
    • Share those authorized forensic hash values with your security and IR teams.
    • Load the authorized hash values into any whitelisting or A/V applications.
  • Allow only a pre-defined group of employees with a legitimate business need to use the remote access applications.
  • Give your internal security and IR teams the list of who is authorized to use the remote access software.
  • Have employees read and sign an "Acceptable Use" policy for the software or applications.
  • Develop internal security alerts and rules that identify anomalous behavior and connections, and alert and respond to those "out of parameter" activities.
  • Educate your employees about the vulnerabilities of such applications.
  • Incorporate all non-malware investigative and response activities into your IR plans and run-books.
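The hash-allowlisting steps above can be sketched in a few lines: compute a SHA-256 for the binary on disk and accept it only if it matches a forensically verified value. The paths and hash values in any real deployment would come from your own baseline, not this sketch.

```python
import hashlib

def sha256_of(path: str) -> str:
    """Compute the SHA-256 of a file, reading in chunks to handle large binaries."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def is_authorized(path: str, allowlist: set) -> bool:
    """True only if the file on disk matches a known-good hash value."""
    return sha256_of(path) in allowlist
```

Feeding the same allowlist to both your IR team and your whitelisting/A/V tooling, as the bullets suggest, means a swapped or tampered remote-access binary fails the check everywhere at once.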

The first line of defense in any effective security organization is the Human Element (HE). With proper education and training, employees can and typically do provide significant feedback on unusual or questionable behavior. Open lines of communication across all business units can only benefit your organization's overall security posture.


In conclusion, as the real-world example shows, forensic analysis confirmed that this particular threat actor, using non-malware attack methods, had been active on the global network for over two years. Essentially, most of their malicious activity was completely cloaked within the victim's daily business activity, and they were able to operate autonomously.

This real-world example is playing out every day in companies all over the globe. And as GPS/DFIR witnessed here, a talented security team recognized the threat but also recognized its own limitations and asked for outside help.

Non-malware attacks will never go away. Rather, we strongly believe they will only increase in number and complexity, and we strongly recommend you ensure your organization is prepared to deal with this growing threat.







