What Universities and Colleges Do the Best Hackers Come From? Here’s One in Particular: UCF!

University of Central Florida (UCF) won 1st place at the National Collegiate Cyber Defense Competition (CCDC), held April 25-27 in San Antonio, TX.  This annual competition was started in 2005 in conjunction with the Department of Homeland Security to improve cyber security education and increase the number of highly qualified cyber security graduates in the U.S.  The championship brings the best of the best hackers from universities all over the U.S. to fend off cyber-attacks from professional penetration testers and ethical hackers.

Among the winners are two GuidePoint Security interns: Carlos Beltran, captain of the winning UCF team, and Alex Davis, a fellow team member.

“What sets UCF’s CCDC team apart from others is that they focus on the hacker’s perspective. We want to know how they got in, in order to keep them out,” explained Carlos Beltran.  “There are many reasons I chose to work at GuidePoint Security, but the main reason is that I have a strong desire to learn about security and the methodologies used to perform in real-world environments. GuidePoint Security gives me the opportunity to better my trade and excel in the areas I want.”

“Competitions like this foster learning about computer security, which is something businesses need, as shown by recent breaches like the one at Target. The people coming out of competitions like CCDC will help prevent data breaches such as the ones we have seen in the news,” said Alex Davis.  “I like learning how technology works, and I discovered that the best way to learn how things work is to focus on how systems can be exploited, and how to secure them. I enjoy the field, and working at GuidePoint Security allows me to do what I enjoy.”

According to the WSJ article, “University of Central Florida wins 2014 Raytheon National Collegiate Cyber Defense Competition,” more than 180 colleges and universities and 2,000 undergraduate students participated in the competitions that led up to this year’s national championship.  The Raytheon website listed the following 10 regional champions, with UCF taking first place nationally:

  • University of Central Florida – Southeast Regional
  • Air Force Academy – Rocky Mountain Regional
  • Dakota State University – North Central Regional
  • University of Alaska, Fairbanks – At Large Regional
  • Southern Methodist University – Southwest Regional
  • Rochester Institute of Technology – Northeast Regional
  • Western Washington University – Pacific Rim Regional
  • University of California, Berkeley – Western Regional
  • Towson University – Mid-Atlantic Regional
  • Northern Kentucky University – Midwest Regional

“While the competition has existed since 2005, UCF only very recently started competing. Shortly after I started teaching at UCF in August 2013, two of my students approached me to ask if I would be willing to sponsor a UCF team for this competition.  I realized the tremendous opportunities this competition would provide for our students.  I eagerly agreed, and this is the second year UCF has entered a team.  It is also our second appearance at the National competition.  Each year, the team enters a virtual qualification round.  Eighteen teams from our 7-state Southeast region entered the qualification round, including UCF, FSU, and USF from Florida.  The top 8 teams from the qualification round are invited to compete in a regional competition.  The UCF CCDC Team finished 1st in the Southeast Collegiate Cyber Defense Competition held in Kennesaw, GA in both 2013 and 2014.  The regional winner earns the privilege of competing in the National Collegiate Cyber Defense Competition in San Antonio, TX along with the winning teams from the other 9 U.S. regions.  In 2013, the UCF CCDC Team finished 10th nationally in our very first year of competition.  This year, the UCF team captured the national title as the top Collegiate Cyber Defense Team in the nation,” explained Dr. Thomas Nedorost of the Department of Electrical Engineering & Computer Science at the University of Central Florida.

2014 UCF Champs

Photo courtesy of UCF Collegiate Cyber Defense Club

The winning members of the 2014 UCF Collegiate Cyber Defense Competition Team are:

  • Carlos Beltran, Team Captain
  • Jason Cooper, Team Co-Captain
  • Austin Brogle
  • Alexander Davis
  • Kevin DiClemente
  • Dale Driggs
  • Grant Hernandez
  • Mark Ignacio
  • Heather Lawrence
  • Troy Micka
  • Cody McMahon
  • Joe Pate

“The team’s strength lies in their teamwork, cross-training, and dedication to continue learning and improving,” said Dr. Nedorost.  “National CCDC brings together the top 10 cyber defense teams in the nation.  Having the ability to compete at this level is an honor in itself.  The level of competition is fierce.  Seeing UCF bring home the Alamo Cup, the 1st Place trophy, is priceless.”

“We are very proud of our two interns who worked so hard and won this challenging competition,” said Michael Volk, Managing Partner at GuidePoint Security. “Congratulations to all who made it and to all who participated!”

About GuidePoint Security, LLC
GuidePoint Security provides customized, innovative and valuable information security solutions and proven cyber security expertise that enable commercial and federal organizations to successfully achieve their security and business goals. By embracing new technologies, GuidePoint Security helps our clients recognize the threats, understand the solutions, and mitigate the risks present in their evolving IT environments. Headquartered in Reston, Virginia, with offices in Michigan, New Hampshire, Florida and North Carolina, GuidePoint Security is a small business; its classification can be found in the System for Award Management (SAM). Learn more at www.guidepointsecurity.com.

Security Visibility in the Cloud – Logging and Monitoring in AWS

By now we’re all well aware that there is a virtually limitless number of logging and monitoring solutions available on the market. Visit the Amazon Web Services (“AWS”) Marketplace, and you’ll find plenty of options. In fact, it gets really crazy when you start comparing security monitoring versus application performance monitoring, with solutions often performing one role better than the other, or handling only one of the roles altogether. What’s interesting to me is the lack of common enterprise logging and monitoring solutions available in the AWS Marketplace. Obviously you can deploy instances to run implementations of solutions like ArcSight, McAfee, LogRhythm, or NetIQ, but Splunk is the only well-known commercial provider with solutions available in the Marketplace.

Now, that’s just the commercial side… what about open source?  Let’s cover a few terms first, for those new to centralized logging.

Shipper – a system agent that collects and forwards, or ships, system and application logs to a centralized server.

Collector / Broker – a message broker is a system that collects and queues logs as an intermediary step to indexing the logs centrally for analysis, monitoring, and alerting. Its primary purpose is to ensure you don’t lose messages when or if your indexer falls behind, crashes, or otherwise becomes unavailable to receive logs.

Collector / Indexer – a system used to collect, parse, and store logs for searching, analysis, monitoring, and alerting.

Dashboard / Visualizer – the dashboard aids log analysis by providing a search interface and, in some solutions, alerting.
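To make these roles concrete, here is a minimal, hypothetical sketch (in Python, using the redis-py package) of the shipper-to-broker hop: it follows a log file and pushes each new line onto a Redis list, where a collector/indexer such as Logstash could pick it up. The log path and list name are placeholders of my own, and a real shipper would add batching, reconnection, and parsing.

```python
# shipper_sketch.py - toy log shipper: follow a log file and push new
# lines onto a Redis list acting as the broker queue.
# Assumes: Redis on localhost, and a downstream indexer reading the
# "logstash" list. Path and key names are illustrative only.
import time

import redis  # pip install redis

broker = redis.Redis(host="localhost", port=6379)

with open("/var/log/syslog") as log:      # placeholder log source
    log.seek(0, 2)                        # start at the end of the file
    while True:
        line = log.readline()
        if not line:
            time.sleep(0.5)               # wait for new entries
            continue
        broker.rpush("logstash", line.strip())  # enqueue for the indexer
```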

Open source logging and monitoring solutions abound, and like the well-known commercial solutions missing from the AWS Marketplace, they are typically implemented on purpose-built instances within your AWS Virtual Private Cloud (you are using a VPC, right?).  So what comprises an open source, centralized logging and monitoring solution?

Log shippers like Nxlog, Logstash, Lumberjack, and Fluentd; brokers like Redis, RabbitMQ, and ZeroMQ; indexers like Elasticsearch and… well, Elasticsearch seems to be the industry standard as far as open source goes, though plenty of folks use centralized syslog-ng or Rsyslog.  Dashboards such as Graylog2 and Kibana (for Elasticsearch visibility, I like ElasticHQ), plus security agents like OSSEC, complete the architecture.
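To see what the indexer side looks like, here is a small sketch of my own (not from any product documentation) that stores a log event directly in Elasticsearch over its REST API and searches it back. The index name, document type, and field names are made up for illustration; in a real deployment, Logstash would be doing the writing.

```python
# es_sketch.py - index one log event into Elasticsearch, then search for it.
# Assumes Elasticsearch on localhost:9200. The /logs/event path matches the
# 1.x-era API current when this was written; newer releases use /logs/_doc.
import json
import urllib.request

ES = "http://localhost:9200"

event = {
    "host": "web-01",
    "message": "error: connection refused",
    "@timestamp": "2014-04-23T14:49:08Z",
}

# Index the event (this is the step a shipper/indexer pipeline automates).
req = urllib.request.Request(
    ES + "/logs/event",
    data=json.dumps(event).encode(),
    headers={"Content-Type": "application/json"},
    method="POST",
)
print(urllib.request.urlopen(req).read().decode())

# Full-text search for the event. Elasticsearch refreshes indices
# periodically, so a just-written document may take a moment to appear.
with urllib.request.urlopen(ES + "/logs/_search?q=message:error") as resp:
    print(json.dumps(json.loads(resp.read()), indent=2))
```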

So, with all of these solutions available, why do I run into so many clients already in AWS, or moving to AWS, that have insufficient logging and monitoring, or worse, none at all in their Cloud environment? Because Logging and Monitoring is Hard. Don’t get me wrong; it doesn’t require a rocket scientist on staff to get one or more of these commercial or open source solutions deployed, but there is preparation, communication, research, and other work that has to be done to implement logging and monitoring properly. I spent over a week researching available solutions and building out proofs-of-concept in my virtualized lab to determine which ones met my needs. That is the most critical point to take away from this article: there is no one right or wrong way to implement logging and monitoring in your AWS Cloud. As with all things IT, there is more than one way to accomplish your technical and business objectives. The trick is to find the right way for your organization.

Let’s look at some of the decision criteria that will come into play; this is not an exhaustive list:

People

  1. What expertise is available from my current staff – network engineering, development (if so, which languages), information security, incident handling, etc.?
  2. Do we have experience with a particular commercial solution?  A particular open source solution?
  3. Should I train existing staff, or hire staff with the relevant experience?
  4. Should I forget about managing this myself altogether and go with a Managed Services Provider?

Process

  1. Have we defined and documented the metrics we care about, and established a policy and process around ensuring this data is available and utilized?
  2. Have we defined and documented our business objectives behind logging and monitoring?
  3. Have we defined and documented regulatory mandates related to logging and monitoring? How do we keep our requirements and this documentation current?
  4. Have we determined roles and responsibilities involved in supporting the logging and monitoring initiative?

Technology

  1. Have we defined and documented technical requirements for our logging and monitoring solution? How do we architect our solution?
  2. Have we researched available options, and documented their strengths and weaknesses with regard to operating in our environment or culture?
  3. How do we facilitate a demonstration, proof-of-concept, or evaluation of the targeted solution?
  4. What do we log?  Where do we store logs?
  5. How do we alert appropriate personnel when a problem has been detected?


After extensive research and comparison of features and functionality, I decided on a hybrid ELK Stack for this case study.  The ELK Stack comprises Elasticsearch, Logstash, and Kibana; I also added Graylog2 to support alerting, and OSSEC for file integrity monitoring and host intrusion prevention. There are numerous guides on the Interwebs to assist with deploying these solutions, so I will not go into installation and configuration in this post. I may write another article later to cover installation and configuration, but I’ve included links at the bottom of this post to all of the resources I used to get up and running. Note that, although this entire process covered a full week, the bulk of the final deployment was completed in roughly 12 hours. I built the final environment on the AWS Free Tier, but didn’t even finish rolling out the dashboards before the Logstash Shipper/Logstash Collector/Elasticsearch Indexer combination on the central server decimated the t1.micro instance (Ubuntu 12.04) I had deployed it on (Java consumed all available memory). Rather than tune the overwhelmed box in an attempt to stabilize it, I took advantage of being in AWS and scaled up to an m1.small instance – problem solved.  In total, I spent less than five dollars on my, admittedly limited, proof-of-concept.


Figure 1: Kibana 3… Dead Sexy!

Take a look at the components I selected:

  1. Log Shipper – Logstash on Linux servers, Nxlog on Windows servers. Although Logstash is cross-platform and perfectly capable of shipping Windows Event Logs as well as IIS and MSSQL logs, the author of Nxlog convinced me why Nxlog is better for Windows.
  2. Broker – This case study doesn’t incorporate the use of a Broker. I was originally going to include RabbitMQ in the architecture, but version dependencies led me down a path that was in danger of kludging up the whole study. In a production environment, you definitely need to use a broker to provide scalability and resiliency, but I pushed onward without including it.
  3. Indexer – Elasticsearch. Ridiculously easy decision for me, since Windows servers are in my test environment, and I was interested in testing something other than syslog.
  4. Dashboard / Visualization – Kibana 3 is dead sexy, and I’m an eye-candy kind of guy. I’d gone into this planning to use just Graylog2, since it is a great visualization tool itself and includes alerting capability, but after seeing screenshots of the new and improved Kibana 3.x, I couldn’t help deploying it, too. Regarding alerting, Nagios is often used in concert with Graylog2 for its ability to “roll up” alerts. If you’re interested in configuring email alerting/alarms for your Graylog2 deployment, Larry Smith has a great blog post to get you started here. Last, I also installed the ElasticHQ plugin to monitor the health of my one-node cluster (a scripted spot-check of cluster health is sketched just after this list).
  5. As an aside, I also deployed OSSEC to the Linux and Windows servers for file integrity monitoring and intrusion prevention.
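As a taste of what ElasticHQ is watching for you, here is a hypothetical spot-check of my own that polls Elasticsearch’s cluster health endpoint directly. The endpoint and the green/yellow/red status field are standard Elasticsearch; the alert handling is just a placeholder print.

```python
# health_sketch.py - poll Elasticsearch cluster health; a starting point
# if you want scripted alerting alongside (or instead of) a dashboard.
import json
import urllib.request

with urllib.request.urlopen("http://localhost:9200/_cluster/health") as resp:
    health = json.loads(resp.read())

print("cluster status:", health["status"])  # green, yellow, or red
if health["status"] != "green":
    # Placeholder: in a real deployment, page or email the on-call here.
    print("cluster is degraded - investigate before the indexer falls behind")
```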

Figure 2: ElasticHQ… Elasticsearch cluster health, and a whole lot more!

A note about the final deployment: ultimately, the redesigned, recently released Graylog2 v0.20.1 didn’t work out like I’d hoped. Everything was running smoothly, and based on configuration guidance and the absence of error output, it seemed I was set up properly, but I never saw the data from Elasticsearch in Graylog2. I spent the last few moments I had allocated to this project experimenting with alternate configurations, and finally strayed so far from my working example that I had to give up. So, after a week of research and implementation time, a diagram of what we have can be seen in Figure 3.


Figure 3: AWS Logging and Monitoring PoC Architecture

This was a trivial setup – I’m using a single box for a local Logstash shipper, the Elasticsearch indexer, MongoDB for Graylog2, and three different web interfaces. In a production system, ensure you use a more appropriate architecture: separate each component, utilize multiple Availability Zones, insert a broker to receive messages from log shippers, utilize SSL, and so on.

Although I didn’t have enough time to sort out Graylog2 and get alerting configured, I’m pleased with the overall outcome of my security visibility experiment. I found OSSEC to be an excellent “partner” in my quest for visibility, despite only utilizing and documenting the file integrity portion of its functionality.


Figure 4: OSSEC Web UI

Nxlog works perfectly for shipping Windows event logs, and of course, the lovely Kibana ties everything together and puts a nice bow on the concept of visualization.

Kibana_event_analysis

Figure 5: Analyzing events with Kibana 3


Although this was not a terribly difficult experiment from a technical perspective, I still wondered, “Is there another | quicker | better way to gain security visibility in AWS?”  Well, yes and no. Yes, there’s an easy way to get security visibility, plus AWS automation to boot; no, because despite this gem of AWS security visibility, I will still recommend a centralized logging and monitoring platform in AWS. So, what’s this solution, you ask? CloudPassage Halo. But wait, there’s more! Halo has an API that has made it possible for several SIEM solutions to integrate with it, sharing the Halo security visibility love in a centralized way within your existing, or planned, logging and monitoring deployment.

Halo has enough features and functionality to warrant its own blog post, so I won’t go into those here. Suffice it to say, anyone looking for security visibility, automation, or both in AWS should definitely have a look at what CloudPassage has to offer.

 


Figure 6: Windows security events captured by Halo

Conclusion

Logging and monitoring is hard, but there are more than enough commercial and open source tools available to fit any size organization, with any size of budget. Attaining security visibility and appropriate incident handling isn’t just the right thing to do from a best-practice perspective; many standards, regulations, and laws mandate them. So, regardless of the type of solution or solutions you select, choose and implement something, and gain insight into the security incidents you may have no idea are happening. After all, inadequate visibility is better than no visibility at all.

For additional information on this subject and the opportunity to ask questions, please click here to register for our Webinar, “Security Visibility in the Cloud – Logging and Monitoring in AWS,” taking place May 1st at 2pm (EST).

 

Heartbleed – Security Technology Vendor Information

Based on the requests of our clients, as discussed in our previous blog post “The Heartburn of Heartbleed,” below is a list of security technology vendor information pertaining to the Heartbleed bug. This list will be regularly updated to provide you with timely information on the security technology vendors that you rely on to protect your organization.

Last Updated: Friday, April 18, 2014 8:19 EDT


The Heartburn of Heartbleed

The Heartbleed Bug is a dangerous vulnerability in OpenSSL.  It potentially allows the compromise of encrypted information that, under normal conditions, is secured by SSL/TLS. SSL/TLS provides communication security and privacy over the Internet for applications such as web, email, instant messaging (IM), and some virtual private networks (VPNs).

The following versions of OpenSSL are/are NOT vulnerable to the Heartbleed bug:

  • OpenSSL 1.0.1 through 1.0.1f (inclusive) ARE vulnerable;
  • OpenSSL 1.0.1g is NOT vulnerable;
  • OpenSSL 1.0.0 branch is NOT vulnerable; and
  • OpenSSL 0.9.8 branch is NOT vulnerable.
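If you are triaging a pile of systems, a version-string check is a quick first pass. Below is a minimal sketch of my own encoding the ranges above; it only classifies version strings you feed it (from inventory or package-manager queries) and does not probe anything over the network.

```python
# heartbleed_versions.py - classify OpenSSL version strings against the
# vulnerable range (1.0.1 through 1.0.1f inclusive). Input strings are
# whatever your inventory or package manager reports.
VULNERABLE = {"1.0.1"} | {"1.0.1" + letter for letter in "abcdef"}

def is_vulnerable(version: str) -> bool:
    """True if the OpenSSL version falls in the 1.0.1-1.0.1f range."""
    return version.strip() in VULNERABLE

if __name__ == "__main__":
    for v in ("0.9.8y", "1.0.0l", "1.0.1f", "1.0.1g"):
        print(v, "VULNERABLE" if is_vulnerable(v) else "not vulnerable")
```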

I won’t go into the technical details of this vulnerability, since that has been covered extensively elsewhere. If you are looking for that level of information, I recommend the following analyses:

There are several misconceptions about Heartbleed. Common ones include focusing only on end-servers (for example, assuming you are safe because they run Microsoft IIS) while failing to evaluate upstream technologies like reverse-proxies and load-balancers, and having users change passwords before a fix has been implemented.

What I do want to discuss is a recommended approach for organizations dealing with the “Heartburn of Heartbleed.” We’ve had numerous clients reach out to us asking for assistance in several areas: identification, remediation, and working with security technology vendors to determine when their fixes will be ready. Here is what we recommend for each of these stages:

Identification

There’s been an influx of tools and scripts made available to identify the Heartbleed vulnerability. Below are ones that meet specific use cases:

Manual

Scripting

Vulnerability Management Tools

Remediation

This is where the heartburn starts. Hopefully, your organization maintains a Threat Management Program and you have already addressed your high-risk assets. If not, here is a triage approach:

  1. Patch perimeter systems first, then critical internal systems, then production systems, and then test/dev systems;
  2. If a patch or configuration fix is not yet available, <insert heartburn here>. Taking a proactive approach, you can implement a reverse-proxy (load balancer) that is NOT vulnerable, or can be configured as such, to terminate the encrypted connections, thereby eliminating risk to your web and application servers. If you prefer a passive approach, you can implement signatures on IDS/IPS solutions, but I do NOT recommend relying on these. Snort signatures are available here; a Bro Heartbleed module is here;
  3. Regenerate all SSL certificates with new private keys;
  4. Replace all SSL certificates with the newly generated certificates (a quick way to spot-check reissue dates is sketched after this list);
  5. Revoke all old SSL certificates;
  6. Force password expiration for all accounts on affected systems; and
  7. Communicate to account users the necessity for the password resets. CloudPassage did a good job of this; here is a link to their blog post.
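One easy thing to verify after steps 3 through 5 is that your public-facing certificates were actually reissued. Here is a minimal sketch of my own, using Python’s standard ssl module, that prints a server certificate’s notBefore date; any date earlier than your regeneration window deserves a second look. The host list is a placeholder.

```python
# cert_reissue_check.py - print each server certificate's notBefore date
# so you can confirm certificates were regenerated after patching.
import socket
import ssl

def cert_not_before(host, port=443):
    """Fetch the validated peer certificate and return its notBefore field."""
    context = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=10) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()          # certificate as a dict
    return cert["notBefore"]                  # e.g. 'Apr  9 00:00:00 2014 GMT'

if __name__ == "__main__":
    for host in ("www.example.com",):         # replace with your own hosts
        print(host, cert_not_before(host))
```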

Security Technology Vendors

This is where the heartburn can reach extreme levels. Some vendors did a great job implementing fixes as soon as updates to OpenSSL were available; however, others have been less than forthcoming with their remediation approach and timelines. These are the solutions you rely on to protect your organization, and a vendor’s failure to identify and remediate vulnerabilities in core components of its products in a timely manner leaves you exposed. If this is the case, I recommend emailing your vendor representatives and letting them know you need this information ASAP. If you have formed an internal task force to deal with Heartbleed, it may be worth mentioning this to the vendor representative and suggesting that being “last” to remediate amongst your vendors is probably a bad idea.

If you utilize a Value Added Reseller (VAR), for example GuidePoint Security, I recommend reaching out to your Account Executive (AE) and asking for assistance. Provide your AE with a list of all the technology vendors you need assistance with, as you probably own more technologies than you have purchased through that VAR. This is an area where VARs can show some of their “Value.”

Lessons Learned

It is my sincerest hope that organizations embrace this opportunity to take a fresh look at how they are dealing with a number of areas within their Information Security Program. In particular, I believe Heartbleed has forced organizations to look at:

  • Threat Management
  • Vulnerability Management
  • Patch Management
  • Public Key Infrastructure (PKI)
  • Defense-in-Depth
  • SSL Decryption / Visibility Practices

GuidePoint Security can assist your organization in building and maturing these components of your Information Security program, as well as help procure, architect, implement, and optimize security technologies to support them (Hey, I have to get a shameless plug in somewhere, right?).

*** Updated on 4/10/14 @ 13:49 to include vulnerable/NOT vulnerable versions of OpenSSL, replace the Python script with one that does not produce false positives, and clarify my statement on identifying vulnerabilities in core components of security technologies. ***

Visit GuidePoint Security at InfoSec World, Orlando

Join GuidePoint Security as we highlight and showcase two of our technology partners, Bromium and Skybox.

When:  Monday and Tuesday, April 7-8, 2014
Where:  InfoSec World Conference & Expo, Booth #219, at Disney’s Contemporary Resort, Orlando, FL

GuidePoint Security partners with vendors that offer unique technologies that address the security needs of our clients.  With the complexity of security threats ever increasing, GuidePoint Security offers the right solutions and technologies for our clients’ specific needs. 

These two technology partners offer the following solutions to address today’s advanced security threats.

Bromium provides protection at the endpoint with vSentry, an innovative product that protects against advanced malware. vSentry automatically creates hardware-isolated micro-VMs that secure every user task – such as visiting a web page, downloading a document, or opening an email attachment.

Skybox delivers cutting-edge risk analytics for enterprise security management.  Their solutions provide complete network visibility, help eliminate attack vectors, and optimize security management processes, protecting both the network and the business.

GuidePoint Security uses its expertise to lead security innovation, helping clients recognize threats, understand solutions, and mitigate risks throughout their IT environments by determining which solutions best fit each client’s needs.  GuidePoint Security offers the people, processes, technologies, and oversight that deliver results to your organization.

Be sure to visit GuidePoint Security at the InfoSec World conference in Orlando, booth #219.

For additional information about the InfoSec World Conference and Expo, visit http://gpsec.me/1hmTEAm.

About GuidePoint Security, LLC
GuidePoint Security provides customized, innovative and valuable information security solutions and proven cyber security expertise that enable commercial and federal organizations to successfully achieve their security and business goals. By embracing new technologies, GuidePoint Security helps our clients recognize the threats, understand the solutions, and mitigate the risks present in their evolving IT environments. Headquartered in Reston, Virginia, with offices in Michigan, New Hampshire, Florida and North Carolina, GuidePoint Security is a small business; its classification can be found in the System for Award Management (SAM). Learn more at www.guidepointsecurity.com.
GuidePoint Security provides customized, innovative and valuable information security solutions and proven cyber security expertise that enable commercial and federal organizations successfully achieve their security and business goals. By embracing new technologies, GuidePoint Security helps our clients recognize the threats, understand the solutions, and mitigate the risks present in their evolving IT environments. Headquartered in Reston, Virginia, and with offices in Michigan, New Hampshire, Florida and North Carolina, GuidePoint Security is a small business and classification can be found with the System for Award Management (SAM). Learn more at www.guidepointsecurity.com.