Tribe of Hackers - Red Teams & More

By Jeff Man on August 29, 2019

Jeff Man

Respected Information Security advocate, advisor, evangelist, international speaker, keynoter, former host of Security & Compliance Weekly, co-host on Paul's Security Weekly, Tribe of Hackers, TOH Red Team, TOH Security Leaders, TOH Blue Team, and currently serving in a Consulting/Advisory role for Online Business Systems. Nearly 40 years of experience working in all aspects of computer, network, and information security, including cryptography, risk management, vulnerability analysis, compliance assessment, forensic analysis, and penetration testing. Certified NSA Cryptanalyst. Previously held security research, management, and product development roles with the National Security Agency, the DoD, and private-sector enterprises, and was part of the first penetration testing "red team" at NSA. For the past twenty-five years, he has been a pen tester, security architect, consultant, QSA, and PCI SME, providing consulting and advisory services to many of the nation's best-known companies. Credentials: NSA Cryptanalyst, Hacker, QSA

Online Business Systems is proud to feature a chapter from Tribe of Hackers, written by Marcus J. Carey and Jennifer Jin. Our very own security expert Jeff Man shared many insights and stories in Chapter 25 of this highly acclaimed book.

Jeffrey Man is a respected information security expert, advisor, evangelist, and co-host of the security podcast Security Weekly. He has more than 35 years of experience in all aspects of computer, network, and information security. Jeffrey has held various information security roles within the DoD as well as private sector enterprises, is a former PCI QSA, and was part of the first penetration testing red team at the NSA.


How did you get your start on a red team?

I first became involved with ethical hacking in the early '90s while working for an organization called the Fielded Systems Evaluation Branch within the Information Systems Security Directorate (INFOSEC) at the National Security Agency (NSA). The mission of this branch was to perform security assessments against all cryptographic and communications systems developed by the NSA and currently in use by such entities as the U.S. Armed Forces, the Department of Defense, and the Department of State. One area of focus in the branch was networked systems, and my interests gravitated toward that group because they were talking about the kinds of things I had seen in movies such as WarGames and Sneakers.

We basically started learning how to hack computers and networks by targeting each other's workstations and by trying to gain "root" access to our own local area network. Shortly after I began working in the Fielded Systems Evaluation Branch, the world as we knew it changed forever with the initial release of Mosaic, published by the National Center for Supercomputing Applications (NCSA) on January 23, 1993. Although we were already working with workstations, networks, and even the internet, the release of Mosaic greatly popularized the internet (or the World Wide Web, as it was called back then) for the general public, and this got the attention of executive leadership at the NSA.

Months of extensive analysis and planning resulted in the formation of the first Center of Excellence for Network Security at the NSA, which was named the Systems & Network Attack Center (SNAC). The formation of the SNAC involved the relocation of numerous branches and divisions from all over the INFOSEC Directorate to a single location, which ended up being a building that was part of NSA’s Friendship Annex (FANX) complex near what is now Baltimore/Washington International Thurgood Marshall Airport.

Our branch was included in the move, and the small group of us focused on learning hacking skills ended up getting our own office in which to hone our skills and practice our craft. We wanted to learn everything we could about hacking and, as part of that, to embrace as much of hacker culture as we could and live the lifestyle.

We built our own network within our office, bought a refrigerator and stocked it with Mountain Dew, and dubbed our hacker space The Pit. We had few tools back then to help us and very little in the way of automation. Hacker conferences weren't really a thing yet, as conferences like DEF CON were just getting started (and weren't exactly friendly to "feds" anyway). There were few or no training courses available, and automated tools were almost nonexistent. What tools did exist were mostly "designed" to help network and Unix administrators secure their systems.

What we did have was eagerness to learn and access to as much information as was publicly available at the time (you just had to know where to look for it), and a few “sacred texts” such as Practical UNIX and Internet Security by Simson Garfinkel and Gene Spafford, Firewalls and Internet Security: Repelling the Wily Hacker by William R. Cheswick and Steven M. Bellovin, and of course The Cuckoo’s Egg by Clifford Stoll. We learned hacking skills and how to exploit “features” of many operating systems.

We developed what we called a vulnerability and threat assessment methodology (VTAM); the DoD loves acronyms, after all. We performed VTAMs for many internal customers and for other DoD customers using classified networks. We were also in demand for other non-DoD and unclassified networks, so we learned how to navigate the bureaucracy and the NSA Charter and ultimately developed a strategy for engaging with our "customers" in a reasonable time period. We didn't call ourselves a red team at the time; the term did not exist. We were hackers.

I will put forth my definitions of the various activities in question, not with the intent of changing anyone’s mind about the meanings, but simply for understanding and perspective:

Vulnerability scanning: An automated process in which a tool reports on vulnerabilities and/or misconfigurations of targeted systems, generally without any context of overall business operations. (A minimal sketch of this kind of context-free check follows these definitions.)

Vulnerability assessment: A process of researching, interviewing, and observing how an entity conducts its business (computing and networking operations, application development, etc.) in order to provide recommendations on how to improve the overall security of the entity.

Penetration testing: A manual exercise, performed by skilled hackers, to test the defenses and/or detection capability of a target entity. Targets may vary, as may the scope of the engagement, but the overarching objective is to see how well an entity holds up to attempts at breach or compromise. In simple terms, a penetration test provides threat emulation and focuses its findings on the weaknesses in processes and operations that led to any negative result.

Red team: Still kind of a new concept for me, but I’ve heard most people describe a red team exercise in a similar manner to how I describe a penetration test. They go on to describe a penetration test as largely an automated process (using vulnerability scanners) to report on as many vulnerabilities as can be discovered given the targets, scope, and time frame of the engagement.
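
Since the scanning definition above leans on "automated" and "without context," here is a minimal sketch, in Python, of the building block such tools start from: a TCP connect scan. Everything in it is a placeholder (the TEST-NET address, the short port list); real scanners layer service fingerprinting and vulnerability checks on top, and anything like this should only ever be pointed at systems you are authorized to test.

    # Toy TCP port scanner: the context-free building block of vulnerability
    # scanning. Hypothetical target address; scan only with authorization.
    import socket

    COMMON_PORTS = {21: "ftp", 22: "ssh", 23: "telnet", 25: "smtp",
                    80: "http", 110: "pop3", 143: "imap", 443: "https"}

    def scan(host, ports=COMMON_PORTS, timeout=0.5):
        """Return the well-known TCP ports on `host` that accept a connection."""
        open_ports = []
        for port, name in sorted(ports.items()):
            with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
                s.settimeout(timeout)
                if s.connect_ex((host, port)) == 0:  # 0 means the connect succeeded
                    open_ports.append((port, name))
        return open_ports

    if __name__ == "__main__":
        for port, name in scan("192.0.2.10"):  # TEST-NET placeholder address
            print(f"{port}/tcp open ({name})")

Note what the output cannot say: whether port 443 fronts the crown jewels or a forgotten test box. That missing business context is exactly what a vulnerability assessment adds.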

What is the best way to get a red team job?

This often seems to be the most coveted job in cybersecurity. I personally think that the majority of penetration test/red team exercises are pointless, particularly when they rely on scan results and only report on vulnerabilities. Then again, I might be blurring the definitions, but that's okay since we can't universally agree on what red team means. I'm not saying there isn't a need for skilled red teams; I'm saying that even today most companies that engage a red team aren't really ready for the exercise, so they are more often than not wasting their money. A better engagement for the majority of companies out there would be a vulnerability assessment.

Some might call this a tabletop exercise, but ask any decent red teamer and they will tell you they have a pretty good idea of how they are going to break into a company just from the initial interview/kickoff session with the target. So, the best way to get a red team job is to have demonstrated skills. But I would ask this question: Why is there so much enthusiasm/focus/training for breaking things, and is there the same emphasis on making things that are resistant to attack or compromise right from the start?

How can someone gain red team skills without getting in trouble with the law?

Don't do anything illegal. There are plenty of ways to practice in safe places, such as CTFs or hacker spaces, or you can build your own network/system (which is a great learning exercise anyway) and then break it.
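
In that build-and-break spirit, here is a minimal sketch of a homemade practice target: a deliberately flawed file server to run on localhost. The file names and layout are hypothetical, and the planted bug is a classic path traversal; the handler joins the request path onto its web root without resolving the ".." components.

    # A deliberately vulnerable toy server for local practice only.
    import http.server
    import pathlib

    BASE = pathlib.Path("www")                     # intended web root
    BASE.mkdir(exist_ok=True)
    (BASE / "index.html").write_text("hello\n")
    pathlib.Path("secret.txt").write_text("outside the web root\n")

    class TraversableHandler(http.server.BaseHTTPRequestHandler):
        def do_GET(self):
            # BUG (on purpose): no path normalization, so a raw request
            # for /../secret.txt escapes BASE and reads outside the root.
            target = BASE / self.path.lstrip("/")
            try:
                body = target.read_bytes()
                self.send_response(200)
            except OSError:
                body = b"not found\n"
                self.send_response(404)
            self.end_headers()
            self.wfile.write(body)

    if __name__ == "__main__":
        http.server.HTTPServer(("127.0.0.1", 8000), TraversableHandler).serve_forever()

Most browsers and HTTP clients quietly normalize the ".." away, so breaking it means sending the raw request yourself (for example, opening a socket and sending GET /../secret.txt HTTP/1.0), which is itself a useful lesson in how much your tools do for you. Fixing the flaw afterward, by resolving the path and verifying it stays under the web root, completes the exercise.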

Why can’t we agree on what a red team is?

Probably for much the same reasons that nobody can agree on other terms in our industry like security, risk, vulnerability, or threat. The primary reason for the lack of agreement is that there is no authoritative source for such definitions; or rather, the private sector is not aware of, or does not acknowledge, what used to be considered the authoritative source: the National Information Systems Security (INFOSEC) Glossary published by the NSA in 1992.

Beyond that, I'd say the biggest problem with our terminology largely stems from a lack of agreement or standardization on the activities, goals, and objectives of whatever you want to call it, be it a vulnerability assessment, a penetration test, or a red team exercise. I think it is extremely important to set expectations with the customer/target and determine the goals and objectives first, followed by rules of engagement, timing, reporting, and so forth. I began working in the private sector in 1997, delivering vulnerability assessments and penetration tests. Well, we usually performed vulnerability assessments, even though the customer always asked for a penetration testing exercise.

Whenever I would start talking to the customer about what they wanted in terms of services, I would ask several key questions:

• Do you want us to find all the holes that would enable an attacker to break into your environment (vulnerability assessment), or do you just want us to find one way in and see how far we can get (penetration test)?

• What do you want to get out of the engagement (goals)?

• What do you plan on doing with the results of our testing (objectives)?

Most of the time, the customers had read or heard about the need for a penetration test, but when pressed, they really wanted a vulnerability assessment. I usually had to coach them through the rest of the process to develop reasonable goals and objectives for the engagement and ultimately produce a successful result. We measured success not by gaining "root" access to the network but by either a) being caught/detected or b) the customer learning from the experience and making real changes to their systems and their processes. We failed more often than not.

What is one thing the rest of information security doesn't understand about being on a red team? What is the most toxic falsehood you have heard related to red, blue, or purple teams?

That red teaming should not be the ultimate goal of anyone interested in this field. There are numerous ways to make a contribution, so why do we regard red teaming as the apex? I think building something that resists compromise is way cooler than breaking stuff (and harder, too). As for the toxic falsehood, I'd pick the belief that an organization's cybersecurity program or posture begins, ends, and revolves around the hackers. Also, the vast majority of companies simply don't have the infrastructure or resources to have red, blue, purple, or even IT teams. This means that the vast amount of focus on this type of craft lands on a relatively small number of organizations, leaving all the rest to fend for themselves. Enter the product vendors.

When should you introduce a formal red team into an organization’s security program?

When the organization has effectively built a security program, starting with a basic but comprehensive understanding of what it has or does that is worth protecting. This could be proprietary research data, customer PII, continued operations, brand recognition, or any number of other things. Once an organization knows what it needs to protect, it needs to describe how it will accomplish this task through a series of documented policies, standards, and procedures. Then, it needs to train its employees in what they individually and collectively need to do to follow the rules put forth in said security program.

They need to have put as many detection, logging, and monitoring processes in place as they can afford, and to have tweaked and tuned them and trained on them. They need to have practiced through various fire drills and tabletop exercises; reviewed the results; put lessons learned in place; and developed a certain level of appropriate paranoia about what's at stake. Then they can go ahead and hire a red team to pretend to be a bad guy and see how well the organization holds up to attack (and whether it even sees the attempts).

How do you explain the value of red teaming to a reluctant or nontechnical client or organization?

I'm usually explaining why it's mostly unnecessary, given the lack of maturity of the organization (customer) I am advising.

What is the least bang-for-your-buck security control that you see implemented?

Any pentest/red team exercise when the organization is not ready for it, which is most of them.

Have you ever recommended not doing a red team engagement?

All the time, especially back when I was selling/managing/performing penetration tests/vulnerability assessments. I learned to do this early in my private-sector career because one of my first engagements was to perform a pentest (red team exercise) for an organization that had recently connected to the internet. They had gone out and purchased the leading firewall solution of the day and had put several folks through the firewall training. They were set, and they knew it. I think their real motivation was to justify the investment to their skeptical management, who might have thought they were spending a little too much time and effort on something as trivial as direct internet access.

I began the reconnaissance phase of the engagement, which included attempting a port scan of the targeted IP address space. I was perplexed because I immediately started seeing results as if there wasn’t a firewall in place at all. Our methodology at the time involved starting remotely and trying to gain access to the customer’s network, but then we would also go on site and perform additional testing from “behind” the firewall to see where we could get throughout the network. We pretty much had a field day from the outside, so we quickly moved on site to perform additional testing. We had been testing for only a short time before we were approached by the admins accusing us of “bringing down their network.” We hadn’t really been doing much of anything, so I called a timeout, and we asked to see the firewall rules.

You may have already guessed that the last rule was “IP ANY ANY ALLOW.” That was the point when I called off the penetration test and convinced the customer that they needed us to perform an architectural review and basically really help them set up a secure network and get the most out of their firewall. The life lesson I learned came when we presented our report on the engagement to the CFO of the company. He commented that he was extremely impressed that we had done right by them and helped them get better instead of needlessly sticking to the letter of the contract and doing only what we had been hired to do (and then billing them for additional time for the work we ended up doing). My takeaway was that we should always do right by the client—they ended up being our customer for years after that engagement, and we ended up being not only their pentesters but their trusted advisors as well.
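
For readers who have never stared at a ruleset, here is a minimal sketch of why that last rule is fatal. Firewalls of that era evaluated rules first-match-wins, so a trailing allow-anything rule lets through every packet that misses the specific rules above it. The rules and addresses below are hypothetical stand-ins, not the customer's actual configuration.

    # First-match firewall rule evaluation, sketched in Python.
    from dataclasses import dataclass
    import ipaddress

    @dataclass
    class Rule:
        src: str     # source: a CIDR block or "ANY"
        dst: str     # destination: a CIDR block or "ANY"
        action: str  # "ALLOW" or "DENY"

    def matches(pattern, addr):
        return pattern == "ANY" or ipaddress.ip_address(addr) in ipaddress.ip_network(pattern)

    def evaluate(rules, src, dst):
        """First matching rule wins; anything unmatched hits the implicit deny."""
        for rule in rules:
            if matches(rule.src, src) and matches(rule.dst, dst):
                return rule.action
        return "DENY"

    ruleset = [
        Rule("ANY", "203.0.113.25/32", "ALLOW"),  # mail server (placeholder IP)
        Rule("ANY", "203.0.113.80/32", "ALLOW"),  # web server (placeholder IP)
        Rule("ANY", "ANY", "ALLOW"),              # the fatal last rule
    ]

    print(evaluate(ruleset, "198.51.100.7", "10.0.0.42"))       # ALLOW: probe sails through
    print(evaluate(ruleset[:-1], "198.51.100.7", "10.0.0.42"))  # DENY: implicit deny works

With that last rule in place, the firewall behaves exactly as our port scan suggested: as if it weren't there. Remove it, and the implicit default deny does its job.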

What’s the most important or easiest-to-implement control that can prevent you from compromising a system or network?

I don't think the answer is technical. I think the answer is to teach them what they don't know. Then they can make better-informed decisions on how to invest their scarce resources. That, and outsource, with the caution that passing on responsibility for security to a third party does not automatically mean that you have passed on your potential liability. Big difference.

Why do you feel it is critical to stay within the rules of engagement?

Integrity and trust. We’re in the security business, and when you’re trusting a company/individuals to potentially gain access to everything of value in your organization, you need to know you can trust them. So, be trustworthy.

I learned another early lesson from a client that hired us to do a penetration test of a single IP address (their perimeter firewall). We were explicitly asked not to scan or attempt to gain access to any other IPs surrounding that single IP address. We did notice from a simple port scan that there were other systems responding in their registered address space, and just by connecting with a browser, we deduced that they were likely out-of-the-box web servers. When we reported to the big boss at this company (I think this time it was the CIO), I presented our findings (which weren't much against the actual target) but also provided a footnote: during our reconnaissance, we noticed what appeared to be default web servers that would more than likely have given us complete access to their internal network, bypassing their very tightly configured firewall.

His response was priceless and provided me another early life lesson. He turned to his staff and asked, “What’s wrong with our processes that we are letting people get away with doing something like this?” Process. The real vulnerability wasn’t the default web server—it was that someone was able to put it out there in the first place and that nobody saw it or said anything about it.
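
The "connect with a browser and deduce" step is simple enough to sketch: fetch each responding host's front page and look at the Server header and page title for default-install tells. The addresses and signature strings below are hypothetical placeholders, and matching strings like this is only a rough heuristic, but it is essentially what we were doing by eye.

    # Rough default-install spotter. Placeholder hosts; authorized use only.
    import http.client
    import re

    DEFAULT_SIGNS = ("It works!", "Welcome to nginx", "Test Page", "Under Construction")

    def fingerprint(host, port=80, timeout=3.0):
        """Return (Server header, page title) for the host's front page."""
        conn = http.client.HTTPConnection(host, port, timeout=timeout)
        try:
            conn.request("GET", "/")
            resp = conn.getresponse()
            server = resp.getheader("Server", "unknown")
            body = resp.read(2048).decode("latin-1")
            title = re.search(r"<title>(.*?)</title>", body, re.I | re.S)
            return server, (title.group(1).strip() if title else "")
        finally:
            conn.close()

    for host in ("192.0.2.10", "192.0.2.11"):  # TEST-NET placeholder addresses
        try:
            server, title = fingerprint(host)
            hit = any(sign.lower() in (server + " " + title).lower() for sign in DEFAULT_SIGNS)
            print(f"{host}: Server={server!r} title={title!r}" + ("  <-- default install?" if hit else ""))
        except OSError as exc:
            print(f"{host}: no response ({exc})")

The CIO's lesson still applies: the interesting finding isn't the banner itself but the process gap that let an unhardened box appear on a registered address without anyone noticing.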

If you were ever busted on a penetration test or other engagement, how did you handle it?

In my early days in the private sector, I would sometimes port scan a “prospect” as part of the reconnaissance to help determine what kind of help they needed. I remember one time I scanned a bank that was somewhere in Europe. They had some detection mechanism in place, traced the scan back to our registered domain, and ended up sending a rather nasty “cease and desist” email to my company’s CEO. When I found out about it, I contacted the originators and commended them for their detection capabilities. They were not amused.

“We’re in the security business, and when you’re trusting a company/individuals to potentially gain access to everything of value in your organization, you need to know you can trust them.”

What is the biggest ethical quandary you experienced while on an assigned objective?

The biggest quandary I can think of was in my days as a qualified security assessor (QSA) for the Payment Card Industry (PCI). One of the PCI requirements is to perform both an internal and an external network penetration test annually. As a QSA, I had to first find out whether the required penetration testing had been performed, and then I had to obtain a copy of the report and review the findings. Too often, I had to grimace and accept a "pentest" report that was merely a Nessus report with a cover page. The big ethical quandary came when the pentest/red team was hired by my own company.

How does the red team work together to get the job done?

It's been quite a while since I was part of an active engagement, but back in the day the "team" aspect was generally a) divide and conquer, especially when testing from both outside and inside; b) collaboration when necessary to try to solve a problem; and c) letting each member of the team do what they did best, which meant I did most of the talking and writing and left the technical aspects to others on the team. We would all participate in reporting and presentation of findings, as we were all eager to teach how to prevent the things we had been able to accomplish.

If you were to switch to blue team, what would be your first step to better defend against attacks?

I wouldn't, as I am not active anymore. But I think the first step to better defend against attacks is to figure out two things: what you are protecting and what/who you are protecting against.

What is some practical advice on writing a good report?

Most, if not all, of the reports that I have had the pleasure to review over the past 15 years are heavy on automated findings output and extremely light on what I would call custom or tailored content. My idea of a good red team report is that you describe at some level the methods/techniques you tried and describe not only success but failures as well. Remember, my idea of a perfect red team exercise is total and utter failure coupled with early detection. When there is a finding, the focus should not be on “we found a vulnerability and exploited it,” but rather it should be “the presence of such and such vulnerability indicates that a particular process or set of processes is failing.” This should be followed up by detailed recommendations for how to address the process issues and not simply “we need to install a missing patch” or “we need to change that shared admin password to something other than Secret123.”

How do you ensure your program results are valuable to people who need a full narrative and context?

First, insist on a presentation of findings so you can talk things through. This is always better than just delivering a report that is poorly written and light on the right kind of details. I always found it best to start with something positive (no matter how hard that might be) and work your way into the details, what I call the good, bad, and ugly format for presenting bad news.

How do you recommend security improvements other than pointing out where it's insufficient?

I try to start with the basics and work my way into the more detailed/nuanced recommendations from there. The real audience is not always the group that hired you but some level of executive management, and they (no offense) usually have no clue what any of us are talking about. This is where it becomes persuasive speech, in that you are trying to make someone understand why they need to change behavior or attitudes and, most often, spend money. Context is everything, and the challenge is to explain the complexities of what needs to be done in a way that the recipient can understand it. I call it the “Art of the Jedi Mind Trick,” and it involves things like listening, knowing your audience, learning to speak their language, telling stories/making analogies that fit within their frame of reference but get the point across, simplifying things, framing the discussion in the context of the business and its business processes, and not assuming they get it on the first try. Engage a feedback loop, ask them what they heard, and above all try not to overwhelm them with how bad things are. The old African proverb says, “How do you eat an elephant? One bite at a time.”

What nontechnical skills or attitudes do you look for when recruiting and interviewing red team members?

I certainly would mention having good communication skills here, but I usually hear that from folks who proceed to demonstrate it poorly. Beyond that, I can only suggest what has worked for me, which is having a certain skepticism about everything, a tenacity to keep digging, not taking no for an answer, and always looking for another way—I think we can call that the hacker mentality, which is not in and of itself a technical skill but certainly goes a long way to helping one be successful in this field.

What differentiates good red teamers from the pack as far as approaching a problem differently?

To me, the best red teamers are the ones who will walk away from, or redirect a customer away from, a red team exercise if the customer is not ready for it, but who will work with them to get them ready. I think it is also important to understand what motivates a customer to request a third-party red team exercise in the first place. Is it a regulatory or compliance requirement? Have they previously been breached or compromised and now have the right attitude? In other words, it is about accurately defining the "problem" and putting it in context with goals, scope, objectives, and expected outcomes. The best red teamers are also teachers, and the teaching starts at the beginning of the engagement and lasts all the way through it, ideally until the customer becomes adept at catching the wily hacker.


“To me the best red teamers are the ones who will walk away from or redirect a customer away from a red team exercise if they are not ready for it but will work with them to get them ready.”

To purchase a copy of the book, click here.

If you have a question for Jeff, please use the contact form below.
