Questioning the Wisdom of Equipping Adversaries with Free Offensive Security Tools
*Note: This article was originally published by the author on July 19, 2020.*
“Four elements make up the climate of war: danger, exertion, uncertainty and chance.” — Carl von Clausewitz (1780–1831)
In the days of ancient Greece, did the Spartans equip their enemies with spears, shields, and swords? I’ll let you research the answer to that question yourself. We cannot separate military analogies from information security no matter how hard we try, because at its core, what we do every day as information security professionals is fight an unending war consisting of billions of tiny battles taking place in cyberspace, on the internet, and on computer systems at work and at home; the consequences, for now anyway, are less than lethal. Military theorist Carl von Clausewitz said that four elements comprise the climate of war.
- Danger is supplying your enemies with the very weapons they will use to exploit your systems
- Exertion is all of the efforts you have put forth into protecting your systems only to have them exploited with a free exploit tool, readily available on the Web
- Uncertainty is not knowing when the attackers will strike and if you will be able to patch all of your system hosts before it happens because we all know it will eventually happen
- Chance is the free opportunity that the freely available exploit tool gave to the low-skill script kiddie to exploit your system instead of making the exploit cost-factor unaffordable to all but the most sophisticated & well-resourced attackers
It is time to open the proverbial can of worms and for the information security (infosec) community to have a calm discussion about a controversial topic: is it wise to equip your adversaries with free Offensive Security (OffSec) tools? I think my opening introduction has already told you where I stand on this issue. Nevertheless, rally the legions of penetration testers, Red Teamers, and Blue Team defenders. Everyone please gather around the campfire, cozy up with your blankets, and roast some marshmallows with me as I tiptoe around this incredibly divisive but critically important topic while trying not to end up skewered on a spear like a shish kebab.
In what I call a clinical approach, I’ll begin by asking a few basic scenario-based questions to each person reading this.
Q1: Would you leave a lockpick set outside of the front door to your home?
Q2: How about leaving a full can of gasoline and some matches next to the outside wall of your garage?
Q3: Would you leave a crowbar outside of your home next to your kid’s bedroom window?
These admittedly basic questions are analogous to the discussion the infosec community has repeatedly seen play out on social media, like a badly cast rendition of Shakespeare’s Romeo and Juliet in a high school gymnasium. There are similarities to the gun control debate, and to whether guns kill people or people kill people, but that is another, even more divisive topic that would only invite further criticism of my humble but well-informed professional opinion on this matter. I am not naive enough to believe that this article will solve any of the issues at hand, but my hope is that it can help further the discussion.
Consider the above analogies of leaving “tools” outside of the home that can be used against you and the family who dwells within. It is my belief that most rational-minded people would agree this is probably not a wise thing to do. I know I certainly wouldn’t intentionally leave those types of tools outside of my home, because it invites trouble. It’s the same reason we lock the doors to our cars and homes every night so that our vehicles and personal belongings aren’t stolen. It’s the same reason we don’t write the passwords to our accounts on yellow sticky notes and hide them under the keyboard for would-be intruders to find. In my view, this is just plain old common sense.
When vulnerabilities are discovered and reported, remediation can be a lengthy process, often unnecessarily so, thanks to vendors who take too long to respond or who fix the vulnerability without crediting or paying any bug bounty to the security researcher who discovered it. Over the years this has left a foul taste in the mouths of ethical hackers (white hats), who sometimes get fed up and just publish the vulnerability on social media. Depending on the type of vulnerability and its severity rating, exploit developers might then code an exploit tool, similar to Mimikatz, that other people can download for free and use to exploit the vulnerability.
What exploit developers are really doing, however, when they create OffSec Tools (OSTs) or frameworks like Mimikatz, Responder, and Metasploit, to name a few, is supplying cyber threat actors with weapons to use against the U.S. and any other individual or organization using those software products. Publishing proof-of-concept (PoC) white papers explaining in detail exactly how they successfully exploited these vulnerabilities, minus some of the more sensitive intellectual property details, only serves to hand would-be attackers (black hat hackers) a how-to guide for exploiting systems, like a spare key to their targets’ networks. Does that seem like an intelligent thing to do to you? Would you give an attacker a tool to attack something you are simultaneously trying to defend? I wouldn’t; in fact, I’d call that foolish.
It’s bananas, actually! I have nothing personal against Rapid7, the company behind Metasploit, or any OST proponents. In fact, I’ve personally used several of these tools to check the security of systems I’ve been responsible for securing. Yes, I am also aware that in order to get access to the real goodies, you need to pay for Metasploit Pro. But check it out: we are getting our asses handed to us in America, and really all over the damn world, by attackers using basic tactics and exploits. Twitter just suffered a social engineering attack last week, for goodness’ sake, that resulted in a Bitcoin scam and widespread concern over pre-election social media meddling. If we can’t handle the basic exploits and tactics, why the hell are we creating semi-sophisticated exploits and tools for threat actors to use against us? Meanwhile, the truly sophisticated threat actors are laughing all the way to the bank. You don’t see them publishing proof-of-concept white papers detailing exploits and OSTs, at least not to the degree U.S.-based security researchers do. We are our own worst enemy, but we’re too dumb to realize it.
I question the logic of trying to stay one step (or two) ahead of the game by developing OSTs that I know for a fact are used against us, because I’ve seen it. I’ve responded to the incidents, and in some cases I was the one perpetrating the authorized penetration test attacks. Guess what, though? Six months after the incident or the pentest, a return visit to the organization will reveal that very little has changed. The organization’s information security posture is still lacking, because this is something all organizations struggle with. The counter-argument, of course, is that if we [the good guys/girls] don’t develop these tools and make them available to the industry for pentesters to use against our systems, then attackers will develop them anyway and we won’t be able to recognize and share their signature attack methods, otherwise known as their Tactics, Techniques, and Procedures (TTPs). Sorry, but that dog don’t hunt, as the saying goes. Bad logic; it does not compute. Personally, I’d rather an attacker have to spend the time, money, and effort to develop these same types of exploit tools on their own. God forbid an attacker should use a TTP that’s different from what the MITRE ATT&CK framework says they typically use, or modify the exploit tool for their own malware purposes and thereby change the attack signature, right? This whole OST industry supplies both attackers and defenders alike.
As an industry we’ve created quite the job stability situation for ourselves, haven’t we? Customers are mandated to comply with cybersecurity regulations by state, federal, and sometimes even international governing bodies like the European Union. Yet here we are as an industry, also developing exploits and tools to automate those exploits (e.g., AutoSploit), so that software vendors and hardware manufacturers, in turn, have to develop and publicly release security patches, and customers, in turn, have to apply those patches before an attacker exploits the vulnerabilities against their systems. It’s a vicious cycle that I like to compare to a self-licking ice cream cone.
If Blue Team defenders want to be able to recognize anomalous activity on their systems, they had better also download and install all of these OffSec tools to familiarize themselves with them. If I were an Advanced Persistent Threat (APT) group, I would be thinking that these tool download links might make the perfect watering hole attack. How many security professionals actually take the time to check the download file checksums anyway, should the vendor/creator bother to provide one? If Avast’s CCleaner can suffer a supply chain attack and be infected with ShadowPad malware on their own website, then so can your GitHub code repository.
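For what it’s worth, verifying a published checksum takes only a few lines. Here is a minimal Python sketch (the helper names are my own, and in practice the expected digest would come from the vendor’s download page, ideally fetched over a separate channel than the file itself):

```python
import hashlib
import hmac

def sha256_of(path, chunk_size=65536):
    """Compute the SHA-256 hex digest of a downloaded file, reading in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def matches_published_checksum(path, expected_hex):
    """Compare the file's digest against the vendor-published checksum.

    hmac.compare_digest avoids timing side channels, though for a local
    file check a plain == comparison would also do.
    """
    return hmac.compare_digest(sha256_of(path), expected_hex.lower())
```

It’s a thirty-second habit, and it is the only thing standing between you and a trojaned download when the hosting site, not the tool author, is what got compromised.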
The Pro-OST Crowd Arguments
1. Companies won’t fix their code flaws in a timely manner if we don’t create tools to exploit these vulnerabilities.
I fail to understand the logic of some OST proponents (who are usually pentesters, by the way) who say that creating free OffSec tools is the only way to get companies to fix reported vulnerabilities in their software. In my humble opinion, this argument is complete garbage. If you have a dispute with a company, the solution is not, “Watch this! I’ll show them with this disclosure. They’ll regret not getting back to me sooner and not paying me. Here’s a new exploit, everyone! Have fun!” No, that is not responsible disclosure, or even remotely a responsible action to take. Does it work by forcing their hand? Sure, and of course there are certainly some companies that are just begging for it, but realize that each time this happens it damages our industry’s reputation that much further. I laugh, though, when hackers do this and find themselves seeking the services of a lawyer afterward.
We might even call it “frustrated disclosure,” and it goes against the code of ethics that many security professionals who hold security certifications agreed to abide by, at the risk of losing their certification status. I’m sure many wouldn’t care if that happened, but some of us need certifications to hold certain jobs. Should companies do the right thing and respond sooner? Sure. That is a valid complaint. Should they be extorted by bug bounty hunters, though, for taking too long to respond, come up with the money to pay them, or figure out whether a particular vulnerability qualifies? That, to me, is harder to answer, but again, I don’t believe extortion is the way to go here, considering the nature of the relationship involved.
The question you have to ask yourself is whether you are a security professional or not. If not, then extorting a company for bug bounty rewards by threatening immediate “frustrated” disclosure makes you a criminal, and you risk being charged with violating the Computer Fraud and Abuse Act (CFAA). You don’t want to go there; it’s no fun. If you drop the exploit on Twitter, the software vendor will rush out an emergency patch, and everyone will scramble to apply it before they too become victims. You’ll gain some new followers and maybe score a speaking gig at the next security conference, but you’ll still be a dick. I mean honestly, how many times do we have to go through this nut roll? It’s tired; it’s been done to death. Let’s move past it and figure out a better way forward as a community.
2. OSTs allow Blue Team defenders to learn attacker TTPs that they can use to better detect, respond to, and prevent attacks.
The problem with this argument is that many organizations simply are not there yet. They aren’t even staffed or operating at a level of information security readiness where they monitor event logs on a daily or even weekly basis. Is that wrong and very risky? Of course, but some things are outside of our control. We, as an industry, can only shape this battlespace so much before we just look like angry, geeky a-holes shouting from the rooftops about security vulnerabilities. Patch now, change your passwords, use multi-factor authentication! It’s best practice; you’d better do it.
How on Earth are cyber defenders supposed to recognize new threat actor TTPs if they aren’t even doing the basics now? The simple answer is that they won’t, folks. We’re facing overwhelming threats, and these automated OSTs don’t always have affordable automated defensive counterparts that every company can purchase, configure, and maintain. Should organizations do the basics and do them well? Of course they should, but we have to work with what we have, and this isn’t a perfect-world situation. Defender experience and skill levels vary. Thus far, anecdotally speaking, the evidence has shown that it’s too much to expect that every organization, or even the majority of organizations, will be able to Identify, Protect, Detect, Respond, and Recover (the NIST Cybersecurity Framework functions) in the face of attacks.
So why give attackers free OSTs that many defenders aren’t well-versed in or even aware of? We should still develop the exploits and OSTs, but lock them down. Put a price tag on them. “Intrusion software,” which is what OSTs amount to, falls under the International Traffic in Arms Regulations (ITAR) and the Export Administration Regulations (EAR), the latter managed by the Department of Commerce; such an “item” must be added to the Commerce Control List (CCL) (e-CFR, 2020), and a license to export is required because of its “potentially exploitable [nature as an] instrument of military technology” (Herr & Rosenzweig, 2016, pp. 310–311).
Right now, most, if not all, of these OSTs are freely available online for any script kiddie to download. I know for a fact that none of these developers have bothered getting a license to export their software tools as required under ITAR, because no one is enforcing it, and it’s equally disappointing that the Department of Commerce, which oversees the National Institute of Standards and Technology (NIST), doesn’t enforce the EAR requirement for OSTs.
3. Where does the OST control debate end? What types of tools are considered off-limits if we go down the route of restricting free OSTs? Are vulnerability scanning tools like Nessus or Shodan.io considered OffSec tools?
I would argue that restricting vulnerability scanning tools is off-limits. Our job as security professionals is to make the internet safer, and we should not make that task more challenging by restricting scanning tools. Yes, they’re used by our adversaries and cyber criminals, but everyone connected to the internet needs to be using those tools in some way, shape, or form, periodically. Otherwise, we get WannaCry-style wormable SMB attacks propagating like wildfire across the internet. Now, surely someone will argue that restricting OSTs likewise makes securing the internet more difficult. That is where I draw the line with OSTs.
I feel otherwise, and that is why this needs to be a much larger discussion. But I’ll tell you this: once the politicians get involved, it will be a train wreck, just like their attempt to covertly backdoor encryption with the EARN IT Act. My rationale is that scanning tools like Nmap, Nessus, or Shodan serve a valuable purpose in identifying exploitable vulnerabilities. The attackers will use vulnerability scanning tools, and so should you, to identify unpatched vulnerabilities so you can patch them before they are exploited.
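To illustrate why I consider scanners a different class of tool than exploits, here is a minimal Python sketch of the most basic technique scanners like Nmap build on, a TCP connect scan (the function name is my own; real scanners add SYN scanning, timing controls, and service fingerprinting, and you should only point even this at hosts you are authorized to scan):

```python
import socket

def scan_ports(host, ports, timeout=0.5):
    """Return the subset of `ports` that accept a TCP connection on `host`.

    connect_ex returns 0 when the three-way handshake completes, i.e. the
    port is open; a nonzero errno means closed, filtered, or timed out.
    """
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:
                open_ports.append(port)
    return open_ports

# Example: check a few common service ports on your own machine.
# scan_ports("127.0.0.1", [22, 80, 443, 445, 3389])
```

Nothing here exploits anything; it simply reports what is already exposed, which is exactly the information a defender needs before an attacker finds it first.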
What’s really driving the desire to create OffSec tools? Could ego and fame be factors? Could it be that a certain security researcher wants to make a name for themselves as the OffSec pro hacker who created the “X” tool that exploits the “Y” vulnerability against all “Z” types of devices? What are the psychological driving forces behind it? I don’t think it is wise to ignore the psychological aspect of this debate; you just can’t. Some so-called security professionals are in this for the wrong reasons. If your only reason for being involved in infosec is making money, then I can’t have a rational discussion with you about why OSTs contribute to Clausewitz’s four elements that make up the climate of [cyber] war[fare]: danger, exertion, uncertainty, and chance.
Remember those? No? Because all the money-grubbers care about is making more money in any way possible, ethical or not, whether it actually hurts the industry as a whole or, in their minds, speeds up patch development, release, and deployment by users. Money is always a factor in any topic, but this debate, in my opinion, is less about money and more about securing the internet. Handing out free exploits for vulnerabilities is contrary to common sense, and it only exacerbates the myriad issues already causing information security to lag across the industry: qualified personnel shortages, companies not paying infosec professionals what they are worth, entry-level job requirements that are completely unrealistic, and the list goes on.
“He will win who, prepared himself, waits to take the enemy unprepared.”
― Sun Tzu, The Art of War
This quote from Sun Tzu, played out as it is, strikes a nerve in this debate when we think about publicly reported software vulnerabilities, unknown 0-day vulnerabilities, software vendor patching schedules, and user/customer patching response times. If we supply the [enemy] with the tools to compromise our networked systems, who is the fool in this equation? Who is prepared, and who is unprepared? I’ll give you a hint: we’re not as prepared as we need to be. None of this is to say that I am anti-Offensive Security Tools; I just think it is foolish to create these tools and give them out freely to adversaries who will use them against defenders.
The argument is not about exclusivity by setting a price tag; it’s about finding a better instrument for controlling access to dangerous tools, something every developed nation does in some form or another. Sometimes it doesn’t work out well, as in the case of the NSA Shadow Brokers or the CIA Vault 7 leaks, but every country that dabbles in Computer Network Exploitation (CNE) has these types of OSTs, and other nations are not putting them up for anyone to download and use against them.
Office of the Federal Register (OFR) and the Government Publishing Office. (2020, July 16). Electronic Code of Federal Regulations. Retrieved from https://www.ecfr.gov/cgi-bin/retrieveECFR?gp=&SID=70e390c181ea17f847fa696c47e3140a&mc=true&r=PART&n=pt22.1.125
Herr, T., & Rosenzweig, P. (2016, April). Cyber weapons and export control: Incorporating dual use with the PrEP model. Journal of National Security Law & Policy, 8:301. Retrieved from https://jnslp.com/wp-content/uploads/2016/04/Cyber-Weapons-and-Export-Control_2.pdf