Time to Level Up Our Cyber Resiliency Game
*Note: This article was originally published on September 19, 2018. It is being re-posted here by the author for archival purposes.*

Have you ever wondered why the United States, arguably the most technologically advanced nation to ever exist and the inventor of the Internet, the Deep Web, and the Dark Web, continues to be the victim of massive data breaches like those suffered by Equifax, Yahoo, the Office of Personnel Management (OPM), Ashley Madison, and Target? It’s lonely at the top when everyone is targeting you. It may seem strange to the layperson who is not formally educated or trained in cybersecurity, but there is a perfectly rational explanation for why U.S. cybersecurity appears to suck so badly. Unfortunately, the predicament we find ourselves in cannot be pinned on a single cause; the sheer size of the government and of corporate industry makes protecting them a nearly impossible task. Sadly, it has been a total shit show that keeps getting worse because there are so many hands in the proverbial ‘cookie jar.’ There are ‘too many cooks in the kitchen’: everyone wants to be a decision-maker, a policy influencer, and a technical expert.
So what are we doing wrong? How can we possibly improve or reverse this trend? Nothing stated in this article is anything that hasn’t already been said before by people much smarter than me, but I feel it bears repeating, perhaps in a slightly different way or tone, in the hope that it will re-ignite discussion on some crucial cybersecurity issues. The points made here are merely observations from someone who has worked in and studied the information security field for many years. This is not to say that U.S. cybersecurity has not drastically improved over the years, for it most certainly has. However, there is still much more room for improvement, and we have a long way to go.
Many of these points have been voiced before, such as when the members of L0pht testified before Congress in May 1998 about the dangerous state of cyber-insecurity. We in the U.S. can’t seem to get our collective act together when it comes to implementing basic cybersecurity best practices that could vastly improve the protection of our computer networks against all types of cyberattacks. We can sit at our keyboards and talk about technical security controls all day long, getting mired in intellectual debates on social media about which security controls or tools are better or more affordable than others. Or, we can perform some fundamental root cause analysis of why the current U.S. cybersecurity posture is lacking. For all of the executives reading this: don’t think you can fix your cybersecurity readiness problems by throwing money at them alone. It comes down to implementing best practices and continuously monitoring systems, which the evidence shows isn’t being done.
Too much concentration on which security controls and tools are better can make you lose sight of the bigger picture. There are plenty of technical aspects of cybersecurity that require in-depth study to grasp fully and are very challenging to implement correctly, such as cryptography and public key infrastructure, to name a couple. Most cybersecurity best practices, however, are not terribly difficult to implement. We tend to take the cheapest route and fail to achieve even the basics, or we configure a system securely up to a certain point and then walk away from it for months without checking back on it. “It works, so it’s good!” Wrong answer; time to start over.
Allow me to briefly dissect the current state of U.S. cybersecurity and trace a path that technology leaders in both the public and private sectors can use to change the trajectories of their organizations and plot a new course toward improving the overall U.S. cybersecurity posture. To do that, please bear with me as I carve through a few Internet history lessons.
Still Using Legacy Systems & Internet Protocols From the 1970s

There is a fantastic quote in the movie “The Matrix Reloaded” in which the Architect explains to Neo (Keanu Reeves) that his entire life was an anomaly in the programming of the Matrix’s code. The scene correlates somewhat with the design of the protocols that have shaped the Internet as we know it today. Perhaps if a single architect had designed the Internet, it would be more congruent and flow more smoothly. Even Hollywood has embraced the fact that there is no such thing as perfect program code, much less a perfectly designed system.
Neo: Why am I here?
The Architect: Your life is the sum of a remainder of an unbalanced equation inherent to the programming of the matrix. You are the eventuality of an anomaly, which despite my sincerest efforts I have been unable to eliminate from what is otherwise a harmony of mathematical precision. While it remains a burden assiduously avoided, it is not unexpected, and thus not beyond a measure of control. Which has led you, inexorably, here.
In the 1960s, U.S. computer technology was primarily focused on the mainframe arena. During the 1970s, Ethernet and the concept of the Local Area Network (LAN) were developed and implemented. Additionally, in 1969, the Advanced Research Projects Agency (ARPA), the predecessor of today’s Defense Advanced Research Projects Agency, better known as “DARPA,” created and deployed the ARPANET. The ARPANET was the predecessor of the Internet, and it quickly grew to 19 nodes within its first couple of years, primarily government and university sites collaborating on technology research and development (R&D). The original ARPANET design allowed for a maximum of only 256 nodes and used the Network Control Protocol (NCP). NCP was the predecessor of the Transmission Control Protocol/Internet Protocol (TCP/IP), which was developed in 1974 and is still used across the Internet today.

Internet Protocol version 4 (IPv4) was deployed on the ARPANET in 1983, but it had design limitations that would become apparent later and need to be addressed. For instance, its 32-bit addresses allow for a maximum of 4,294,967,296 IP addresses. With billions of human beings on the planet, many of whom own multiple Internet-connected devices, it quickly became apparent that IPv4 would need to be succeeded by a more modern, larger-capacity Internet Protocol. With IPv4, network administrators eventually had the option of enabling Internet Protocol Security (IPsec) in the 1990s, which, without going into painstaking detail, was developed alongside IPv6 and serves to authenticate and encrypt data packets.

Along came Internet Protocol version 6 (IPv6) in 1998, which was formally adopted as an Internet Standard on 14 July 2017 (judging by how many networks still haven’t switched to it, you would think it were still a baby protocol). IPv6 supports multicasting to many different destination addresses at the same time and boasts 128-bit addresses, allowing for 2^128, or roughly 3.4 x 10^38, IP addresses. If you’re not familiar with scientific notation, pull out your calculator and do the math; you’ll see that IPv6 offers many orders of magnitude more address space than IPv4 did. This is critical because IPv4 addresses were quickly running out, and the Internet of Things (IoT) signaled a global explosion in the number of Internet-connected devices, each requiring a unique IP address, some static, some dynamic. IPv6 was also designed with IPsec support built in.
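If you’d rather not reach for the calculator, a few lines of Python make the comparison concrete. This is nothing more than a back-of-the-envelope sketch of the arithmetic described above:

```python
# Back-of-the-envelope comparison of IPv4 and IPv6 address space.
ipv4_total = 2 ** 32    # 32-bit addresses
ipv6_total = 2 ** 128   # 128-bit addresses

print(f"IPv4: {ipv4_total:,} addresses")        # 4,294,967,296
print(f"IPv6: {ipv6_total:.3e} addresses")      # ~3.403e+38
print(f"IPv6 space is about {ipv6_total // ipv4_total:.2e} times larger")
```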
A Hodgepodge of Technologies
The Internet as we know it today is a compilation of technologies developed asynchronously over decades, layered on top of one another somewhat disjointedly. The cyber threats we face today were not anticipated when these protocols were originally designed, which has left computer systems and networks riddled with security vulnerabilities for many years now. Software vendors issue patches to plug security holes, sometimes on a weekly basis (e.g., Microsoft’s “Patch Tuesday”). Of course, it is impractical to quit current protocols cold turkey and immediately adopt new, more secure ones. Any proposed changes to existing Internet protocols should be thoroughly vetted and gradually phased in over time, much like the development and adoption of the Wi-Fi Protected Access III (WPA3) protocol announced by the Wi-Fi Alliance in January 2018. Change management processes are a good thing. On a scale as vast as the entire Internet, this process takes years and years because, let’s be honest, it costs money to upgrade IT infrastructure and software. There are also hidden, intangible costs such as training IT personnel on new technology. Until the design flaws in these protocols are engineered out, we will continue to see the same types of vulnerabilities exploited time and again. Slapping Band-Aid fixes on vulnerabilities in the form of software patches is not the best approach. We have to require developers to bake security in from the initial design phase of the software development life cycle (SDLC).
Additionally, the government has not prioritized enough funding to upgrade legacy systems and IT infrastructure that are far beyond End of Life vendor support. This is a valid security concern, as analysis of modern threat actors and their tactics has demonstrated a propensity for exploiting published vulnerabilities. This was the case when the DPRK [Lazarus APT] incorporated EternalBlue, the NSA’s SMB 1.0 exploit leaked by the [Russian-attributed] Shadow Brokers, into the WannaCry ransomware worm. Microsoft had published a security patch [MS17–010] for the vulnerability two months before WannaCry began propagating across the Web. As typically happens, plenty of victim systems were still running unsupported legacy software such as Windows XP, or simply hadn’t been patched, and consequently had their files encrypted and their services disrupted. Even Windows 7 systems, a supported OS at the time, became infected when left unpatched. Some of the victims were U.S.-based; many were in the U.K. or elsewhere in Europe.
U.K.-based malware researcher Marcus Hutchins was able to quickly reverse engineer the WannaCry code and slow its spread by registering a Web domain that acted as a kill switch. However, things could have gone very differently. What if the situation had gone sideways and we had not been able to stop it? What will happen the next time a global virus or worm sweeps rapidly across the Internet? Global malware infections are similar to a pandemic, except they are not human-borne and can spread far more quickly across oceans and continents through Ethernet, fiber optic cables, and Wi-Fi radio frequency signals without needing to hop on international flights. Will U.S. critical infrastructure systems become victims because government and privately-owned critical infrastructure officials are too stubborn to listen to reason? Will it take a tragedy for common sense to prevail? Americans panic when they don’t have good cell phone reception; imagine the fresh pandemonium that will ensue if they suddenly begin losing electricity, water, communications (no phones or Internet), emergency services, public transportation, financial systems, dams, and so on. Let’s hope it never gets to that point, but hope is not a course of action.
Virtually Unlimited Number of Attack Surfaces
The average computer user is oblivious to the fact that a computer has 65,535 TCP ports and 65,535 User Datagram Protocol (UDP) ports. The significant difference between TCP and UDP is that TCP requires acknowledgment replies and UDP doesn’t; UDP is a “fire and forget weapon system,” as we used to say in the Marines. So, if we do the math on available TCP and UDP ports, that comes out to 131,070 unique ports through which an Internet-connected computer could be attacked if they are left open for whatever reason. This is why proper firewall configuration is essential. That number doesn’t even include other attack vectors such as physical access, email phishing, or software applications that are often full of bugs.
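To make that a little more tangible, here is a minimal Python sketch of the kind of check a firewall or port audit automates. The host and port list are placeholders, and you should only probe machines you own or are authorized to test:

```python
# Minimal TCP port-reachability sketch (illustrative only; probe only hosts you
# own or are authorized to test). The host and port list are placeholders.
import socket

HOST = "127.0.0.1"                 # hypothetical target
PORTS = [22, 80, 443, 3389, 8080]  # a handful of commonly probed TCP ports

for port in PORTS:
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(0.5)
        # connect_ex() returns 0 when the TCP handshake succeeds (port is listening)
        state = "open" if s.connect_ex((HOST, port)) == 0 else "closed/filtered"
        print(f"TCP {port}: {state}")
```

Multiply that loop out to all 131,070 TCP and UDP ports, and you get a sense of why unneeded open ports are such an attractive target.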

Software & Firmware Insecurities
Manufacturers and software development companies are creating hardware and software that are insecure, and there is no incentive to change this malpractice. Especially with the Internet of Things (IoT), there seems to be a total lack of regard for baking security into the initial design of the device. Children’s teddy bears have shipped with built-in video cameras and microphones and no security protocols at all, making them easy to hack; in one case, the company also left its customer [MongoDB] database unsecured and exposed on the Internet. What could go wrong? Some may think this nothing more than a one-off case, but a quick search of IoT device vulnerabilities will prove otherwise.
The corporate greed to increase profit has blinded software and IoT manufacturers to the dangers of shipping unprotected software and devices. Microsoft is well known for shipping its Windows OS in a very ‘open’ and ‘user-friendly’ condition. Windows has always been fairly plug-and-play out of the box; the security features are there, but they usually have to be enabled by the user or an administrator after the initial install. Security-Enhanced Linux (SELinux) is a different story, with much stronger access controls built in, and Red Hat Enterprise Linux (RHEL) ships with SELinux. Considering that Americans have voted with their wallets and corporate budgets, making Microsoft the #1 software company on the planet, it would be nice to see better out-of-the-box security features turned on by default in all Windows versions. In all fairness, Microsoft has put a lot of security-focused work into the Windows Defender firewall and several other security features found in Windows 7, 8, 8.1, and 10 that are beyond the scope of this article.
Ticking Time Bomb
Why do employers continue to allow their employees to access social media and personal email from work computers? As a reward for their hard work? To keep morale high? Surely it can’t be for productivity reasons considering that:
“Employees spend between one and three hours a day surfing the web on personal business at work, depending on the study reviewed.” — various studies
Aside from the obvious privacy concerns employees face when logging onto personal email accounts and social media profiles from work computers, allowing employees to surf the Web on work machines is risky from a cybersecurity standpoint. Employees generally have no expectation of privacy when they use company computers once they have signed the Acceptable Use Policy (AUP). The level of monitoring varies from employer to employer, but assume that they see everything you do on your work computer.

Phishing emails often contain malicious URLs and macro-enabled Microsoft Office files that could potentially bring down an entire enterprise network. Social media is questionable not because employees should be working instead of looking at Facebook, Twitter, and YouTube during working hours, but because social media is also a vehicle for malware, encouraging users to click on potentially malicious URLs or content. Perimeter firewalls, Intrusion Detection/Prevention Systems (IDS/IPS), Host-Based Security Systems (HBSS), and anti-virus/anti-malware software are all great and very much needed, but if you’re allowing users to access personal email and social media sites from company computers, then it truly is a ticking time bomb waiting to go off. Organizations should know better, and we’ll leave it at that. Good luck getting the CEO to stop checking Facebook and Gmail at work, though. Sometimes leadership is the problem.
Perhaps the User is not at Fault?
Blaming so-called “dumb” users for security incidents within your organization is a direct reflection of your inability as a security professional to design and administer a network that is secure and promotes a security-conscious work environment. In other words, it’s not necessarily the user’s fault that they have introduced chaos to your network. Sometimes users do foolish things, but these are often entirely avoidable when the proper security controls are implemented. It is sometimes helpful to think of users like children. As a dad, I can relate to this analogy: you know the kids are going to test the limits to see what they can get away with, so expect it and plan accordingly. It’s like setting parental controls on the TV or home Internet. You do the same thing to test security controls on your networks, first checking whether the control functioned as intended and then checking the audit logs afterward to see whether the event was captured properly. You can’t blame users for doing precisely what you would do.
In many organizations, we’ve cultivated a prevailing attitude of security as a hindrance to business operations instead of security being carefully interwoven into the overall fabric of the mission. That is a significant setback for the cybersecurity posture of any organization. If security is seen as an obstacle, then users will do whatever they can to circumvent both physical security measures and cybersecurity controls.
Your company wonders why its proprietary data keeps escaping its networks, yet you allow users to access personal email, during working hours no less. There is no need to use personal email during working hours; if you can’t go 8 hours without checking your private email, then you’ve got a problem. Besides, you can check private email on your cell phone during breaks if you’re expecting something that can’t wait until the workday is finished. Administrators can easily blacklist private email and cloud storage websites (e.g., Dropbox, iCloud, Google Cloud) by using a proxy server for company Internet access, and it is advisable to do so to prevent employees from copying sensitive or proprietary files outside of the organization. Make sure to get senior management buy-in before changing your company’s Internet usage policy.
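For illustration, the core of that proxy rule is nothing more than a domain blocklist check. Here is a toy Python sketch of the logic; the domains are examples only, and a real deployment would live in the proxy’s own configuration (e.g., Squid ACLs) with a maintained blocklist:

```python
# Toy sketch of the domain-blocklist logic a forward proxy applies. The domains
# are examples only; a real deployment would use the proxy's own configuration
# (e.g., Squid ACLs) and a maintained blocklist.
BLOCKED_DOMAINS = {"dropbox.com", "icloud.com", "mail.google.com"}

def is_blocked(hostname: str) -> bool:
    """Deny if the hostname matches, or is a subdomain of, a blocked domain."""
    hostname = hostname.lower().rstrip(".")
    return any(hostname == d or hostname.endswith("." + d) for d in BLOCKED_DOMAINS)

for host in ("www.dropbox.com", "mail.google.com", "intranet.example.com"):
    print(f"{host} -> {'DENY' if is_blocked(host) else 'ALLOW'}")
```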
Your company wonders why productivity is down and the network keeps getting infected with malicious software (malware), yet the organization does not restrict access to social media sites during working hours. Hmm. Social media sites such as Facebook, Twitter, Instagram, Reddit, YouTube, and Snapchat are notorious for user-posted malicious links that could potentially lead to malware. All it takes is one click for the download to occur. If the click isn’t blocked by the browser as a potentially malicious link or by up-to-date anti-virus/anti-malware software, the machine becomes infected, and the problem only spirals downward from there.
Is it because we’re afraid to do our jobs, or is it because our organizations’ policies prevent us from doing them? Why are we fearful, as System Administrators or Security Admins, to look in on our users’ Internet browsing activities or so-called ‘personal’ files on their work computers? Or is it because we haven’t been given the proper tools and/or permission to do our jobs properly? These types of tools and permissions should be standard for SysAdmins and Security Admins. Virtually every organization requires employees to sign an Acceptable Use Policy (AUP); it’s up to the IT/Security department to hold users accountable for not adhering to the organization’s policies. Bring it up to management and let them be the bad guy. That’s what they get paid the ‘big bucks’ to do.
One Size Never Truly Fits All
Each organization is unique in its business mission, processes, IT infrastructure, and security needs. This is why risk management frameworks are vitally important to cybersecurity. We know that checking the blocks on cybersecurity compliance checklists does not necessarily result in an un-hackable system. Where there is a will, there is a way in. Advanced Persistent Threat (APT) actors and cybercriminals are creative, patient, and persistent; if they want to get into your system and possess the proper skills, resources, and time, they most likely will at some point.
One problem with cybersecurity is that senior management has the unrealistic expectation that a ‘good’ cybersecurity professional should be able to defend against any cyber attack, every time. Well, I’ve got news for you. Not only do they not understand how cybersecurity works or what is involved in hardening a computer system against all manner of threats, but the attacker only has to be lucky or correct one time, and it’s game over. The defender, on the other hand, has to be correct and successful every time or their job may be at risk. Does that seem fair to you? The next time your organization suffers a security ‘incident,’ your first instinct should not be to fire your IT staff. Show some compassion; the first incident should be viewed as a learning experience. If corrections aren’t made and the same attack succeeds again, then the IT staff should be held accountable. The level of damage also has to be taken into consideration. Are we talking Equifax-level damage due to incompetent IT staff and a failure to patch known software vulnerabilities, or did your company website suffer a Distributed Denial of Service (DDoS) attack that took it down for several hours? There is a difference.
We’re Not Hunting Werewolves Here
Hence, there are no silver bullets in cybersecurity. In other words, there isn’t one single security control or approach that will protect a system 100% from every known type of physical or digital threat. Instead, we as cybersecurity professionals have no choice but to employ a defense-in-depth strategy. Often an entire regulatory risk management framework is required for compliance, which does not guarantee 100% protection but should mitigate the risk of most types of attacks. There is always the possibility that no one knew about a zero-day vulnerability until it was too late and the payload had already been deployed; that is the nature of software, and luckily zero-days are few and far between. Risk management frameworks have matured over the years to the point where we now have some excellent models to follow, such as the NIST Risk Management Framework (RMF), COBIT, COSO, the CIS Top 20 Critical Security Controls, and many more. It becomes a matter of determining which framework is best suited to the system if one is not explicitly mandated by compliance regulations.
Operation Blue Falcon
The Marines use the term ‘Blue Falcon’ to denote someone who isn’t a team player, who doesn’t get with the program and is basically running their own program. Blue Falcons, or “Buddy F’ers” as they are also colloquially known, are mercilessly ridiculed until they get the point and conform to expectations. The civilian private and public [government] sectors simply don’t work that way. When it comes to cybersecurity, everyone appears to be on their own program, with no clear direction or unity of effort despite laws like FISMA. Some agency and organizational leaders see cybersecurity as a priority and designate funding appropriately; some don’t. Some agencies and organizations strive to meet or exceed regulatory compliance standards; others seem not to care at all.
Organizations all have different cybersecurity strategies, resources, and depths of employee experience and skill. Without a unified cybersecurity posture, we don’t stand a chance. Look no further than the U.S. State Department, which continues to be made an example of year after year because it fails to comply with the U.S. law [see FISMA] requiring all government organizations to adhere to the NIST Risk Management Framework. Recently the State Department was hacked again, this time due to its failure to implement multi-factor or two-factor authentication (MFA/2FA) for email account access. One has to wonder who is holding this agency accountable. It should be Congress, but unfortunately this type of failure has become the status quo. Nothing changes.
Few Consequences for Foreign Adversaries Who Hack Our Systems
When the Morris worm appeared on the Internet on November 2, 1988, the first of its kind, by the way, it was prolific in that it rapidly infected thousands of computer systems. Worms are self-replicating, self-propagating programs that exploit insecure communication protocols and services used between networks and computer systems. If the Department of Justice (DOJ) truly believes that indictments of Russian, Chinese, North Korean, Iranian, and Eastern European cybercriminals are putting a dent in their operations, it is sadly mistaken. Perhaps this is the most the long arm of American justice can do in this situation. The fact is, though, that without an extradition treaty in place, those indicted for cybercrimes will never serve a day of a sentence in a U.S. prison unless they are foolish enough to travel to a country that does have an extradition treaty with the U.S.
There are two options to fix this: 1) the U.S. passes strict cybercrime laws that reach every user on the Internet who accesses a U.S. computer system or website (much as the EU did with the GDPR) and backs them up with ‘teeth’ in the form of police arrests, indictments, and sanctions; or 2) the U.S. begins conducting special-ops-style physical extractions of suspected cybercriminals, which would violate international law and potentially start World War III. I dare say we shouldn’t go to that extreme.
Air-Gapping Critical Infrastructure Systems Isn’t As Good As You Think It Is
This recommendation is relatively simple, yet contentious in that many cybersecurity professionals are at odds over it: either you’re in the camp that believes in air-gapping, or you’re not. Oh, and by the way, the National Institute of Standards and Technology (NIST), you know, those folks who write and publish volumes of federal cybersecurity reference material, recommends air-gapping Industrial Control Systems (ICS) [see NIST SP 800–82r2, SC-41, page G-59]. Certain information systems are categorized as more sensitive or critical than others. If these sensitive and critical systems are connected to the Internet in some manner, whether directly or indirectly, then it is only a matter of time before they are compromised. All of the critical infrastructure systems Americans rely upon to live, such as electric grids, water and wastewater systems, emergency services, and so many more, should be protected and disconnected, or air-gapped, from the Web.
A genuinely air-gapped system is completely disconnected from the Internet. That does not mean segregated behind a firewall through network isolation, or connected only for brief periods to update software, or any other “sort of” partial air gap. Contrary to some popular opinions, air-gapping is not security through obscurity; rather, it is part of a defense-in-depth strategy that makes it more difficult for attackers to reach the system. Air-gapped machines and networks can still be patched and scanned manually, but care must be taken as to how that is accomplished, as the Stuxnet worm analysis revealed. Any file transferred from the Internet to an air-gapped machine or standalone network should first be scanned with anti-virus/anti-malware software, and its hash should be checked against the value the vendor publishes online to verify authenticity (e.g., for vendor patches or software upgrades). This includes things like virus definition files, operating system updates, and software application patches.
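As a concrete illustration, here is a minimal Python sketch of that hash check, assuming a hypothetical patch file and a placeholder vendor-published SHA-256 value:

```python
# Sketch of verifying a downloaded patch against a vendor-published SHA-256 value
# before carrying it across the air gap. The file name and expected hash below
# are placeholders, not real values.
import hashlib

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Hash the file in chunks so large patch bundles don't have to fit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

expected = "0123abcd..."                # hash published on the vendor's site (placeholder)
actual = sha256_of("patch-bundle.bin")  # the file staged for transfer (placeholder)

if actual == expected:
    print("Hash matches the vendor-published value; OK to stage for transfer.")
else:
    print("HASH MISMATCH - do not transfer this file.")
```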
Radio frequency (RF) attacks on air-gapped systems are possible but highly unlikely, given the sophistication and equipment an attacker needs to pull one off. Air-gapping is not the end of the world or a return to the Stone Age. Let’s put this topic to bed and move on from the silly anti-air-gap arguments. Although air-gapping isn’t 100% effective, it can still be a very important layer of a defense-in-depth strategy; it works when done correctly, and it is still widely used across industry.
Our Information Systems are NOT Cyber Resilient
Designing cyber-resilient information systems is a basic tenet of systems security engineering. However, it is easier said than done because it requires a system to be designed so that it can recover and continue to operate, even if only partially, following a cyber attack. Space systems and military weapons systems have been designed this way for years now, with varying degrees of success. Has the same philosophy and effort been applied to the nation’s critical infrastructure systems? If not, why not? What could be more important than ensuring our critical infrastructure fails to a “safe” state and is quickly recoverable following a disruption of service, whatever the cause? I am confident that some critical infrastructure is cyber resilient, but “some” is not good enough when human lives are at stake. Where is the accountability? It circles back to unity of effort and having a united national approach to computer network defense (CND). This is where the Department of Homeland Security (DHS) and U.S. Cyber Command should step up and enforce regulation for the sake of national security. Instead, what is happening is more of the same status quo: everyone is on their own program.
A Lack of Scientific Quantitative Data
In the scientific community, research is conducted based on factual data that is measurable and repeatable. We have a serious problem in the cybersecurity community: we struggle to provide metrics or scientifically defensible data to support or justify our claims and actions. This should not be the case, since computer systems are built on computer science and can therefore be measured using quantitative data analysis that produces repeatable results. If the cybersecurity community wants a permanent seat at the C-suite table, it needs to get in the habit of quantifying data and putting results into a context that business managers can understand, track, and help improve. Speaking anecdotally may work fine in social media threads, but it has no place in serious, fact-based discussions.
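As a hypothetical example of what “quantified” can look like, the sketch below (with invented numbers) turns raw remediation records into a couple of simple metrics a manager can actually track over time:

```python
# Hypothetical example of turning raw remediation records into metrics leadership
# can track. All numbers are invented for illustration.
from statistics import mean, median

# system name -> days from patch release to remediation
remediation_days = {
    "web-frontend": 4,
    "hr-database": 21,
    "legacy-app-server": 63,
    "email-gateway": 9,
}

values = list(remediation_days.values())
print(f"Mean time to patch:   {mean(values):.1f} days")
print(f"Median time to patch: {median(values)} days")
print(f"Systems exceeding a 30-day SLA: {sum(v > 30 for v in values)} of {len(values)}")
```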
Narcissism is not limited to the White House
Having been a part of the InfoSec community in some fashion or another for over two decades, I can relate to the frustrations we face. There aren’t enough trained, educated, or qualified cybersecurity professionals to fill the need. Or perhaps this perceived skills gap exists because companies don’t want to pay what cybersecurity professionals feel they are worth? How many times have you seen a job description asking for an entry-level cybersecurity professional with 3–5 years of experience, a B.S. in Computer Science or an IT-related field of study, and a Certified Information Systems Security Professional (CISSP) certification? ZipRecruiter lists roughly $90K as the industry norm for entry-level cybersecurity analysts, yet it is not uncommon to see job advertisements for entry-level analysts starting at $75K-to-$80K/year, lowball offers that are a bit unrealistic and only compound the industry shortage crisis.
On top of this skills shortage, we have some brilliant people within the InfoSec industry who are doing more damage than good, whether they realize it or not. It’s some, not all. Their snarky, ‘I am more intelligent than thou’ technocrat attitude reeks and isn’t earning them any fans in the corporate business world or within government circles [read: people who can actually effect meaningful policy change]. Putting average users down is not only contradictory to what we do as cybersecurity professionals, it is incredibly off-putting and counterproductive to improving both the overall image of cybersecurity and the cybersecurity posture of your organization. Companies and organizations from both the private and public sectors go to security conventions like Black Hat and DEF CON to learn from expert hackers and security researchers, advertise their products, and potentially meet new talent they may want to hire. If you come across with a know-it-all, super-L33T attitude concerned only with data privacy and encrypting all the things, how does that help the overall cybersecurity awareness and posture?
We [the InfoSec community] criticize the medical community for failing to patch software or for using Windows XP and Windows 7 Enterprise when Windows 10 is available, but we fail to understand that it is cost-prohibitive for hospitals and health clinics to upgrade their systems. It’s not as if hospitals and clinics have throngs of IT support staff going around patching systems and investigating security incidents. As cybersecurity professionals, we should strive to find affordable solutions for companies and organizations that best defend their mission-critical systems. It might not make sense to spend thousands upgrading an MRI machine to the latest version of Windows, or to chase every software patch, when the machine is never connected online.
We’ve got to get out from behind our computer terminals and walk around to check security on systems, Operational Security (OpSec), Emission Security (EMSEC), and storage media. Not every process can be automated. You still need to get out of your comfort zone, talk to users, talk to management, and find out where the “rub” is within your organization concerning cybersecurity; conduct your rounds, if you will.
I think the word ‘expert’ is like the word ‘hero’: it is overused to the point where it has lost meaning. ‘Expert’ is one of those generic labels we as a society tend to place on people who are highly educated, have many years of experience, or just seem to know a lot about a particular subject. The term can be misleading, though; I have seen numerous so-called cybersecurity experts provide inaccurate or flat-out wrong guidance. This is partly why continuing professional education (CPE) requirements for certifications are essential. Within the Information Security (InfoSec) or cybersecurity community, there is a tendency for professionals to rack up degrees and IT security industry certifications like trophies on a shelf, listed on their LinkedIn profiles and resumes.
Let me be clear about this: having a Ph.D. or a string of certification acronyms after your name does not make you better qualified or smarter than someone who lacks those credentials. Nowadays, the ability to attend a university is largely driven by financial aptitude rather than scholastic aptitude.
Yes, established industry certification benchmarks are important, but we shouldn’t place too much value on them, since they have proven time and again to be poor indicators of someone’s aptitude or potential to grow within the industry.
Misguided Focus
First off, can we please stop bickering about whether we should call our field of study Information Security (InfoSec), Information Assurance (IA), or Cybersecurity? For goodness’ sake, it’s time to let it die already. We have so many better things we could be doing with our time. If you’ve got time to rant on Twitter, Facebook, Reddit, or LinkedIn about why some hackers are good, as opposed to the negative, criminal image Hollywood has planted in America’s mind, then you might want to take a hard look at your priorities in life. Get over it. Most people will naturally associate computer “hacking” with something bad and call anyone who does it a “hacker.” I don’t get offended by this association; you can view it as an opportunity to educate them on the various types of hackers or the original meaning of the term. I’ve often found that it makes for some interesting conversations.
Security conferences are another colossal time suck. Sure, they are fun to attend, to pick up free swag and meet up with your fellow InfoSec colleagues, but aside from the talk content, what is the real value of attending these events? Social networking happens continuously, not just at conferences. And don’t forget the costs associated with attending the plethora of annual conferences. If your company will pay for your time to attend, that is awesome! Most folks, however, don’t work for companies like that. So, in addition to using paid or unpaid time off, you’re also looking at the conference fee (DEF CON was nearly $300 cash this year), travel and lodging expenses, and food and drink. I have several InfoSec friends who budget all year long to afford a few of the big-name security conferences like Black Hat, DEF CON, and RSA. Are conferences your focus, or should your focus perhaps be elsewhere?
Part of the problem the U.S. cybersecurity industry faces is that we cannot all agree on what to call whatever it is we do on a daily basis. I know, I know, “I do cybersecurity, damn it!” OK, we got that.

Beyond that, though, what do you do? How do we justify our existence to our employers? Can you produce quantifiable results that, when measured, demonstrate increased security of your organization’s computer systems? Take a moment and ask yourself: what value do we add to the organization? Realizing that I won’t earn any brownie points by saying this, I think it is time that we in the InfoSec community get off our high horses and come down to the user level so that we can work with our users and educate them on how best to improve our overall cybersecurity posture, not just within our organizations, but nationally. Cyber on, Garth!