Artificial intelligence and machine learning have never been more prominent in the public forum. CBS’s 60 Minutes recently featured a segment promising myriad benefits to humanity in fields ranging from medicine to manufacturing. Former world chess champion Garry Kasparov recently debuted a book on his historic chess matches with IBM’s Deep Blue. Industry luminaries continue to opine about the threat AI may pose to human jobs, and even to humanity itself. Much of the conversation focuses on machines replacing humans. But the future does not have to see humans eclipsed by machines.
In my field of cybersecurity, we face a shortage of human talent, and, as a 451 Research report released this week illustrates, we must rely on technologies such as these to amplify the capabilities of the humans we have. Furthermore, as long as there are human adversaries behind cybercrime and cyber warfare, there will always be a critical need for human intellect teamed with technology.
We recently commissioned 451 Research to delve into this area in one of its Pathfinder Advisories. Released this week, the report nicely frames the concept of “human-machine teaming” in cybersecurity. It identifies ways we can use machine learning to overcome the challenge of protecting organizations with too few cybersecurity professionals.
To quote the report:
“Machine learning makes security teams better, and vice versa. Human-machine teams deliver the best of both worlds:
- Machine learning means security teams are better informed so they can, therefore, make better decisions. Security executives realize that the intelligence and creativity of their security operations experts are critical business resources. Machine learning is a technology that allows chief security officers (CSOs) to get the most out of human and security product assets.
- Adversaries are human, continuously introducing new techniques. Creative new tactics and strategies dealt by adversaries force security teams to employ machine learning to automate the discovery of new attack methods. Creative problem solving and the unique intellect of the security team strengthen the response.
- Machine learning becomes more accurate as more data is available to feed its algorithms. Enhancements in handling big data using high-performance and massive-capacity storage architectures have enabled the growth of artificial intelligence.
- IT teams need help analyzing faults. In those rare instances when endpoint security cannot prevent damage from an attack, machine learning accumulates relevant data elements into one place, placing it at the fingertips of security analysts when needed.
- Human-machine teaming makes for sustainable endpoint security. As new threats are introduced, security teams alone cannot sustain the volume, and machines alone cannot issue creative responses. Human-machine teams make endpoint security more effective without draining performance or inhibiting the user experience.”
Machine learning has enabled us to improve the accuracy of hurricane forecasting from within 350 miles to within 100 miles. Nate Silver’s best seller The Signal and the Noise notes that even as our computer forecasting models have improved, combining them with human knowledge of how weather systems work improves forecast accuracy by 25%. Such human-machine teaming has literally saved thousands of lives.
As we implement machine learning deeper into our cyber defenses, we must recognize that humans are good at doing certain things and machines are good at doing certain things. The best outcomes will come from combining them. Machines are good at processing massive quantities of data and performing operations that inherently require large scales. Humans have strategic intellect, so they can understand the theory about how an attack might play out even if it has never been seen before.
Of course, thunderstorms are not trying to evade the latest in machine learning technologies applied by human beings. Cybercriminals are.
Cybersecurity is very different from other fields that employ big data, analytics, and machine learning because there is an adversary trying to reverse-engineer your models and evade your capabilities. Security technologies such as spam filters, virus scanning, and sandboxing are still part of protection platforms, but their industry buzz has cooled since criminals began working to evade them.
Based on the information they receive, IT security staff on the front lines of an attack can anticipate new evasion techniques, exploits, and other tactics in ways that detection models trained on past data cannot. A major area in which we see this playing out is attack reconstruction, where technology assesses what has happened inside your environment and then engages a human analyst to work the scenario.
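To make that idea concrete, here is a minimal sketch of the machine side of attack reconstruction: grouping related security events by host and ordering them in time so an analyst reviews one candidate attack narrative per machine instead of a pile of raw alerts. The event fields, values, and the notion of “related” used here are illustrative assumptions, not a description of any particular product.

```python
from collections import defaultdict
from datetime import datetime

# Illustrative event records; in practice these would come from endpoint
# and network telemetry. Hosts, times, and types are hypothetical.
events = [
    {"time": "2017-06-01T09:02:11", "host": "ws-114", "type": "email_attachment_opened"},
    {"time": "2017-06-01T09:02:43", "host": "ws-114", "type": "powershell_spawned"},
    {"time": "2017-06-01T09:05:19", "host": "ws-114", "type": "outbound_connection"},
    {"time": "2017-06-01T11:47:02", "host": "ws-302", "type": "login_success"},
]

def reconstruct(events):
    """Group events by host and sort them in time, producing a per-host
    timeline the human analyst can read as a possible attack story."""
    timeline = defaultdict(list)
    for e in events:
        timeline[e["host"]].append(e)
    for host in timeline:
        timeline[host].sort(key=lambda e: datetime.fromisoformat(e["time"]))
    return timeline

for host, steps in reconstruct(events).items():
    print(f"Host {host}:")
    for e in steps:
        print(f"  {e['time']}  {e['type']}")
```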
Efforts to orchestrate security incident responses can benefit tremendously when a complex set of actions is required to remediate a cyber incident. Some of those actions can have severe consequences for networks. Having a human in the loop not only helps guide the orchestration steps but also provides a check on whether the required actions are appropriate for the level of risk involved.
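A simple way to picture that human-in-the-loop gate: auto-execute low-risk remediation steps and route higher-risk ones to an analyst for approval. The action names, risk scores, and threshold below are assumptions made for the example, not a real playbook.

```python
# Hypothetical remediation actions with assumed risk scores (0 = benign, 10 = severe).
ACTIONS = [
    {"name": "quarantine_file", "risk": 2},
    {"name": "block_ip_at_firewall", "risk": 4},
    {"name": "isolate_host_from_network", "risk": 7},
    {"name": "disable_user_account", "risk": 8},
]

APPROVAL_THRESHOLD = 5  # assumed cutoff above which a human must sign off

def orchestrate(actions, approver):
    """Run low-risk steps automatically; send high-risk steps to a human approver."""
    for action in actions:
        if action["risk"] < APPROVAL_THRESHOLD:
            print(f"auto-executing: {action['name']}")
        elif approver(action):
            print(f"approved and executing: {action['name']}")
        else:
            print(f"held for review: {action['name']}")

# Example: an analyst who signs off only on isolating the host.
orchestrate(ACTIONS, approver=lambda a: a["name"] == "isolate_host_from_network")
```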
The 451 report asserts that machine learning will manifest itself by optimizing the cyber professional’s user experience, by automatically flagging suspicious behavior, and by making high-value investigation and response data available automatically. In this way, says the report, IT security teams will have “the ability to rapidly dismiss alerts and accelerate solutions that thwart new threats.”
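One way to read that claim in code: score each alert, attach the supporting context up front, and surface the results so analysts can dismiss the low-scoring ones at a glance. The signal names, weights, and threshold below are invented for illustration; a real system would learn them from data.

```python
# Assumed per-signal weights; purely illustrative.
WEIGHTS = {"rare_process": 0.5, "new_external_destination": 0.3, "off_hours": 0.2}

alerts = [
    {"id": 1, "signals": {"rare_process": 1, "new_external_destination": 1, "off_hours": 0},
     "context": "powershell.exe contacting an unfamiliar external address"},
    {"id": 2, "signals": {"rare_process": 0, "new_external_destination": 0, "off_hours": 1},
     "context": "routine backup job running at 02:00"},
]

def triage(alerts, threshold=0.5):
    """Attach a score and a verdict so analysts can dismiss low scores quickly."""
    for a in alerts:
        a["score"] = sum(WEIGHTS[k] * v for k, v in a["signals"].items())
        a["verdict"] = "investigate" if a["score"] >= threshold else "likely benign"
    return sorted(alerts, key=lambda a: a["score"], reverse=True)

for a in triage(alerts):
    print(f"alert {a['id']}: score={a['score']:.1f} -> {a['verdict']} ({a['context']})")
```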
In threat intelligence analysis, attack reconstruction, and incident response orchestration, human-machine teaming takes the machine assessment of new information and layers upon it the intellect that only a human can bring.
Doing so can lead us to better outcomes in all aspects of cybersecurity. Now more than ever, better outcomes are everything in cybersecurity.