The future of cybersecurity is about more than AI assistants. It's about changing the whole paradigm of how we automate cyber defense: self-learning systems, built on Reinforcement Learning from Human Feedback (RLHF) loops, that can fight the threats of today and of tomorrow. But despite the hype that Large Language Models (LLMs) and generative AI have sparked, generative AI as it exists today won't get us to our destination: an intelligent, self-learning cyber defense technology. Getting there will require a blend of different technologies and approaches, and it won't happen overnight.
Cybersecurity is fundamentally a classification problem: separating benign activity from malicious. That task requires reasoning at which LLMs have yet to prove themselves adept, let alone superior to other machine learning approaches. LLMs excel at processing and generating human language, understanding context, semantics, and the nuances of different forms of communication, capabilities that meaningfully improve tasks like incident reporting, explainability, and advisory analysis. But while that is useful, it is unlikely to single-handedly solve the core challenge of the SOC: detecting and remediating threats with accuracy and speed. Treating generative AI as the new 'silver bullet' misses the entire breadth and depth of an evolving field that is set to touch every facet of security operations and beyond.
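To make the classification framing concrete, here is a deliberately toy sketch of scoring alert features and thresholding the result into benign or malicious. Every feature name, weight, and threshold below is hypothetical, purely for illustration; it is not how any production detection model works.

```python
# Illustrative only: a toy benign-vs-malicious classifier over hand-picked
# alert features. All names, weights, and the threshold are hypothetical.

FEATURE_WEIGHTS = {
    "rare_parent_process": 2.0,   # process rarely seen spawning this child
    "off_hours_activity": 1.0,    # activity outside the user's normal hours
    "known_bad_hash": 4.0,        # file hash matches threat intelligence
    "signed_binary": -2.0,        # a signed binary lowers the score
}

def score_alert(features: dict) -> float:
    """Sum the weights of the features present on an alert."""
    return sum(w for name, w in FEATURE_WEIGHTS.items() if features.get(name))

def classify(features: dict, threshold: float = 3.0) -> str:
    """Label an alert malicious when its score crosses the threshold."""
    return "malicious" if score_alert(features) >= threshold else "benign"

print(classify({"rare_parent_process": True, "known_bad_hash": True}))  # malicious
print(classify({"off_hours_activity": True, "signed_binary": True}))    # benign
```

Real systems replace the hand-set weights with learned models and far richer features, but the shape of the problem, features in and a benign/malicious decision out, is the same.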
In a rare case of competing analyst firms agreeing, both Forrester and Gartner have released reports contextualizing which use cases generative AI and LLMs are, and aren't, suited for. As the technology advances, LLMs improve their reasoning abilities, and specialized models grow more sophisticated, LLMs will become effective orchestrators. But generative AI and LLMs are not a silver bullet. One of their key shortcomings is a lack of agency: the ability to act autonomously without human supervision. Realistically, solving the entire problem set of security operations will require a combination of different AI and machine learning approaches, not to mention a completely new type of architecture. That's what we're building at Hunters.
The AI Revolution is an Evolution
At Hunters, we believe that the AI revolution will occur in phases, not dissimilar to the six stages of autonomous driving. In the short to mid-term, we see three distinct phases on the path to building a truly revolutionary AI-driven SOC platform. Each phase builds on the previous one, gradually enhancing the system's capabilities and integrating more and more sophisticated AI to handle the complexities of modern, evolving cyber threats. While we've drawn the progression as linear, it's also worth recalling the futurist William Gibson, who said, "the future is already here; it's just not very evenly distributed." We don't expect every dependency and development to evolve in lockstep; some use cases we apply AI to will mature faster than others.
Phase 1 (Now): The Building Blocks
Today we find ourselves transitioning out of Phase 1. We have developed many AI building blocks that encode our team's deep domain expertise, automatically performing tasks such as data analysis and modeling, detection, automatic investigation drilldowns, threat clustering, scoring, enrichment, graph correlation strategies, and more. Most of these building blocks are deployed narrowly or in isolation, without the thorough integration with other techniques that would create strong synergies, and with limited ability to learn from past executions.
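As an intuition for one of these building blocks, graph correlation, here is a minimal union-find sketch that merges alerts into one cluster whenever they share an entity (a host, user, or file hash). The alert and entity fields are hypothetical; this is a simplified stand-in for the real correlation strategies, not a description of them.

```python
# Toy graph correlation: alerts sharing an entity collapse into one cluster
# via union-find. All field names and example data are hypothetical.

def cluster_alerts(alerts: list[dict]) -> list[set[str]]:
    parent: dict[str, str] = {}

    def find(x: str) -> str:
        while parent.setdefault(x, x) != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    def union(a: str, b: str) -> None:
        parent[find(a)] = find(b)

    entity_owner: dict[str, str] = {}  # first alert seen for each entity
    for alert in alerts:
        for entity in alert["entities"]:
            if entity in entity_owner:
                union(alert["id"], entity_owner[entity])
            else:
                entity_owner[entity] = alert["id"]

    clusters: dict[str, set[str]] = {}
    for alert in alerts:
        clusters.setdefault(find(alert["id"]), set()).add(alert["id"])
    return list(clusters.values())

alerts = [
    {"id": "a1", "entities": ["host-7", "user-alice"]},
    {"id": "a2", "entities": ["host-7"]},    # shares host-7 with a1
    {"id": "a3", "entities": ["user-bob"]},  # unrelated
]
print(cluster_alerts(alerts))  # two clusters: {a1, a2} and {a3}
```

Collapsing many raw alerts into a handful of entity-linked clusters is one of the simplest ways the alert-noise reduction described below can happen.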
In parallel to the analytical building blocks, a critical part of the SOC AI evolution is data infrastructure. Ours is built to collect, process, and store massive data volumes using data lakes, data pipelines, and, in the future, vector databases. We believe the transition from Phase 1 to Phase 2 will depend crucially on adopting security industry standards such as OCSF, along with open table and file formats like Iceberg and Parquet, for data storage and exchange.
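To illustrate what adopting a standard like OCSF looks like in practice, here is a sketch of normalizing a vendor-specific raw event into an OCSF-style Process Activity shape. The raw field names are hypothetical vendor fields, and only a handful of OCSF attributes are shown; consult the OCSF schema for the authoritative structure.

```python
# Sketch: mapping a hypothetical vendor event into an OCSF-style
# Process Activity event (class_uid 1007 in the OCSF schema).
# The raw field names are invented for illustration.

def to_ocsf_process_activity(raw: dict) -> dict:
    return {
        "class_uid": 1007,  # OCSF Process Activity
        "time": raw["timestamp"],
        "actor": {"user": {"name": raw["user"]}},
        "process": {
            "name": raw["proc_name"],
            "cmd_line": raw["cmdline"],
        },
        "device": {"hostname": raw["host"]},
    }

raw_event = {
    "timestamp": 1718000000,
    "user": "alice",
    "proc_name": "powershell.exe",
    "cmdline": "powershell -nop -w hidden -enc SQBFAFgA",
    "host": "ws-042",
}
event = to_ocsf_process_activity(raw_event)
print(event["class_uid"], event["process"]["name"])
```

Once every source is normalized into one schema, the same detections, enrichments, and correlation logic can run over all of them, which is the point of the standard.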
We have focused on detection, triage, and investigation, because we believe this is where the biggest bottleneck in SOC analysts' workflows lies, deploying techniques that excel at use cases such as dramatically reducing alert noise and increasing SOC productivity.
We have deployed a range of such intelligence and automation building blocks in our solution. We've also deployed LLMs for explainability in the specific use cases that benefit from them, such as explaining complex command-line executions.
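A minimal sketch of that explainability use case: wrapping a suspicious command line in a prompt before handing it to an LLM. The prompt wording is hypothetical, and the model call itself is stubbed out; this shows the shape of the integration, not our actual implementation.

```python
# Illustrative only: building an LLM prompt to explain a command line.
# The prompt text is hypothetical and the model call is commented out.

def build_explain_prompt(cmd_line: str) -> str:
    return (
        "You are a SOC analyst assistant. Explain, step by step, what the "
        "following command line does and whether any part of it is commonly "
        "associated with malicious activity:\n\n"
        f"{cmd_line}\n"
    )

prompt = build_explain_prompt(
    "powershell -nop -w hidden -enc SQBFAFgA"
)
# explanation = llm_client.complete(prompt)  # hypothetical LLM client
print(prompt)
```

The value for the analyst is turning an opaque, heavily obfuscated command line into a plain-language narrative in seconds.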
Phase 2: Integration and Optimization
We believe the key to enabling an AI-driven SOC lies in the foundations described earlier: a security knowledge layer that serves as the bedrock for building intelligent machines. The next step in the evolution is letting these machines, in the form of AI agents, utilize and learn from these clusters, graphs, and raw data. The agents will form a multi-agent network, each assigned distinct tasks (e.g., triage), with access to the skills and actions needed to do their work. They will apply various ML techniques, such as reinforcement learning, to optimize task execution based on past experience embedded in the knowledge layer.
By leveraging multiple technologies and AI agents, we will further automate and vastly accelerate threat detection, identification, and remediation. SOC personnel will remain in charge of complex investigations and the identification of new threats, helping the intelligent system improve and learn.
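The learning loop described above can be sketched, under heavy simplification, as a bandit: an agent picks a triage action for an alert cluster, receives analyst feedback as a reward, and updates its value estimates. The action names and reward scheme are hypothetical; real systems would use far richer state and learning methods than this.

```python
# Toy epsilon-greedy bandit standing in for a triage agent learning from
# analyst feedback. Actions, rewards, and the simulation are hypothetical.
import random

class TriageBandit:
    def __init__(self, actions: list[str], epsilon: float = 0.1):
        self.epsilon = epsilon
        self.values = {a: 0.0 for a in actions}  # running value estimates
        self.counts = {a: 0 for a in actions}

    def choose(self) -> str:
        if random.random() < self.epsilon:               # explore
            return random.choice(list(self.values))
        return max(self.values, key=self.values.get)     # exploit

    def update(self, action: str, reward: float) -> None:
        """Incremental mean update from analyst feedback (+1 / -1)."""
        self.counts[action] += 1
        self.values[action] += (reward - self.values[action]) / self.counts[action]

random.seed(0)
bandit = TriageBandit(["auto_close", "escalate", "quarantine_host"])
for _ in range(200):
    action = bandit.choose()
    # Simulated feedback: analysts reward escalation for this alert type.
    reward = 1.0 if action == "escalate" else -1.0
    bandit.update(action, reward)
print(max(bandit.values, key=bandit.values.get))
```

The point of the sketch is the feedback loop itself: each analyst verdict nudges the agent's policy, so the system's behavior improves with use rather than remaining fixed at deployment.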
Phase 3: Scaling Intelligence
Further into the future, connected agents will be integrated into highly interconnected networks that share insights and adapt more quickly to emerging threats. Humans will be guided by actionable information and kept in-the-loop or on-the-loop as needed for autonomous processes, creating direct human-agent feedback loops. Additionally, we will see new security knowledge generated automatically by intelligent systems, based on a multitude of feedback loops and learning models.
Keeping our eye on the prize: we're here to stop cyber threats; AI is just a means to an end.
Ultimately, fighting cybercrime is a human endeavor. We know that nothing will eliminate cybercriminals and adversaries outright; the goal is to shift the equilibrium and stay ahead of the curve. That will require full commitment to discovering a distinctive fusion of machine and human intelligence.
At Hunters, we are committed to remaining at the forefront of cybersecurity operations technology built to stop cybercrime. Our solutions will not just adapt to the changing landscape but also set new standards for innovation and effectiveness in protecting our clients. As AI technologies evolve, so will Hunters' solutions, with the goal of providing the most advanced and dependable cybersecurity defenses possible.