The future of cybersecurity is about more than just AI assistants. It’s about changing the whole paradigm of how we automate cyber defense: self-learning systems, built on Reinforcement Learning from Human Feedback (RLHF) loops, that can fight the threats of today and of tomorrow. But despite the hype that Large Language Models (LLMs) have sparked, generative AI as it exists today won’t get us to that destination: an intelligent, self-learning cyber defense technology. Getting there will require a blend of different technologies and approaches, and it won’t happen overnight.

Cybersecurity is fundamentally about classifying between benign and malicious activity, a task that requires reasoning at which LLMs have yet to prove adept, let alone superior to other machine learning approaches. LLMs excel at processing and generating human language, understanding context, semantics, and the nuances of different forms of communication, capabilities that have a huge impact on tasks like incident reporting, improving explainability, and analyzing advisories. But while that may be useful, it is unlikely to single-handedly solve the core challenge of the SOC: detecting and remediating threats with accuracy and speed. Focusing on generative AI as the new ‘silver bullet’ misses the entire breadth and depth of an evolving field that is set to touch every facet of security operations and beyond.

In a rare case of competing analyst firms agreeing, even Forrester and Gartner have released reports contextualizing which use cases generative AI and LLMs are, and aren’t, suited for. As the technology advances, and as LLMs’ reasoning abilities and the sophistication of specialized models improve, they will become effective orchestrators. But generative AI and LLMs are not a silver bullet. One of their key issues is a lack of agency: the ability to act autonomously without human supervision. Realistically, solving the entire problem set of security operations will require a combination of different AI and machine learning approaches, not to mention a completely new type of architecture. That’s what we’re building at Hunters.

 

The AI Revolution is an Evolution

At Hunters, we believe that the AI revolution will occur in phases, not dissimilar to the six levels of driving automation. In the short to mid term, we see three distinct phases on the path to building a truly revolutionary AI-driven SOC platform. Each phase builds on the previous one, gradually enhancing the system's capabilities and integrating ever more sophisticated AI to handle the complexities of modern, evolving cyber threats. While we’ve made the progress look linear, it’s worth recalling the science fiction author William Gibson, who said, “The future is already here; it’s just not very evenly distributed.” We don’t expect every dependency and development to evolve in lockstep; some use cases we apply AI to will develop faster than others.

 

Phase 1 (Now): The Building Blocks

Today we find ourselves transitioning out of Phase 1. We have developed many AI building blocks that encode our team's deep domain expertise, automatically performing tasks such as data analysis and modeling, detections, automatic investigation drilldowns, threat clustering, scoring functions, enrichments, graph correlation strategies, and more. Most of these building blocks are deployed narrowly or in isolation, with little integration with other techniques to derive strong synergies and limited ability to learn from past executions.

In parallel to these analytical building blocks, a critical part of the SOC AI evolution is data infrastructure. Ours is built to collect, process, and store massive data volumes using data lakes, data pipelines, and, in the future, vector databases. We believe that transitioning from Phase 1 to Phase 2 will crucially depend on adopting security industry standards such as the Open Cybersecurity Schema Framework (OCSF), along with open table and storage formats like Apache Iceberg and Parquet, for data storage and exchange.
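To make the format side of that concrete, here is a minimal, illustrative sketch of normalizing a raw login event into a simplified OCSF-style record and writing it to Parquet. It assumes pyarrow as a dependency, and the field names are simplified stand-ins for OCSF's Authentication class rather than a complete schema:

```python
# Illustrative sketch: mapping a hypothetical vendor log record onto
# simplified OCSF-style fields and persisting it as Parquet via pyarrow.
# The fields approximate OCSF's Authentication class; they are not a
# complete or authoritative schema.
import pyarrow as pa
import pyarrow.parquet as pq

raw_event = {  # hypothetical vendor-specific login record
    "timestamp": "2024-05-01T12:34:56Z",
    "event_type": "logon",
    "user": "alice",
    "source_ip": "203.0.113.7",
    "outcome": "success",
}

ocsf_like = {
    "class_uid": 3002,                         # OCSF Authentication class id
    "time": raw_event["timestamp"],
    "activity_name": raw_event["event_type"],
    "user_name": raw_event["user"],
    "src_ip": raw_event["source_ip"],
    "status": raw_event["outcome"],
}

table = pa.Table.from_pylist([ocsf_like])
pq.write_table(table, "auth_events.parquet")   # columnar, engine-agnostic storage
```

Once events land in an open columnar format like this, any query engine that reads Parquet (or an Iceberg table on top of it) can consume them, which is exactly the interoperability these standards are meant to buy.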

We have focused on detection, triage, and investigation, because we believe this is where the biggest bottleneck in SOC analysts' workflows lies, deploying techniques that excel at use cases such as dramatically reducing alert noise and increasing SOC productivity.

Some examples of the building blocks we’ve deployed in Hunters include:

  • Anomaly Detection & Multi-Context UEBA: We use proprietary ML-based anomaly detection algorithms to detect events such as anomalous logins, based on deep insights from historical data. Additionally, we deploy multi-context UEBA (User and Entity Behavior Analytics) techniques to help reduce noise and surface the events that matter.
  • Multi-Stage Scoring Layers: We deploy several dynamic scoring layers over vast volumes of alerts and threat clusters, based on multiple confidence and severity models, to produce an accurate threat score.
  • Streamlined Threat Detection through Advanced Clustering: To optimize our threat detection, we've moved beyond addressing individual alerts. By clustering related threats based on their type and context, we significantly cut down background noise. This streamlined approach not only sharpens our detection capabilities, but also enhances our overall response efficiency, ensuring quicker triaging and more effective investigation.
  • Graph-Based Correlation & Attack Stories: We use graphs to narrate attacks, drawing all of the separate strands together into a single story instead of multiple isolated events; a simplified sketch of this idea follows the list.
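As a rough illustration of the graph-based correlation idea, the following sketch links alerts that share entities (users, hosts, IPs) and treats each connected component as a candidate attack story. It assumes networkx, and the alert records are hypothetical:

```python
# Illustrative sketch: correlating alerts into "attack stories" by
# connecting alerts that share an entity (user, host, IP). Assumes
# networkx; the alerts below are heavily simplified, made-up records.
import networkx as nx

alerts = [
    {"id": "a1", "entities": {"user:alice", "host:web-01"}},
    {"id": "a2", "entities": {"host:web-01", "ip:203.0.113.7"}},
    {"id": "a3", "entities": {"user:bob"}},
]

g = nx.Graph()
for alert in alerts:
    g.add_node(alert["id"], kind="alert")
    for entity in alert["entities"]:
        g.add_node(entity, kind="entity")
        g.add_edge(alert["id"], entity)   # alert touches entity

# Each connected component groups alerts linked by shared entities.
stories = [
    sorted(n for n in component if g.nodes[n]["kind"] == "alert")
    for component in nx.connected_components(g)
]
print(stories)  # e.g. [['a1', 'a2'], ['a3']]: a1 and a2 correlate via web-01
```

A production system would weight edges by time and confidence, but even this toy version shows how shared context collapses many alerts into a few narratives.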

These are just a few examples of the intelligence and automation building blocks already deployed in our solution. We've also deployed LLMs for explainability in specific use cases that benefit from them, such as explaining complex command-line executions.
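For instance, a command-line explainer can be a thin wrapper around an LLM API. The sketch below uses the openai Python SDK with a placeholder model name and prompt; the wiring is illustrative, not our production implementation:

```python
# Illustrative sketch: using an LLM to explain a suspicious command line.
# Assumes the openai Python SDK (v1.x) and an OPENAI_API_KEY in the
# environment; the model name and prompt are placeholders.
from openai import OpenAI

client = OpenAI()

def explain_command_line(cmd: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model choice
        messages=[
            {"role": "system",
             "content": "You are a SOC assistant. Explain what this command "
                        "line does and flag anything suspicious."},
            {"role": "user", "content": cmd},
        ],
    )
    return response.choices[0].message.content

print(explain_command_line("powershell -nop -w hidden -enc <base64 payload>"))
```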

 

Phase 2: Integration and Optimization

We believe the key to enabling an AI-driven SOC lies in the foundations mentioned earlier: a security knowledge layer that serves as the bedrock for building intelligent machines. The next step in the evolution is allowing these machines, in the form of AI agents, to utilize and learn from these clusters, graphs, and raw data. These agents will form a multi-agent network and will be used for distinct tasks (e.g., triage), with access to skills and actions sufficient to do their work. AI agents will utilize various ML techniques, such as reinforcement learning, to optimize task execution based on past experiences embedded in the knowledge layer. Some examples of our plans include:

  • Agent-Based Architecture: The breadth of our architecture encompasses various components that must interact intelligently through shared agreements, aligned with the knowledge each component holds as well as its impact on upstream and downstream components.
  • From Brute Force to Precision: Transition from a brute force approach (processing every alert in a linear, exhaustive manner) to a more intelligent approach where the agent determines the most effective path for investigation based on likely outcomes and past learning.
  • Reinforcement Learning from Human Feedback: Continuous feedback from real-world human interactions will be used to train machine learning models, leading to self-improving, more precise investigation and scoring flows that directly improve the state of the customer's environment; a simplified sketch of such a feedback loop follows this list.
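To ground these plans, here is a deliberately simplified sketch of feedback-driven triage. It stands in an epsilon-greedy bandit for a fuller RLHF pipeline: analyst verdicts become rewards that update the estimated value of each investigation path, so the agent gradually prefers paths that pay off. All names are hypothetical:

```python
# Illustrative sketch: feedback-driven path selection for triage.
# An epsilon-greedy bandit is a stand-in for a fuller RLHF pipeline;
# paths, rewards, and names are hypothetical.
import random
from collections import defaultdict

class TriagePolicy:
    """Epsilon-greedy selection over candidate investigation paths."""

    def __init__(self, paths, epsilon=0.1):
        self.paths = paths
        self.epsilon = epsilon
        self.value = defaultdict(float)   # running value estimate per path
        self.count = defaultdict(int)     # times each path was tried

    def choose(self) -> str:
        if random.random() < self.epsilon:               # explore occasionally
            return random.choice(self.paths)
        return max(self.paths, key=lambda p: self.value[p])  # else exploit

    def feedback(self, path: str, reward: float) -> None:
        """reward: 1.0 if the analyst confirmed the lead was useful, else 0.0."""
        self.count[path] += 1
        # incremental mean update: v += (r - v) / n
        self.value[path] += (reward - self.value[path]) / self.count[path]

policy = TriagePolicy(["identity_drilldown", "host_drilldown", "network_drilldown"])
path = policy.choose()          # agent picks the most promising drilldown
policy.feedback(path, 1.0)      # analyst thumbs-up becomes a reward signal
```

In practice the state and reward design would be far richer, but the loop is the same: act, collect human feedback, update, and act better next time.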

By leveraging multiple technologies and AI agents, we will further automate and vastly accelerate threat detection, identification, and remediation. SOC personnel will still be in charge of complex investigations and the identification of new threats, helping the intelligent system improve and learn.

 

Phase 3: Scaling Intelligence 

Further into the future, agents will be integrated into highly interconnected networks that can share insights and adapt more quickly to emerging threats. Humans will be guided by actionable information and kept in the loop, and on the loop, as needed for autonomous processes, creating direct human-agent feedback loops. Additionally, we will see new security knowledge generated automatically by intelligent systems based on a multitude of feedback loops and learning models.

  • Multiple Feedback Loops: Agents will share outcomes and insights, allowing the system to adapt collectively and more efficiently to new threats and changes in the environment. The system will learn from other agents, human analysts, and ecosystem knowledge; a simplified sketch of this kind of insight sharing follows the list.
  • Dynamic, Rapid-Response Mechanisms: The system's capability to react to threats will evolve, leveraging collective intelligence and automated, real-time adjustments to strategies.
  • Optimized Responses: The system will not only detect and respond to threats more efficiently, but also prioritize actions based on the likelihood of success and the impact of the threat. Autonomous containment will identify where and how to stop an attack while minimizing disruption to business workflows and collateral damage.
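As a toy illustration of agents sharing insights, the sketch below wires agents together through an in-process publish/subscribe bus; topic names and payloads are hypothetical:

```python
# Illustrative sketch: cross-agent insight sharing via a toy in-process
# pub/sub bus. Topics, payloads, and agent roles are hypothetical.
from collections import defaultdict
from typing import Callable, Dict, List

class InsightBus:
    """Minimal pub/sub bus for sharing insights between agents."""

    def __init__(self) -> None:
        self.subscribers: Dict[str, List[Callable[[dict], None]]] = defaultdict(list)

    def subscribe(self, topic: str, handler: Callable[[dict], None]) -> None:
        self.subscribers[topic].append(handler)

    def publish(self, topic: str, insight: dict) -> None:
        for handler in self.subscribers[topic]:
            handler(insight)

bus = InsightBus()
# A triage agent adjusts its priors when a containment agent reports an outcome.
bus.subscribe("containment.outcome",
              lambda insight: print(f"triage agent updating priors: {insight}"))
bus.publish("containment.outcome",
            {"threat": "lateral_movement", "action": "isolate_host", "success": True})
```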

 

Keeping our eye on the prize: We’re here to stop cyber threats; AI is just a means to an end.

Ultimately, fighting cybercrime is a human endeavor. We know that nothing will get rid of cybercriminals and adversaries entirely; it's about changing the equilibrium and staying ahead of the curve. This will require full commitment to discovering a distinctive fusion of machine and human intelligence.

At Hunters, we are committed to remaining at the forefront of cybersecurity operations technology built to stop cybercrime. Our solutions will not just adapt to the changing landscape but also set new standards for innovation and effectiveness in protecting our clients. As AI technologies evolve, so will Hunters’ solutions, with the goal of providing the most advanced and dependable cybersecurity defenses possible.