Hunters has launched a new capability that allows security analysts to ask GPT for explanations of suspicious command lines. It can be used on the command line of any alert or signal surfaced via the Hunters SOC Platform, greatly streamlining the investigation process.
The new capability further helps analysts understand the incidents that matter and act on them with the necessary context: anyone can instantly ask GPT what a specific command line is and receive a human-readable explanation of what it does.
Command lines can be obfuscated or extremely complex, and memorizing millions of binaries with all of their possible arguments and flags is simply impossible. Even people who have been in the cybersecurity space for over a decade need to research and read documentation when they come across specific alerts. Luckily, GPT is here to help.
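To give a sense of what such a request can look like, below is a minimal, hypothetical sketch: a hard-coded, illustrative command line (a hidden PowerShell process with an encoded payload) is sent to a general-purpose GPT model via the OpenAI Python client, and a plain-language explanation is printed. The command line, prompt wording, and model name are assumptions for illustration only and are not part of the Hunters implementation.

```python
# Minimal, hypothetical sketch: asking a GPT model to explain a suspicious
# command line. The command line, prompt, and model name are illustrative
# assumptions, not the Hunters implementation.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# The kind of obfuscated command line an analyst might encounter (illustrative).
command_line = (
    "powershell.exe -NoProfile -WindowStyle Hidden "
    "-EncodedCommand <base64-encoded payload>"
)

response = client.chat.completions.create(
    model="gpt-4o",  # assumed model; any capable chat model would do
    messages=[
        {
            "role": "system",
            "content": (
                "You are a security analyst assistant. Explain in plain language "
                "what the following command line does and flag anything suspicious."
            ),
        },
        {"role": "user", "content": command_line},
    ],
)

print(response.choices[0].message.content)
```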
In the example below, we asked GPT to explain a CrowdStrike Falcon alert relating to a First Behavior, and received a summarized, easy-to-read description of its complex command line.
This feature is yet another advancement that helps Hunters customers accelerate the investigation process and empowers every member of the security team to quickly determine whether the lead they are investigating is benign or malicious.
Existing Hunters customers can learn more about how to use the ‘Ask GPT’ button as part of their investigation workflow in our documentation.
Continuing our mission to improve the effectiveness and efficiency of security teams, we see plenty more use cases where AI can and will be useful. As we develop features based on this premise, we are dedicated to doing so responsibly and in close collaboration with our customers.
Take the Hunters SOC Platform for a test drive here.