Business & Technology
How does AI improve the speed of threat hunting?
The introduction of LLM-powered AI SOC platforms is democratising threat hunting by breaking down the technical barriers that have historically limited it to senior analysts.
By allowing analysts to translate intent into platform-specific queries using natural, non-technical language, AI eliminates the need for specialised knowledge like Python scripting or proprietary query languages.
Now we know that artificial intelligence can accelerate threat hunting and open it up to a wider set of team members. But how exactly does it achieve this transformation? This article explains.
Applied to the threat hunting process, AI can:
- Automate evidence gathering
- Suggest where threats can be hunted
- Translate intent into queries
- Provide a reasoning layer that wasn’t there before
- Enable complex, always-on threat hunting
Threat hunting isn’t good enough if it is sporadic, subjective, or based on human timelines: adversaries are attacking at the speed of machines, and AI-enabled ones at that.
Weaving AI deeply into modern threat hunting practices will not only “speed things up,” but change the expectation of threat hunting from an occasional benefit to a constant, standard practice.
1. Automating Evidence Gathering (& Saving SOC Cycles)
At the start of a threat hunt, one looming barrier stands in the way: gathering evidence. For the typical SOC, this means toggling between a half dozen tools, taking screenshots, and compiling the case.
With AI, security operations automation becomes a reality. As leading AI SOC platform company Prophet Security explains, “Once a hunt starts, [an AI SOC solution] pulls logs, events, and metadata from integrated sources without requiring the analyst to query each one manually.”
Without AI, this process can take up to an hour of manual investigative querying alone, across SIEM, EDR, email, IAM, and other sources. With AI, that timeline is reduced to less than 20 minutes.
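A minimal sketch of what that fan-out might look like, with hypothetical connector stubs standing in for real SIEM, EDR, email, and IAM APIs (the function names and data here are invented for illustration):

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical connector stubs: a real AI SOC platform would wrap
# vendor APIs (SIEM, EDR, email gateway, IAM) behind one interface.
def query_siem(indicator):  return [{"source": "siem", "event": f"login from {indicator}"}]
def query_edr(indicator):   return [{"source": "edr", "event": f"process contacted {indicator}"}]
def query_email(indicator): return []
def query_iam(indicator):   return [{"source": "iam", "event": f"MFA prompt tied to {indicator}"}]

CONNECTORS = [query_siem, query_edr, query_email, query_iam]

def gather_evidence(indicator):
    """Fan one indicator out to every integrated source in parallel
    and return a single merged evidence list for the case file."""
    with ThreadPoolExecutor(max_workers=len(CONNECTORS)) as pool:
        results = pool.map(lambda fn: fn(indicator), CONNECTORS)
    return [event for batch in results for event in batch]

evidence = gather_evidence("203.0.113.7")
print(len(evidence))  # three of the four stub sources returned hits
```

The point is the shape, not the stubs: one indicator goes in, and a single collated evidence list comes back without the analyst querying each console by hand.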
2. Suggesting Threat Hunts: Getting to What Matters
However, before evidence can even be gathered, analysts need to know what they’re hunting: the hypothesis.
Not all SOCs are equipped with the same technical expertise or the same amount of time to hunt. Today, threat hunting is either a proactive measure, done as a hygienic best practice to stay ahead of threats missed by detection rules, or a strictly reactive procedure within the incident response process, typically triggered by a recent breach or an upcoming audit.
Either way, feeling ahead of the game or behind it still makes threat hunting seem “special.” The end goal is to make it seem standard.
And neither scenario leaves hunters with all that much time to carefully choose where to start, or what to pursue. With so many possible signals, any one of them could lead to a wider issue – or to a dead end. Getting hours into a hunt only to realise the road leads nowhere is a waste of time and money, and every threat hunter knows the feeling.
AI can suggest the threats worth hunting before anyone even starts looking at the signals. By ingesting telemetry from across all integrated tools (EDR, identity logs, network traffic, SIEM), it creates a baseline of normal behaviour.
When something deviates from normal behaviour, it can go one step further by mapping the anomaly to known attacker techniques (MITRE ATT&CK) and then forming a hypothesis about what could be wrong.
Most importantly, not all hypotheses are created equal, and AI knows this. It ranks hypotheses by factors such as asset criticality, privilege level, and likelihood, and presents hunters with a ranked list rather than a best-guess, intuition-inspired direction.
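One way such ranking could work, as an illustrative sketch only — the weights and scores below are invented, and real platforms tune them against their own telemetry:

```python
from dataclasses import dataclass

@dataclass
class Hypothesis:
    description: str
    asset_criticality: float  # 0-1; e.g. a domain controller scores near 1.0
    privilege_level: float    # 0-1; e.g. a domain admin account scores near 1.0
    likelihood: float         # 0-1; how far the signal deviates from baseline

    @property
    def score(self):
        # Illustrative weighting; a real platform would tune these weights.
        return 0.4 * self.asset_criticality + 0.3 * self.privilege_level + 0.3 * self.likelihood

def rank_hypotheses(hypotheses):
    """Return hunt hypotheses ordered most critical first."""
    return sorted(hypotheses, key=lambda h: h.score, reverse=True)

candidates = [
    Hypothesis("Anomalous PowerShell on a workstation", 0.3, 0.2, 0.8),
    Hypothesis("Rare admin login to a domain controller", 1.0, 1.0, 0.6),
    Hypothesis("Unusual outbound traffic from a test VM", 0.1, 0.1, 0.9),
]
for h in rank_hypotheses(candidates):
    print(f"{h.score:.2f}  {h.description}")
```

Note how the domain controller hypothesis tops the list even though its raw anomaly score is the lowest: criticality and privilege outweigh a noisy-but-low-stakes signal.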
Then, all analysts have to do is ask the right questions.
3. Translating Intent into Queries: No Coding Required
Currently, when analysts want to query systems, they have to speak the respective language. With AI, Large Language Models (LLMs) do this technical heavy lifting for threat hunters. In an AI SOC, even a junior analyst can type in a simple request:
“Where else across the environment was this (flagged) IP seen?”
And AI will use natural language processing to translate the plain-language question into platform-specific query languages (SQL, SPL, KQL): no technical interface required. No manual coding. This not only makes “every analyst a threat hunter,” thereby speeding up how many threat hunts can be performed, but it also makes each hunt faster.
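As a hedged illustration of that translation step: much of it comes down to prompt construction around the analyst's question. The function below is a hypothetical sketch, and the KQL reply shown in the comment is only an example of what a model might return:

```python
def build_translation_prompt(question, target="KQL"):
    """Compose the instruction an AI SOC's LLM might receive when
    turning a plain-language question into a platform-specific query.
    (Hypothetical sketch; real platforms also supply table schemas
    and few-shot examples to keep the output valid.)"""
    return (
        f"You are a SOC query assistant. Translate the analyst's question "
        f"into a single {target} query and return only the query.\n\n"
        f"Question: {question}"
    )

prompt = build_translation_prompt(
    "Where else across the environment was 203.0.113.7 seen?"
)
print(prompt)
# A model's reply might look like:
#   DeviceNetworkEvents
#   | where RemoteIP == "203.0.113.7"
#   | summarize count() by DeviceName
```

Swapping `target` to "SPL" or "SQL" is all it takes to aim the same question at a different platform, which is why one plain-language interface can sit in front of several query languages.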
Senior analysts can skip the long queries, the reviewing and editing, and the learning curve of each search language; instead, they can focus on the actual “thinking” part of threat hunting.
Increasingly, AI is doing even that, too.
4. Providing Additional Reasoning, At Machine Speed
Automation-only tools (SOAR, XDR) may correlate events, but the best AI SOC platforms tell analysts why they happened. Agentic AI is behind that.
By providing an additional reasoning layer, analysts can move more quickly and confidently through hunts, having a built-in backup “brain” at each step.
Agentic AI constructs dynamic attack narratives, building an attack graph across users, hosts, processes, and network connections. It processes and correlates context, tying it into the broader story.
After mapping to MITRE ATT&CK, it can show analysts:
- A timeline of the attack
- A likely attack path
- Any missing steps
These missing steps are where threat hunters step in. AI takes teams from raw logs to the structured intent of the attacker, bypassing hours of analysis, tool-toggling, and piecing together clues along the way.
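The attack-graph idea above can be sketched as a toy structure. The entity names, timestamps, and technique IDs below are invented for illustration:

```python
class AttackGraph:
    """Minimal sketch: edges are observed actions between entities
    (users, hosts, processes), each tagged with a timestamp and a
    MITRE ATT&CK technique ID."""
    def __init__(self):
        self.edges = []

    def add_event(self, ts, src, action, dst, technique):
        self.edges.append((ts, src, action, dst, technique))

    def timeline(self):
        # Chronological order, assuming sortable timestamps.
        return sorted(self.edges)

    def likely_path(self):
        """Entities in order of first involvement: a crude attack path."""
        seen, path = set(), []
        for _, src, _, dst, _ in self.timeline():
            for node in (src, dst):
                if node not in seen:
                    seen.add(node)
                    path.append(node)
        return path

g = AttackGraph()
g.add_event("03:14", "svc-backup", "anomalous login to", "DC01", "T1078")
g.add_event("03:17", "DC01", "spawned", "powershell.exe", "T1059.001")
g.add_event("03:02", "mail-gw", "delivered phish to", "svc-backup", "T1566")

print(g.likely_path())  # entities in chronological order of involvement
```

Even this crude version surfaces the story — phish, then a rare admin login, then PowerShell on a domain controller — and any gap between edges is exactly the “missing step” a hunter would chase.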
Now, instead of “Suspicious PowerShell execution” alerts, teams get something like: “Suspicious PowerShell on a domain controller by a rarely used admin account after anomalous login.”
Starting there means starting with a significant head start.
5. Enabling Complex, Always-On Threat Hunting for Max Coverage
Another reason threat hunting with AI is faster than threat hunting without it is that AI never tires. In traditional setups, humans are the head, foot, and tail of threat hunts. They might operate automated tools, but things don’t happen until they’re at the controls.
While most SOCs run 24/7, small teams and even large enterprises understand how hard (and costly) that can be. Your 3 am threat hunting team is not going to be as sharp, savvy, or awake as your 9 am team.
Or as sharp as AI.
AI-enabled threat hunting through an AI SOC means vigilance that never sleeps, tires, or makes mistakes out of exhaustion. Its attention is never taxed, and it helps surface signals that might otherwise be overlooked.
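A stripped-down sketch of what an always-on hunt loop looks like, with stub functions standing in for the platform's hypothesis feed and hunt logic (all names here are hypothetical):

```python
import time

def run_hunt_service(hypothesis_source, hunt_fn, interval_s=300, max_cycles=None):
    """Pull hypotheses and hunt them on a fixed cadence.
    max_cycles bounds the loop for demonstration; an always-on
    service would leave it as None and run indefinitely."""
    findings, cycles = [], 0
    while max_cycles is None or cycles < max_cycles:
        for hypothesis in hypothesis_source():
            result = hunt_fn(hypothesis)
            if result:
                findings.append(result)
        cycles += 1
        if max_cycles is None or cycles < max_cycles:
            time.sleep(interval_s)
    return findings

# Stubs standing in for the platform's ranked hypothesis feed and hunt logic.
def demo_source():
    return ["rare admin login", "benign vulnerability scan"]

def demo_hunt(hypothesis):
    return f"confirmed: {hypothesis}" if "admin" in hypothesis else None

print(run_hunt_service(demo_source, demo_hunt, interval_s=0, max_cycles=2))
```

The loop itself is trivial; the point is that no shift rota is needed to keep it alive, so the hunt cadence is set by the interval, not by staffing.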
Speed Becomes Consistency
AI makes threat hunting faster. And when things are done faster, they can be done more often.
This benefits large enterprises, who, even at their best, may only conduct threat hunting once a week (or once a day for elite achievers).
This benefits mid-tier organisations that hover somewhere between quarterly threat hunts and event-based threat hunts: trying to stay on top of things but having to split analysts between proactive activities and daily tasks.
And it benefits the smallest companies that struggle to even staff a SOC, much less a SOC full of experienced threat hunters.
For all these teams, AI gives them something they never had: round-the-clock threat hunting, done at machine speed, and proactive security that comes standard.
The Takeaway: At a time when AI-driven threats never sleep, AI-driven threat hunting is more than a nice recommendation. It is the new norm for organisations that understand AI attackers aren’t playing by traditional detection rules, and that they will increasingly be found only via ongoing, AI-powered threat hunts.
Business & Technology
New opening at the wellbeing community Rooted in Burford
Heart Mind Spirit, founded by yoga therapist Alison Lewis, provides personalised, one-to-one yoga therapy sessions at the Burford-based space in the Cotswolds.
The sessions, available on Monday afternoons at Rooted, focus on movement, breathwork, and mindfulness, and will be tailored to each individual.
Alison Lewis, founder of Heart Mind Spirit, said: “I’m incredibly excited to be joining the Rooted community.
“It’s such a special space that truly reflects the values of connection, wellbeing, and personal growth.
“More and more people are looking for personalised, holistic ways to support their mental and physical health, and yoga therapy is becoming an increasingly valued part of that journey.
“Rooted provides the perfect environment to offer that kind of tailored support in a welcoming community setting.”
Heart Mind Spirit joins Rooted in Burford as part of a growing trend for accessible, community-led wellbeing services in rural areas.
Lillie Ananda, founder of Rooted, said: “As more people seek meaningful, accessible wellbeing support within their local communities, spaces like Rooted have an important role to play, and welcoming Heart Mind Spirit into the space feels like a natural and valuable addition to that vision.
“For me, it’s all about slowing down and really appreciating the root of all our offerings.”
Sessions with Ms Lewis began on Monday, April 13.
Rooted in Burford offers a broad programme of classes, workshops, and therapies to support individual wellbeing and foster a sense of community.
Business & Technology
Bicester law firm receives award for client experience
Magara Law, which has offices in Bicester, Banbury, Reading and London, won the Best Guest Experience award within the Business Buzz network.
The award is based on an anonymous vote from members of the Business Buzz community.
Roy Magara, founder of Magara Law, said: “This award really stands out for us because it comes from people who have dealt with us, which makes it an honest reflection of what we’re doing in practice.”
The firm was previously named Best Place to Work at the Business Buzz Awards last year and has been shortlisted in six categories at the upcoming Modern Law Awards.
This includes recognition for Boutique Law Firm of the Year and Employment Law Team of the Year.
The win comes during a period of growth for the firm, with new team members joining and further recruitment planned.
The practice has also launched a workplace mediation service after Mr Magara became an accredited mediator.
He said: “If someone comes away from the process understanding where they stand and feeling properly supported, that’s what we’re aiming for.
“To have that recognised by peers is something we are very proud of.
“It’s still about dealing with matters properly and making sure clients are supported throughout.”
Magara Law advises both individuals and organisations on employment law and maintains a perfect 5.0 rating on ReviewSolicitors.
The firm is ranked among the UK’s top 25 employment law practices.
Business & Technology
UK launches £500m sovereign AI fund amid doubts
The UK government has launched a £500 million Sovereign AI fund, a move that industry figures say highlights tensions between national AI ambitions and reliance on overseas providers.
The fund is intended to support domestic AI infrastructure and models as part of a broader push for so-called sovereign AI across Europe and other advanced economies. Ministers have presented it as a way to strengthen national resilience in strategic technologies and reduce exposure to foreign supply chains.
The announcement has sparked debate among vendors and advisers over how far the UK should pursue AI self-sufficiency. Much of the discussion centres on the dominance of US and Chinese firms in foundation models, cloud infrastructure and specialist chips.
George Tziahanas, vice president of compliance and associate general counsel at Archive360, warned that governments risk overextending national resources if they try to replicate entire AI stacks onshore too quickly. In his view, strategies focused too narrowly on sovereignty could miss advances in commercial tools developed abroad.
“Sovereign AI investments are smart, but countries shouldn’t over index on building fully domestic AI supply chains. Not only will they be difficult to achieve at speed, but they also risk falling behind the ongoing innovations in other countries. In the UK’s case, that’s China and the US, both of which have a large head start.”
“Countries should also consider prioritising flexibility to support the use of multiple AI tools to ensure individuals and companies are not locked into any one model or one tech company. Optionality is likely a stronger long-term strategy than attempting to build a fully domestic AI model,” Tziahanas said.
Archive360 works with regulated organisations on data and AI governance and manages large volumes of cloud-based corporate information. Its clients use third-party AI models for analytics and automation, making data jurisdiction, vendor concentration and model risk central concerns for the firm.
Tziahanas’s comments reflect a broader concern that heavy investment in homegrown models could weaken incentives to adopt global tools that have already reached scale. Supporters of the government’s approach argue that long-term security and strategic control justify the initial cost and delay.
Another line of criticism focuses on how the Sovereign AI fund will benefit ordinary businesses. With many enterprises still in the early stages of deployment, advisers argue that policy must address both adoption and industrial strategy.
Tarek Nseir, co-founder and senior value partner at consultancy Valliance, drew a distinction between building national champions and driving day-to-day AI use inside existing corporations. He pointed to low adoption levels among UK firms and continued dependence on US providers.
“AI sovereignty is a positive long-term ambition and this investment is a good move to that end, but the reality is UK enterprises are still heavily reliant on US-controlled technology – which is far from a bad thing. The UK’s real challenge is working with these providers to make sure the right infrastructure is in place for enterprises, so they can get the maximum value from working with the likes of OpenAI, Google, Anthropic or Palantir,” Nseir said.
Nseir pointed to recent developments involving major AI companies working closely with UK public bodies. He argued that political debates over national control can distract from immediate opportunities to improve productivity through existing services.
“We can’t celebrate more sovereign technology funding without also acknowledging that not enough is being done to put AI into the hands of existing enterprises. OpenAI pulling Stargate UK, and the ongoing debate around Palantir’s work with the NHS, both suggest that independence is distracting from on-the-ground realities. These are the firms who can deliver returns immediately, and we can’t let the pursuit of sovereignty become a blocker,” Nseir said.
Government departments have presented the Sovereign AI fund as one part of a broader industrial and digital strategy. Policy documents refer to domestic compute infrastructure, homegrown models, and support for UK research, as well as work on skills and regulation.
Data from industry groups suggest that only about one in six UK businesses has adopted AI in its core operations. Larger companies and financial services firms report greater use of machine learning and generative tools, while many small and medium-sized enterprises remain cautious about costs, compliance, and return on investment.
Vendors warn that fragmented approaches to sovereignty across jurisdictions could add complexity to compliance and cross-border data flows. They argue that multi-model strategies and contractual controls over data location and privacy may offer a more flexible path than the strict localisation of all AI components.
The UK fund comes as scrutiny of the AI supply chain intensifies, including concentration risks around advanced chips, dependence on a small group of cloud providers and questions over long-term access to leading frontier models. Industry participants expect those structural issues to shape the extent of influence any single national programme can have on the global market.