Cybersecurity

Most Firms Unprepared for AI-Driven Cyber Threats, Study Finds

March 18, 2026 · Source: TechRadar
Photo by Adi Goldstein / Unsplash
Eda Kaplan

Senior Technology Editor

A new analysis suggests only a small fraction of global companies are ready to face AI-enhanced attacks, while many overestimate their defenses. Experts recommend continuous identity verification and other pragmatic measures to reduce risk.

A recent industry analysis highlights a troubling gap between perception and preparedness: only a tiny portion of global organizations appear able to handle AI-driven cyber threats. While many security teams report confidence in their current controls, the research suggests that overconfidence may leave companies exposed to increasingly sophisticated attack techniques powered by generative models and automation.

AI is changing the threat landscape in practical ways. Attackers can craft highly convincing phishing campaigns, automate vulnerability discovery, and coordinate multi-stage intrusions faster than human defenders can respond. The study indicates that most firms still rely on legacy detection approaches and static perimeter defenses that struggle against adaptive, AI-assisted campaigns.

One clear recommendation emerging from analysts is a shift toward continuous identity verification. Rather than assuming a device or session is trustworthy after a single login, continuous verification monitors behavior, device posture and contextual signals to detect anomalies in real time. This reduces the window attackers have to move laterally or exploit stolen credentials.
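To make the idea concrete, here is a minimal sketch of what continuous verification can look like in practice. All names, signals, and score weights below are illustrative assumptions, not a reference to any specific product: a session is periodically re-scored against a per-user baseline (location, device, behavioral cadence), and an anomalous score triggers step-up authentication rather than silently trusting the original login.

```python
from dataclasses import dataclass

@dataclass
class SessionSignal:
    """Hypothetical signals sampled during an active session."""
    ip_country: str
    device_fingerprint: str
    typing_cadence_ms: float  # mean interval between keystrokes

@dataclass
class UserBaseline:
    """Per-user profile learned from prior legitimate sessions."""
    usual_country: str
    known_fingerprints: set
    cadence_mean_ms: float
    cadence_tolerance_ms: float

def risk_score(signal: SessionSignal, baseline: UserBaseline) -> int:
    """Accumulate a simple additive risk score; higher = more anomalous."""
    score = 0
    if signal.ip_country != baseline.usual_country:
        score += 40  # illustrative weight for a location mismatch
    if signal.device_fingerprint not in baseline.known_fingerprints:
        score += 40  # unrecognized device
    if abs(signal.typing_cadence_ms - baseline.cadence_mean_ms) > baseline.cadence_tolerance_ms:
        score += 20  # behavioral drift
    return score

def verify_continuously(signal: SessionSignal, baseline: UserBaseline,
                        threshold: int = 50) -> str:
    """Re-evaluate trust mid-session instead of only once at login."""
    return "step_up_auth" if risk_score(signal, baseline) >= threshold else "allow"
```

The design point is that the decision is repeated throughout the session, so stolen credentials alone are not enough: an attacker would also need to match the victim's location, device, and behavior to stay under the threshold.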

Beyond identity controls, the report urges companies to invest in layered defenses: endpoint detection with AI-aware signatures, robust access policies, regular red-team simulations that include AI-driven adversaries, and comprehensive incident response playbooks. Training remains critical too; employees are still the most common vectors for initial compromise, and social engineering is becoming more polished thanks to AI.

For many organizations, the gap between belief and reality is the biggest danger. Confidence without evidence can delay necessary upgrades and tabletop exercises. Security leaders who treat AI threats as an evolving risk, and who prioritize continuous verification and realistic testing, are better positioned to limit damage as adversaries adopt more advanced tools.
