
The war on trust: how artificial intelligence is rewriting the rules of cyber resilience

I don’t need to remind you that AI is now everywhere and at the top of the agenda of senior leaders around the world. A less discussed effect, however, is how this has fundamentally changed trust, for both people and companies.

What was once governed by instinct and intuition is now quantifiable, verifiable and analyzable by machines. But despite the advent of advanced technology, attackers continue to target the most vulnerable link: people.

Our latest Global Incident Response Report: Social Engineering Edition shows that 36% of all cyber incidents begin with social engineering, clear evidence that people remain the most popular entry point for cybercriminals.

Artificial intelligence is rewriting the rules of this battlefield. It gives criminals unprecedented power to mimic human tone, rhythm and emotion with extraordinary precision, while equipping defenders with sophisticated tools to detect fraud and continuously verify integrity.

The result is a tug-of-war over trust: who holds it, who exploits it, and who protects it.

Resilience no longer depends solely on blind faith in technology. It depends on how effectively companies manage the interplay of trust between people, processes and intelligent systems.

Why trust is today's most important attack surface

Despite advances in automation and detection, the most serious breaches still begin with a single human decision. It could be a click, a shared credential, or a seemingly routine conversation. Social engineering thrives in these everyday moments, where familiarity trumps caution and attackers mask manipulation with confidence.

Attackers are anything but unimaginative; they study organizational dynamics and individual behavior with the zeal of a PhD student, minus the ethics. Many campaigns now combine several tactics, from malvertising and smishing to multi-factor authentication (MFA) bombing, to wear down vigilance.

Our research shows that 65% of social engineering attacks used phishing tactics, with 66% targeting privileged accounts and 45% impersonating insiders.

This shows that while phishing remains widespread, its sophistication lies in context: messages that sound like colleagues, mimic legitimate activity, or slot naturally into ongoing workflows.

What makes this wave so dangerous is its adaptability. Each failed attempt strengthens the next, teaching AI-assisted adversaries how humans react under pressure. These attacks let threat actors escalate privileges quickly, sometimes moving from initial access to domain administrator in under 40 minutes, without deploying malware.

The attacks largely exploit gaps in process and alert fatigue: 13% of social engineering cases succeeded because critical alerts were ignored or misclassified. This reality demands a greater focus on behavioral detection rather than reliance on technical controls alone.

Protecting against social engineering requires more than awareness training; it requires systems that can detect anomalies before trust is exploited.

How AI-based defenses can detect behavioral anomalies before harm occurs

We see defenders responding to AI-accelerated attacks by using AI to detect what the human eye cannot see. The next frontier in cybersecurity lies in behavioral analytics: detecting subtle anomalies that indicate fraud before harm occurs.

Fixating on AI's offensive potential misses half the picture. The real opportunity lies in building out AI monitoring capabilities: systems that can baseline behavior, detect anomalies, and continuously validate identity in real time.

This is more than a defensive improvement; it is a governance transformation. It allows enterprises to embed verification behind every trusted action and access event, so that trust is earned rather than assumed.

AI-powered defense tools now analyze everything from communication tone to login patterns, detecting inconsistencies that indicate manipulation or impersonation.

These systems continuously learn what "normal" looks like within an organization: how teams collaborate, when accounts log in and what language employees use, and they flag discrepancies in real time. To defend against AI-driven attacks effectively, we need to fight AI with AI.
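As an illustration, baselining "normal" behavior can be as simple as building a statistical profile per user and flagging outliers. The sketch below uses hypothetical data, a made-up threshold and login hour as the only signal; real systems combine many richer features, but the principle is the same:

```python
from statistics import mean, stdev

def build_baseline(login_hours):
    """Learn a per-user baseline (mean and std-dev of login hour) from history."""
    return mean(login_hours), stdev(login_hours)

def is_anomalous(hour, baseline, threshold=3.0):
    """Flag a login whose hour deviates more than `threshold` std-devs from baseline."""
    mu, sigma = baseline
    if sigma == 0:
        return hour != mu
    return abs(hour - mu) / sigma > threshold

# Hypothetical history: an employee who normally logs in around 9 a.m.
history = [8, 9, 9, 10, 9, 8, 9, 10, 9, 9]
baseline = build_baseline(history)

print(is_anomalous(9, baseline))   # in-pattern login -> False
print(is_anomalous(3, baseline))   # 3 a.m. login -> True, surfaced for review
```

The same pattern generalizes: replace login hour with message tone, access volume or collaboration graphs, and the anomaly test becomes a behavioral detector.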

This proactive approach transforms security from a reactive shield into a predictive system. Instead of waiting for alerts after a breach, detection models surface early indicators of compromise even when there is no obvious technical vulnerability.

In this way, AI acts as both a microscope and an alarm, alerting human analysts the moment trust begins to erode.

What a trusted governance mindset means for enterprise security

Businesses can no longer treat trust as an intangible; it should be managed like any other asset. A trust governance mindset redefines access, auditing and accountability as measurable parts of the security strategy. This means creating systems where trust is automatically earned, validated and, if necessary, revoked.

In practice, this means applying Zero Trust principles to people and processes, not just networks and devices. Roles, behaviors and relationships are continuously evaluated against risk signals to ensure authorization remains consistent with real-time context.
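A minimal sketch of such continuous evaluation might score each access request against contextual risk signals and map the score to an action: allow, step up authentication, or deny. The signal names, weights and thresholds below are illustrative examples, not drawn from any real framework:

```python
from dataclasses import dataclass

# Hypothetical risk signals for a single access request.
@dataclass
class AccessContext:
    known_device: bool        # device previously seen for this user
    usual_location: bool      # request from a typical location
    privileged_resource: bool # target is a high-value resource
    recent_mfa: bool          # user completed MFA recently

def risk_score(ctx: AccessContext) -> int:
    """Accumulate risk from contextual signals; weights are illustrative."""
    score = 0
    if not ctx.known_device:
        score += 40
    if not ctx.usual_location:
        score += 30
    if ctx.privileged_resource:
        score += 20
    if not ctx.recent_mfa:
        score += 25
    return score

def decide(ctx: AccessContext) -> str:
    """Map the score to an action instead of a one-time yes/no."""
    s = risk_score(ctx)
    if s < 30:
        return "allow"
    if s < 70:
        return "step_up_auth"
    return "deny"

# Routine request from a known device: allowed.
print(decide(AccessContext(True, True, False, True)))    # allow
# New device, new location, privileged target, no recent MFA: denied.
print(decide(AccessContext(False, False, True, False)))  # deny
```

The key design choice is that the decision is re-evaluated per request, so authorization tracks real-time context rather than a credential granted once.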

Clear transparency between users, providers and AI systems makes trust observable and verifiable instead of an assumption.

When organizations actively manage trust, they turn it from a vulnerability into a layer of defense, creating a dynamic security framework that adapts as attackers evolve their tactics.

Trust has become both the primary target and the most important asset. AI not only increases risk by enabling more complex attacks; it also lets defenders act faster and smarter.

By integrating AI-based behavioral analytics and trust management into security frameworks, organizations can move from breach response to breach prediction and prevention.

Ultimately, resilience depends on our ability to continuously and intelligently manage trust, turning it from a vulnerability into a dynamic defense that adapts in real time.