
New Study Finds That AI Is Fueling an “Unprecedented Increase in Cloud Security Risks”

The rise of non-human identities.

Permissions, Misconfigurations, and Non-Human Identities

Palo Alto Networks notes that organizations are deploying AI workloads faster than they can secure them, frequently without full visibility into how these tools access, process, or share sensitive data.

In fact, the report states that more than 70% of organizations are now using AI-based cloud services in production, a sharp increase year over year.

  • Palo Alto warns that the rapid adoption of AI is expanding the cloud attack surface, creating unprecedented security risks
  • Excessive permissions and misconfigurations drive incidents; 80% of cloud security incidents were identity-related, not malware
  • Non-human identities now outnumber human identities and are often poorly managed, creating exploitable entry points for adversaries

Enterprises’ rapid adoption of cloud-native artificial intelligence (AI) tools and services is dramatically expanding attack surfaces in the cloud and exposing companies to greater risks than ever before.

That is according to the ‘Cloud Security Status Report’, a new study published by cybersecurity researchers at Palo Alto Networks.

According to the report, AI adoption raises several major problems: the speed at which AI is implemented, the permissions granted to it, and misconfigurations. The speed at which these tools are deployed is now seen as a major contributor to an “unprecedented increase” in cloud security risks.

Then there is the problem of excessive permissions: AI services frequently require broad access to cloud resources, APIs, and data stores. According to the study, 80% of cloud security incidents last year were identity-related rather than caused by malware.
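To illustrate what “excessive permissions” can look like in practice, the sketch below flags policy statements that grant wildcard actions or resources. The policy shape is a simplified, AWS-style assumption for illustration only; it is not Palo Alto’s tooling or any vendor’s exact schema.

```python
# Illustrative sketch: flag overly broad "Allow" statements in an
# AWS-style JSON policy. The policy format here is a simplified
# assumption, not any vendor's actual schema.

def find_wildcard_grants(policy: dict) -> list[str]:
    """Return findings for Allow statements that use '*' grants."""
    findings = []
    for i, stmt in enumerate(policy.get("Statement", [])):
        if stmt.get("Effect") != "Allow":
            continue
        actions = stmt.get("Action", [])
        resources = stmt.get("Resource", [])
        # Policies may use a bare string instead of a list.
        if isinstance(actions, str):
            actions = [actions]
        if isinstance(resources, str):
            resources = [resources]
        if any(a == "*" or a.endswith(":*") for a in actions):
            findings.append(f"statement {i}: wildcard action {actions}")
        if "*" in resources:
            findings.append(f"statement {i}: wildcard resource")
    return findings

# Hypothetical policy attached to an AI service identity.
ai_service_policy = {
    "Statement": [
        {"Effect": "Allow", "Action": "s3:*", "Resource": "*"},
        {"Effect": "Allow", "Action": "s3:GetObject",
         "Resource": "arn:aws:s3:::training-data/*"},
    ]
}
print(find_wildcard_grants(ai_service_policy))
```

Running this flags only the first statement (wildcard action and wildcard resource); the second, narrowly scoped statement passes, which is the shape least-privilege reviews aim for.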

Palo Alto likewise highlighted that misconfigurations are a growing problem, especially in environments that support AI development.

The report reveals that many organizations are assigning overly permissive identities to AI-driven workloads. AI storage buckets, databases, and training pipelines are regularly left exposed, and adversaries are increasingly hunting for these exposed resources rather than merely deploying malware.
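A minimal sketch of the kind of exposure check described above: given an inventory of bucket configurations, flag any that permit public access. The bucket records and field names are hypothetical examples, not output from a real cloud API.

```python
# Illustrative sketch: flag storage buckets whose configuration looks
# publicly exposed. The records and field names are hypothetical,
# not from any real cloud provider's API.

def publicly_exposed(buckets: list[dict]) -> list[str]:
    """Return names of buckets whose settings permit public access."""
    exposed = []
    for b in buckets:
        # Exposed if public access is enabled, or public ACLs
        # are not explicitly blocked.
        if b.get("public_access") or not b.get("block_public_acls", False):
            exposed.append(b["name"])
    return exposed

# Hypothetical AI-environment inventory.
buckets = [
    {"name": "model-training-data",
     "public_access": False, "block_public_acls": True},
    {"name": "llm-checkpoints",
     "public_access": True, "block_public_acls": False},
]
print(publicly_exposed(buckets))  # ['llm-checkpoints']
```

Note the deliberately fail-closed default: a bucket missing the `block_public_acls` setting is treated as exposed rather than assumed safe.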

Finally, the research points to an increase in non-human identities such as service accounts, API keys, and automation tokens used by AI systems.

In many cloud environments, non-human identities now outnumber human identities, and many are poorly monitored, rarely rotated, and difficult to attribute to an owner.
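The “rarely rotated” problem above can be caught with a simple age check over a credential inventory. The sketch below flags non-human credentials older than an assumed 90-day rotation window; the inventory records and the policy threshold are hypothetical, not from any real identity system.

```python
# Illustrative sketch: flag non-human credentials (service-account
# keys, API tokens) that exceed an assumed rotation window. The
# inventory and the 90-day policy are hypothetical examples.
from datetime import datetime, timedelta, timezone

MAX_KEY_AGE = timedelta(days=90)  # assumed rotation policy

def stale_credentials(inventory: list[dict], now: datetime) -> list[str]:
    """Return IDs of credentials older than the rotation window."""
    return [
        cred["id"]
        for cred in inventory
        if now - cred["created"] > MAX_KEY_AGE
    ]

now = datetime(2025, 6, 1, tzinfo=timezone.utc)
inventory = [
    {"id": "svc-train-pipeline",
     "created": datetime(2024, 1, 15, tzinfo=timezone.utc)},
    {"id": "svc-inference-api",
     "created": datetime(2025, 5, 1, tzinfo=timezone.utc)},
]
print(stale_credentials(inventory, now))  # ['svc-train-pipeline']
```

In a real deployment this check would run against the identity provider’s key inventory on a schedule, with findings routed to the credential’s owner, which is exactly the attribution step the report says is often missing.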

“The rise of large language models (LLMs) and agentic AI is pushing the attack surface beyond traditional infrastructure,” the report concludes.

“Adversaries are targeting LLM tools and systems, the underlying infrastructure that supports model development, the actions taken by these systems and, most importantly, their memory stores. Each of these represents a potential point of compromise.”

