How Zero Trust can help businesses manage the growing security risk of agent AI

It has only been three years since ChatGPT was introduced to the public with great success. But the industry is already looking forward to the next big wave of innovation: artificial intelligence.

OpenAI returns to prominence with its new ChatGPT agent offering, which promises to “complete end-to-end tasks” on behalf of its users. Unfortunately, greater independence brings greater risks.

- Advertisement - Advertisement

The challenge for corporate IT and security teams will be to enable users to use the technology without exposing the organization to a new type of threat.

Fortunately, they already have an approach ready to help them: Zero Trust.

Great benefits, but new risks

Agent AI systems offer a significant advance over generative artificial intelligence (GenAI) chatbots.

While the latter reactively creates and summarizes content from instructions, the former is designed to independently plan, reason, and act proactively to complete complex multistep tasks. The agent’s AI can even change its plans on the fly if new information becomes available.

- Advertisement - Advertisement

It is not difficult to see the enormous potential for productivity, efficiency and financial benefits that come with this technology. Gartner predicts that by 2029, the company will “automatically resolve 80% of the most common customer service issues without human intervention, resulting in a 30% reduction in operational costs.”

But the same capabilities that make agent AI so attractive to businesses should also be cause for concern.

By requiring less control, malicious actors can attack and compromise an AI agent’s actions without worrying about users.

Because you can make decisions with irreversible consequences, such as deleting files or sending emails to the wrong recipient, the damage can be even greater if security is not built in.

- Advertisement - Advertisement

Furthermore, since agents can plan and reason in many areas, adversaries have more opportunities to manipulate them, for example through rapid indirect injection. This can be achieved by simply embedding a malicious message on the web page visited by an agent.

As agents are deeply integrated into the wider digital ecosystem, there is greater potential for access to highly sensitive accounts and information. And because they can gain deep insight into their users’ behavior, there are potentially significant privacy risks.

Why access control is important

To address these challenges, you need to start with identity and access management (IAM). If companies want to create a de facto digital workforce of agents, they need to manage the identities, credentials and permissions needed to do that work.

But today, most agents are generalists rather than specialists. The ChatGPT agent is a good example: you can schedule meetings, send emails, interact with websites, etc.

This flexibility makes it a powerful tool. But it also makes it difficult to apply traditional access control models based on human roles with clear areas of responsibility.

If a general agent is compromised by an indirect injection attack, its overly permissive access rights can present a vulnerability, potentially giving the attacker broad access to a variety of sensitive systems. That’s why we need to rethink access controls in the age of agent AI. In short, we must follow the Zero Trust mantra: “Never trust, always verify.”

Zero Trust Reinvented

What does zero trust look like in an AI environment with agents? Let us first assume that agents perform involuntary and hard-to-predict actions, which OpenAI also recognizes. And stop thinking of AI agents as an extension of existing user accounts. Instead, treat them as separate identities with their own credentials and permissions.

Access control should be applied at both the agent and tool level, which means you must control the resources that agents have access to. More granular checks like these ensure permissions are consistent for each task.

Think of it as “segmentation”, but not in the traditional sense of Zero Trust network segmentation.

Rather, the idea is to limit agents’ rights so that they only have access to the systems and data they need to do their job and nothing else. In some situations, it may also be appropriate to apply time-limited permits.

Next comes multi-factor authentication (MFA). Unfortunately, traditional MFA is not a good fit for agents. If an agent is compromised, requiring a second factor provides little security.

Instead, human oversight can serve as a second level of verification, especially for high-risk operations. This must be weighed against the risk of consensus fatigue: if agents trigger too many commitments, users may instinctively begin to approve actions.

Companies also need insight into the activities of their agents. Therefore, it is necessary to set up some kind of system to record your actions and monitor unusual behavior. This also reflects an important part of Zero Trust and will be critical for both security and accountability. Agent AI is still in its infancy.

But if companies want to take advantage of technology’s ability to operate with minimal oversight, they need to be sure that the risks are properly managed. The best way to do this is to never trust anything by default.

Tech Insider (NewForTech Editorial Team)
Tech Insider (NewForTech Editorial Team)https://newfortech.com
Tech Insider is NewForTech’s in-house editorial team focusing on tech news, security, AI, opinions and technology trends

Related Articles

Advertisement
Advertisement

Latest News

Roborock Robot Vacuum Cleaner - Smart Home Cleaning Device