- OpenAI warns that future LLMs may enable zero-day exploit development or advanced cyberespionage
- The company is investing in defenses, access controls, and a multi-layered cybersecurity program
- A new Frontier Risk Council will guide safeguards and risk management across all frontier models
Future OpenAI large language models (LLMs) may pose greater cybersecurity risks: they could theoretically develop functional zero-day remote exploits against well-secured systems or meaningfully contribute to complex, stealthy cyberespionage campaigns.
That warning comes from OpenAI itself, which said in a recent blog post that the cyber capabilities of its AI models are “advancing rapidly.”
While this may seem daunting, OpenAI takes a positive view, noting that these advances also offer “significant benefits for cyber defense.”
To prepare for future models that could be misused in this way, OpenAI is investing in hardening its models for defensive cybersecurity tasks and in tools that streamline defender workflows such as code inspection and vulnerability remediation.
According to the blog post, the best way to achieve this is a combination of access controls, infrastructure hardening, egress controls, and monitoring.
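As a rough illustration of how those layers can combine, the sketch below pairs an egress allowlist (an access control) with logging (monitoring), so that blocked traffic is surfaced to defenders. It is a minimal, hypothetical example; the hostnames, the policy, and the `egress_permitted` helper are illustrative assumptions, not OpenAI's actual controls.

```python
# Hypothetical sketch of a layered egress control: allow outbound traffic
# only to pre-approved hosts, and log every decision for monitoring.
# All hostnames and policy values are illustrative assumptions.
import logging
from urllib.parse import urlparse

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("egress")

# Assumed allowlist of approved destinations (illustrative only)
ALLOWED_HOSTS = {"api.internal.example.com", "updates.example.com"}

def egress_permitted(url: str) -> bool:
    """Permit outbound traffic only to allowlisted hosts; log everything else."""
    host = urlparse(url).hostname or ""
    if host in ALLOWED_HOSTS:
        log.info("egress allowed: %s", host)
        return True
    log.warning("egress blocked: %s", host)  # surfaced to monitoring
    return False

if __name__ == "__main__":
    egress_permitted("https://api.internal.example.com/v1/models")  # allowed
    egress_permitted("https://exfil.attacker.example/upload")       # blocked and logged
```

The point of the sketch is the layering: even if one control fails, the blocked-egress log line still gives monitoring a chance to catch the attempt.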
In addition, OpenAI announced the upcoming launch of a program that will gradually give users and customers working on cybersecurity tasks access to advanced capabilities.
Finally, the Microsoft-backed AI giant announced plans for an advisory group called the Frontier Risk Council. The group will be made up of experienced cybersecurity practitioners and, after initially focusing on cybersecurity, will expand its scope to other areas.
“Members will consider the line between useful and responsible capabilities and potential abuse, and these findings will feed directly into our assessments and safeguards. We will share more with the council soon,” the blog said.
OpenAI also said cyber abuse could come “from any cutting-edge model in the industry.” That is why it participates in the Frontier Model Forum, where it shares knowledge and best practices with industry partners.
“In this context, threat modeling helps mitigate risk by identifying how AI capabilities can be weaponized, where critical chokepoints exist for various threat actors, and how frontier models can provide significant benefits.”
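To illustrate the kind of threat modeling described here, the sketch below maps an AI capability to the actors who might weaponize it and the chokepoints where defenders can intervene. The `ThreatModelEntry` structure and every entry in it are hypothetical examples, not taken from OpenAI or the Frontier Model Forum.

```python
# Hypothetical sketch of a threat-model record: tie a capability to the
# actors who might abuse it and the chokepoints where mitigations apply.
# All entries are illustrative assumptions, not real assessments.
from dataclasses import dataclass, field

@dataclass
class ThreatModelEntry:
    capability: str           # what the model can do
    threat_actors: list[str]  # who might weaponize it
    chokepoints: list[str]    # where defenders can intervene
    mitigations: list[str] = field(default_factory=list)

entries = [
    ThreatModelEntry(
        capability="automated vulnerability discovery",
        threat_actors=["ransomware crews", "state-sponsored groups"],
        chokepoints=["model access gating", "egress monitoring"],
        mitigations=["trusted-access program", "usage policies with enforcement"],
    ),
]

for e in entries:
    print(f"{e.capability}: mitigate at {', '.join(e.chokepoints)}")
```

Structuring the analysis this way makes the chokepoints explicit, which is where the blog post argues safeguards deliver the most leverage.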
Via Reuters
