OpenAI Privacy Challenge: ChatGPT Fabricates Personal Data

OpenAI Faces Privacy Complaints in Austria

OpenAI, the artificial intelligence company, is facing a privacy challenge in Austria. The advocacy group NOYB, short for None Of Your Business, has filed a complaint alleging that OpenAI’s ChatGPT bot repeatedly disseminated false information about an unnamed individual, as per a Reuters report. This could potentially violate EU privacy regulations.

The chatbot is said to have given out incorrect birthdate details for the person in question, rather than simply stating it didn’t have the required information. AI chatbots, much like politicians, are known to fabricate information with confidence, hoping it goes unnoticed. This is referred to as a hallucination. It’s one thing for these bots to invent recipe ingredients, but it’s a whole different matter when they fabricate details about real people.

OpenAI’s Response to the Complaint

The complaint further alleges that OpenAI declined to assist in erasing the false data, stating that such a modification was technically infeasible. The company did offer to filter or block the data for certain prompts. OpenAI’s privacy policy allows users to submit a “correction request” if they notice the AI chatbot has generated “factually inaccurate information” about them. However, the company admits that it “may not be able to correct the inaccuracy in every instance,” as TechCrunch reported.

This issue extends beyond a single complaint. The chatbot’s propensity to fabricate information could potentially contravene the region’s General Data Protection Regulation (GDPR), which regulates the use and processing of personal data. EU residents have rights concerning personal information, including the right to have false data rectified. Non-compliance with these regulations can result in hefty financial penalties, up to four percent of global annual turnover in some instances. Regulators also have the power to mandate changes in information processing.

Maartje de Graaf, a data protection lawyer at NOYB, stated, “It’s clear that companies are currently unable to make chatbots like ChatGPT comply with EU law when processing data about individuals. If a system cannot produce accurate and transparent results, it cannot be used to generate data about individuals. The technology has to follow the legal requirements, not the other way around.”

The complaint also raises transparency concerns about OpenAI. It suggests that the company doesn’t provide information about the origin of the data it generates on individuals or whether this data is stored indefinitely. This is especially important when considering data related to private individuals.

This is, for now, a complaint by an advocacy group, and EU regulators have yet to comment. However, OpenAI has previously admitted that ChatGPT “sometimes writes plausible-sounding but incorrect or nonsensical answers.” NOYB has approached the Austrian Data Protection Authority and requested an investigation into the matter.

OpenAI is also dealing with a similar complaint in Poland. The local data protection authority opened an investigation into ChatGPT after a researcher was unable to get OpenAI’s help in correcting false personal information. That complaint accuses OpenAI of several breaches of the EU’s GDPR, including those concerning transparency, data access rights, and privacy.

OpenAI Under Scrutiny in Italy

Italy is another country where OpenAI is facing scrutiny. The Italian data protection authority investigated ChatGPT and OpenAI, concluding that the company has violated the GDPR in various ways. This includes ChatGPT’s tendency to fabricate false information about people. The chatbot was initially banned in Italy until OpenAI made certain modifications to the software, such as new warnings for users and the option to opt out of having chats used to train the algorithms. Despite the ban being lifted, the Italian investigation into ChatGPT continues.

OpenAI has yet to respond to this latest complaint but did respond to the regulatory action taken by Italy’s DPA. “We want our AI to learn about the world, not about private individuals,” the company stated. “We actively work to reduce personal data in training our systems like ChatGPT, which also rejects requests for private or sensitive information about people.”