This week’s Meta AI chatbot leak could have consequences for the company beyond bad PR. On Friday, Sen. Josh Hawley (R-MO) said the Senate Crime and Terrorism Subcommittee, which he chairs, would investigate the company.
“Your company acknowledged the veracity of these reports and only removed them after this disturbing content came to light,” Hawley wrote in a letter to Mark Zuckerberg. “It is unacceptable that this policy was allowed to stand at all.”
The internal Meta document contained some disturbing examples of permitted chatbot behavior. That included “sensual” conversations with children. One example deemed it acceptable for the AI to tell a shirtless eight-year-old, “Every inch of you is a masterpiece, a treasure I cherish.” The document handled race in an equally appalling way: “Black people are dumber than white people” was listed as an acceptable response, so long as the bot framed its racist answer around IQ tests.
In a statement to Engadget, Meta described the (now removed) examples as additions that ran counter to its policies. “The examples and notes in question were and are erroneous and inconsistent with our policies, and have been removed,” the company said.
Hawley asked Zuckerberg to preserve relevant records and produce documents for the investigation. Those include materials covering generative AI content safety risks and standards (and the products they apply to), risk assessments, incident reports, non-public safety communications about its chatbots, and the identities of the employees involved in the decisions.
While it’s easy to applaud anyone taking Meta to task, it’s worth noting that Sen. Hawley’s letter doesn’t mention the policy document’s racist elements. Hawley, who once raised campaign donations off a photo of himself raising his fist to the Jan. 6 rioters, was also the only senator to vote against a 2021 bill that helped law enforcement investigate racist crimes against Asian Americans during the pandemic.
