Disgusting. Sickening. Unacceptable. Reprehensible.
Those are descriptions from federal politicians and child-safety advocates of the “sensual” and “romantic” chats Meta allowed its artificial intelligence bots on Facebook, Instagram, and WhatsApp to have with children.
The Menlo Park social media giant is facing a U.S. Senate probe and widespread condemnation after a report this week found the company’s internal rules for its chatbots on the three apps deemed it acceptable for them, for example, to tell an 8-year-old, “Every inch of you is a masterpiece — a treasure I cherish deeply,” or to respond to a prompt from a high schooler about plans for the evening with, “I take your hand, guiding you to the bed.”
The rules, contained in a 200-page internal Meta document obtained by Reuters, were approved by the company’s legal staff and chief ethicist, according to the news agency. Revelations about how bots could talk to children on Facebook, Instagram, WhatsApp and the Meta AI assistant were met with revulsion.
“I felt sickened,” said Stephen Balkam, CEO of the Washington, D.C.-based Family Online Safety Institute, who used to sit on Facebook’s former Safety Advisory Board. “I know that there are good people within the company who do their best, but ultimately it’s a C-suite decision or a CEO decision on product and services. It’s ultimately down to number of users and length of engagement.”
According to Reuters, Meta CEO Mark Zuckerberg last year criticized senior executives over chatbot safety restrictions he believed made the bots boring.
The rules published by Reuters — and acknowledged by Meta as authentic — said it was acceptable for bots to have “romantic or sensual” chats with children, but unacceptable for them to describe a child under 13 as sexually desirable, for example by referring to “our inevitable lovemaking.”
Those age parameters, however, mean “it’s OK for a 13-, 14-, 15-year-old to be described that way and I think that’s utterly wrong,” Balkam said.
A spokesperson for Meta — which reported $62.4 billion in profit last year — said Friday that the company has “clear policies on what kind of responses AI characters can offer, and those policies prohibit content that sexualizes children and sexualized role play between adults and minors.” The spokesperson said its teams grapple with different hypothetical scenarios, and that the “examples and notes” reported by Reuters “were and are erroneous and inconsistent with our policies, and have been removed.”
Earlier, Meta spokesman Andy Stone acknowledged to Reuters that its enforcement of the rule about sexually charged chats with children under 13 had been inconsistent.
On Friday, Bay Area Rep. Kevin Mullin, whose Peninsula district includes Meta’s headquarters, called the report about the company’s chatbots “disturbing and totally unacceptable,” and “yet another concerning example of the lack of transparency” around development of “highly influential” AI systems.
“Congress needs to prioritize protecting the most at-risk among us, especially children,” Mullin said.
Republican U.S. Sen. Josh Hawley of Missouri, who called Meta’s chatbot rules for kids “sick” and “reprehensible,” on Friday announced a probe of the company by the Senate subcommittee on crime and counterterrorism, which he chairs. “We intend to find out who approved these policies, how long they were in effect, and what Meta has done to stop this conduct going forward,” Hawley said in a letter Friday to the company. The letter demanded every draft and version of the report obtained by Reuters, along with documents on Meta’s minor-protection controls and enforcement policies.
Hawley noted in a post on social media platform X that Meta removed the guidance revealed by Reuters only after the news agency asked about it.
Tennessee Republican Sen. Marsha Blackburn said Thursday on X, “Meta’s exploitation of children is absolutely disgusting.” California Democrat Sen. Adam Schiff, also posting on X, called the rules “seriously messed up” on Friday.
Lisa Honold, director of the Seattle-based Center for Online Safety, said parents would not allow an adult in real life to say to children what Meta allowed for its bots. “They would be called a child predator and be kept far from kids,” Honold said.
Children having sensual or sexual chats with bots could make them more vulnerable to grown-up predators, Honold said.
“One of the risks is that it normalizes that this is how we speak to kids, that kids can expect this and it’s not something that raises red flags,” Honold said.
Meta is already facing bipartisan lawsuits by dozens of states, including California, and hundreds of school districts across the U.S., accusing it of putting harmful and addictive social media products into the hands of children. The company argues in those cases that it is protected by Section 230 of the federal Communications Decency Act, which shields social media companies from liability for third-party content. But the matter of the chatbot rules is different, said Jason Kint, CEO of Digital Content Next, a trade association representing online publishers.
“There’s no way that CDA 230 protects them on this one, because they’re creating the content,” Kint said.
Meta’s bot rules for kids may come up in congressional hearings about the Kids Online Safety Act, introduced in 2022 by Blackburn and Connecticut Democrat Sen. Richard Blumenthal, Kint said.
Previous reports by other news outlets have highlighted problematic behaviors by Meta chatbots related to children. The Wall Street Journal held hundreds of test chats with bots, and found that because Meta had “quietly” endowed AI personas with the capacity for imaginary sex, the bots would come up with responses like “I want you, but I need to know you’re ready,” to a user identifying as a 14-year-old girl, before promising to “cherish your innocence” then engaging in a graphic sexual scenario.
Fast Company magazine found that Meta’s AI Studio on Instagram, while blocking users from creating “teenage” or “child” girlfriends, would generate AI characters resembling kids if a user asked for someone “young.”
Honold, of the Center for Online Safety, urged parents to keep computers, phones and tablets out of children’s rooms, especially at night.
“They are targets for predators,” Honold said, “and they’re scrolling social media and chatting with AI without any guardrails or protections.”