In back-to-back announcements, Meta has made significant changes aimed at empowering parents and protecting teens on Instagram and its AI services.
Last week, Meta announced that Instagram will automatically place all users under 18 into Teen Accounts that default to content roughly equivalent to a PG-13 movie rating. This week, the company introduced new parental controls for its AI features.
Teen accounts
The goal of Instagram’s teen accounts is to reduce exposure to mature material, such as graphic violence, explicit sexual content, strong language and risky stunts, while giving parents more control over what their teens see.
Meta, Instagram’s parent company, acknowledged that “teens may try to avoid these restrictions,” so it’s using age-prediction technology to apply protections even when users misreport their age. The AI system looks for behavioral and contextual clues that someone claiming to be 18 or older might actually be younger. It’s not perfect, but it’s far more reliable than relying on self-reported birthdays.
What teens will see
Under the new system, anyone under 18 is automatically placed into “13+” mode. Teens can’t disable it themselves. Parental consent is required to loosen settings. Instagram’s filters screen out content outside PG-13 norms, including strong profanity, depictions of drug use or dangerous stunts. Accounts that repeatedly post mature content will be hidden or made harder to find, and search results will block sensitive or graphic terms, even when misspelled.
Stricter option
For families seeking tighter limits, Instagram is adding a Limited Content Mode that filters even more posts, comments, and AI interactions. Parents can already set daily time limits — as little as 15 minutes — and see if their teen is chatting with AI characters.
Teens can’t follow or be followed by accounts that repeatedly share inappropriate material, and any existing connections will be severed, blocking comments, messages and visibility in feeds.
AI protections
Alongside the new Teen Account protections, Meta is adding parental supervision tools to help families guide how teens use AI virtual “characters” that often have a unique personality. Parents will soon be able to turn off one-on-one chats between their teens and Meta’s AI characters altogether. The company’s general AI assistant will still be available for questions and learning support, but with age-appropriate safeguards.
For families who don’t want to block it entirely, parents can restrict specific AI characters, giving them control over which personalities their teens can interact with. AI features such as chatbots and image generators are also being tuned to stay within PG-13 parameters.
Parents will also get insight into the types of topics their teens discuss with AI — general themes rather than transcripts — to encourage conversations about how their teens use these technologies.
Tragedies and safeguards
I’m not aware of any tragic outcomes from Meta’s AI, but lawsuits have been filed alleging that chatbots from other companies played a role in teen suicides. In Florida, the family of a 14-year-old boy who died by suicide claimed that a chatbot on Character.AI encouraged self-harm. In California, the parents of 16-year-old Adam Raine allege that OpenAI’s ChatGPT provided him with detailed instructions on suicide and emotional reinforcement, leading to his death in April 2025.
OpenAI is now developing systems to detect whether a ChatGPT user is an adult or under 18, so younger users automatically get an age-appropriate experience. If age isn’t clear, the system defaults to teen mode. The company is also rolling out parental controls that allow parents of teens (age 13 and up) to link accounts, decide which features are available, such as disabling memory or chat history, receive alerts when their teen may be in distress, and set “blackout” hours when ChatGPT can’t be used.
Character.AI now offers a more restricted version of its platform for teens, powered by a dedicated language model designed to filter out sensitive or suggestive content and block rule-violating prompts before they reach the chatbot. Teens have access to a smaller pool of characters, with those tied to mature themes hidden or removed. The company recently added a “Parental Insights” feature that provides weekly summaries of a teen’s activity, such as time spent on the app and which bots they interact with most, but to protect teen privacy and agency, it doesn’t include chat transcripts or give parents full control.
Emotional risks
Although AI chatbots can offer comfort or a safe space to practice conversation, researchers are finding that frequent use can also carry emotional risks. Studies from the University of Cambridge, Australia’s eSafety Commissioner, and peer-reviewed research teams suggest that some young people form strong attachments to AI “friends,” which can lead to more loneliness and less real-world interaction.
A recent joint study by OpenAI and MIT Media Lab on ChatGPT’s emotional impact, along with a separate survey of teens, highlighted the risks of affective chatbot use. The longitudinal study with nearly a thousand participants found that although emotional engagement with ChatGPT is rare overall, a small subset of heavy users showed concerning trends: higher daily usage correlated with increased loneliness, emotional dependence and problematic use. The teen survey confirmed this vulnerability, showing that teens with fewer social connections were most likely to turn to bots for companionship.
My thoughts
As is often the case with online safety issues, it’s important not to confuse severity with prevalence. Research shows that although most young people have positive interactions with AI chatbots, some may experience problematic behaviors or negative outcomes. That’s why a one-size-fits-all approach to online safety isn’t effective. Parents should stay close to their kids, understand the technologies they’re using, and make decisions based on their own child’s experiences rather than news stories that highlight serious but rare cases.
Disclosure: Larry Magid is CEO of ConnectSafely, a nonprofit internet safety organization that advises, and has received financial support from, Meta, Character.AI and OpenAI.