Instagram Knows It’s Making Your Teenager Depressed. It Makes Them More Depressed Anyway to Keep Them Online.

In 2021, a Facebook whistleblower named Frances Haugen leaked internal research documents showing that Facebook’s own researchers had concluded Instagram was harmful to the mental health of teenage girls, particularly around body image. The documents also showed that the company had chosen not to act on those findings in any meaningful way, because doing so would reduce engagement metrics.

That was four years ago. The algorithm has been refined many times since. The engagement metrics are higher than ever. The mental health data for teenagers is worse than ever. These facts are related.

What the Internal Documents Actually Said

The leaked Facebook research, which became part of the Congressional record after Haugen’s testimony, included findings that 32 percent of teenage girls said that when they felt bad about their bodies, Instagram made them feel worse. The research also found that teenagers who reported experiencing suicidal thoughts traced the impulse to Instagram at significant rates.

The company’s internal response to these findings was to explore design changes that might reduce the harm. Most of those changes were not implemented because modelling suggested they would reduce time-on-app by measurable amounts. The trade-off between teenage mental health and engagement metrics was made explicitly, documented internally, and then the engagement side of the equation won.

How the Algorithm Creates This

Instagram’s recommendation algorithm is optimised for engagement — time spent, likes, comments, shares, and above all, return visits. Content that produces strong emotional responses drives more engagement than content that produces mild emotional responses. Negative emotions, particularly anxiety, envy, and inadequacy, produce strong responses. Content that makes teenagers feel inadequate about their appearance keeps them scrolling in search of validation.

The algorithm does not have a mental health objective. It has an engagement objective. The fact that one pathway to higher engagement runs directly through teenage psychological vulnerability is not a bug in the design. It is an emergent property of the optimisation target.
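The dynamic described above can be sketched as a toy ranking function. Everything here is hypothetical and for illustration only — the field names, weights, and scoring formula are invented, not Meta’s actual system. The point it demonstrates is structural: if the objective scores content purely on predicted engagement, with no term for user wellbeing, emotionally intense content rises to the top without anyone designing it to.

```python
from dataclasses import dataclass

@dataclass
class Post:
    caption: str
    predicted_watch_time: float           # hypothetical model output, in seconds
    predicted_emotional_intensity: float  # hypothetical 0..1 score; any valence counts

def engagement_score(post: Post) -> float:
    # A pure engagement objective: nothing here penalises content that
    # harms wellbeing, so the most emotionally intense post wins by default.
    return post.predicted_watch_time * (1 + post.predicted_emotional_intensity)

feed = [
    Post("calm nature photo",
         predicted_watch_time=8.0, predicted_emotional_intensity=0.1),
    Post("idealised body transformation",
         predicted_watch_time=8.0, predicted_emotional_intensity=0.9),
]

ranked = sorted(feed, key=engagement_score, reverse=True)
print(ranked[0].caption)  # the emotionally intense post ranks first
```

With identical watch-time predictions, the only thing separating the two posts is emotional intensity — and the objective rewards it. Adding a wellbeing penalty term would change the ranking, which is exactly the trade-off the internal documents show was considered and rejected.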

This is the same principle that drives AI-generated disinformation reaching a billion people — outrage and anxiety travel faster and further than calm, accurate information. The algorithm amplifies what spreads, and what spreads is what provokes strong emotional reactions, regardless of whether those reactions are healthy for the people having them.

What Meta Has Done About It

After the Congressional hearings and significant media pressure, Meta introduced several features: a “Take a break” reminder, supervision tools for parents, and an option to see a chronological feed instead of an algorithmic one. These features exist. They are not turned on by default. They require action from users or parents to activate. They reduce engagement for users who use them, which is why they are not defaults.

In 2024, Meta launched Instagram Teen Accounts with additional restrictions. The restrictions apply to accounts where the user has identified as a teenager. There is no verification. A 15-year-old who sets their birth year to 1998 when signing up is an adult in Meta’s system.
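The age gate described above amounts to trusting a self-reported birth year. A minimal sketch (hypothetical function and field names, not Meta’s code) shows why that check is trivially bypassed at sign-up:

```python
from datetime import date

def is_teen_account(self_reported_birth_year: int,
                    today: date = date(2024, 1, 1)) -> bool:
    # No verification step: the classification rests entirely on
    # whatever birth year the user typed when creating the account.
    age = today.year - self_reported_birth_year
    return 13 <= age < 18

print(is_teen_account(2009))  # True  -- honest 15-year-old gets teen restrictions
print(is_teen_account(1998))  # False -- same 15-year-old, lying, is treated as an adult
```

The restriction logic can be perfectly correct and still protect no one, because the input it depends on is unverified.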

The Legal Exposure Is Building

Over 40 US states have filed lawsuits against Meta alleging that Instagram knowingly harmed minors. The UK’s Online Safety Act creates obligations around age verification and harm prevention that Meta is legally required to comply with. The EU’s Digital Services Act imposes similar requirements. Meta’s legal team is managing a multi-front regulatory war that it can afford to fight for years.

The teenagers affected by the algorithm cannot afford to wait years. Neither can their parents. The solution is not to ban Instagram. It is to change the legal standard so that the company’s internal documentation — which clearly shows knowledge of harm and a deliberate choice to prioritise engagement over it — carries legal consequences. That standard does not yet exist in any jurisdiction at the scale required.


Synthetic Truth

Independent coverage of AI, work, and money. No corporate sponsorship, no stock portfolio, no incentive to mislead. Just honest analysis on where technology, power, and the economy are headed.
