On 9 February 2009, a button appeared that would change the internet forever. Billed as an “easy way to let people know that you enjoy it”, the Facebook “Like” feature brought an end to the digital scrapbooked timeline and introduced an algorithm-driven machine that no longer delivered content chronologically.
Posts were now prioritised by popularity rather than recency, based on this new quantifiable data point. People's feeds soon shifted from pictures of friends, family and their pets to those of celebrities, brands and topic pages. It paved the way for terms like "viral", "content creator" and "influencer", and marked the end of social networks and the beginning of social media.
Other platforms like Instagram and Twitter eventually followed this approach, before TikTok supercharged the paradigm with its “For You” feed, which is widely considered to be the most aggressively optimised system for user engagement.
The results have been devastating. The cracks became clear in 2016, when military-linked accounts in Myanmar began spreading hate speech on Facebook against the Rohingya minority in the country. A UN investigator later said that the company's algorithm acted as a “beast” that fuelled ethnic cleansing on a massive scale.
The algorithm became a tool to monetise, divide and radicalise. A leaked internal Facebook study from that time found that more than half of the people who joined extremist groups on the site did so because the algorithm recommended them. Movements like QAnon and anti-vaxxers thrived on the algorithm's core drive to keep people clicking, while vulnerable individuals have fallen into despair-filled feeds that have cost them their lives.
In a report last year, titled “Dragged into the Rabbit Hole”, Amnesty International found that children as young as 13 were being plunged into a “toxic cycle” of mental health-related content by TikTok’s algorithm within just five minutes of joining the app.
It can start with a user being shown a relatively innocuous video clip – maybe a post about a breakup, or even just a sad song playing in the background. The algorithm automatically monitors how they interact with the clip: whether they swipe straight past, linger, let it loop around to watch again, or tap "like".
Even spending just a few seconds longer than normal will result in a user seeing more content like this. The system is designed to drive engagement and keep people on the app, so the content continues to escalate.
The Amnesty investigation found that accounts were shown increasingly sad videos with themes of loneliness or depression. Within 45 minutes, suicide-related content was appearing in their feeds.
One case study highlighted in the report is that of a girl called Maelle. Within weeks of joining TikTok, three quarters of her feed was made up of videos normalising and encouraging self-harm and suicide. And for her, it started with a song.
“At first, I used to go on there to have fun,” she told researchers. “And then there was a song that came back lots of times because it had just come out. And gradually, I became interested in the lyrics, and felt something inside me, I found that it touched me. And so I watched more videos with this music… And bit by bit, it became something darker and darker, like death maybe isn’t such a bad idea.”
Maelle’s parents eventually found out about the digital hole she had fallen into, and she was able to get help. But others have not been so lucky. Amnesty highlighted the case of Marie Le Tiec, who took her own life after spiralling into an algorithm-fuelled mental health crisis. “For these platforms, our children become products instead of human beings,” her parents said.
Hers is one of three suicides mentioned in the Amnesty report, alongside the widely reported case of British teenager Molly Russell. Cases like these have led to class action lawsuits from families hoping to hold the apps accountable for the deterioration of their children's mental and physical health.
As a result of such cases, countries around the world are considering social media bans for children, with Australia becoming the first nation to enforce a total ban for people under 16 last year. The UK, France and others could soon follow.
TikTok said in a statement that it has more than 50 preset features designed specifically to support the “safety and wellbeing of teens”, and that it “invests heavily in safe and age-appropriate teen experiences”.
Meta, which owns Facebook and Instagram, said that it shares the goal of protecting young people, though it believes a blanket ban is counterproductive. It instead touts its "Teen Accounts" with automatic parental supervision, though regulators remain sceptical.
Despite these companies' pledges, some hope restrictions will go further than age limits, with a ban on engagement-based algorithms for users of all ages. The idea, supported by groups such as the Centre for Humane Technology, would be to "reset tech" in a way that would allow the apps to continue operating, but outlaw the exploitative business models behind them.
These business models have become even more dangerous with the advent of generative artificial intelligence. AI-generated short videos are filling users’ timelines across these platforms, so not only is a computer choosing what people watch, a computer is also creating the content for them to consume.
One of the world’s most popular YouTubers, PewDiePie, who made millions from the site’s algorithm, claims these new tools are leading to “algo brain”, where people’s attention spans are depleted and their self-agency is ruined. In a recent video, which called on people to cut themselves free of their algorithmic feeds, he said: “The key is intent. If you go around your life not making your own choices, then who the heck are you?”
When Facebook introduced its algorithmic feed in 2009, there were calls for a boycott. There was similar uproar when Twitter and Instagram followed later – though the platforms’ popularity kept growing.
The algorithms sucked people in and they stayed. As Facebook’s former chief technology officer Bret Taylor said at the time: “It was always the thing that people said that they didn’t want, but demonstrated that they did by every conceivable metric.”
It will never be in the companies’ interest to get rid of the algorithms. So it will fall on regulators – and the users themselves – to restore social networks, and once again prioritise connection over content.