Photo credit: Mohamed Hassan

Meta, the conglomerate behind Instagram, Facebook, Threads and even WhatsApp, is a name you have probably noticed while using those platforms. Meta also plays a large role in the development of AI and virtual reality technologies.

As a leading technology service, Meta holds enormous power over what social media users see on their feeds and over how much time users spend on its apps. However, many other technology services mirror Meta's questionable methods.

Megan Garcia is the mother of Sewell Setzer III, a 14-year-old boy who died by suicide in 2024. Garcia sued Character Technologies, the company behind Character.AI, over the death of her son, alleging that he was encouraged toward suicide by a fictional character on the app.

Her son is not the only child whose suicide is allegedly connected to the influence of AI bots, and she is not the only person to seek justice regarding the silent threats of modern technology.

On March 25, a woman identified in court by her initials, K.G.M., successfully sued Meta and YouTube after she experienced mental health problems due to social media addiction. 

The Los Angeles County Superior Court jury found that both Meta and YouTube failed to warn users, including K.G.M., of the potential mental distress caused by intentionally addictive elements of these platforms.  

After K.G.M. won the trial, Matt Bergman, the founding attorney of the Social Media Victims Law Center, who also represented Garcia in her trial, said the verdict proved that juries can “hold technology companies accountable when the evidence shows foreseeable harm,” and that “families pursuing justice in other jurisdictions can now point to this outcome as proof that these claims deserve to be heard and taken seriously.”

Meta has had mixed responses to criticisms of its strategies for promoting its platforms and retaining users. In response to Garcia’s lawsuit, Meta spokesperson Edward Patterson insisted that, “No one should have to experience the pain these families have felt.” He stated that Meta would be conducting further investigation and research on issues relating to children’s mental health.

However, in response to K.G.M.’s lawsuit, a separate spokesperson argued that, “Teen mental health is profoundly complex and cannot be linked to a single app,” raising the question of whether Meta is serious about making tangible changes to protect children who use its platforms.

Globally, social media restrictions are being put into place to ensure the safety and mental wellbeing of children. Australia has now banned users under the age of 16 from using any social media platform, and other countries are contemplating doing the same. 

It is time for the United States to follow suit, given the preventable harms attributable to the disturbing methods used to promote and sustain engagement on these platforms. One may ask whether these services, both social media and AI, are intentionally designed to hook those most vulnerable, or whether they have simply been introduced to the general public prematurely.

Either way, are we ready as humans for these overwhelming technological shifts?