
EDITORIAL: Tech companies must do more to keep kids safe

Hayley Sorensen
The Nightly

Social media is bad for kids.

It’s not that great for adults either, but for vulnerable and impressionable children, the dangers are amplified.

There are the obvious and everyday risks. For one, cyberbullying from their peers. Once, the bell at the end of the school day signalled at least some respite for those kids who found themselves being tormented by others. Now their bullies can reach them wherever they are, at any time.


We know excessive social media use is associated with an increased risk of anxiety and depression. And it’s no wonder, when every time a child opens Facebook or Instagram they’re bombarded with messages that to be happier they must be thinner, prettier, better dressed.

It gets even more sinister.

Boys in particular are exposed to violent, misogynistic or racist content. And once kids interact with that content — possibly out of nothing more than a childlike curiosity — their algorithms push more and more upon them. In its most extreme examples, it can push young people to embrace radical ideologies.

Then there are the risks posed by sexual predators who use social media platforms as hunting grounds for young victims.

The latest and growing threat is sextortion, the potentially tragic consequences of which were brought into sharp focus earlier this year by the suicide of a NSW teenager who had been duped into sending intimate images to members of a Nigerian crime gang, who then extorted him.

Parents know social media is bad for kids.

Mental health experts have been warning of its dangers to developing young minds for years.

And now, finally, the social media companies themselves are beginning to wake up.

Appearing before a Senate inquiry on Wednesday, executives from Mark Zuckerberg’s Meta outlined a new plan to implement age verification on the company’s platforms, which include Instagram and Facebook.

Meta’s vice president and global head of safety Antigone Davis used the hearing to push for legislative change which would compel app stores to get parents’ approval whenever a child under 16 downloads an app.

It’s a marked shift from June, when Ms Davis told the same inquiry that she did not believe social media was harmful to children, instead claiming that mental ill health in teens was “complex and multifactorial”.

Conveniently for Meta, the fix it proposes would shift the responsibility for age verification onto companies such as Apple, which operate those app stores. Unsurprisingly, it’s a responsibility Apple doesn’t want. The company is currently fighting a push in the US to introduce such restrictions.

Meta says its solution isn’t a blame-shifting exercise but a way to create uniform standards across the industry to help keep children out of harm’s way.

It’s also an admission that its apps aren’t safe places for kids.

That’s something the rest of us have known for years.

Now the onus is on these companies to take some responsibility.
