The court cases that could shape how AI develops

The Economist
Several court cases involving AI are due in 2025. Credit: The Nightly

Words, pictures, music and now video: generative artificial intelligence seems like a near-magical tool for creating new and original content in unlimited quantities.

Yet to its detractors it is a scam, chewing up human-made, copyrighted work and spewing out pale, derivative imitations.

Who is right? In 2025 a combination of litigation and legislation will begin to provide some answers.


Start with the litigation.

Representatives of nearly every creative industry have filed copyright-infringement complaints against generative-AI companies for using their material, without payment or permission, to train their AI models.

Most of the legal action is in America, where OpenAI and Microsoft are being sued by The New York Times, and Anthropic is being pursued by parties including Universal Music Group.

In Britain, Stability AI is being sued by Getty Images. All deny wrongdoing.

These and other disputes may be settled out of court: some see the lawsuits as a negotiating tactic by content companies to make tech firms cough up.

OpenAI has made at least 29 licensing deals with platforms and publishers, from Reddit to the Financial Times, according to a tally by Peter Brown of the Tow Center for Digital Journalism at Columbia University. (The Economist Group, our parent company, has not taken a public position.)

The value of OpenAI’s deals alone already exceeds $350m, by Mr Brown’s reckoning.

A rocky time in court in the coming months could cause that figure to rise.

If claimants resist settling, legal precedents will be set in 2025 that could shape the tech industry for years to come.

In America the tech companies are narrow favourites to win.

Their “fair use” defence (essentially, that copyrighted material can be used without explicit permission in some cases) has got them off the hook in previous copyright cases, such as a legal complaint against Google Books nearly a decade ago.

However, “if they get to a jury, anything is possible”, cautions Matthew Sag of Emory University’s School of Law.

Stability AI faces a harder test: Britain’s copyright law is somewhat stricter than America’s, and Getty is also claiming trademark infringement, after some of Stability’s generated images reproduced its logo.

As courts deliberate over existing laws, legislatures will debate new ones, in particular on “deepfakes”, which use AI to insert a person’s likeness into an existing photo or video, often of a pornographic nature.

This is worrying parents (whose children are being harassed with “nudifying” apps), celebrities (whose likenesses are being stolen by con artists) and politicians (who have found themselves the targets of AI-powered disinformation).

In March the American state of Tennessee passed the Ensuring Likeness Voice and Image Security (ELVIS) Act, to protect performers from having their image or voice used illegally. California has passed laws to stop political deepfakes.

Copyright law may also be reformed.

The European Union, Japan, Israel and Singapore have already introduced exceptions to allow the use of copyrighted material, without permission or payment, in the training of AI models, at least under some circumstances.

Some in Silicon Valley worry that tech investment could flow away from America to more relaxed jurisdictions.

Yet, so far, no country seems willing to become a regulatory wild west.

Japan seems minded to tighten the exceptions it has set, to protect copyright interests.

Most countries are coalescing around a moderate position: a “race to the middle” is most likely, believes Mr Sag.

The emerging compromise is that tech companies will have to find ways to allow copyright-holders to opt out of having their content used for training.

Tech firms will also have to make AI tools better at handling abstract concepts without regurgitating copyrighted material (for instance, being able to draw a generic superhero without reproducing images of Superman).

That may prove easier said than done.

Do not be surprised if the year ahead is one in which AI generates more questions than regulators can answer.
