THE NEW YORK TIMES: To beat the algorithm, you need to start paying attention to how you pay attention

“Attention is not neutral,” Antón Barba-Kay, a philosopher at the University of California, San Diego, writes in “A Web of Our Own Making: The Nature of Digital Formation.”
“It is the act by which we confer meaning on things and by which we discover that they are meaningful, the act through which we bind facts into cares.”
When we cede control of our attention, we cede more than what we are looking at now. We cede, to some degree, control over what we will care about tomorrow.
The politics of attention are on my mind because a recent court case has sharpened the need to describe what, exactly, has gone wrong in our digital lives. In 2020, the Federal Trade Commission sued Meta for creating an illegal monopoly in the personal social networking market. Last month, a US District Court in Washington ruled in Meta’s favour.
The FTC argued that there was a discrete market of personal social networking in which the only competitors were Facebook, Instagram, Snapchat and an app I’ve never heard of called MeWe.
Meta’s rebuttal was simple: It also competes with TikTok and YouTube, among others. It might have begun life as a social network, but it is, today, something else entirely. Only 17 per cent of time spent on Facebook is spent viewing content posted by friends. On Instagram, it’s 7 per cent.
The ruling includes images showing how all these apps have come to resemble each other: Reels on Instagram and Facebook are virtually indistinguishable from Shorts on YouTube or videos on TikTok.
I can look at Meta’s products and see that it has responded agilely to the innovations of its competitors. It’s made me like Meta’s apps less, but I can’t deny that when I open them, I’m likelier to be drawn into a scrolling hole that I need to wrench myself back from.
Competition seems, to me, to have made these apps better at fulfilling their corporate purpose and worse for human flourishing.
“Meta’s goal is to get users to spend as much time on its apps as possible, and it tunes its algorithms to show users the content they most want to see,” the court writes.
I think that’s generous. What Meta shows me is what Meta most wants me to see, which is whatever its prediction models believe will get me to spend as much time on its apps as possible. The algorithms serve the company’s ends, not my ends.
If Meta wanted to know what I want to see, it could ask me. The technology has long existed for users to shape their own recommendations.
These companies do not offer us control over what we see because they do not want us to have it. They do not want to be bound by who we seek to be tomorrow.
Attention is sometimes an act. But it is first an instinct. This is why even the most basic attempt at mindfulness — watching 10 breaths go by, without your attention wandering — requires such concentration.

Algorithmic media companies exploit the difference between our attentional instincts and aspirations. In so doing, they make it harder for us to become who we might wish to be.
Seeing these companies as seeking a form of control over our attention reveals, I think, the inadequacy of antitrust law for this particular task.
The point of antitrust policy is typically to increase competition in a market, unlocking entrepreneurial ferocity and genius by lowering the barrier to entry. But is fiercer competition for my attention, or my children’s attention, desirable?
There are many markets, from meatpacking to hospitals, in which corporate concentration is choking off competition, raising prices and retarding innovation.
But there are many kinds of products in which more innovation can lead to more destruction. Do we need vapes that are more compulsively usable? Is it good that online gambling firms are spending so much on slick marketing to find new users?
Do we really want artificial intelligence companies competing to create the most addictive pornbot? The question, I think, is under what conditions algorithmic media becomes such a product.
Max Read, a technology critic, wrote an insightful essay in his Substack newsletter arguing that the ideas I’m circling here are best understood as a modern temperance movement, “positioning the rise of social media and the platform giants as something between a public-health scare and a spiritual threat, rather than (solely) a problem of political economy or market design.”
This approach, he goes on to say, is “distinctly not ‘populist’ ... so much as progressive in the original sense, a reform ideology rooted in middle-class concerns for general social welfare in the wake of sweeping technological change.”
I think there’s truth in all of that. TikTok’s effects on our wallets matter less, to me, than its effect on our souls. But I don’t see the division here as between populists and progressives — groups that substantially overlap anyway.

The FTC lost the Meta case because it is limited in its mission and its tools, but at least it was trying to do something about the power these platforms exert over our society. Where was everyone else?
The division I see here is between progressivism and liberalism as we now understand it. Modern liberalism is built around the idea that the government should make it possible for people to pursue their happiness as they see fit, so long as they are not harming others.
It has much to say about individual rights and little to say about the common — or even the individual — good.
Liberalism carries, at its core, a trust that social experimentation will lead to better forms of social organization. That has freed it — and freed us — from the shackles of repressive traditions.
But it can be confounded when adults are freely making decisions that don’t harm others but perhaps harm themselves. And it has created a loophole that algorithmic media companies have driven a truck through: We’re just giving people what they want, they say. Who are you to judge what they want?
It’s not an easy question to answer.
Two ideas
But it feels to me like the outlines of an agenda — or at least ideas worth debating and trying — are coming more clearly into focus. Much of it revolves around two ideas: First, children should be more insulated from the ubiquity of digital temptations.
Second, companies that want to shape so much human attention need to take on more responsibility, and liability, for what might go wrong.
States all across the country are banning cellphones in public schools. A number of states have forced age verification on porn sites, and rather than comply, Pornhub simply blocked access in those states. Sens. Brian Schatz, D-Hawaii, and Ted Cruz, R-Texas, were among the sponsors of the Kids Off Social Media Act, which would ban social media companies from offering accounts to children younger than 13, and ban them from delivering algorithmic recommendations to kids younger than 17.
Rep. Jake Auchincloss, D-Mass., recently introduced a series of bills that I think are promising: The Deepfake Liability Act would condition the immunity these platforms now have from lawsuits on their efforts to beat back deepfake porn and cyberstalking; the Education Not Endless Scrolling Act would place a 50 per cent tax on digital advertising revenue over $2.5 billion; and the Parents Over Platforms Act would strengthen age verification.
But will any of these bills pass? And even if they do, is all of this just fighting the last war? Just as social networks became algorithmic feeds, now personalized AI systems are upending our digital lives once again.
The algorithms that Meta uses to serve up online video were but a rest station on the path to the AI chatbots that are weaving their way into our lives as assistants, teachers, counsellors, lovers and friends.
None of us know how it will change adults to fall into intimate relationships with AIs, to say nothing of what it will mean for children to grow up in a world where AI companionship is omnipresent.
It could be better than today’s opaque algorithms, offering us the ability to ask for what we want and actually get it. But are we so certain that what teenagers will want from AI companions is something they should have?
And what happens when corporations find it is more profitable to have the AIs we treat as friends manipulate what we want to better serve their bottom lines?
Which is why, in the end, I don’t believe it will be possible for society to remain neutral on what it means to live our digital lives well. Absent some view of what human flourishing is, we will have no way to judge whether it is being helped or harmed.
This line from Barba-Kay might be corny, but it has the virtue of being true: “If the present technological age has a lasting gift for us, it is to urge as decisive the question of what human beings are for.”
This article originally appeared in The New York Times.
© 2025 The New York Times Company
