Sensemaking for a plural world

Perspective Map

Technology and Attention: What Both Sides Are Protecting

March 2026

Picture two parents. One whose teenager can't sit through dinner, stays up until 2 AM in a phone-induced spiral, and seems less capable of being bored — or anything else — each year. One whose kid is gay in a rural Oklahoma town and found, through Instagram and YouTube, the first people who looked like them and survived it.

These two people have opposite feelings about smartphones. Neither of them is wrong.

The argument about technology and attention is usually staged as: "screens are destroying our ability to think and feel deeply" versus "moral panic — every generation invents a new technology and then turns it into a cultural emergency." This framing is a cage. It turns a genuine values collision into a winner/loser contest, and it obscures what both sides are actually trying to protect.

What the alarm is protecting

The people sounding the alarm are protecting something real. Jonathan Haidt, in The Anxious Generation (2024), documents a global youth mental health crisis that tracks the smartphone adoption curve of the early 2010s: rates of teen depression, anxiety, and self-harm began climbing sharply around 2012 — the year smartphone ownership crossed fifty percent among American adolescents — and the trend was consistent across the United States, Canada, the UK, and Australia. Jean Twenge's 2017 Atlantic essay brought these numbers to a wide audience. The psychiatrists watching mental health referrals climb and the parents putting phones in locked boxes at dinner are responding to something they can observe.

They're protecting depth. The quality of presence that makes experience feel inhabited rather than performed. The capacity to be bored, which turns out to be the same capacity that generates creativity, introspection, and the slow knowledge that only comes from sitting with something long enough. When every quiet moment gets filled, the quiet itself starts to disappear — and with it, a particular kind of thinking that doesn't happen under stimulation.

They're protecting cognitive autonomy — the idea that your attention is, in some meaningful sense, yours to direct. Tristan Harris, a former design ethicist at Google and founder of the Center for Humane Technology, has made this argument most precisely: platform design optimized for engagement doesn't care what you want to think about. It cares what will keep you scrolling. Those are not the same thing. When you feel the pull to check your phone in a quiet moment you didn't ask for, that's not your choice expressing itself. Something else is choosing for you.

They're protecting children specifically — and this is where the argument carries the most moral weight. Adults can, at least in principle, consent to the trade-offs they're making. A thirteen-year-old navigating social comparison, cyberbullying, and identity formation in a space designed by behavioral economists to maximize time-on-app is not fully consenting to anything. The developmental concern here isn't that screens are magic poison. It's that sustained attention is how you build a self, and something keeps interrupting it.

What the skepticism is protecting

The people pushing back on the panic — the researchers pointing out that similar alarms attended television, video games, and the novel before that — are protecting something real too.

They're protecting access. For a lot of people — those who are isolated, marginalized, or geographically cut off from community — the internet is not attention-destroying. It's lifesaving. The gay kid in rural Oklahoma. The immigrant worker staying connected to family across three time zones. The person with chronic illness who can't leave the house but can still have intellectual community. Telling these people that the internet is fragmenting their attention misses the actual shape of their lives.

They're protecting the right to define flourishing for yourself. There's a class dimension to the attention conversation that often goes unnamed. The people loudest about "deep work" and "digital minimalism" tend to be professionals with cognitive labor jobs, built-in autonomy, and the luxury of deciding when to engage. For a lot of people, the phone is not an addiction — it's a survival tool, a social lifeline, a brief pleasure in a hard day. Treating all screen use as equally problematic flattens real differences in what people actually have access to.

They're protecting epistemic humility about what we actually know. Amy Orben and Andrew Przybylski, writing in Nature Human Behaviour (2019), reanalyzed large datasets and found that the association between digital technology use and adolescent well-being was real but small — comparable in effect size to the impact of wearing glasses or eating potatoes. Causation remains hard to establish. Every correlation between social media and teen depression also lives alongside a hundred other changes — economic precarity, the collapse of physical third places, sleep deprivation, a global pandemic. The single-variable story is probably wrong, even if some version of the underlying concern is right.

Where the real disagreement lives

Almost everyone in this argument would agree that some technology use causes some harm sometimes. And almost everyone would agree that technology also enables genuine goods. The disagreement isn't about the facts. It's about three harder things.

Who bears the cost of getting it wrong? If we over-regulate and technology turns out to be less harmful than feared, we've restricted access for the people who need it most. If we under-regulate and the harm is real, we've run an experiment on an entire generation of children. Those are not symmetric errors. Which risk you're more worried about depends on which group you're centered on — and both are legitimate concerns.

Whose flourishing is the template? The "attention crisis" is usually described in terms of losing a particular kind of cognition — linear, deep, focused, undistracted. That's a real and valuable kind of cognition. But it's also the kind that has historically been cultivated by and for people with leisure, stability, and resources. When we mourn the loss of deep reading and sustained thought, we should ask: who had reliable access to those capacities before, and who didn't?

Is the problem the technology or the system? A hammer can build a house or break one. But social media platforms aren't neutral hammers — they're systems optimized, at enormous expense, to keep you engaged in a specific direction. You can't fully separate "technology" from the economic model that funds it. The attention economy isn't a metaphor; it's a business model. Whether platforms are harmful is partly a question of whether we're willing to regulate what's being optimized for.

What sensemaking surfaces

Holding this map whole, a few things become visible that the debate usually obscures.

The strongest protection on the alarm side isn't about attention in the abstract — it's about children's developmental rights, which are different from adults' rights. A conversation specifically about children, consent, and platform design might get further than the broad "technology is bad" framing. The case for regulating what platforms can do to twelve-year-olds is much stronger than the case for regulating what adults choose to do with their own time.

The strongest protection on the skeptic side isn't the existence of positive use cases — it's about who gets excluded if access is restricted. Any proposal that doesn't center the kid in rural Oklahoma, or the disabled person who found community online, is solving a problem for people who already have good alternatives while creating new problems for people who don't.

And the deepest question this argument raises isn't about technology at all. It's about attention itself: what is it for? What kind of thinking and feeling do we want to preserve the capacity for — and why? Answering that requires exactly the kind of sustained, unhurried reflection that the current environment makes hardest.

Which is either ironic or instructive, depending on how much attention you can spare.

Patterns at work in this piece

Three of the four recurring patterns named in What sensemaking has taught Ripple so far are central here.

  • Whose costs are centered. The alarm side centers children's mental health and developmental harm. The skeptic side centers the isolated, marginalized, and geographically cut off who depend on the internet for survival. Centering one group makes the other's costs invisible — which is most of what makes this argument so durable.
  • Whose flourishing is the template. The "attention crisis" narrative is written from the perspective of knowledge workers with the resources and autonomy to practice "deep work" and "digital minimalism." The experience of people for whom connectivity is a lifeline — not a luxury to curtail — rarely shapes the framing of the debate.
  • Compared to what. Critics of technology tend to compare the present to an imagined alternative with better-designed platforms. Defenders tend to compare it to the isolation that existed before widespread connectivity. These aren't arguments about the same counterfactual, which is why neither side convinces the other.

Structural tensions in this debate

Three tensions that the body text names but does not fully resolve:

  • The platform incentive trap. The harms from attention capture are externalized costs from a business model that has no internal reason to change. No amount of user restraint — app timers, grayscale mode, phone-free dinner tables — addresses a system designed at enormous expense to circumvent exactly those restraints. Changing what platforms optimize for requires either regulation or a shift in the economic logic (advertising revenue) that makes engagement-maximization rational. But platforms have also shaped the epistemic and political environment in which regulation would be decided: they determine what information reaches which voters and which legislators, and they have spent heavily to influence that outcome. The tool that creates the problem is also a tool for shaping the political conditions under which the problem could be addressed. Democratic capacity to regulate attention capture is itself a product of the information environment that attention capture produces.
  • The age-verification surveillance bind. The strongest protection on the alarm side specifically concerns children — and the case for regulating what platforms can do to minors is substantially stronger than the case for regulating adult choice. But enforcing age-differentiated design at scale requires identifying who is a minor, which requires age verification infrastructure. Age verification at scale requires either biometric data, identity documents, or behavioral inference — each of which creates a database of users that is a new privacy and surveillance problem. Solving the attention-capture-of-children problem by building surveillance infrastructure creates the conditions for a different harm to the same population. Any proposal that says "protect children" must account for what it costs to know who the children are — and who gets access to that knowledge afterward.
  • The false alternative problem. The attention critics implicitly compare phone use to a richer alternative: deep reading, unhurried conversation, embodied play, solitude with genuine quiet. But the actual alternative to the phone, for most people, is not Montaigne — it is television, or boredom without the resources to fill it differently. Restricting phone access does not restore the pre-phone world; it shifts attention to whatever frictionless default existed before. And "deep work" as a template for human flourishing requires a job that rewards depth, sustained time blocks, and freedom from interruption — which describes a minority of employment. The attention economy critique is written by and for people with enough autonomy over their time to implement it. For people in hourly jobs with unpredictable schedules, multiple obligations, and little protected time, the phone is not a distraction from depth. It is the available depth.

See also

  • What is a life worth? — the framing essay for the deeper question beneath this map: what human attention is for, what kinds of life digital systems are training people to inhabit, and whether convenience, stimulation, and engagement can be treated as adequate substitutes for agency, presence, and depth.
  • Social Media and Democracy: What Each Position Is Protecting — the political counterpart to this map: the same technological architecture (algorithmic recommendation, engagement-maximization) examined for its effects on shared epistemic infrastructure rather than on individual psychology. This map asks what platforms do to minds; the democracy map asks what they do to politics. The structural critics in both maps are making the same underlying argument about externalized costs from systems designed for engagement rather than wellbeing or truth — but the remedies they propose (algorithmic transparency, regulatory oversight, democratic accountability) are distinct from the individual-level responses (deep work, digital minimalism, parental controls) that dominate this debate.
  • Digital Privacy and Surveillance: What Each Position Is Protecting — the upstream mechanism behind attention capture: the behavioral data that platforms collect to optimize engagement is the same data that constitutes the surveillance-capitalism critique. The attention argument asks what the extraction costs the individual; the privacy argument asks what the extraction produces for the extractors and who else gets access to it.
  • Free Speech on Campus: What Each Side Is Protecting — works through the competing frameworks (open inquiry vs. dignitary harm vs. institutional trust) that the social media speech debate deploys at much larger scale. The campus map is the smaller-scale, legally structured version of the platform governance question.
  • Surveillance Capitalism: What Each Position Is Protecting — the economic structure underlying attention capture: the business model that makes attention-maximization rational rather than incidental. Where this map asks what attention extraction costs the individual, the surveillance capitalism map asks what it produces for the extractor, who the buyers are, and what structural and regulatory responses address an economic logic that individual restraint cannot fix.
  • Childhood and Technology: What Each Position Is Protecting — this map's arguments sharpened for a developmental context: when the person whose attention is being captured is a child who cannot consent to the trade-offs, the alarm and the structural critique both become more acute. The children's map also surfaces a within-protection-side fracture — phone restrictions vs. platform accountability — that this map's framing does not require but anticipates.

Further reading

  • Jonathan Haidt, The Anxious Generation: How the Great Rewiring of Childhood Is Causing an Epidemic of Mental Illness (2024) — the full case that smartphone adoption around 2012 triggered a global youth mental health crisis.
  • Jean Twenge, "Have Smartphones Destroyed a Generation?" The Atlantic (September 2017) — the article that first brought the data on Gen Z mental health to a wide audience and sparked the current debate.
  • Cal Newport, Deep Work: Rules for Focused Success in a Distracted World (2016) — the argument that the capacity for sustained concentration is both increasingly rare and increasingly valuable, and that the attention economy systematically destroys it.
  • Tristan Harris and the Center for Humane Technology — former Google design ethicist turned foremost critic of how platform design exploits human psychology; his talks and interviews are the clearest explanation of how "engagement" and "what users want" diverge.
  • Amy Orben and Andrew Przybylski, "The Association Between Adolescent Well-Being and Digital Technology Use," Nature Human Behaviour (2019) — the methodological counterargument, showing small effect sizes and questioning whether the causal story is as clean as it's been told.
  • Shoshana Zuboff, The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power (PublicAffairs, 2019) — the most systematic account of why platform attention-capture is not incidental but structural: Zuboff argues that Google, Facebook, and their successors built a new economic logic — "surveillance capitalism" — that extracts behavioral data as raw material, predicts and modifies human behavior as a product, and sells that product to advertisers; the argument reframes the attention debate from a question about psychology to a question about power, and it explains why voluntary restraint by users cannot solve a problem designed into the system.
  • Jenny Odell, How to Do Nothing: Resisting the Attention Economy (Melville House, 2019) — a rare contribution that refuses to take sides between alarm and skepticism; Odell argues for reclaiming attention not through digital minimalism (which she finds too narrow) but through a richer conception of presence and place — attending to what is local, slow, and non-transactional; her argument extends the "what is attention for?" question that the technology debate rarely asks, and offers a practical aesthetic rather than a policy prescription.
  • Tim Wu, The Attention Merchants: The Epic Scramble to Get Inside Our Heads (Knopf, 2016) — a historical account of how advertising and media have competed for human attention since the penny press; Wu traces the logic that eventually produced platform attention-maximization through a century of radio, television, and early internet, arguing that the current crisis is not new but is an intensification of a commercial logic that has always found new vehicles; valuable for situating the smartphone debate in a longer history and resisting the temptation to treat it as unprecedented.
  • Matthew Crawford, The World Beyond Your Head: On Becoming an Attentive Animal in an Age of Distraction (Farrar, Straus and Giroux, 2015) — a philosopher and motorcycle mechanic argues that attention is not purely cognitive but fundamentally embodied and ecological: we are attentive through engagement with resistant, skill-demanding environments, and the designed frictionlessness of digital interfaces degrades the very conditions for genuine attention; Crawford distinguishes between being attended to (the platform's goal: filling your attention with its content) and attending (the human practice of directed, effortful engagement with what resists you); this reframes the debate from a psychological question about distraction to a philosophical question about what kind of beings we become when our environment is engineered to eliminate resistance, and what kind of agency we lose with it.
  • James Williams, Stand Out of Our Light: Freedom and Resistance in the Attention Economy (Cambridge University Press, 2018) — a former Google strategist who left to study persuasive technology at Oxford; Williams argues that the attention economy represents not merely a distraction problem but a political problem: when platforms are optimized to capture and redirect attention, they undermine the capacity for reflective self-governance that political freedom requires; his central distinction — between "spotlight" goals (immediate task), "starlight" goals (long-horizon projects), and "daylight" goals (the background sense of who one is and what one values) — clarifies how attention capture operates at multiple levels simultaneously, and why surface-level interventions (app timers, grayscale mode) leave the deeper structural problem untouched; the argument connects the attention economy debate to democratic theory in a way most of the literature does not.