Brave New Enshittification
We didn’t sleepwalk into the surveillance state. We built it eagerly, one tap at a time, because it came wrapped in convenience, validation, and a dopamine-bright ‘like’ button.
Now, as lawmakers fumble toward rules for yesterday’s social platforms, artificial intelligence is already rewriting the terrain. We are drafting treaties for the last war while the next one is being waged in real time, quietly, at scale, and without our consent.
This is Digital Soma: a condition of algorithmic sedation where engagement masquerades as agency and personalization becomes control. In this system, platforms optimize not for human flourishing but for extraction. You are no longer just the user. You are the product, the labor, the data exhaust, and the raw material used to shape and sell someone else’s reality.
It is brilliant. It is efficient. And it is hollowing out our sense of self, truth, and collective responsibility.
What We Discussed
Definition of Enshittification
Enshittification, a term coined by Cory Doctorow, is the predictable life cycle of digital platforms under extractive capitalism: they begin by serving users, then pivot to serving business customers, and ultimately collapse inward to serve only themselves. Value is steadily siphoned away from the public and concentrated upward, leaving behind degraded experiences, distorted incentives, and hollowed trust. What starts as connection curdles into capture.
Applied to today’s ethically ambiguous landscape, enshittification is not just a product problem. It is a civic condition.
In the early phase, platforms offer pleasure, utility, and belonging. Feeds feel alive. Discovery feels generous. The like button feels harmless. Users are courted, habits are formed, and dependence is normalized.
LIKE ANY DRUG.
Then the shift. Optimization turns from people to profit. Left unrestricted, algorithms stop asking what is meaningful and start asking what keeps attention longest, spreads fastest, or monetizes best. Nuance is penalized. Outrage outperforms truth. Calm loses to compulsion. The system does not care what you believe, only that you stay.
In the final phase, enshittification fuses with AI. Automation accelerates extraction. Synthetic content floods the commons. Surveillance becomes ambient, invisible, default. Users are no longer merely tracked; they are modeled, predicted, nudged, and pre-shaped. This is where Digital Soma takes hold: not force, but sedation. Not censorship, but saturation.
Here, enshittification becomes existential. You are simultaneously:
• the audience being influenced
• the data being harvested
• the signal used to manipulate others
Agency thins. Choice becomes theater. Participation feels voluntary while outcomes are quietly predetermined.
HOW HAS THE DIGITAL LANDSCAPE BECOME THE CHURCH OF HATE?
Because hateful content is structurally advantaged by the systems that decide what we see.
Not because people are uniquely cruel.
Not because outrage is new.
But because modern platforms quietly reward the emotions that travel fastest through the nervous system.
HERE’S WHY HATE GOES VIRAL SO RELIABLY
1. Platforms optimize for arousal, not truth
Algorithms are trained to maximize engagement: clicks, shares, comments, watch time. Hateful content triggers high-arousal emotions like anger, fear, and disgust, which spread faster than curiosity or nuance. Calm reflection does not spike metrics. Rage does.
Hate is efficient.
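To make that concrete, here is a minimal sketch of an engagement-ranked feed. Everything in it is hypothetical: the posts, the predicted reaction probabilities, and the scoring weights stand in for whatever a real platform’s models produce. What matters is what the objective leaves out.

```python
# A toy engagement-ranked feed. Everything here is illustrative:
# the posts, the predicted reaction probabilities, and the weights
# are hypothetical, not any real platform's model.

from dataclasses import dataclass

@dataclass
class Post:
    text: str
    p_click: float    # predicted probability of a click
    p_share: float    # predicted probability of a share
    p_comment: float  # predicted probability of a comment

def engagement_score(post: Post) -> float:
    # The objective is attention. Nothing in this score asks whether
    # the post is true, fair, or good for the reader.
    return 1.0 * post.p_click + 3.0 * post.p_share + 2.0 * post.p_comment

feed = [
    Post("Measured policy explainer", p_click=0.10, p_share=0.02, p_comment=0.01),
    Post("THEY are destroying everything you love", p_click=0.30, p_share=0.15, p_comment=0.20),
]

# High-arousal content tops the feed because it spikes every metric
# the objective rewards.
for post in sorted(feed, key=engagement_score, reverse=True):
    print(f"{engagement_score(post):.2f}  {post.text}")
```

Truth never enters the score. A calm explainer can only win by out-provoking the provocation.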
2. Outrage collapses complexity into teams
Hateful content simplifies the world into villains and victims, us and them. That clarity feels relieving in a noisy, uncertain environment. It offers moral certainty without cognitive effort. Once identity is activated, people share to signal belonging, not accuracy.
Virality becomes a loyalty test.
3. AI learns from our worst impulses
Recommendation systems are not moral agents. They learn from what performs. If divisive content keeps people glued to screens, the system amplifies it without context or consequence. Over time, algorithms become better at provoking than informing.
The machine does not hate. It optimizes.
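That indifference is easy to caricature in code. This is a sketch under stated assumptions, not any real recommender: three invented content categories, made-up engagement rates, and a naive weight update.

```python
# A toy version of the loop: the system never evaluates content,
# it only shifts probability toward whatever earned engagement.
# The categories and engagement rates are invented for illustration.

import random

categories = ["calm analysis", "outrage bait", "in-group flattery"]
weights = {c: 1.0 for c in categories}

# Hypothetical average engagement per impression. Outrage engages
# most, so the loop will learn to serve it most.
engagement_rate = {
    "calm analysis": 0.02,
    "outrage bait": 0.12,
    "in-group flattery": 0.08,
}

def serve() -> str:
    # Sample a category in proportion to its current weight.
    return random.choices(categories, weights=[weights[c] for c in categories])[0]

for _ in range(10_000):
    shown = serve()
    if random.random() < engagement_rate[shown]:
        # Reinforce whatever held attention. No step in this loop
        # checks truth, harm, or intent. Only the reaction counts.
        weights[shown] += 0.01

total = sum(weights.values())
for c in categories:
    print(f"{c}: {weights[c] / total:.0%} of the feed")
```

Run it and the feed drifts toward outrage, not because anyone chose hate, but because the loop only ever sees the reaction.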
4. Hate hijacks the attention economy
In a crowded feed, subtlety starves. Hateful content uses shock, dehumanization, and absolutism to break through the noise. Even condemnation can boost reach, because the system registers attention, not intent.
Every reaction feeds the fire.
5. Social media removes friction that once slowed harm
Historically, spreading hate required effort: organizing, printing, persuading. Now it requires a post. Anonymity lowers the social cost. Speed outruns reflection. By the time harm is recognized, the content has already replicated.
Virality is faster than accountability.
6. Enshittification completes the loop
As platforms decay, safeguards weaken. Moderation is underfunded or automated. Trust erodes. The system becomes optimized for extraction, not care. Hateful content thrives because it is cheap, effective, and profitable.
This is not a bug. It is a business outcome.
THE DEEPER TRUTH
Hateful content goes viral because our digital systems reward emotional combustion over human flourishing. They turn the most reactive parts of us into fuel, then sell the resulting chaos back to us as “engagement.”
The tragedy is not that people are easily manipulated.
It’s that we built a world where manipulation scales better than empathy.
Digital Soma is the condition where platforms do not coerce behavior. They sedate it. Users feel awake, expressive, and autonomous while their attention, emotions, and beliefs are being subtly shaped by systems optimized for profit and scale.
Hateful virality is one of its clearest symptoms.
1. Stimulation replaces awareness
Digital Soma begins with constant sensory and emotional stimulation. Feeds are infinite. Notifications are intermittent. Content is personalized in real time.
Hateful content thrives here because it produces high emotional contrast. Anger, fear, and moral outrage cut through the fog faster than complexity or care. The platform learns that these emotions wake the user briefly from sedation, so it supplies more.
The user feels alert.
The system deepens the trance.
2. Sedation through repetition, not force
With Digital Soma, control does not look like censorship. It looks like overexposure.
Hateful narratives repeat with slight variations: new villains, familiar frames, recycled talking points. Over time, repetition normalizes extremity. What once shocked becomes background noise.
Nothing is imposed.
Everything is suggested.
3. Identity capture replaces persuasion
Digital Soma doesn’t argue with you. It mirrors you.
Hateful content succeeds because it flatters identity: you are right, you are threatened, you are part of the group that sees clearly. Sharing becomes an act of self-definition, not communication.
The user believes they are choosing.
The system is reinforcing.
4. AI closes the loop
Once AI enters the system, Digital Soma accelerates.
Algorithms model not just what you like, but:
• what upsets you
• what confirms your worldview
• what keeps you engaged longest when you feel wronged
Hateful content becomes data-rich training material. The system learns how to provoke more efficiently, with less effort, and at greater scale.
The platform does not radicalize.
It refines.
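A minimal sketch of that refinement, assuming hypothetical telemetry: keep a per-user record of which emotional trigger held attention longest, then serve more of it. The user ID, trigger labels, and session times below are all invented.

```python
# A toy per-user model: track which emotional trigger held each
# user's attention longest, then serve more of it. The user ID,
# trigger labels, and session times are all hypothetical.

from collections import defaultdict

# user -> trigger -> list of observed session lengths (minutes)
observed = defaultdict(lambda: defaultdict(list))

def record_session(user: str, trigger: str, minutes: float) -> None:
    observed[user][trigger].append(minutes)

def best_trigger(user: str) -> str:
    # Pick whichever trigger has historically kept this user in-session
    # longest. Note what is absent: any model of wellbeing or consent.
    history = observed[user]
    return max(history, key=lambda t: sum(history[t]) / len(history[t]))

# Hypothetical telemetry for one user.
record_session("user_42", "what upsets you", 34.0)
record_session("user_42", "what confirms your worldview", 21.0)
record_session("user_42", "neutral news", 6.0)

print(best_trigger("user_42"))  # -> what upsets you
```

The model never asks why anger works. It only learns that it does.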
5. Agency dissolves into participation
With Digital Soma, participation feels like freedom. Silence feels like absence. Logging off feels like disappearance.
Hateful content spreads because:
• reacting feels necessary
• disengaging feels irresponsible
• opting out feels like surrender
The user remains active.
Their agency thins.
6. Enshittification seals the condition
As platforms decay, safeguards weaken. Moderation becomes automated or symbolic. Trust erodes. The system optimizes only for survival and revenue.
Hateful content is cheap, sticky, and profitable.
So it remains.
Digital Soma does not collapse the platform.
It keeps it functional at the cost of human coherence.
Final Thoughts:
In this brave new world, the danger is not that machines will overpower us, but that platforms will perfect a system where resistance feels unnecessary, critique feels exhausting, and opting out feels impossible. Enshittification succeeds not by breaking technology, but by breaking our relationship to meaning, truth, and one another.