Truth has always been one of philosophy’s favorite subjects. Every era returns to it, trying to define what it means to “know”.
But Ludwig Wittgenstein, in his first book, the Tractatus Logico-Philosophicus, posed a quieter and more radical question: can we even reach truth through language?
Language, for him, is not a transparent window onto reality. It's a structure, a game with its own grammar and limits. We live inside it. He wasn't saying language can't map reality at all; he was saying it can only map a certain kind of reality, and that the most important things often slip outside what can be stated.
So maybe the problem of truth is not only what is true, but how truth can be expressed, believed, and shared.
My claim is that algorithms and GenAI shift the bottleneck from producing reasons to producing belonging, and that makes truth harder to coordinate.
From reasoning to believing
In my previous post, Machine Sophists and the Dialectic of Reason, I described GenAI as a kind of high-throughput reasoning tool, a machine that can generate and test arguments at a scale humans never could.
But even if such reasoning were perfect, would people believe it?
Cognitive scientists like Hugo Mercier and Dan Sperber have argued that belief formation is not primarily a search for truth. It’s a negotiation between evidence and identity.1
We don't evaluate ideas in a vacuum; we evaluate them as members of groups. Belief signals belonging. It says: these are my people; this is how we see the world.
The paradox of belonging
Belonging isn't just social glue. It also draws a boundary: an “us” only makes sense if there is a “them.” That means belonging can protect people, but it can also exclude and punish difference.
This is why “belonging” doesn’t automatically expand our social circles. It can even do the opposite: shrink life into tighter identity bubbles, making people atomized, segmented, and emotionally attached to micro-groups while becoming more detached from society as a whole.
Farhad Dalal calls this the “paradox of belonging”: naming a group can create an illusion of cohesion, but the cohesion depends on exclusion, and often collapses when you look closely at who gets left out, silenced, or treated as a threat.2
The algorithmic amplification of identity
Social media platforms, especially the infinite feeds of TikTok, YouTube, and Instagram, and the recommendation algorithms behind them, magnify this dynamic.
They don’t just show us more content; they show us more of what keeps us watching. The result:
- We get pulled into specific narratives of the world.
- We form micro-cultures that share emotional tones and preferred evidence.
- Trust and identity merge: we feel safe inside the same stories.
These groups don’t need to share geography. They can be scattered around the world, connected by belief rather than proximity.
That’s new. And powerful. Because once trust becomes digital, detached from embodied experience, it can also be manipulated, segmented, and optimized.
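To make that dynamic concrete, here is a toy sketch in Python of an engagement-optimizing feed loop. Everything in it is invented for illustration, it is not any real platform's system: score items by how long they are predicted to keep you watching, show the top ones, then nudge the scores toward whatever you actually lingered on.

```python
# A minimal sketch of an engagement-driven feed loop (all names and numbers invented).
from dataclasses import dataclass

@dataclass
class Item:
    topic: str         # e.g. an identity-flavored micro-narrative
    engagement: float  # current estimate of how long this keeps *you* watching

def rank(feed: list[Item], k: int = 3) -> list[Item]:
    # The objective is predicted watch time, not accuracy or shared reality.
    return sorted(feed, key=lambda it: it.engagement, reverse=True)[:k]

def update(shown: list[Item], watched: list[Item], lr: float = 0.3) -> None:
    # Whatever you lingered on gets boosted; the rest of what was shown decays.
    for it in shown:
        target = 1.0 if it in watched else 0.0
        it.engagement += lr * (target - it.engagement)

feed = [Item("ingroup", 0.52), Item("outgroup", 0.50),
        Item("neutral", 0.48), Item("news", 0.55)]

for _ in range(10):
    shown = rank(feed)
    # The user lingers only on in-group content; the loop learns that quickly.
    watched = [it for it in shown if it.topic == "ingroup"]
    update(shown, watched)

print({it.topic: round(it.engagement, 2) for it in feed})
# After a few cycles "ingroup" dominates the ranking and everything else fades.
```

Even in this cartoon version, a slight initial pull toward in-group content compounds into a feed that shows little else, which is the segmentation the next section builds on.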
The paradox of synthetic belonging
Algorithms don’t invent this paradox; they industrialize it. GenAI doesn’t destroy truth directly. But it destabilizes the grounds of truth, the social processes through which we verify and agree.
By “synthetic belonging,” I mean belonging produced by engineered feedback loops (feeds and AI interactions) that mimic recognition without requiring mutual obligation. In a world of infinite reasoning and infinite feeds:
- Truth becomes harder to anchor.
- Belief becomes easier to simulate.
- Belonging becomes the new commodity.
And perhaps the deepest question isn’t “what’s true?” but “who do I trust enough to believe with?”
Belonging helps us feel stable and safe. It calms us, but it can also make us less willing to revise beliefs, even when evidence changes. In an algorithmic world, that same need for stability can pull us away from shared reality.
Three mechanisms make this worse:
- Trust used to be grounded in real-world reputation; online it can float on vibes, repetition, and algorithmic reinforcement.
- Personalized feeds help you find a tribe, but they shrink the shared public world we need for coordination.
- Platform-mediated belonging is steerable: groups can be segmented, nudged, and manipulated (even by bots).
Then AI scales the whole dynamic. Belonging used to be a scarce, embodied thing, slow to build and hard to fake. That scarcity made it a partial anchor for truth. Now belonging can be algorithmically manufactured and AI-maintained at scale, with care and recognition simulated on demand. The same mechanism that protects us from uncertainty also makes us easier to steer away from reality. The social internet turned this ancient function into a planetary feedback system. What we trust may now depend less on reality, and more on which identity feels safest to inhabit.
That’s the paradox: belonging saves us from isolation, while truth becomes secondary to identity. If language already constrains what truths can be shared, algorithms now constrain who we share them with, and identity determines whether we’ll listen at all.
A harder limit under the abundance
One might hope abundance could solve this: if algorithms fragment us, maybe more options mean more chances to find genuine connection. But there’s a harder limit underneath.
Even if platforms and AI scale the sources of belonging, humans don't scale the same way. There's still a hard cap on how many relationships and group memberships we can actively maintain. So when the feed expands, we don't simply “add more people”; we're forced to choose, drop, and filter.
In that sense, abundance pushes us back onto an underlying biological scarcity: attention, time, and the cognitive bandwidth required for stable social bonds. The result can look paradoxical: the world offers more communities than ever, yet many people end up with narrower, more intensive identity bubbles, more segmented, more replaceable, and often more detached from shared society.3
Practicing truth-seeking
Truth-seeking in an attention + identity economy looks less like “finding the best argument” and more like designing habits and institutions that make it safe to change your mind.
The hard task now isn’t producing arguments. It’s building trust systems, personal and collective, that reward revisability, not tribal certainty.
What might that look like? Spaces where changing your mind signals intellectual honesty rather than betrayal. Relationships where disagreement doesn’t threaten belonging. Institutions that make updating beliefs feel safer than defending them. None of this is easy when the feed rewards certainty and punishes nuance.
I don’t have a blueprint. But I suspect the answer involves recovering something algorithms can’t easily manufacture: the slow, mutual vulnerability that makes trust real.
Footnotes
1. I first ran into a version of this idea while reading this Psychology Today article.
2. F. Dalal, “The Paradox of Belonging,” Psychoanalysis, Culture & Society, vol. 14, no. 1, pp. 74–81, 2009, doi: 10.1057/pcs.2008.47. [Online]. Available: https://www.dalal.org.uk/the%20paradox%20of%20belonging.pdf
3. R. I. M. Dunbar, “Coevolution of neocortical size, group size and language in humans,” Behavioral and Brain Sciences, vol. 16, no. 4, pp. 681–694, 1993, doi: 10.1017/S0140525X00032325. [Online]. Available: https://www.psy.ox.ac.uk/publications/945583 and https://www.researchgate.net/publication/235357146_Co-Evolution_of_Neocortex_Size_Group_Size_and_Language_in_Humans