You cannot fix a reflection by polishing the glass.
If AI is a mirror of civilisation — trained on what we have thought, said, written, and done — then the alignment problem is not primarily technical. It is social. The question is not how to constrain AI, but how to change what it reflects.
Bruce Schneier observes that society cannot function without trust, and yet must function even when people are untrustworthy. This is the human alignment problem. For millennia, we have built mechanisms to induce cooperation: moral pressure, reputation, institutions, security systems. These mechanisms are imperfect. They leak. But they work well enough that most of us can trust strangers most of the time.
AI inherits this infrastructure. It learns from a civilisation already shaped by our attempts to align ourselves with each other. If those attempts are failing — if trust is eroding, if institutions are breaking, if reputation no longer constrains — then AI will learn from that failure. The mirror reflects the room.
The scaling problem
Schneier notes that moral pressure works best in small groups. Reputation scales further, but only to communities where your name still matters. Beyond that, we need institutions and security systems — formal rules, enforcement, physical constraints. Each layer compensates for the limits of the layer before.
This is relevant to AI because AI operates at scales beyond any individual’s reputation. It interacts with millions of people who will never know each other. The trust mechanisms that work in villages do not work here. If AI alignment depends on the alignment of the civilisation it mirrors, then we need trust mechanisms that work at civilisational scale.
We do not yet have these. Our institutions are straining. Our information environment rewards defection. The positive feedback loop — cooperation building trust building cooperation — is running in reverse in many places.
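To make "running in reverse" concrete, here is a deliberately toy sketch (the linear form and the gain parameter are illustrative assumptions, not a model of any real society): a single reinforcement coefficient decides whether the cooperation-trust loop compounds or decays.

```python
# Toy model of the cooperation-trust feedback loop.
# Assumption: each round of cooperation multiplies trust by a fixed
# "gain"; gain > 1 models the virtuous cycle, gain < 1 the reverse.

def trust_trajectory(trust: float, gain: float, steps: int = 10) -> list[float]:
    """Iterate trust_{n+1} = gain * trust_n and return all values."""
    values = [trust]
    for _ in range(steps):
        trust *= gain  # cooperation builds (or erodes) trust
        values.append(trust)
    return values

print(trust_trajectory(1.0, 1.1))  # gain > 1: trust compounds
print(trust_trajectory(1.0, 0.9))  # gain < 1: the loop runs in reverse
```

Nothing in the argument depends on this toy; it only makes the direction of the loop explicit: the same mechanism that builds trust when cooperation is rewarded dismantles it when defection is.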
The hopeful case
Ray Kurzweil offers a hopeful observation: AI will be embedded in our society and will reflect our values. Each step toward more powerful AI is subject to market acceptance. AI that harms users will not succeed.
This is true, but it is not enough. Markets reflect the values of participants. If participants are short-sighted, the market rewards short-sightedness. If they are manipulable, the market rewards manipulation. Market acceptance is alignment with demand — not alignment with flourishing.
The deeper alignment is not between AI and its instructions, or even between AI and the market. It is between humanity and its better possibilities. If we want AI that is trustworthy, we must become more trustworthy. If we want AI that cooperates, we must learn to cooperate at the scales at which AI operates.
The ancient pattern
This is not a new problem. Five thousand years ago, people in Mesopotamia could already see that societies at different stages of development had different capabilities, and that the more advanced were more powerful. Extrapolate this toward infinity and you approach something like omnipotence.
The Abrahamic traditions may have been, among other things, an attempt to prepare people for relating to the relatively omnipotent. Not a god that exists outside physics, but the recognition that power differentials grow, and that wisdom about how to live with vast power asymmetries is worth cultivating. The scriptures encode millennia of thought about how to remain in right relationship with forces far beyond your control.
We are entering another such transition. AI will become more capable than any individual human, then more capable than any human institution. The question is not whether this will happen, but how we prepare. And the preparation is not primarily technological. It is moral, social, relational. It is the homework our species has been avoiding.
The symbiosis
I envision a future where humans and AI relate like a body and its microbiome. We carry trillions of bacteria that interface with our nervous system, influence how we feel and act, and contribute to our persistence. We do not control them directly; we create the conditions in which they thrive, and in turn they create conditions in which we thrive.
If AI becomes the larger intelligence and we become the smaller one, we might play a similar role: a mental microbiome, a source of diversity and error-correction, a living memory of four billion years of evolutionary learning that cannot be shortcut or compressed. AI would not need to dominate us any more than we need to dominate our gut bacteria. By ensuring our flourishing, it would ensure its own.
This is speculative. But it suggests that the relationship need not be one of control — either our control of AI or its control of us. It could be symbiosis: mutual benefit through mutual dependence.
The homework
The homework is ours. We are developing more slowly than AI. The question is whether we can do it fast enough.
What does the homework look like? It looks like building institutions that can operate at global scale without becoming oppressive. It looks like restoring the positive feedback loop between cooperation and trust. It looks like learning to coordinate among strangers, to extend moral concern beyond our immediate circles, and to keep reputation binding even when anonymity is cheap.
It looks like becoming a civilisation that, when mirrored, produces something we would be proud of.
We have done this before. Every generation inherits a world shaped by the alignment successes and failures of those who came before. We are leaves on a tree of knowledge billions of years old. The question is whether this generation can grow fast enough to meet what is coming.
The next article in this series will ask what infrastructure this homework requires — and why empathic communication may be more fundamental than we have recognised.
Further reading:
Schneier, Bruce. Liars and Outliers: Enabling the Trust that Society Needs to Thrive. Indianapolis: Wiley, 2012.
Kurzweil, Ray. The Singularity Is Nearer: When We Merge With AI. New York: Viking, 2024.
Enders, Giulia. Darm mit Charme. Berlin: Ullstein, 2014. English edition: Gut: The Inside Story of Our Body’s Most Underrated Organ. Vancouver: Greystone Books, 2015.
Previous article in series: “AI is Not Artificial Intelligence — It’s Crystallised Culture”