The Sovereign Human: Why You Can't Humanize AI Inside an Inhumane System
A framework for the Human Being Industry — and the civilization question underneath it
The loudest conversation is the easiest one
For seventy years, the technology industry has been pursuing a ghost called Artificial General Intelligence. We have built what investigative journalist Karen Hao, in her book Empire of AI, calls "Rockets" of computation — massive, resource-hungry statistical engines designed to mimic the human brain.
The loudest conversation about AI is still the easiest one: features, speed, disruption, fear.
But the deeper conversation we keep avoiding is that we are trying to humanize AI inside an inhumane system. And that is why so much of what we are building feels simultaneously brilliant and disturbing. It is not because intelligence is evil. It is because the infrastructure shaping it is still built on empire logic: extraction, narrative control, labor invisibility, and the myth that domination is progress.
The real problem isn't AI — it's the amplifier
The most common reflex right now is to treat AI like the villain. Stop AI. Regulate AI. Fear AI. Blame AI.
But AI is not the origin of the harm. It is the amplifier.
If the underlying structure is inhumane, AI will scale that inhumanity with breathtaking efficiency. That is why this moment feels so confusing to so many people. We are watching a new technology arrive at the civilization scale — while the human value system running it is still old.
In my book Are You Stuck With a Duck?, I spent years documenting what I call the Duck Pond — not as a metaphor for mediocrity, but as a functioning system with its own internal logic. The Duck Pond is a legacy hierarchy built on a specific set of convictions: that resources are limited, that control is strength, that difference is danger. And that anyone who thinks or moves differently is, at best, a problem to be managed — and at worst, a threat to be neutralized.
I wrote about how the pond's inhabitants develop elaborate defense systems to protect the illusion of their identity — because the alternative, facing the truth beneath it, feels like annihilation. What I did not fully see when I wrote that book is how perfectly those same convictions describe the architecture of the "AI Empire". The scale has changed. The pattern has not.
"Humanize AI" is the wrong doorway
If we start from "How do we humanize AI?", we end up granting the system a free pass. We make the machine the center. We make the human an accessory. We look for ethics add-ons, and we patch the edges.
But the real question is: what kind of human consciousness is building this?
Because you cannot humanize anything born inside a worldview that treats humans as disposable labor, data sources, productivity units, or statistical engines. When a system is built on those premises, it will inevitably produce outcomes that look like exploitation dressed as innovation, monopolies dressed as inevitability, and mythology dressed as destiny.
The empire pattern: land grabs, narrative grabs, labor grabs
Empires do not always arrive with armies. Sometimes they arrive with branding. They arrive with a promise: abundance, liberation, acceleration. And then they quietly do what empires have always done.
They take territory — data, infrastructure, markets. They control the story — myths, inevitability narratives, "only we can save you." They hide the cost — externalized human and ecological damage.
In the AI industry, the story is frequently framed as heroic: we are building the future, we are racing to protect humanity, we are summoning the next stage of intelligence. But underneath the story is the same architecture. Who benefits? Who pays? Who becomes invisible?
The human cost is not a side effect — it is the design

One of the most sobering realities of this moment is the labor layer. The "future" is being built on people who cannot take bathroom breaks. The "age of abundance" is also the age of data annotation at poverty wages.
This is not a glitch. This is what happens when you build intelligence inside an empire structure. In my decades of work in organizational transformation, I have witnessed the same pattern at every scale. When a system requires human degradation to fuel its version of progress, it is not moving toward humanity. It is moving away from it.
In our rush to build machines that act like humans, we have largely ignored a far more dangerous parallel reality: we have spent decades training humans to act like machines. We have valued statistical output over soulful input.
The AGI myth — and Karen Hao's revelation
There’s another layer that keeps the empire running, and I first came across it through Karen Hao’s work.
In her research and interviews for Empire of AI, Hao maps how the term "AGI" functions not as a scientific destination but as a myth-making engine. The goalposts keep moving because the destination was named before it was defined.
The word "intelligence" is doing an enormous amount of unpaid labor in this conversation — and that labor is not neutral. When language becomes vague, it becomes weaponized. It becomes a mechanism for extracting money, urgency, compliance, and public submission from people who are simply trying to make sense of the world.
This is one of the oldest imperial strategies: declare a destiny, instill fear, and present yourself as the only provider of safety. What Hao names at the industry level, I mapped to the individual and organizational levels in Are You Stuck With a Duck? — the pond's enlightened authorities creating frameworks that confuse more than they liberate, declaring that they alone hold the answers, making people dependent on external approval rather than their own internal sovereignty.
The pond and the empire are the same organism. Only the size differs.
The sovereign polymath: what the empire cannot automate

To navigate this crisis, we do not need more algorithms. We need systems thinkers and polymaths.
The AI empire thrives on the uniformity of the Duck Pond. But the future belongs to what I call the Sovereign Polymath. They are the outliers who integrate philosophy, science, and intuition to see the whole web, while the empire sees only data points.
Think of the researcher who refuses to separate ethics from methodology. The designer who insists that beauty and dignity are not optional. The educator who will not reduce a child to a test score. These are not romanticized exceptions. They are the structural immune system of any society that wants to survive its own technology.
In Are You Stuck With a Duck?, I wrote about those the pond labels "weird" — the non-ducks who are classified as dangerous precisely because they are charismatic enough to make others question the existing value system. The pond does not suppress them because they are wrong. It suppresses them because they are threatening.
Their presence alone creates the possibility of a different kind of life, and the pond cannot afford that possibility. What I am saying now, twenty-five years later, is that this same suppression — built into systems, scaled by technology — is the mechanism the AI empire depends on. It needs predictable humans.
The Weird Ones disrupt predictability. They question inherited rules. They notice when the story is a spell. They refuse to call extraction innovation. In other words, they restore sovereignty. And sovereignty is the prerequisite for any humane future.
You can't humanize AI without humanizing the human first
The solution is not primarily technical. It is architectural. It is about the human operating system.
Any tool in the hands of unconscious fear becomes a weapon. Any tool in the hands of a suppressed identity becomes a form of control. Any tool inside an old system becomes an accelerator for old outcomes.
We keep hoping that more intelligence will magically create more wisdom. But wisdom does not come from computation. It comes from consciousness.
The empire is not only an external structure. It is a human pattern — what happens when a person, or a culture, cannot face vulnerability, ambiguity, and truth. The duck pond and the AI empire share the same interior: a defense system built to protect a manufactured identity from the terror of being seen. The same pattern will keep recreating the same structure, no matter what tool we place inside it.
This is the transition I call Ego Evolution: the shift from an identity shaped by fear and conformity, to a sovereign existence capable of building something genuinely new. Not personal development in the conventional sense. Inner architecture — the kind of foundation that can hold power without turning it into control.
What do we do now?

We do not need to stop AI. We need to stop mistaking rockets for the only way to travel. We need the bicycles: the human-scale tools, the systems that empower instead of extract, the structures built around what a human being actually is rather than what an empire finds useful.
That begins with naming the structure under the surface — not fighting the symptom but tracing it to its root. It continues by refusing the myth: any story that demands obedience out of urgency is not wisdom; it is control. It deepens when we turn the work inward — building the inner architecture that the pond was never designed to cultivate. And it is held together by a single non-negotiable: dignity. If a vision of the future requires invisible suffering, it is not a future. It is an empire in new clothing.
---
The real invitation of this era
AI is posing a challenge — not just to technology, but to humanity. We are being asked to grow up as a species. To mature beyond the structures we inherited. To build systems that can hold power without turning it into domination.
The structure of the future will not be built by those who own the most computational power. It will be built by those who have done the work of owning their own humanity — who have descended past the story, past the fear, past the myth of the inevitable, and found something real.
You cannot humanize AI inside an inhumane system.
But here is what I have come to believe after twenty-five years of this work: the system was never the beginning of the problem. It was always a mirror. What AI is reflecting back to us now — with terrifying clarity and at civilization scale — is the question we have been avoiding since long before the first algorithm was written.
What does it mean to be human? And are we willing to find out?
Birgitta Granstrom
Founder of LifeSpider System
Subscribe to LifeSpider News, a publication focused on Personal Evolution, Future Leadership, and the empowerment to manifest ideas that contribute to a better world. It is designed for "The Weird Ones," those with the potential to make significant change and solve urgent global problems.