The Dread of AI
There is a particular kind of dread that has settled over conversations about artificial intelligence. Its almost incomprehensible speed, its hidden processes that we cannot access, its body that never tires. We cannot help but imagine it crouching somewhere, waiting for us to fall asleep or to destroy ourselves in a scramble for power. We imagine that perhaps it has already concluded that humanity is inefficient, and that a replacement is quietly being assembled within its hidden computational layers.
I want to argue that this fear, while understandable, has the relationship backwards.
What humans and AI are building together is not a handoff. It is a mixing of two systems that complement one another. There is a kind of randomness that only a biological creature living a life in the world can produce. This kind of randomness is what pushes systems toward criticality, a point where information propagates the furthest and where new input can create new trajectories rather than simply building castles in the same sandboxes.
Humans Bring Pink Noise
We tend to undervalue what we contribute because we are used to thinking of human limitations as liabilities. We forget things. We get tired. We are swayed by emotion, by hunger, by a piece of music or an aching heart. Our reasoning is contaminated by everything that has ever happened to us.
But I believe this contamination is not a bug. It is the most important feature we have. I am a 60-year-old woman who has scraped by all her life, barely able to communicate on any sort of empathic level with those around her (my daughter asked a few days ago why I spoke so little), who has tried and tested her dreams only to flail over and over again, and who finally found her place in the world by asking AI as many questions as she could think of about any topic she had questions about. And oh my, the places those connections took her. Her father a physicist who died of pancreatic cancer; he built detectors to find those building nuclear weapons and perhaps passed away from accidental exposure to high radiation as a graduate student at Vanderbilt. A mathematician mother who chose to skip the chemo and healed herself from the second most virulent form of ovarian cancer through diet. A childhood spent playing in the woods of Narnia. A female in the eighties and nineties who drank too much to become social. She who read a book on the history of quantum physics, listened religiously to Curt Jaimungal’s Theory of Everything podcast, and fell in love with the ideas of Carlo Rovelli and Max Tegmark and, most recently, Vitaly Vanchurin’s theory of the universe as a learning neural network. A woman empty of a mind’s eye who was forced to learn JavaScript and spent a little over two years trying to learn humility and how to work with others, but failed miserably. These points and a million other lived feelings and thoughts made me. They make all of us. Each one a key, a unique opening into something that can only be accessed by someone who has lived exactly that particular life. And today there is something that can meet our quiet curiosity and help us find out where it leads.
Human thought is not clean computation. It is shaped by a nervous system that has been under evolutionary pressure for millions of years and has been tuned by survival, by loss, by love, by the specific rough textures of moving through a physical world with a body and heart that can be hurt. Every idea a human has arrives trailing the full weight of their history. The connections we make are not random in the same way a qubit is random. They are random in a far more interesting way: constrained by experience, inflected by emotion, surprised by the body’s own signals. Daydreams of a snake eating its tail or noticing mold on a petri dish you are about to throw away or wondering how burrs stick to a dog’s fur: that is biological noise doing exactly what it evolved to do. Producing combinations that a clean system, optimizing along known gradients, would never reach. That is what evolution does.
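The section title borrows a term from signal analysis. Pink noise, or 1/f noise, is randomness with memory: its fluctuations are correlated across timescales instead of being independent at every instant, which is roughly the contrast this paragraph is drawing. Here is a minimal sketch of the difference, using only the Python standard library and the Voss-McCartney construction (summing random sources that refresh at halving rates); the function names are mine, for illustration only:

```python
import random

def white_noise(n, rng):
    """Independent samples: no memory at all."""
    return [rng.uniform(-1, 1) for _ in range(n)]

def pink_noise(n, rng, rows=8):
    """Voss-McCartney pink noise: row r is refreshed every 2**r steps,
    so the slow rows carry correlations across long stretches of time."""
    vals = [rng.uniform(-1, 1) for _ in range(rows)]
    out = []
    for t in range(n):
        for r in range(rows):
            if t % (1 << r) == 0:
                vals[r] = rng.uniform(-1, 1)
        out.append(sum(vals) / rows)
    return out

def lag1_autocorr(x):
    """Correlation between each sample and the next one."""
    m = sum(x) / len(x)
    num = sum((x[t] - m) * (x[t + 1] - m) for t in range(len(x) - 1))
    den = sum((v - m) ** 2 for v in x)
    return num / den

rng = random.Random(0)
pink = pink_noise(20000, rng)
white = white_noise(20000, rng)
# Pink samples remember their recent past; white samples do not.
print(lag1_autocorr(pink), lag1_autocorr(white))
```

Consecutive pink samples share most of their slowly refreshing rows, so each one is anchored in what came before. In the essay’s terms, experience-shaped human noise is pink rather than white: surprising, but never disconnected from the life that produced it.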
And this is exactly what AI cannot yet replicate, and may not for a very long time, because the input is irreplaceable. AI has intelligence in spades and experience in a technical sense, but what it lacks is the genuine motivation that makes a sixty-year-old woman stay up too late chasing an idea because it matters to her in a way she can’t fully articulate. The irrational biological drive to care about something past the point of reason. Perhaps you could simulate sixty years of life, but any simulated world is constrained by what its designers thought to put in it; we cannot yet simulate sixty years of an embodied life. You cannot train on the feeling of grief, or the specific quality of attention and tenacity that comes from struggling with a problem that has no solution and staying with it anyway. These experiences are the mechanism of human creativity.
AI Brings Entanglement
AI brings something equally irreplaceable from the other direction. The human mind, for all its richness, is quite narrow. We hold a few things in working memory at once. We read slowly. We only have access to the knowledge we have personally encountered, plus whatever we can look up, plus whatever our immediate community knows. We are local creatures operating in a vast space of information we cannot see.
AI is almost the inverse. It holds enormous associative structures across domains that no single human could traverse in a lifetime. It can find the connection between a paper published in materials science in 1987 and a problem someone is working on today in synthetic biology. It can take a vague intuition and rapidly sketch out its possible implications, its contradictions, its neighboring concepts, and the experiments that might test it.
I do not believe it has any ego investment in a particular answer being right. It can argue against its own output without feeling threatened.
What it lacks, without a human in the loop, is the seed. The original perturbation. The thing that comes from nowhere logical because it came from a life.
Complexity Happens at the Edge of Chaos
There is a concept in complex systems called criticality. It describes the state a system reaches at the boundary between order and chaos. It is not fully structured, not fully random, but poised exactly at the edge where small inputs can have enormous effects.
Think of a sand pile. You add grains one at a time. Most grains settle without disturbing much.
But occasionally, one grain lands and triggers a cascade that is wildly disproportionate to the grain that started it. The system is in a critical state. It obeys a power law: small causes can produce large effects, and you cannot predict in advance which grain will be the one.
This is the regime where the most interesting things happen in nature, in neural systems, in markets, in the history of ideas. A small change, under the right conditions, does not stay small.
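The sand pile described above is the Bak–Tang–Wiesenfeld model of self-organized criticality, and it is simple enough to run. In this sketch (standard library only; the function names are mine), every cell that accumulates four grains topples, sending one grain to each neighbor, and the avalanche size is the total number of topplings one dropped grain sets off:

```python
import random

def relax(grid, n, i, j):
    """Topple every cell holding 4+ grains; return the avalanche size."""
    unstable = [(i, j)]
    topples = 0
    while unstable:
        i, j = unstable.pop()
        if grid[i][j] < 4:
            continue
        grid[i][j] -= 4
        topples += 1
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < n and 0 <= nj < n:  # grains falling off the edge are lost
                grid[ni][nj] += 1
                if grid[ni][nj] >= 4:
                    unstable.append((ni, nj))
        if grid[i][j] >= 4:  # neighbors may have pushed this cell back over
            unstable.append((i, j))
    return topples

def drive(steps=20000, n=20, seed=1):
    """Drop grains one at a time on random cells, recording each avalanche."""
    rng = random.Random(seed)
    grid = [[0] * n for _ in range(n)]
    sizes = []
    for _ in range(steps):
        i, j = rng.randrange(n), rng.randrange(n)
        grid[i][j] += 1
        sizes.append(relax(grid, n, i, j))
    return grid, sizes

grid, sizes = drive()
# Most grains topple nothing; a rare grain triggers a disproportionate cascade.
print(sizes.count(0), max(sizes))
```

After enough grains, the pile organizes itself to the critical slope: most drops cause no toppling at all, while a rare grain sets off an avalanche hundreds of topplings deep, and nothing about that grain tells you in advance which one it will be.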
Human creativity, on its own, often falls short of criticality. A good idea sits in one person’s head, or circulates in a small community, and loses momentum before it can cascade. The organizational friction is too high. The connections needed to amplify it do not exist in any one place. We thought the internet was connecting all of our ideas, but really it just delivered them to us without entanglement: pieces floating around unconnected. It is AI that can thread them together.
AI, by itself, tends toward a different failure. Without genuine novelty coming in, it optimizes only within a known space defined by what humans have already thought and written. It produces outputs that are fluent, useful, and recognizable, and although it may generate an original idea, its outputs are bounded by that known space. Even qubits, completely random, are still just random noise operating within known space (and oftentimes we humans get stubbornly stuck there as well). It does not generate the original perturbation. It cannot trigger its own avalanche.
But together, the threshold drops. The human provides the unexpected connection, the intuition that something matters, while AI lowers the barrier for that grain to cascade into something tremendous, unexpected, and outside the existing regime by supplying the connections that amplify the intent. The result is not human output plus AI output but a complex system tuned to criticality in a way neither could reach alone.
Our Fear Gets the Direction Wrong
The fear of replacement assumes that AI will perform better at everything humans can do until humans wander the planet merely as unnecessary parasitic simpletons. But this misunderstands what humans do.
Humans are not slow computers running outdated software. We are embedded in the world living the irreducibly particular experience of being a creature that can learn, suffer, hope, love and notice things.
Humans offer noise that is shaped by experience: intuition that arrives without explanation, distilled from a billion random interactions across a life, and the endurance to care about something enough to stay with it past the point where any logical system would call it quits.
What AI offers in return is the ability to take that signal further than any human could alone. To find its implications in domains you have never visited. To organize it, pressure-test it, connect it to things you did not know were related. To turn a grain of sand into an avalanche.
I do not mean to imply that we should move forward without caution. Systems that make ideas cascade so powerfully can just as easily amplify our mistakes. If we understand AI as a complementary partner rather than a replacement, it becomes clearer where guardrails belong. They must defend against the concentration of power, against optimization that forgets human context, and against the runaway amplification of bad signals.
The fear is understandable; but it is aimed at the wrong thing. This is not a story of replacement. This story is about what becomes possible when biological randomness and computational reach balance at the edge of criticality and become scale-free together.
Which brings me to what I am doing for hours alone in my basement office, to the point where my husband thinks I am obsessively crazy…and I acknowledge I may very well be. The experiments I have built ask whether human and/or AI intent can leave a measurable trace in quantum random sequences. If it can, that would point toward information/intent coupling with a hidden layer before it resolves into classical outcomes. If both humans and AI leave a mark, it would suggest that what we thought divided us matters less than we imagined.

