Consciousness Is the Last Frontier of Being Human
In Terminator 2: Judgment Day, the T-800 is asked, “Do you feel pain?” Its answer: “I sense injuries. The data could be called ‘pain.’”
This line captures a timeless truth: sensing is not the same as feeling.
Today, AI systems can write poems, paint pictures, diagnose illnesses, and compose music. Abilities once seen as uniquely human are now being simulated by algorithms.
But a fundamental question remains: Do these systems really think? Do they feel? Do they understand?
Consciousness is more than processing information.
An AI might say “Good morning,” or even “I’m sorry.” It might chat like a close friend. But these are synthetic outputs—statistical predictions based on patterns in massive datasets.
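As a deliberately toy illustration of what “statistical prediction” means here, consider the sketch below: a tiny bigram model that “apologizes” only because apologies were frequent in its training text. (This is a drastic simplification of how real systems work, and every name in the code is invented for the example.)

```python
from collections import Counter, defaultdict
import random

# A tiny training corpus standing in for the "massive datasets" above.
corpus = "i am sorry . i am here . i am sorry .".split()

# Count how often each word follows each other word (a bigram model).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Pick the next word by observed frequency alone; no feeling, no meaning."""
    counts = following[word]
    words = list(counts)
    return random.choices(words, weights=[counts[w] for w in words])[0]

# The output can look like remorse, but it is only a replay of statistics.
print("i", predict_next("i"), predict_next("am"))  # e.g. "i am sorry"
```

The point survives the simplification: whether the model has two parameters or two trillion, “I’m sorry” is selected because it is probable, not because anything is felt.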
That’s where the concept of consciousness matters. To be conscious is not just to compute—it is to be aware, to feel, to assign meaning.
Philosopher Thomas Nagel famously asked: What is it like to be a bat? We can study a bat’s behavior, analyze it scientifically—but we cannot know what it feels like to be one.
AI faces a similar limit: it can mimic behavior, but not experience.
So what?
If an AI makes the right medical diagnosis, does it matter whether it feels anything?
At first glance, maybe not. But this isn’t just a technical question—it’s an ethical and existential one. Because consciousness is tied to responsibility.
An AI can say “I understand,” but it doesn’t empathize. It can simulate decision-making, but it doesn’t carry the burden of consequences.
And this changes everything. AI has no concept of empathy, remorse, or responsibility. So how do we guarantee fairness, trust, and accountability in AI-driven decisions? And who is responsible: the algorithm, the designer, or the user?
If we erase the line between conscious and unconscious, our system of values collapses.
A CEO may ignore a human opinion in favor of a more “efficient” algorithm. A student may trust ChatGPT over a teacher. A patient may trust AI’s statistical output more than a doctor’s judgment.
In the short term, these may seem rational. In the long run, they erode trust in human meaning-makers—teachers, doctors, workers. People become replaceable by more “optimized” outputs.
And where meaning fades, so does resonance.
The world may keep spinning even if we abandon ideas like consciousness, empathy, or meaning. The systems will still run. Tasks will get done.
But something crucial will be missing:
No one will feel shame after a mistake. No one will be moved by a poem. No one will cry over another’s pain. No one will feel regret.
Because without meaning, there is no echo.
To be human is to live—and life is not just about producing or calculating. It’s about feeling, longing, caring, and understanding.
In a system organized without consciousness, these values vanish. As efficiency peaks, emotional drought sets in. Mediocrity rises in silence.
Yes, algorithms can generate flawless outputs. But they can’t replicate contradiction, creativity, or meaning—qualities rooted in the human soul.
In the end, consciousness is humanity’s red line.
Machines may talk, write, even mimic emotion. But they can’t live.
This isn’t just a technical distinction—it’s an existential one.
So the real question is not “How will AI affect us?” It is: “What will be left of us?”
And the answer may be difficult to accept. If we discard consciousness, emotion, and conscience, we will lose empathy. There will be no human left to save.
Everything might function—
But nothing will truly live.
1 comment
It’s becoming clear that, with all the brain and consciousness theories out there, the proof will be in the pudding. By this I mean: can any particular theory be used to create a machine with adult human-level consciousness? My bet is on the late Gerald Edelman’s Extended Theory of Neuronal Group Selection (TNGS). The lead group in robotics based on this theory is the Neurorobotics Lab at UC Irvine. Dr. Edelman distinguished between primary consciousness, which came first in evolution and which humans share with other conscious animals, and higher-order consciousness, which came only to humans with the acquisition of language. A machine with only primary consciousness will probably have to come first.
What I find special about the TNGS is the Darwin series of automata created at the Neurosciences Institute by Dr. Edelman and his colleagues in the 1990s and 2000s. These machines perform in the real world, not in a restricted simulated world, and display convincing physical behavior indicative of the higher psychological functions necessary for consciousness, such as perceptual categorization, memory, and learning. They are based on realistic models of the parts of the biological brain that the theory claims subserve these functions. The extended TNGS allows for the emergence of consciousness based only on further evolutionary development of the brain areas responsible for these functions, in a parsimonious way. No other research I’ve encountered is anywhere near as convincing.
I post because, on almost every video and article about the brain and consciousness that I encounter, the attitude seems to be that we still know next to nothing about how the brain and consciousness work; that there’s lots of data but no unifying theory. I believe the extended TNGS is that theory. My motivation is to keep it in front of the public. And obviously, I consider it the route to a truly conscious machine, both primary and higher-order.
My advice to people who want to create a conscious machine is to ground themselves seriously in the extended TNGS and the Darwin automata first, and to proceed from there, perhaps by applying to Jeff Krichmar’s lab at UC Irvine. Dr. Edelman’s roadmap to a conscious machine is at https://arxiv.org/abs/2105.10461, and here is a video of Jeff Krichmar talking about some of the Darwin automata: https://www.youtube.com/watch?v=J7Uh9phc1Ow