The Hard Problem Isn’t a Problem
Why pleasure, pain, and motivation make the hard problem disappear
Update: I wrote a follow-up that approaches this from evolution and how a point of view could arise at all. You can read it here: Consciousness Didn’t Appear, it Accumulated
People often talk about consciousness as if it presents a deep mystery that science keeps running into and failing to resolve. Even if we explain everything the brain does, they say, there still seems to be something left over. We can describe the mechanisms, but not why it feels like something to be conscious. That leftover question is what David Chalmers famously labeled “the hard problem” of consciousness.
I’ve never really felt the force of this problem the way others seem to. Not because consciousness is simple. It clearly isn’t. But because the question itself feels framed in a way that quietly leaves out the one thing that explains why consciousness exists at all. When that missing piece is put back into the picture, the mystery loses much of its force.
What follows isn’t an attempt to explain every detail of the mind. It’s an attempt to show why this particular problem keeps feeling profound even after most of the real work has already been done.
Start With the Most Basic Fact About Action
Here’s a fact that’s so obvious it often goes unstated: the only reason anyone ever does anything is because they want something to be different than it is right now.
That difference might be external, like wanting food, safety, warmth, or comfort. Or it might be internal, like wanting pain to stop, tension to ease, or boredom to go away. But every action fits this pattern. When nothing needs to change, nothing happens.
This isn’t abstract philosophy. It’s just how behavior works. Action arises from pressure, from imbalance, from something not being okay yet.
What I Mean by Motivation
A lot of confusion in discussions about consciousness comes from how loosely people use the word “motivation,” so it helps to slow down and be precise.
By motivation, I don’t mean following rules, executing a program, reacting automatically, or producing behavior that looks goal-directed from the outside. All of those things can happen without anything actually mattering to the system doing them.
What I mean by motivation is simpler and more basic. Motivation means that something genuinely matters to the system itself. One outcome has to count as better than another. One state has to be worse than another. There has to be a reason, from the inside, to move from here to there.
Without that internal difference, there may still be movement or output, but not motivation in any meaningful sense. Very simple organisms can move and react through basic chemical processes without feeling, but they don’t have preferences or priorities. Consciousness appears when behavior needs something to matter to the organism itself.
Why Information Alone Isn’t Enough
This is where the difference between information and feeling becomes important, and where many people start to sense a gap without quite seeing why.
Imagine a creature that gets injured and a small indicator turns on that says “damage detected.” That’s information. Information can be ignored.
Now imagine a creature that feels pain when it gets injured. Pain doesn’t just report damage. It creates pressure to act. A warning light has no skin in the game. Pain does.
Lights can be ignored. Suffering can’t.
Systems that can register damage without caring about it don’t last very long. Pain exists because it works. It functions as a control signal that can’t simply be brushed aside.
Hunger Makes the Same Point Even More Clearly
Hunger strips the issue down to its essentials.
You don’t eat because you’ve reasoned your way to the conclusion that eating is good. You eat because hunger feels bad. Weakness feels bad. Headaches feel bad. Relief feels good.
Now imagine a creature that doesn’t feel hunger, doesn’t feel weakness or discomfort, doesn’t enjoy eating, and doesn’t feel fear about dying. In that case, there’s no reason for it to do anything at all, and certainly no felt reason to chew and swallow food.
When nothing feels worse than anything else, there’s no reason to prioritize one action over another. Without prioritization, survival behavior never gets off the ground. Hunger isn’t an extra feature layered on top of eating. It’s the reason eating happens at all.
This is often where the supposed mystery of consciousness starts to thin out, even if people don’t articulate it that way yet.
Feeling and Motivation
At this point, people often ask why all this processing has to feel like something at all. Why couldn’t the same work happen without experience?
That question assumes feeling is optional, as if behavior and motivation could exist first and experience could be added later. But when you look closely at how motivation actually works, that picture starts to fall apart.
Feeling is what motivation looks like from the inside. When outcomes differ in how they feel, they differ in how much they matter. When nothing feels better or worse, nothing matters more than anything else, and there’s no reason for behavior to happen in the first place.
This is where the hard problem is supposed to appear. Instead, the question loses its footing.
Why Philosophical Zombies Fall Apart
Philosophical zombies are supposed to behave exactly like humans while lacking any inner experience. They avoid harm, seek food, protect themselves, argue, plan, and reflect, all without feeling pain, pleasure, hunger, fear, or relief.
At first glance this can seem coherent, but only because it quietly assumes the very thing the thought experiment is supposed to rule out.
Why would such a creature do anything at all? If nothing feels better or worse to it, then nothing matters more than anything else. And without that difference, there’s no reason to choose one action over another.
Either the creature would do nothing, or it would have to care. And caring is just another way of describing feeling. The zombie idea only works if motivation is assumed without any account of where it comes from.
Once that assumption becomes visible, the thought experiment stops carrying weight.
Preferences Don’t Come From Nowhere
Sometimes the zombie idea gets patched by saying the creature could still have preferences, even without feeling.
But preferences aren’t free-floating. A preference means wanting one outcome rather than another. Wanting means having a reason to change the current state of the world. A reason means something is better or worse for the system itself.
When nothing feels better or worse, there are no preferences. There are only patterns. Patterns alone don’t amount to motivation.
This is another place where the problem seems deep only because something essential has been assumed rather than explained.
Why Simple Machines Don’t Count
This is usually where thermostats or other simple systems enter the conversation. They appear to show goal-directed behavior without feeling.
But a thermostat doesn’t want the room to be warm. A person wants the room to be warm, and the thermostat operates downstream from that preference.
The thermostat didn’t choose the goal. It doesn’t care whether it succeeds or fails. It wouldn’t mind if the room froze or overheated. Its apparent motivation comes from upstream, from something that can actually feel and care.
This is true of such systems across the board. They don’t originate goals. They carry out goals created elsewhere. Remove creatures capable of pain or pleasure, and all apparent motivation disappears with them.
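The point can be made concrete with a minimal sketch of a thermostat’s control loop (the names and numbers here are illustrative, not from any real device): the loop mechanically compares temperature to a setpoint, but the setpoint itself is handed in from outside, by whoever actually cares what the room feels like.

```python
# Minimal thermostat sketch: the loop executes a rule, but the goal
# (the setpoint) is supplied from upstream -- the loop never originates it.

def thermostat_step(current_temp: float, setpoint: float) -> str:
    """Return a heater command given the current temperature.

    The setpoint comes from whoever actually cares about the
    temperature; nothing inside this function prefers any outcome.
    """
    if current_temp < setpoint - 0.5:
        return "heat_on"
    if current_temp > setpoint + 0.5:
        return "heat_off"
    return "hold"

# The person, not the thermostat, decides that 21 degrees counts as "good".
print(thermostat_step(18.0, 21.0))  # heat_on
print(thermostat_step(23.0, 21.0))  # heat_off
```

Nothing in the loop would change if the room froze; the "goal" lives entirely in the parameter passed down from a creature that feels.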
Doing Something and Wanting Something
This distinction gets blurred easily, and once it does, everything else starts to slide.
A calculator performs math without caring about math. A door closes automatically without wanting to be closed. Execution can happen without motivation. Motivation depends on feeling.
Confusing those two makes zombies and machine “preferences” seem plausible when they aren’t.
Reflexes and Automatic Behavior
Sometimes reflexes get raised as a counterexample. They don’t undermine the picture.
Not every action needs to be consciously chosen. What matters is how the overall system is organized. Reflexes exist inside organisms whose broader behavior is shaped by things that feel good or bad. They’re tools within that structure, not replacements for it.
Why the Hard Problem Feels Compelling
At this point it’s worth asking why the hard problem has felt so persuasive to so many smart people.
The confusion usually starts with the framing. When the mind is treated as an abstract system first and a living organism second, it’s easy to imagine puzzles that don’t survive contact with biology. When survival, motivation, and internal stakes are left out of the picture, experience starts to look like an extra ingredient rather than a working part.
Once you take organisms seriously, many of these puzzles begin to look misframed rather than unresolved.
“You Only Explained Why It’s Useful”
This is the strongest objection, and it deserves a direct answer.
The concern is that explaining why feeling is useful doesn’t explain why it exists. That distinction makes sense in some contexts, but it doesn’t hold up when talking about living systems. Traits don’t appear and persist by accident. When a trait shows up across organisms and is tightly connected to survival, it’s there because it does something important.
Feeling isn’t a decorative extra layered onto behavior. It plays a central role. It’s the mechanism that makes motivation possible in the first place.
Once that’s clear, the idea that a separate mystery still remains starts to lose its grip.
What’s Left After the Hard Problem
None of this suggests that consciousness is simple. The brain is enormously complex, and there’s a great deal we still don’t understand about how it works in detail. But complexity isn’t the same thing as mystery in principle.
The lingering sense that something profound has been left unexplained comes from treating feeling as optional, as if behavior and motivation could exist on their own and experience were added later. That picture reverses the order. Consciousness feels like something because that’s how motivation works at all.
Remove pleasure and pain, and nothing has a reason to happen. No preference, no urgency, no reason to act. Seen in that light, the “hard problem” no longer looks like a deep puzzle about reality. It looks like a problem created by leaving the most important part out of the picture in the first place.
That doesn’t solve consciousness. It just shows that this particular mystery was never really there.
This piece focuses on how feeling relates to behavior.
The follow-up looks upstream at why organisms would develop experience in the first place: Consciousness Didn’t Appear, it Accumulated (https://bryanrichardjones.substack.com/p/consciousness-didnt-appear-it-accumulated)