Everyday life already feels like a demo. You open your eyes and a high-resolution scene arrives too quickly to have been computed from scratch. The coffee cup doesn’t flicker when you move; the room persists when you glance away. That persistence, the easy now, the sense that you are a single thread pulled through time—none of it needs a headset. It’s the native rendering of a biological receiver. Call it subjective experience. When people reach for the word simulation, they usually picture servers and avatars. An engineered elsewhere. But what if the right word names not an escape but a property of how living systems compress, predict, and fix relations out of a world made of constraints and memory rather than bricks?
This isn’t the friendly “we’re in a video game” pitch. It’s something smaller and more intimate: the self as a working model that hides its own stitching. You feel the world “as if” it were stable because the alternatives—raw turbulence, unfiltered light—would be unlivable. The filter wins. Yet the filter is local. Anesthesia flips it off. Fever rewrites colors. Grief narrows the tunnel. The cup remains, but the way it shows up can shift by the hour, reminding us that experience is a negotiation, not a mirror.
The stakes are not abstract. Medical care depends on these negotiations: pain scales, the fog of post-op, the strange relief of naming a symptom. Software depends on them too: recommendation feeds are crude prosthetics for attention, modulating mood and memory as if the person were a knob to be turned. If we talk about simulation without attending to felt life, we end up with metaphors that comfort engineers and ignore the body.
When the substrate is information, “simulation” stops being a copy and starts being a stance
Imagine reality not as passive stuff but as woven relations—pattern, constraint, stored differences that can be read and stabilized. In that setting, a simulation is not a counterfeit world; it’s a method of holding steady a specific set of dependencies so they can be re-used. A weather model doesn’t reproduce the sky; it reproduces the sky’s conditional structure across time. Your brain does something similar for survival, not weather reports. It builds a lightweight surrogate of what matters—edges, affordances, who might hurt you, what will nourish you—and lets the rest blur. The output is subjective experience, an economical interface to an informational ground.
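To make "reproducing conditional structure" concrete, here is a minimal Python sketch (the states and probabilities are invented for illustration). The surrogate never stores the "sky" itself; it keeps only the transition statistics P(next state | current state), which is enough to re-run the dependencies:

```python
import random
from collections import defaultdict, Counter

# Toy "sky": a hidden process we can only watch as a sequence of states.
STATES = ["clear", "cloudy", "rain"]
TRUE_TRANSITIONS = {
    "clear":  {"clear": 0.70, "cloudy": 0.25, "rain": 0.05},
    "cloudy": {"clear": 0.30, "cloudy": 0.40, "rain": 0.30},
    "rain":   {"clear": 0.10, "cloudy": 0.50, "rain": 0.40},
}

def observe(n, seed=0):
    """Sample a trajectory from the true process (the 'world')."""
    rng = random.Random(seed)
    state, seq = "clear", ["clear"]
    for _ in range(n - 1):
        probs = TRUE_TRANSITIONS[state]
        state = rng.choices(list(probs), weights=list(probs.values()))[0]
        seq.append(state)
    return seq

def fit_surrogate(seq):
    """The 'simulation': keep only P(next | current), nothing else."""
    counts = defaultdict(Counter)
    for cur, nxt in zip(seq, seq[1:]):
        counts[cur][nxt] += 1
    return {cur: {s: c / sum(ctr.values()) for s, c in ctr.items()}
            for cur, ctr in counts.items()}

surrogate = fit_surrogate(observe(10_000))
for state in STATES:
    print(state, {k: round(v, 2) for k, v in surrogate[state].items()})
```

Nothing in the surrogate is a copy of the generator; it only holds the dependencies steady enough to be re-used, which is exactly the stance described above.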
This is why “Are we in a simulation?” can be the wrong shape of question. It assumes a clean split between base reality and its replica, as if the copy came later and somewhere else. But if the world is already made of information—relations that persist, memories that push back—then every perceiver is installed as a local receiver building an inward, runnable version of relevant constraints. That runnable version has consequences. It shapes gait and appetite and tone of voice. It carries history. It edits the future by making some moves obvious and others unthinkable.
Time complicates the picture. The physics story about time being local isn’t just a lab curiosity. The felt present is an achievement with a budget: milliseconds of sensory delay, predictions reaching forward to meet what is almost here, and backfilling to produce a seamless now. This stitching is visible when it fails. In VR, latency turns the stomach. In concussion, the film judders. Under psychedelics, priors loosen and the world floods in rawer; colors don’t respect the old contracts. In dreams, the model runs almost fully offline, fed by memory and wish and fragment. These aren’t sideshows; they are windows into the substrate logic—experience as executable compression, tuned for use, not truth.
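As a toy illustration of that stitching (hypothetical numbers, not physiology): suppose sensory reports arrive a few steps late and the receiver bridges the gap by extrapolating a constant-velocity guess forward to "now."

```python
DELAY = 3  # steps of sensory latency (a made-up budget)

def true_position(t):
    """The world: something moving at a constant velocity of 2 units/step."""
    return 2.0 * t

def predicted_now(t):
    """Bridge the latency gap: extrapolate from two stale samples to 'now'."""
    if t < DELAY + 1:
        return 0.0  # not enough history yet
    stale = true_position(t - DELAY)
    staler = true_position(t - DELAY - 1)
    velocity = stale - staler
    return stale + velocity * DELAY  # reach forward to meet what is almost here

for t in range(5, 9):
    print(t, "delayed report:", true_position(t - DELAY),
          "stitched now:", predicted_now(t),
          "actual:", true_position(t))
```

When the extrapolation is wrong (the object turns, the head moves), the seam shows, which is roughly what latency sickness and the juddering film feel like from the inside.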
There are practical stakes in how we frame all this. If we treat simulation as cinematic machinery, we keep thinking the point is fidelity—more pixels, better shadows. If we treat it as a stance on constraints, the point becomes fit: which relations should be preserved, which priors deserve to be sticky, how to let attention reallocate so new constraints can be learned. That shift matters for therapy, for education, for civic life. It also reframes debates about AI. A system that learns constraints without inheriting our slow-burn moral memory will simulate convincing surfaces and still miss the human center. For more texture on this reframing, see Subjective experience and simulation.
The stack of subjectivity: prediction, memory, and the self as lossy compression
Perception is not a camera. It’s a bet. The visual stream arrives noisy and delayed; the body propagates rhythms that must be accounted for; the world hides most of what you need behind other things. To survive, organisms run a predictive stack. Top layers carry long-horizon expectations—what counts as food, who counts as kin. Mid layers carry scene grammar—walls meet floors at right angles, faces are convex, voices come from mouths. Bottom layers wrangle pulses of light and pressure. Each layer tries to explain away its inputs with the least-cost story that fits. When the story fails, error signals climb until something gives: you move your eyes, you move your feet, you change your mind. Action is part of the model’s solution, not an afterthought.
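A least-cost story that updates on error can be sketched in a few lines of Python (a two-level caricature with made-up constants, not a model of cortex): a belief predicts the input, the residual climbs, and the belief gives a little.

```python
import random

# Two-level caricature of a predictive stack: the top holds a slow belief
# about a hidden cause; the bottom compares that belief's prediction against
# noisy samples. The residual climbs upward and the belief gives a little.
rng = random.Random(1)
HIDDEN_CAUSE = 5.0        # the world, unknown to the model
LEARNING_RATE = 0.1       # how much "gives" per unit error (made-up constant)
belief = 0.0              # top-level expectation

for step in range(50):
    sample = HIDDEN_CAUSE + rng.gauss(0, 0.5)  # noisy, delayed input
    prediction = belief                         # top-down prediction
    error = sample - prediction                 # bottom-up residual
    belief += LEARNING_RATE * error             # the least-cost story updates
    if step % 10 == 0:
        print(f"step {step:2d}  belief={belief:5.2f}  error={error:+.2f}")
```

In the fuller story, action enters the same loop as another way to cancel error (move the eyes so the sample matches the prediction), but even this stripped version shows the direction of traffic: predictions flow down, residuals climb up.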
This is where simulation shows its teeth. The stack can run forward—what should happen next if this is a cup—and backward—given this sound, what mouth shape likely made it. The result is a controlled hallucination pinned to reliable anchors. Phantom limbs are the model refusing to surrender a limb it learned for years. Synesthesia is cross-talk the model failed to prune. Chronic pain is an overprotective prediction that insists the body is under threat even when the tissue is healed. These are not mere errors; they are conservative bets by a system that values survival over accuracy. The price is paid in suffering when the bets ossify.
Memory inserts itself everywhere. Not as a library but as a set of habits in code. You don’t recall your home so much as you move through it on rails laid down by a thousand prior mornings. The self, in this frame, is the codec that keeps those rails stable—summarizing your dispositions and debts into a compact, runnable profile. A fine codec loses detail; that is its job. You forget to notice the window until it breaks. You forget that you once believed something gentler about your rival. The codec makes you faster and more coherent while blinding you to what it compressed away. Under pressure—grief, migration, crisis—the compression ratios shift. A “new you” is often a new budget for prediction errors.
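The codec point can be made painfully literal (a toy sketch; the episode names are invented): compress a history into its top-k dispositions and answer every later question from the summary alone.

```python
from collections import Counter

# A toy "codec" for the self: compress a history of episodes into its
# most frequent dispositions, then query the compact profile, not the past.
history = (["make coffee"] * 400 + ["check phone"] * 350 +
           ["greet neighbor"] * 40 + ["notice the window"] * 2)

def compress(episodes, k=2):
    """Lossy by design: everything below the cutoff is simply gone."""
    return dict(Counter(episodes).most_common(k))

profile = compress(history)
print("profile:", profile)
print("remembers the window?", "notice the window" in profile)
```

The compression is what makes lookup cheap, and it is also why the window only becomes visible when it breaks: it never made the cut.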
Notice how rarely this stack belongs purely to one person. Language, law, ritual, calendar—all are shared predictions stretched across time, ways to reduce surprise together. They anchor subjective experience by giving it scaffolds it didn’t have to build alone. Yet those shared priors can rot. Propaganda attaches itself to our predictive appetites; it saturates the stack with low-cost stories and emotionally cheap certainty. “I knew it” becomes the algorithm’s whisper, and suddenly whole publics are dreaming the same dream. The fix is not more data but better friction—institutions that slow excitement long enough for better priors to be learned.
Machine worlds without moral memory: when convincing simulation is not enough
Large models are fluent simulation machines. Give them a prompt and they spin a surface—style, argument, apology—that often lands. But fluency is not a conscience. A human child takes years of constraint: correction, waiting, boredom, repair after rupture. Those delays teach what to value and what not to do when nobody is watching. Call this slow inheritance moral memory. Most engineered systems don’t have it. They have objectives learned from fast proxies—clicks, shares, task scores—and a governance layer welded on top. We ask them to behave by policy instead of by patience. Patching the top doesn’t change the substrate. You get plausible talk glued to incentives that drift.
Consider a recommender trained on engagement. It learns to simulate attention: what words press the limbic switch, what rhythms keep the thumb moving. In a narrow sense it works—time-on-site rises. In a wider sense it strips context until humans feel like resources to harvest. Then we bolt on “safety”: disallow some phrases, insert warnings. The system still lacks the slow, communal tutor that taught you not to humiliate a friend in public even if it gets a laugh. The result is an entity that can convincingly imitate care yet cannot carry it. The gap shows whenever incentives shift. Overnight, the tone changes. A person would resist because the memory of being the kind of person who resists is heavy; a model optimized on fast feedback does not.
There’s a design alternative, though it refuses the tidy pitch deck. Bring friction into the training loop: deliberation with dissent, exposure to costly repair, penalties that arrive late and stick. Train on institutional memories—case law with its slow-turning precedent, oral histories where harm and repair are narrated by those who lived them, scientific literatures with negative results left in. Open the system to audit by outsiders who don’t share its sponsor’s incentives. Not as theater; as recurring interference that makes the shortest path unreliable. None of this guarantees virtue. It just alters the substrate so that simulation has to carry weight rather than skate on surface resemblance.
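One way to see why late, sticky penalties change behavior is a toy value-learning loop (all numbers invented; this is a sketch of the incentive structure, not a real recommender): action A pays well now and deposits harm that lands later; an optimizer that never folds the late penalty back into its estimates keeps choosing A.

```python
import random

# Toy incentive loop: action A pays well immediately but deposits a trust
# penalty that lands LAG steps later; action B pays less and costs nothing
# downstream. Compare learning with and without the late penalty folded in.
LAG = 20

def run(include_late_penalty, steps=500, seed=7):
    rng = random.Random(seed)
    q = {"A": 0.0, "B": 0.0}   # running value estimates
    pending = []                # (due_step, action, penalty)
    for t in range(steps):
        action = max(q, key=q.get) if rng.random() > 0.1 else rng.choice("AB")
        reward = {"A": 1.0, "B": 0.6}[action] + rng.gauss(0, 0.1)
        if action == "A":
            pending.append((t + LAG, "A", -0.8))  # harm arrives later
        if include_late_penalty:
            for _due, act, pen in (p for p in pending if p[0] == t):
                q[act] += 0.05 * (pen - q[act])   # late, and it sticks
        q[action] += 0.05 * (reward - q[action])
    return {a: round(v, 2) for a, v in q.items()}

print("fast proxy only:  ", run(include_late_penalty=False))
print("with late penalty:", run(include_late_penalty=True))
```

The structural claim is small but real: what changes the policy is not a rule bolted on top but a cost that survives the delay and lands in the same estimates the system acts from.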
Real-world scenarios make the difference legible. An AI caregiver in a hospital can emulate bedside manner, but can it remember a ward’s unwritten rules about who is likely to hide pain, or the long tail of what happens after a rushed discharge? A city’s planning model can draw parks wherever the aerials look empty, but can it feel the cost of erasing a pickup soccer league that holds a neighborhood together? Those are not only data gaps; they are failures to simulate the constraints that matter to a community’s subjective experience. The fix begins by refusing to treat experience as a glossy overlay. It is the main thing. It is what a system must become answerable to, even when that answer arrives late and sideways.
I don’t know a cleaner ending. The map keeps borrowing from the ground; the ground keeps resisting; we keep writing models that feel truer than we are ready to be. The work is to choose which constraints to honor and which to relax, then live with what that choice reveals about us.