There is a strange feature of the human mind that most of us only notice slowly, and often uncomfortably, as we get older.
If we are not careful, we begin drifting away from reality.
And the reason has less to do with intelligence or education than we might think. It has more to do with the way our minds handle certainty — and with something deeper still: the way the nervous system is built.
The human brain runs on predictions.
It builds models of the world — called priors — based on past experience. These models help us anticipate what will happen next and reduce uncertainty. Without them, life would feel chaotic. So the brain strengthens predictions that seem to work.
Over time, those predictions become beliefs. Beliefs become narratives. Narratives become identity. And identity begins to feel like reality itself.
This is not a flaw in the system. It is the system doing exactly what it was designed to do. Efficiency. Reduced cognitive load. Faster decisions under pressure. The prior that has worked before gets selected again — faster, with less deliberation, more automatically.
The problem is not that this happens. The problem is that we rarely notice when it stops being useful.
Once a belief becomes strong, something subtle begins to happen.
We start experiencing the world through it. We notice evidence that confirms it. We explain away evidence that contradicts it.
Psychology has many names for pieces of this loop: confirmation bias, cognitive dissonance, motivated reasoning, defense mechanisms. But underneath all those terms is a simple mechanism.
The mind begins referencing itself.
Reality is filtered through the model, and the model becomes harder to change — not because the person is irrational, but because the prior has become high-precision. It was built under conditions of high stress or early necessity. Getting that prediction wrong felt costly. So the brain holds it with high confidence and requires substantial evidence to revise it.
High-precision priors are not errors. They are expensive lessons the nervous system decided to remember very well.
The difficulty is that a high-precision prior also filters out the experiences that would update it. The model selects for confirming evidence, explains away disconfirming evidence, and generates behavior that produces more of the same. The loop becomes self-sealing.
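The precision-weighted updating described above can be sketched numerically. This is a toy model under the standard Gaussian assumption from predictive-processing accounts; the function name and the numbers are illustrative, not part of the Upstream Signal Model itself:

```python
# A minimal sketch of precision-weighted belief updating. The posterior mean
# is a precision-weighted average of prior and evidence: whichever side
# carries more precision (confidence) dominates the result.

def update_belief(prior_mean, prior_precision, evidence, evidence_precision):
    posterior_precision = prior_precision + evidence_precision
    posterior_mean = (
        prior_precision * prior_mean + evidence_precision * evidence
    ) / posterior_precision
    return posterior_mean, posterior_precision

# A flexible (low-precision) prior meets strong disconfirming evidence:
flexible, _ = update_belief(0.0, 1.0, 10.0, 1.0)   # moves halfway, to 5.0

# A high-precision prior meets the exact same evidence:
rigid, _ = update_belief(0.0, 100.0, 10.0, 1.0)    # barely moves, ~0.1
```

Both beliefs see identical evidence; only the stored precision differs. The "expensive lesson" held at precision 100 needs roughly a hundred times more disconfirming evidence to move the same distance.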
The nervous system is not tracking everything equally. It is organized around three domains that evolution has flagged as non-negotiable: orientation, attachment, and autonomy. These are things that, if disrupted, threaten survival or meaning in ways the system cannot afford to ignore.
When one of these domains is chronically threatened — early in life, under sustained pressure, or in conditions of unresolvable uncertainty — the priors built around it become extremely high-precision. They run fast, they run automatically, and they are very resistant to ordinary updating.
This is where the drift begins. Not in cognition. In the nervous system's attempt to maintain orientation, attachment, and autonomy in an environment that felt unreliable.
Human beings do not just hold beliefs internally. We build systems around them.
Beliefs become religious doctrines, political ideologies, cultural norms, personal identities. And once a belief is embedded in language and social structure, questioning it stops being an intellectual exercise. It becomes a threat to belonging.
This is where the Upstream Signal Model becomes important — because what looks like a cognitive problem is actually a nervous system problem. The person isn't defending a belief because they've thought it through carefully. They're defending it because the belief is load-bearing. It is doing orientation work. It is keeping the system stable. And the nervous system experiences challenges to it the same way it experiences challenges to safety.
We don't argue with beliefs. We argue with the signals those beliefs are regulating.
If this process continues unchecked, something gradual happens.
The mind becomes more invested in protecting its model than updating it. Analysis begins serving the belief instead of challenging it. Narratives become more elaborate. Identity becomes more fused with worldview. And the distance between the model and reality slowly increases.
Different people begin inhabiting different psychological worlds — each reinforced by its own language, evidence, and community. You can see this playing out collectively right now. Entire groups living in different realities. And each group, from the inside, experiencing itself as simply perceiving the truth.
This is not primarily a problem of education or information. More information fed into a high-precision prior system does not update the prior. It gets filtered through the existing model. This is why arguments rarely change minds. Why facts don't dissolve ideology. Why people can be confronted with direct evidence and explain it away.
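The filtering can be made concrete with the same kind of Gaussian sketch. Here evidence is discounted in proportion to its distance from the current belief, a hypothetical stand-in for "explaining it away"; the gating function and all numbers are assumptions for illustration only:

```python
import math

# Toy model of a self-sealing belief: observations far from the current
# expectation are down-weighted before they can update the prior.

def self_sealing_update(mean, precision, obs, obs_precision=1.0, gate=1.0):
    # Credibility shrinks fast as the observation departs from expectation;
    # an enormous gate means "take the evidence at face value".
    weight = math.exp(-((obs - mean) ** 2) / (2.0 * gate ** 2))
    effective = obs_precision * weight
    new_precision = precision + effective
    new_mean = (precision * mean + effective * obs) / new_precision
    return new_mean, new_precision

# 100 identical disconfirming observations, filtered through the model:
sealed_mean, p = 0.0, 10.0
for _ in range(100):
    sealed_mean, p = self_sealing_update(sealed_mean, p, 10.0)

# The same 100 observations taken at face value (nothing discounted):
open_mean, q = 0.0, 10.0
for _ in range(100):
    open_mean, q = self_sealing_update(open_mean, q, 10.0, gate=1e9)
```

With the filter in place, a hundred pieces of direct evidence leave the belief essentially at its starting point, while the unfiltered system converges toward the evidence. More information in, no update out: the volume of data is not the bottleneck.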
The update has to happen somewhere other than the narrative layer.
This is one of the model's core claims, and it is worth sitting with.
Analysis — the effort to understand, explain, or reason through experience — is not a neutral observer of the compression stream. It is embedded in it. Analysis runs on behalf of the system that generated the pressure. It uses the same priors, the same categories, the same survival beliefs that shaped the experience in the first place.
This is why insight alone doesn't produce change. You can understand a pattern completely — trace it to its origins, name the belief, build a coherent narrative — and the body still responds the same way in the same situations. Because the pattern doesn't live at the analysis layer. It lives downstream. In procedural memory. In the body. In the high-precision priors that run faster than thought.
Analysis can reach the conceptual layer. It can update narratives. It can build new frameworks — like this one. But it cannot, on its own, reach the procedural priors that generate the emotional and physiological responses analysis is trying to explain.
The insight is real. And the signal underneath it keeps running. Understanding the pattern and being able to feel differently are two different operations running at two different levels of the stack.
This is not a failure of the person or the therapist. It is a description of the architecture.
Deconstruction is a different operation from analysis. Where analysis tries to understand within an existing framework, deconstruction questions the framework itself.
It asks: what am I assuming? What categories am I using that I didn't choose? What is the prior underneath this belief — and was it ever actually tested, or just inherited?
This matters because many of the priors that generate the most pressure are not consciously held beliefs. They are inherited structures — absorbed from culture, family, community — that feel like observations about reality rather than predictions the brain learned to make. Deconstruction makes them visible. And visibility is the first step toward the possibility of revision.
But deconstruction alone is also insufficient. Seeing the prior is not the same as updating it. The nervous system still needs to encounter something different — an experience, a relationship, a somatic reality — that gives the prior something to revise toward.
This is the question the model keeps returning to. If analysis can't reach procedural priors, what can?
The answer is: experience that registers at the level where the prior lives.
Procedural priors were built through embodied experience — especially repeated early experience that taught the nervous system how the world works. To update them, the system generally needs new embodied experience that contradicts the old prediction. And that experience needs to register in the body, not just be processed cognitively.
The common thread among approaches that do produce this kind of change, whether through new experience, relationship, or somatic work, is not technique. It is level. They work at the layer where the pattern actually lives, not just the layer where it gets talked about.
There is a piece of this that gets missed almost entirely in conventional mental health framing: none of the above is possible without capacity.
Capacity refers to the available resources in the system — what this model calls the tank. Sleep, nutrition, safety, physical health, allostatic load, co-regulation, rest. These are not lifestyle add-ons. They are the prerequisite conditions for any other kind of update to happen.
A nervous system running on chronically depleted capacity cannot take in new information well. It cannot tolerate the discomfort of disconfirming evidence. It cannot stay present long enough for relational repair to register. It defaults to high-efficiency, low-energy responses — survival strategies — because that is what a depleted system does.
You cannot update priors from an empty tank. The system in survival mode is not a system in learning mode. These are different operational states.
This is why accommodation matters: not as a concession to weakness, but as a structural prerequisite. Before we ask the system to update, we need to ask: does it have the resources to do that? Is the environment safe enough? Are the basic needs being met? Is the allostatic load low enough to create any margin at all?
Capacity is not the context for good work. Capacity is the mechanism.
The anthropologist and neuroscientist Terrence Deacon offers a concept that maps directly onto this model: constraint.
A constraint is not a limitation in the ordinary sense. It is a structural condition that makes certain outcomes possible and others impossible. Constraints are what give systems their form — what allow complexity to organize, what make certain kinds of work possible at all.
Care is a constraint in this sense. When care is present — when a signal is met rather than managed, when a need is addressed rather than suppressed, when a nervous system is allowed to complete rather than loop — something becomes possible that was not possible before. Not because a problem was solved. Because the constraint was met, and the system could move.
This is the distinction between relief and completion. Relief reduces pressure temporarily without resolving the constraint. Completion resolves the constraint — the loop closes, the prior updates, the signal stops running. The system does not need to keep activating around something that has been genuinely met.
And when the system is not spending its resources managing unresolved constraints — when it is not running loops, maintaining defensive narratives, holding high-precision threat priors on high alert — something else becomes possible.
This is where the model points — not just at what goes wrong, but at what becomes available when things go right.
A nervous system with sufficient capacity, updated priors, and access to attunement does not just experience less distress. It operates differently. It moves differently through the world.
The explore stream is what opens when the system is not in survival mode. Play, learning, creativity, curiosity, grooming, aliveness — these are not luxuries. They are the outputs of a system that has enough. They are what the nervous system does when its constraints are being met and its priors are not on high alert.
This is what the model is pointing toward. Not the absence of difficulty, but a system that can move — that can update, complete, repair, and expand. A machine that runs with coordination rather than chronic compensatory effort.
The individual nervous system cannot solve this alone.
The same processes that create rigidity in individuals — prior strengthening, confirmation bias, social validation, language reinforcement — operate at the collective level too. Cultures, communities, and institutions can develop high-precision priors just as individuals do. And they face the same difficulty: the update has to happen at the level where the pattern lives, not just the narrative layer.
What we need — individually and collectively — is something more coordinated than analysis. More targeted than insight. More attuned than advice.
We need systems that can meet signals at the level where they actually live. That can insert care at the level of the body, the relationship, the environment — not just the story. That can build capacity before demanding flexibility. That can accommodate the constraint before expecting the update.
The signal isn't asking to be solved. It's asking to be met. The machine that can do that — that can receive a signal, interpret it without fusing with its strategy, insert care at the right layer, and allow completion — that is what attunement is. That is what this model is trying to describe.
Wisdom may not be the accumulation of answers.
It may be the ability to remain flexible in the presence of uncertainty. To hold models lightly. To revise beliefs when new evidence appears — not just at the conceptual layer, but in the body, in behavior, in the actual experience of being in the world differently.
This kind of flexibility does not come naturally. The system moves toward rigidity. Certainty is efficient. Priors that have worked before get selected again.
But the capacity to update — to stay curious, to remain open, to allow the signal to complete rather than loop — that can be cultivated. It requires capacity. It requires safety. It requires attuned relationship. It requires the willingness to sit with the discomfort of not-yet-knowing long enough for something new to form.
And it requires a model for what we're actually working with.
That is what this site is attempting to offer.
The Upstream Signal Model draws from predictive processing research (Friston), emotion construction theory (Barrett), allostatic regulation (Sterling & Eyer), constraint theory (Deacon & Kauffman), and clinical observation. It represents theoretical interpretation and conceptual synthesis — not established clinical protocol. This page is for educational and reflective purposes only. It is not therapy, diagnosis, or medical advice.