Here is a thought experiment.
Imagine you are trying to diagnose a problem with a machine. The machine has thousands of moving parts. It runs many different processes simultaneously. Sometimes it overheats, sometimes it stalls, sometimes it vibrates — and all three of those things look the same from the outside.
Now imagine the only tool you have is a single word that means any of those three things — and the word has been used so many times, for so many different problems, that it no longer reliably points at any mechanism in particular. It just means: the machine is struggling.
That is roughly where we are with mental health language.
Not because the people who built this language were careless. But because language works through compression — and compression, by design, loses detail. The word captures the surface pattern. It cannot capture the mechanism. And when the word becomes the lens through which we understand experience, we stop looking for the mechanism. The map gets folded in half and we navigate by a very crude version of the terrain.
This essay is about what that costs us — and why it matters that we do better.
Before we can talk about what's wrong with our language, we need to talk about what language does.
Language doesn't just describe experience. It shapes it. When you apply a word to a felt state — when you move from a raw signal in your body to a concept — you are not labeling something neutral. You are running a prediction. You are telling your brain what category this belongs to, what it means, what caused it, and what should happen next.
This is exactly what the Upstream Signal Model describes. By the time a signal becomes a word, it has passed through multiple layers of compression. Raw sensation becomes pressure. Pressure gets matched to past patterns. Past patterns generate emotional strategies. Strategies get labeled. Labels get narrated. Each layer adds interpretation and loses specificity.
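A toy sketch makes the lossiness concrete. Everything below is invented for illustration: the field names and stage functions are not part of the Upstream Signal Model itself, only a minimal picture of how each stage can discard detail the next stage never sees.

```python
# Illustrative only: each stage keeps less of the original signal.
# The field names are invented for this sketch, not drawn from the
# Upstream Signal Model.

raw_sensation = {
    "location": "chest",
    "quality": "tight, fluttering",
    "intensity": 0.7,
    "onset": "after reading the email",
    "body_context": "poor sleep, skipped lunch",
}

def to_pressure(sensation):
    # Collapse location, quality, and context into a single magnitude.
    return {"arousal": sensation["intensity"], "valence": "unpleasant"}

def match_pattern(pressure):
    # Match against past episodes; the original details are already gone.
    return {"pattern": "threat-like", "confidence": 0.6}

def label(pattern):
    # The final layer: one word stands in for everything upstream.
    return "anxiety"

word = label(match_pattern(to_pressure(raw_sensation)))
print(word)  # "anxiety" -- onset, context, and body detail are unrecoverable
```

Nothing in the final word can be decompressed back into the sensation that produced it. That irreversibility is the point.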
The reason this compression happens is not arbitrary. Predictive systems must reduce complexity; it is the whole design. This is what Karl Friston describes as free energy minimization: the brain's continuous drive to reduce uncertainty at the lowest possible computational cost. A concept allows the system to reuse a pattern instead of computing the world from scratch each time. Compression is not a flaw. But it means that every word you reach for is already a shortcut, and shortcuts lose the details they were built to skip.
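For readers who want the formal anchor, the standard variational statement of the principle can be written as follows, where o is what the system observes, s the hidden causes, and q(s) the system's current belief about those causes:

```latex
F = \mathbb{E}_{q(s)}\!\left[\ln q(s) - \ln p(o, s)\right]
  = D_{\mathrm{KL}}\!\left[\, q(s) \,\|\, p(s \mid o) \,\right] - \ln p(o)
```

Because the KL term is never negative, keeping F low both sharpens the belief and bounds surprise. Reusing a cached concept is the computationally cheap way to do that, which is exactly the shortcut described above.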
The emotion word is not the signal. It is the most compressed prediction the system generated about the signal.
And here is the problem: once you apply the word, it becomes a prior. It shapes what you notice next. It pulls toward confirming evidence and away from contradicting evidence. The word stops being a label and starts being a lens.
Which means that if the word is wrong — if it names the wrong mechanism, if it collapses multiple distinct experiences into one category — it does not just fail to help. It actively interferes. It pulls the system away from the information it needs to update.
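A minimal numeric sketch of that interference, using nothing but Bayes' rule. The two "mechanisms" and every probability below are invented for illustration; the point is only how slowly a strong prior moves.

```python
# Hedged illustration with arbitrary numbers: once a word installs a
# strong prior, evidence favoring the alternative barely updates it.

def posterior(prior_a, likelihood_a, likelihood_b):
    """Bayes' rule for two mutually exclusive hypotheses A and B."""
    evidence = prior_a * likelihood_a + (1 - prior_a) * likelihood_b
    return prior_a * likelihood_a / evidence

# Suppose the observation actually fits mechanism B twice as well as A.
lik_a, lik_b = 0.3, 0.6

open_question = posterior(0.50, lik_a, lik_b)  # no label yet: ~0.33
label_applied = posterior(0.95, lik_a, lik_b)  # word as prior: ~0.90

print(f"open question:  P(A | obs) = {open_question:.2f}")
print(f"label as prior: P(A | obs) = {label_applied:.2f}")
```

With no label, the evidence pulls the belief toward the right mechanism. With the label already installed, the same evidence leaves it almost untouched.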
The real issue isn't that emotional words are imprecise. It's that they are layer-blind.
Emotional experience isn't a flat thing. It has structure. A signal moves through a system that has at least five distinct levels of processing, each doing different work, and each requiring a different kind of intervention to reach.
Everyday emotional language mixes these layers without distinction. "Anxiety" might mean elevated arousal at layer 2, avoidance strategy at layer 3, threat prediction at layer 4, cognitive rumination at layer 5 — or some entanglement of all of them at once. The word does not say. The word cannot say, because it was built downstream of all of them.
Emotional words are not wrong. They are layer-blind. And when layers collapse, mechanism disappears — along with the precision needed to know what can actually reach it.
This is the real problem with mental health language. Not that words are too vague. But that they make invisible the very distinctions that matter for intervention. Once mechanism disappears, intervention has nothing precise to aim at.
Most mental health suffering persists not because we lack information, but because we keep trying to intervene at the label layer — the most downstream, most compressed, most detached-from-mechanism layer available.
The category "depression" groups together multiple mechanisms that may share a surface presentation but arise from different predictive dynamics. That is not a complaint about oversimplification. It is a mechanistic claim. If the brain is a predictive system — and emotional experience is constructed from signals moving through layers of prior-based compression — then the same surface presentation can be produced by at least several different failures at different layers. Flattened affect, low motivation, withdrawal, cognitive slowing: the output looks identical. The generating mechanism does not.
A system signaling for rest, a loop that cannot exit, a depleted resource budget, an update still in progress: these are different states. They have different mechanisms. They require different responses. And in some cases, the intervention that helps one would actively harm another. Rest and validation given to a stuck loop may just give the loop more fuel. Pushing toward feeling for someone in resource depletion may cause harm. Behavioral activation for someone mid-revision may abort the update before it completes.
We have one word, one first-line intervention, and decades of research averaged across all four — which is part of why the research keeps returning inconsistent results. We are measuring outcomes across different phenomena with the same name.
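A toy simulation shows the shape of the problem. The two hidden states, their effect sizes, and the noise level below are all invented numbers, assumed purely for illustration: an intervention that genuinely helps one state and genuinely harms another averages out to almost nothing when the trial cannot see the mechanism.

```python
import random

random.seed(0)

# Purely illustrative effect sizes for two hidden states that share one
# diagnostic label. These numbers are invented, not clinical data.
TRUE_EFFECT = {"stuck_loop": +0.8, "revision_in_progress": -0.6}

def trial(n=1000):
    outcomes = []
    for _ in range(n):
        state = random.choice(list(TRUE_EFFECT))     # hidden mechanism
        noise = random.gauss(0.0, 1.0)               # measurement noise
        outcomes.append(TRUE_EFFECT[state] + noise)  # measured response
    return sum(outcomes) / n

# A strong help and a real harm net out to a small, unstable average.
print(f"mean measured effect: {trial():+.2f}")  # expectation +0.10; runs vary
```

The "inconsistent results" are not noise in the usual sense. They are two real, opposite effects forced into one number.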
This is not a criticism of clinicians. It is a description of what happens when the map doesn't have enough resolution to navigate by.
The word "trauma" has become so expansive that it now points at almost anything that hurt. Which means it no longer reliably points at a mechanism.
But if we look at what is actually happening in the nervous system, at least three distinct things get called trauma — and they are not the same thing.
When we talk about trauma research, trauma interventions, trauma-informed care — we are averaging across these. We are comparing outcomes without specifying which mechanism was present. We are asking which approach works without first asking: works for which of these?
The most important example may be shame.
Shame is used as though it describes a single experience. But the word collapses something that needs to be split — because the two things it names are not just different in degree. They are mechanistically opposite. One is a signal that something important is happening. The other is a premise that the system has organized itself around. Treating them the same way is like using the same intervention for a warning light and a broken engine.
The culture around shame work largely addresses the second kind. Which means that people who are experiencing the first kind — who are in the middle of a legitimate, important, costly update — are sometimes being treated as though the feeling itself is the wound. As though what they need is to feel less shame, rather than to be supported through the revision the shame is tracking.
Some shame is the nervous system doing exactly what it should do. And we don't have a word for that. So we treat it like a problem to solve instead of a process to support.
This is not a minor distinction. It changes everything about the intervention.
The same logic applies across the diagnostic landscape. Autism as a category describes meaningfully different experiences — sensory processing differences, social prediction model differences, interoceptive differences, executive function differences, demand avoidance — that likely have distinct mechanisms, distinct profiles of difficulty, and distinct needs.
Anxiety is similarly broad. The mechanisms behind panic disorder, generalized anxiety, social anxiety, and OCD share a surface similarity: elevated arousal, avoidance, high-precision threat priors. But the specific priors, the specific loops, and the specific points of intervention are different. Treating them all as "anxiety" and measuring outcomes accordingly tells us very little.
In each case, the same problem is operating. A word that was built to describe a surface pattern gets treated as though it describes a mechanism. Research is designed and interventions are measured at the level of the word. And we keep getting inconsistent results that we attribute to individual variation — when a significant portion of that variation may simply be the word pointing at multiple different things.
It is not just the diagnostic categories. The problem goes deeper — all the way into everyday emotion language.
Emotion words — the raw vocabulary of feeling — are themselves predictions. They are generated downstream in the compression process. By the time a signal has been labeled "anxious" or "angry" or "sad," it has been through multiple layers of prior-based processing and is carrying assumptions about cause, meaning, category, and appropriate response that may or may not reflect what is actually happening in the body.
But here is what is almost never asked: what is this word pointing at, exactly?
When someone says "I'm anxious," we do not know which of these is happening. And more importantly, they often don't know either — because the word flattens all five into the same category and forecloses the inquiry before it begins.
The word becomes a destination instead of a starting point. The investigation stops. The prior gets confirmed. And the actual mechanism keeps running, unnamed, underneath.
This is not only a clinical problem. It is a cultural one.
Most people do not have access to a therapist, let alone a therapist who works at the mechanistic level. Most people learn to understand themselves through language — through the words they hear from family, culture, school, social media, and the therapeutic vocabulary that has slowly filtered into everyday speech.
And the vocabulary is getting more common without getting more precise. We have more words about mental health than at any point in history. We are using them more freely. We are encouraging people to name their experience — which is genuinely good. But if the names we are offering are over-compressed, if they are pointing at surfaces rather than mechanisms, we may be giving people tools that feel like insight and function like priors. The word creates the sense that something has been understood. The loop closes at the label. The signal keeps running.
You cannot update a system using the most downstream output of that system. You cannot understand the signal by examining the word it produced. The map has to be built at the level of the mechanism — or it cannot navigate the terrain it is trying to describe.
Worse: the language around mental health often carries embedded predictions. "Depression" predicts something about duration, about cause, about treatment. "Trauma" predicts a particular kind of wound and a particular kind of healing. "Anxiety disorder" predicts that the system itself is disordered — not that it is running a rational response to something it learned was necessary.
When someone accepts a label, they may be accepting its embedded predictions along with it. And those predictions shape what they look for, what they try, and crucially — whether they believe something different is possible.
What we need is language that points at mechanisms instead of surfaces.
Not diagnostic categories, but descriptions of what is happening: which layer, which loop, which kind of prior, which kind of update is being called for. Not "depressed," but — is this rest, or is this a loop that cannot exit? Not "anxious," but — is this a signal, or a strategy, or a premise, or the felt texture of an update in progress?
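As a purely hypothetical sketch of what such a description could look like in structured form. Every field name and value below is an assumption invented for this example, not a vocabulary the framework has actually specified:

```python
from dataclasses import dataclass
from typing import Optional

# Sketch only: illustrates what a mechanism-level note might record
# that a single label cannot. All names here are invented.

@dataclass
class MechanismNote:
    layer: str               # e.g. "arousal", "strategy", "prediction"
    dynamic: str             # e.g. "signal", "stuck_loop", "update_in_progress"
    prior: Optional[str]     # the learned expectation the state may serve
    update_called_for: bool  # is the system mid-revision?

# Two states that would both be labeled "anxious":
before_talk = MechanismNote("arousal", "signal", None, False)
old_threat = MechanismNote("prediction", "stuck_loop",
                           "visibility is dangerous", True)
```

The first record describes a body doing its job before a performance. The second describes a loop organized around a learned premise. One word would have covered both, and pointed at neither.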
This is harder. It requires sitting with more uncertainty for longer. It requires resisting the compression reward that comes from applying a familiar category. And it requires a model precise enough to hold the distinctions — which is part of what this framework is trying to build.
The goal is not to replace clinical language entirely. Categories have their uses — for communication, for research, for systems that require shared vocabulary. But categories should be treated as what they are: compressed maps that point roughly at a region of terrain. Not the terrain itself. Not the mechanism. Not the answer to the question of what this particular person's particular signal is actually trying to do.
The signal is always more specific than the word. The mechanism is always more specific than the diagnosis. And the reason intervention so often fails — or helps some people and not others, or produces effects that don't replicate — is not that the system is too complex to understand. It is that we keep addressing the label layer when the problem lives somewhere else entirely.
The path back toward the signal — toward what is actually happening, in this body, in this system, at this layer — requires a different kind of vocabulary than the one we have been using.
That is what this project is trying to build toward. Not better labels. A map that knows which layer it is pointing at.
The Upstream Signal Model draws from predictive processing research (Friston), emotion construction theory (Barrett), allostatic regulation (Sterling), and clinical observation. The distinctions drawn in this essay — particularly regarding depression subtypes and trauma mechanisms — represent theoretical interpretation and conceptual synthesis, not established clinical protocol or formal diagnostic criteria. This page is for educational and reflective purposes only. It is not therapy, diagnosis, or medical advice.