What You Know vs. What You're Assuming
- Eric Kebschull

- Apr 21
Many organizations think they are good at diagnosing problems. They gather data, analyze it in Excel (er, I mean Claude in Excel... because it's 2026), build elaborate slide decks (again, probably AI now), discuss them in long, drawn-out meetings, and form a "consensus" on how to solve the problem... which is likely a forced agreement, or an implicit, peer-pressured deference to upper management's view.
But for argument's sake, let's say it was a relatively honest and organic consensus. It feels right, it feels well formulated, and it feels comprehensive. Thus, it is treated as a thorough diagnosis to work from.
But what organizations rarely notice is that the diagnosis contains three different kinds of claims being treated as one: things they actually know, things they've reasonably inferred, and things they're quietly assuming.
Allow me to illustrate this in action: I worked with an organization that had been watching their primary output metric grow steadily for nearly a decade. Revenue was healthy, leadership was confident, and there were no red flags or rising smoke that could be seen with the naked eye.
What the data actually showed, once we slowed down and looked carefully, was that the growth rate had been decelerating for three years before the numbers finally reversed. Sure, the output was still growing, but the trajectory had already changed direction.
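The arithmetic behind this is worth seeing concretely. Here is a minimal sketch, with numbers invented purely for illustration (not the organization's actual figures), of how a metric can keep rising every single year while its growth rate quietly falls:

```python
# Hypothetical annual values of an output metric (illustrative only).
outputs = [100, 120, 138, 150, 156]

# Year-over-year growth rate: (this year - last year) / last year.
growth_rates = [
    (curr - prev) / prev for prev, curr in zip(outputs, outputs[1:])
]

# Every change is positive: the headline metric is still "growing."
assert all(rate > 0 for rate in growth_rates)

# Yet each growth rate is smaller than the one before: deceleration.
assert all(later < earlier
           for earlier, later in zip(growth_rates, growth_rates[1:]))

print([round(rate, 3) for rate in growth_rates])  # roughly 0.2, 0.15, 0.087, 0.04
```

Both assertions pass on the same data. A dashboard that only shows the level sees growth; a dashboard that shows the rate of growth sees the turn three years early.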
When I presented the organization with these findings, the room was quite surprised. After all, how could growth be a bad thing? It is perfectly reasonable to assume it can't be, but that is where nuance matters. I then asked them what they knew at the time versus what they were assuming; that's when the conversation got uncomfortable, quickly!
In short, they knew output had grown, they had inferred that growth meant the model was working, and they had assumed that a growing number meant a healthy system.
But those are three different claims, and only one of them came from the data.
The diagnosis they'd been operating on was not wrong, exactly. But it was built on a foundation that had never been examined. No one had separated the tiers, so no one had thought to question whether growth and health were actually the same thing in this context.
Unfortunately, they were not.
This flattening of tiers is more common than most people want to admit. The inferences feel so natural that they stop feeling like inferences, and the assumptions stop feeling like assumptions. By the time a decision gets made, the information has been compressed into "what we know," and the load-bearing assumptions are invisible. Questioning those assumptions becomes far more difficult as a result. Compound that with the lack of an adequate inquiry muscle for surfacing and testing assumptions, and with the human dynamics attached to those decisions (for example, the avoidance of threat or embarrassment when an assumption is called out, or the perceived loss of competency and relevancy when an assumption is revealed to be faulty).
So what does one do to change this? The move is not complicated, but it requires discipline.
Before your next significant diagnosis conversation, try this: take the central claim your team is operating on and ask three questions: 1) what do you actually have data on, 2) what are you inferring from that data, and 3) what are you assuming that the data doesn't actually tell you?
You will almost always find something in that third category doing significant structural work in your thinking. Name it, write it down, and treat it as a hypothesis rather than a fact until you have reason to do otherwise. Practicing this discipline gives teams a clearer sense of where the data ends and the room's interpretation begins. That distinction, when consistently maintained, is one of the more underrated capabilities for organizational adaptability and organizational learning.


