Operational Learning: Signals vs. Noise
If you can't tell signal from noise, you'll manage opinions and call it leadership.
The QBR deck lands in everyone's inbox the night before. Thirty slides. Nine dashboards. Every metric color-coded green or yellow. On-time delivery: 94%. Customer satisfaction: 4.1 out of 5. Inventory turns on target. Forecast variance within acceptable range.
The VP of Operations walks into the room knowing something the deck doesn't show. For six weeks, her team has been running a manual workaround to hit the on-time number. Two people, every Tuesday and Thursday, pulling orders by hand and rerouting them before the system closes the week. Nobody asked them to start. Nobody told them to stop.
The metric is accurate. The operation is held together with tape.
The meeting runs ninety minutes. Finance presents their numbers. Sales presents theirs. Supply chain has a third version. The conversation is mostly about whose numbers are right. The output is three action items and a follow-up meeting scheduled for two weeks out. Nothing about the workaround gets mentioned. Nobody designs a fix. The metric stays green.
Most large organizations don't lack data. They lack a way to classify it. They've invested in the dashboards, the platforms, and the strategy decks. The data is cleaner than it's ever been. The presentations are sharper. The meetings are longer. Yet the gap between what the numbers say and what the operation actually needs has never been wider.
This gap isn't a data-quality problem. It's that nobody has built a system to separate what's worth acting on from what's just worth knowing.
Information tells you what happened. A signal tells you what to do about it.
They look identical on a dashboard.
They are completely different in practice.
This is a room most operations leaders will recognize. A leadership team sits down for a weekly review. Every metric is formatted the same way. Same font, same color logic, same visual weight. You've seen the version where a supply chain director spends twelve minutes defending a fill rate number that nobody in the room is going to act on regardless of the outcome. Decision-grade numbers sit next to context numbers sit next to numbers nobody has validated in months. Everything looks equally important because everything is presented the same way. So the first ten minutes become a negotiation about whose version is right. By the time the room agrees on reality, there's no time left to decide what to do about it.
That's not a reporting failure. That's a design failure.
The on-time delivery number in that QBR was information. Accurate, consistently tracked, professionally formatted. But it wasn't a signal. Nobody had defined what a signal would look like. Nobody had said: if we are manually routing more than a certain number of orders per week to hold this number, that is a trigger. That warrants a conversation. That changes the decision.
Without that design, the workaround stayed invisible. Not because anyone was hiding it. Because the system had no way to surface it.
Here's the test. A signal has three properties that most reporting data never has. A named owner. Not a team, not a department. A person who is accountable for acting on it when it moves. A defined threshold. The specific point at which this number stops being context and becomes a trigger. And a clear answer to one question: what changes if this moves?
If the answer is "we'd want to know" or "we should probably discuss it," you have information. If the answer is "we would stop, redirect resources, escalate, or restart a decision," you have a signal.
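That test is concrete enough to sketch. A minimal sketch in Python, assuming a team wanted to encode the three properties as data; every metric, owner, and threshold below is hypothetical, not drawn from any real dashboard.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Metric:
    """One dashboard number and the three properties that would make it a signal."""
    name: str
    owner: Optional[str] = None        # a named person, not a team or department
    threshold: Optional[float] = None  # the point where context becomes a trigger
    action: Optional[str] = None       # what concretely changes if this moves

def is_signal(m: Metric) -> bool:
    # "We'd want to know" is information; a defined owner, threshold,
    # and action make it a signal.
    return all([m.owner, m.threshold is not None, m.action])

# Hypothetical examples: accurate and tracked is not the same as actionable.
on_time = Metric("on-time delivery")
reroutes = Metric("manual reroutes per week", owner="VP Operations",
                  threshold=25, action="convene and redesign the routing process")

for m in (on_time, reroutes):
    print(m.name, "->", "signal" if is_signal(m) else "information")
```

The point of the sketch is that signal status is a property you design in advance, not something a number earns by being accurate.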
Most dashboards mix all of those together. And because they look the same, they get treated the same. That's how a pricing decision gets relitigated every quarter, not because conditions changed, but because nobody defined what a change in conditions would actually look like. That's how a project KPI stays green while the team works nights to keep it there. The numbers told the truth. The operation wasn't in them.
The fix isn't reducing what you track. It's classifying what you track so the conversation does different work.
Three categories cover most operating environments. They're simple, but the discipline of keeping them separate is what changes the operating rhythm.
The first is decision-grade. This number has a named owner, a defined action threshold, and a clear trigger. When it crosses, the owner doesn't wait for the next scheduled review. They act, escalate, or convene. The threshold is agreed in advance, not debated in the moment. These are the metrics you'd bet your operating decisions on. In manufacturing environments, the signal that matters is often yield variance by line. Not yield itself, which is informational. Variance by line, because that's the leading indicator of a process drift nobody can see in the aggregate.
The second is informational. Useful context. No action required. Think of the customer satisfaction score sitting at 4.1 in that QBR. Stable for three quarters. Nobody was going to change resource allocation based on it, but every quarter someone asked a question about it and the room lost five minutes. These metrics belong in the deck. They don't belong in the conversation about what to do next. When they start generating debate, that's usually a sign they've been misclassified.
The third is needs validation. Tracked but not yet trusted.
This is the category that quietly does the most damage. Hubbard's research on measuring intangibles makes the point well: organizations routinely treat unvalidated metrics as decision-grade simply because they've been on a dashboard long enough. The version that shows up most often is a demand forecast or pipeline metric that's never been back-tested against actuals. It lives on the weekly deck for months. People start planning headcount around it. Nobody remembers when that started. Nobody agreed it should have. Metrics in this category need an owner, a proxy to validate against, a timeline, and clear criteria for graduation. Without that structure, they drift into decision-grade by default.
The discipline is to keep all three separate. Not on different slides. In the way the conversation is structured. Decision-grade runs first, with owners present and triggers visible. Informational runs second, quickly, without debate. Validation items get a brief status update and a date. A well-run operating review can cover all three in sequence and still finish on time. But only if the classification was done before the meeting, not inside it.
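That sequencing can be made mechanical. A sketch, assuming each metric already carries a category label agreed before the meeting; the metric names and labels here are illustrative only.

```python
# Classification happens before the meeting; the agenda just enforces it.
AGENDA_ORDER = {"decision-grade": 0, "informational": 1, "needs-validation": 2}

# Hypothetical pre-classified review inputs.
review = [
    ("customer satisfaction", "informational"),
    ("yield variance by line", "decision-grade"),
    ("demand forecast accuracy", "needs-validation"),
    ("manual reroutes per week", "decision-grade"),
]

# Decision-grade runs first with owners and triggers visible; informational
# runs second, quickly; validation items close with a status and a date.
agenda = sorted(review, key=lambda item: AGENDA_ORDER[item[1]])
for name, category in agenda:
    print(f"{category:16} {name}")
```

Python's sort is stable, so metrics within a category keep their original order; the only thing the code decides is which category speaks first.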
This is where the trade-off is worth naming. When you triage your signals, you're giving up the comfort of treating every metric as equally important.
Some leaders resist this because it feels like deprioritizing things that matter. It isn't.
It's the only way to make sure the things that actually require action get the attention they deserve, instead of getting buried in the same deck as the things that are just good to know.
What you protect: decision speed, clarity under pressure, and the ability to act on real conditions instead of whichever argument landed last. What you give up: the feeling that tracking everything means you're on top of everything.
That feeling was always an illusion. The VP of Operations knew her on-time number was held together with a workaround. The dashboard felt complete. The operation wasn't.
You don't need a new digital tool or platform to close this gap. You need a conversation and a classification. Pull the metrics that show up most often in your operating reviews. Run each one through three questions. What decision changes if this moves by 10%? Who is the named owner accountable for acting on it? What threshold makes it urgent versus notable?
If you can't answer all three, it's not decision-grade yet. That doesn't mean you stop tracking it. It means you stop pretending it's a trigger. Expect some resistance the first time you run this exercise. Every number on a dashboard belongs to someone and reclassifying it as informational can feel like a demotion. The goal isn't to shrink the dashboard. It's to be honest about what each number is actually asking the team to do. That honesty is what makes the operating review useful instead of theatrical.
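The exercise can be run as a simple audit. A hedged sketch: the three questions are the ones above, and every dashboard entry, owner, and answer is hypothetical.

```python
# The three triage questions; None means the room has no answer yet.
dashboard = {
    "fill rate": {
        "decision if it moves 10%": "expedite or reallocate inventory",
        "named owner": "supply chain director",
        "urgency threshold": 0.92,
    },
    "on-time delivery": {
        "decision if it moves 10%": None,
        "named owner": None,
        "urgency threshold": None,
    },
}

def triage(answers):
    missing = [q for q, a in answers.items() if a is None]
    if not missing:
        return "decision-grade"
    # Still tracked, but honestly labeled: not a trigger yet.
    return "not a trigger yet (unanswered: " + "; ".join(missing) + ")"

for name, answers in dashboard.items():
    print(f"{name}: {triage(answers)}")
```

The output isn't a smaller dashboard; it's the same dashboard with each number labeled by what it's actually asking the team to do.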
Field-Test Question: What are the three metrics you'd bet your operating decisions on today, and what would make you stop betting on them?
Sources
Davenport, T. H., & Harris, J. G. (2007). Competing on analytics: The new science of winning. Harvard Business School Press.
Hubbard, D. W. (2010). How to measure anything: Finding the value of intangibles in business (2nd ed.). John Wiley & Sons.
Kaplan, R. S., & Norton, D. P. (2008). The execution premium: Linking strategy to operations for competitive advantage. Harvard Business Press.
Liker, J. K., & Convis, G. L. (2012). The Toyota way to lean leadership. McGraw-Hill.
Pfeffer, J., & Sutton, R. I. (2006). Hard facts, dangerous half-truths, and total nonsense: Profiting from evidence-based management. Harvard Business School Press.