Operational Learning: Cadence as Control

If nobody owns the design of your operating rhythm, the rhythm stops controlling anything.

The weekly operations review runs every Monday at 9 a.m. It has run every Monday at 9 a.m. for as long as anyone can remember.

The VP of Supply Chain opens with the same dashboard. Inventory levels, fill rates, lead times. The numbers move a little each week. Sometimes someone asks a question. Usually the meeting ends ten minutes early.

Three floors up, the commercial team reviews pipeline every Friday. They look at forecasts, conversion rates, regional performance. Their numbers tell a different story than the supply chain numbers, but the disconnect goes unnoticed because the two reviews never touch. Different rooms. Different rhythms. Different calendars.

The problem isn't in either meeting. It's in the space between them.

A demand signal shifts mid-month. The commercial team sees it Friday. Supply chain won't look at it until Monday. By Monday, the signal has aged into a mismatch. Orders are committed. Materials are allocated. The Monday review surfaces a variance that was preventable four days earlier, if anyone had been looking at the right altitude, at the right time.

The escalation lands on the COO's desk Tuesday morning. The question comes fast: Why didn't we catch this sooner?

The answer nobody gives out loud: because the cadence made it structurally uncatchable.


That gap between two perfectly reasonable review cycles is where a surprising amount of operational damage accumulates. Not because anyone made a bad decision. Not because the data was wrong. But because the rhythm of review didn't match the speed of what was actually moving.

Cadence isn't a schedule. It's your control system. And when nobody owns its design, it stops controlling anything.

The operating surface I described earlier in this series, the minimum set of places where learning makes contact with decisions, only works if the rhythm underneath it matches the speed of what's actually moving. That rhythm is cadence. And in most organizations, it belongs to no one.

Most organizations don't think about cadence as something that was designed. They think about it the way they think about the conference room layout or the fiscal calendar. It's just there. Monday ops review. Thursday leadership sync. Monthly business review. Quarterly strategy session. The rhythm runs. Nobody asks why it runs at that frequency, or whether that frequency still matches the operational reality it was built to serve.

There was a reason, once. Someone set that Monday review because Monday was when the weekend production data came in and decisions needed to be made before the week accelerated. Someone set the monthly business review because the reporting cycle closed on the 25th and leadership needed five days to digest the numbers. The frequency was a design choice, matched to the speed of the signals it was meant to surface.

But conditions change. Reporting cycles compress. Teams restructure. The signals move faster, or slower, or come from entirely different sources. The cadence doesn't adjust. It just keeps running.

The result is a dormant cadence. Not a broken one. A dormant one. The meeting fires. The agenda circulates. The dashboard populates. Attendees show up, read numbers, nod, leave. The cadence generates every artifact of a functioning review, except the one that matters. Nothing downstream changes because of it.

Lencioni (2004) called unproductive meetings the most painful problem in business. But the deeper cost isn't the meeting that wastes time overtly. It's the one that looks productive and changes nothing. A dormant cadence consumes calendar space, generates slide decks, produces action items that roll forward week after week. From a distance, it looks like governance. Up close, it's furniture.

Two failure modes feed dormant cadence, and both operate quietly.

The first is the inheritance problem. A leader transitions into a new role and inherits an existing rhythm. The Monday review has been running for two years. It has a standing invite, a recurring deck template, a distribution list. The new leader sits through three cycles. Same agenda. Same attendees. Same metrics reviewed in the same order. After the third cycle, they might wonder: Why is this weekly? But the answer doesn't surface, because the original design intent has been forgotten. The cadence has outlived the conditions that created it. Changing it feels less like a design decision and more like a political statement. So it runs. Quarters pass. The rhythm persists. The reason doesn't.

The second is the seam problem. Each function builds a cadence that makes sense for its own operating reality. Finance reviews monthly because that's when the books close. Operations reviews weekly because production cycles demand it. Sales reviews biweekly because pipeline velocity sits somewhere in between. Inside each function, the rhythm is rational. The metrics look fine.

But the connections between cadences belong to no one. Inputs from one team arrive after the other team's review has already closed. Decisions fall into dead space between two calendars that were never designed to work together. The operators who live at those seams can feel the mismatch. They see the variance building. They know the signal is aging. But they have no forum to surface it, because the cadence architecture doesn't include a connection point between the two rhythms. Leadership can't see the damage because their view is organized by function, not by flow.

The cost of that invisibility extends beyond any single missed handoff. Every signal is only as timely as the cadence that surfaces it. Every reopen trigger is only as real as the rhythm that checks it. Drift doesn't just accumulate between standards and practice. It accumulates between reviews, building in the silence when no one is looking.
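
One way to see how much timeliness a seam can eat is to put rough numbers on it. The sketch below is purely illustrative: the weekday conventions, function names, and review days are mine, not anything from the story above, and the arithmetic assumes a signal has to surface in one review before the other rhythm will look at it.

```python
# Illustrative sketch only. Weekdays are 0=Monday .. 6=Sunday; the review
# days and the two-step handoff are assumptions made up for this example.

def days_until(signal_day: int, review_day: int) -> int:
    """Days a signal waits from the day it appears until the next review."""
    return (review_day - signal_day) % 7

def worst_case_signal_age(first_review_day: int, second_review_day: int) -> int:
    """Worst-case age of a signal by the time the second rhythm sees it,
    assuming it must surface in the first review before the second will look."""
    return max(
        days_until(day, first_review_day) + days_until(first_review_day, second_review_day)
        for day in range(7)
    )

# Commercial reviews on Friday (4), supply chain on Monday (0).
print(worst_case_signal_age(4, 0))  # 9: a signal landing on Saturday waits nine days
```

The exact number doesn't matter. What matters is that it's a property of the two calendars, fixed before anyone in either meeting makes a single decision.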

A schedule repeats. A control system calibrates. The distinction starts with frequency. Is the rhythm matched to how fast things actually move, or to calendar convenience? A domain where risk shifts weekly gets a weekly look. A domain where conditions are stable for months gets a monthly look, and nobody apologizes for it. The frequency is a deliberate choice, revisited when conditions change, not an inherited default that persists because changing it would require explaining why.
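
If the question is whether a rhythm is matched to how fast things move, one hedged way to frame it is to compare each review's interval against how long its signals can drift before the drift becomes an escalation. The domains, numbers, and thresholds below are invented for illustration; this is a framing device, not a formula from any of the sources cited here.

```python
# Illustrative only: a rough calibration check. The tolerance figures and the
# "3x" slack factor are arbitrary assumptions, there to show the comparison,
# not to prescribe it.

reviews = {
    "weekly ops review":       {"interval_days": 7,  "drift_tolerance_days": 5},
    "biweekly pipeline call":  {"interval_days": 14, "drift_tolerance_days": 21},
    "monthly business review": {"interval_days": 30, "drift_tolerance_days": 90},
}

for name, r in reviews.items():
    interval, tolerance = r["interval_days"], r["drift_tolerance_days"]
    if interval > tolerance:
        verdict = "too slow: drift can outrun the rhythm"
    elif interval * 3 < tolerance:
        verdict = "maybe too frequent: the look costs more than it catches"
    else:
        verdict = "roughly calibrated"
    print(f"{name}: {verdict}")
```

In this made-up set, two rhythms come out roughly calibrated and one comes out too slow, which is exactly the kind of finding that never surfaces if nobody owns the calibration.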

But calibration means nothing if no one holds the instrument. Someone has to be accountable not just for what gets reviewed, but for whether the rhythm itself is still right. Plenty of people own the content of a review. The finance director owns the numbers. The operations lead owns the production metrics. The program manager owns the project updates. But the design of the review itself has no owner. Its frequency. Its attendee list. Its connection to adjacent rhythms. Its continued relevance. Cadence design sits in the space between roles, which means it sits unclaimed.

And even a well-calibrated, well-owned cadence creates blind spots if the connections between rhythms go unattended. The gap between two internally rational cadences needs an explicit owner. Not someone who attends both meetings. Someone whose job includes watching whether the timing between those rhythms is producing dead space. When the weekly review and the monthly review create a structural gap, someone needs to own that gap as a design flaw, not accept it as an inevitability.

This is a room most ops leaders recognize but rarely get to design from scratch. I've been fortunate to have that opportunity more than once, and the pattern that works is not what most people expect. It doesn't start with redesigning the cadence. It starts with listening to it.

Every time I've taken on a new team or a new segment, the first move is the same. Sit through the existing rhythm. Understand it. Learn why it runs the way it does, even when the original reason has been forgotten. Wrap your arms around the technical underpinnings so that when ownership conversations happen, you know exactly what you're expected to own and what the real dependencies are. That listening phase isn't passive. It's the prerequisite for earning the right to change anything.

The redesign happens later, and it happens with the team, not to them. The most important cadence conversation I've been part of wasn't one where I walked in with a new operating rhythm and rolled it out. It was one where we sat down together and asked: what do we actually need to look at, how often does it move, and what's the right frequency to catch problems before they become escalations?

Partway through, the questions changed. The team stopped asking what we should cover and started asking what we actually needed to decide. It was a quiet crossing, from agenda thinking to ownership thinking, and it happened without anyone announcing it. The cadence that came out of that conversation wasn't mine. It was theirs.

And the proof that it took hold wasn't in the metrics, although those improved. The proof was in what the team did afterward. They protected it. They didn't let it bloat with agenda items that belonged in a different forum. They pushed back when someone tried to add a standing topic that didn't earn its place. They held the line on the design because they recognized it as their operating instrument, not something imposed from above. Edmondson (2019) would call that psychological safety in action: a team secure enough to challenge how things work and invested enough to protect what they built together.

That's cadence as a control system. Not because the leader enforces the rhythm, but because the team owns the calibration.

The trade-off is patience. You don't get to redesign on day one. You can't walk into an inherited cadence and declare it dormant before you understand what it was built to carry. The listening costs time. The collaborative design costs more. But the payoff is a rhythm the team will defend, one that survives your next transition because it belongs to the operating system, not to the person who set it up.

None of this argues for more meetings or faster cycles. The answer to a dormant cadence isn't acceleration. It's deliberate calibration. Some reviews should speed up. Some should slow down. Some should stop entirely, because the conditions they were built for no longer exist and the time would be better spent elsewhere.

The question isn't whether your cadence is fast enough. The question is whether anyone is accountable for whether it's still right.

Look at your current cadence architecture. Not the content of the reviews. The architecture itself. Which rhythms were deliberately designed for current conditions? Which were inherited from a prior leader or a prior operating reality? Where are the seams between functions where signals age and decisions fall into dead space?

Which of your standing reviews would still run exactly the same way if you left tomorrow, and is that a sign of good design, or just inertia?

Sources

Bernstein, E., Bunch, J., Canner, N., & Lee, M. (2016). Beyond the holacracy hype. Harvard Business Review, 94(7), 38–49.

Edmondson, A. C. (2019). The fearless organization: Creating psychological safety in the workplace for learning, innovation, and growth. Wiley.

Kaplan, R. S., & Norton, D. P. (2001). The strategy-focused organization: How balanced scorecard companies thrive in the new business environment. Harvard Business School Press.

Lencioni, P. (2004). Death by meeting: A leadership fable about solving the most painful problem in business. Jossey-Bass.

Sull, D., & Eisenhardt, K. M. (2015). Simple rules: How to thrive in a complex world. Houghton Mifflin Harcourt.

Worley, C. G., & Lawler, E. E. (2006). Designing organizations that are built to change. MIT Sloan Management Review, 48(1), 19–23.