Expect the Unexpected (And Build for It)

THE ASSUMPTION PROBLEM

Every system is built around what we expect. The typical user, the usual transaction, the scenario most likely to occur. This is just practical — you design for what you know, for what happens most of the time. Known knowns. It makes sense to start there. But somewhere along the way, designing for the expected becomes assuming the unexpected won't arrive. And that assumption is where most systems quietly begin to fail.

What systems struggle to account for is what falls outside that design. The input that behaves unexpectedly, the scenario that sits just beyond what was anticipated, the combination nobody thought to test. These are the unknown unknowns — and how a system responds to them says far more about its quality than how it handles everything that was planned for.

The unknown unknowns are not the exception to the rule. They are the test of it.

IN DATA AND BI

Anyone who has worked in data long enough has a story like this. An automated sales report runs every morning without issue — until a new market is onboarded and their source data uses DD/MM/YYYY where the system expects MM/DD/YYYY. For weeks, transactions on dates where the day value exceeds 12 are silently dropped. The dashboard shows that market underperforming. Leadership pulls back investment. Eventually a manual reconciliation surfaces the truth. A date format. A regional exception. A business decision made on broken ground.
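
A minimal sketch of the kind of guard that would have surfaced this on day one, assuming a Python pipeline; the format constant and function name are illustrative, not any specific implementation:

```python
from datetime import datetime

EXPECTED_FORMAT = "%m/%d/%Y"  # the MM/DD/YYYY format the pipeline assumes

def parse_transaction_date(raw: str) -> datetime:
    """Parse a transaction date strictly, refusing to drop rows in silence."""
    try:
        return datetime.strptime(raw, EXPECTED_FORMAT)
    except ValueError as exc:
        # Fail loudly: a new market feeding DD/MM/YYYY breaks immediately,
        # not weeks later during a manual reconciliation.
        raise ValueError(f"Unexpected date format: {raw!r}") from exc
```

The point is not this specific check. It's that the pipeline refuses to proceed quietly when an input falls outside what it was built for.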

It happens in subtler ways too. A cost centre stops populating a field — a quiet upstream change — and nulls flow in where numbers should be. The system treats null as zero. A product suddenly looks far more profitable than it is. Or duplicate records enter from a CRM migration, splitting one customer into two, making churn look higher and revenue per customer look lower than reality. In each case the pipeline didn't break. It just processed confidently and wrongly, because nobody had engineered for the exception.
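
The same discipline applies to nulls. A short sketch, again in Python with illustrative names, of distinguishing "missing" from "zero" rather than letting the system assume:

```python
def cost_or_raise(record: dict) -> float:
    """Return the cost for a record, treating a missing value as an exception."""
    value = record.get("cost")
    if value is None:
        # Null is not zero. Surface it instead of inflating apparent profitability.
        raise ValueError(f"Missing cost on record {record.get('id')!r}")
    return float(value)
```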

Rolling out an automated BI reporting system, or any system, involves rigorous testing. And what that process teaches you over time is not how to eliminate exceptions but how to recognise them, handle them cleanly, and build with enough experience to know where the edges tend to be. Every scenario caught early is one less surprise waiting down the road, and there is real value in having already seen something, already solved it, already built past it. In fact, finding exceptions early in testing is a good sign. It means the system is being genuinely stress tested and that the team knows what to look for.

At Cubot BI, that experience has been earned across implementations and refinements over years, and it shows in how we design. We know which patterns hold, where systems tend to be fragile, and how to build in ways that make exceptions manageable rather than disruptive. A good BI system is not one that never encounters issues. It's one where issues are surfaced quickly, resolved cleanly, and added to a body of knowledge that makes the next implementation sharper than the last. That's how experience compounds. And it's the exceptions along the way that built it.

WHEN DECISIONS GET AUTOMATED

The stakes get significantly higher when decisions stop being reviewed and start being automated. A human analyst looking at a dashboard might pause at an anomaly — question it, flag it, dig in. An automated decision system doesn't pause. It acts. Pricing engines, inventory replenishment, credit scoring, customer segmentation — all increasingly driven by agents that take inputs and produce outputs at a speed and scale no human team can match.

When those inputs contain an exception the system wasn't built for, the decision doesn't wait for a review meeting. It executes. At scale. Before anyone notices. The error isn't a data quality issue sitting in a report — it's now a business action that has already happened, multiplied across thousands of transactions or customers. The exception didn't just reveal the system. It ran through it unchecked.

Automating decisions without anticipating exceptions isn't efficiency. It's risk at scale.
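
One common pattern for containing that risk is a guardrail between the automated decision and its execution: anything outside an expected band is held for review rather than actioned. A minimal sketch, assuming a Python pricing engine; the threshold and names are illustrative rather than any particular product's implementation:

```python
from dataclasses import dataclass

@dataclass
class PriceChange:
    sku: str
    price: float
    approved: bool
    reason: str

def gate_price_change(sku: str, current: float, suggested: float,
                      max_change_pct: float = 0.15) -> PriceChange:
    """Approve a suggested price only if it stays within the expected band."""
    if current <= 0:
        # An unexpected input is itself an exception; do not act on it.
        return PriceChange(sku, current, False, "Invalid current price; needs review")
    change = abs(suggested - current) / current
    if change > max_change_pct:
        return PriceChange(sku, current, False,
                           f"Change of {change:.0%} exceeds guardrail; needs review")
    return PriceChange(sku, suggested, True, "Within expected band")
```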

IN AI

Large language models are extraordinary at the expected. Feed them a common question, a well-trodden problem, a scenario well represented in their training — and they perform impressively. But push them toward the edge — an unusual context, a domain-specific nuance, a question that sits just outside the pattern — and something important happens. Confidence doesn't drop. The answer still arrives, fluently, authoritatively. It's just wrong.

Hallucination is essentially the AI equivalent of a null being treated as zero. The system encountered something it wasn't built for and processed it anyway, without flagging the exception. As AI agents become more embedded in how organisations operate — summarising reports, generating recommendations, informing decisions — the exception problem doesn't go away. It gets faster and harder to trace. An AI that doesn't know what it doesn't know is not a tool you can trust at the edges. And the edges are where business actually happens.

WHEN THE COST IS CATASTROPHIC

The Boeing 737 MAX's MCAS system was not designed to handle the exception of a single faulty angle-of-attack sensor feeding it bad data and causing it to push the nose down repeatedly. Engineers had anticipated the system working. They had not adequately anticipated it failing in this specific way, at this frequency, with pilots who had not been trained for it. Two crashes. 346 lives. The exception had been there all along: untested, unengineered for, assumed away.

The 2008 financial crisis unfolded similarly. The models that rated mortgage-backed securities and priced risk across the global financial system were built on a foundational assumption — that US housing prices could not fall simultaneously and nationally. It had never happened before. It was therefore treated as an exception too unlikely to engineer for. When it happened, the models didn't flag the anomaly. They had no framework for it. The system didn't just fail — it amplified the failure, because every decision built on those models was now wrong in the same direction at the same time.

In both cases the exception was not unimaginable. It was just inconvenient to imagine. So the assumption held, the system was built around it, and when reality produced the exception — as it always eventually does — there was nothing to catch it.

The exception was always coming. The only question was whether the system was ready.

BEYOND THE AVERAGE

Outside of systems and technology, exceptions have a different quality entirely. They are not problems to be managed — they are where everything interesting lives. The athlete who shouldn't have made it but did. The company that had no right to exist in a crowded market but was exceptional enough to survive and matter. The scientist whose idea was too strange for the mainstream until it wasn't. Every field that matters is shaped not by its averages but by its outliers.

The bell curve produces the mean. It doesn't produce the remarkable. Societies, institutions and systems that over-engineer for the average — that sand down the edges, that optimise relentlessly for the expected — don't just fail to produce greatness. They actively make it harder. The interesting stuff has always lived at the edges. In data, in decisions, in life.

Every system starts with known knowns: the scenarios planned for, the inputs anticipated, the paths designed. But it's the unknown unknowns that define it. They arrive without announcement, outside every assumption the system was built on. The question was never whether they would come. It was always whether the system was ready when they did. So as you build, whether it's a BI implementation, an automated decision pipeline, an AI layer, or an organisation, ask yourself: how much of your design is for the expected? And how much is for everything else?

How good is the system you're actually building?
