There’s an interesting post over at Bruce Schneier’s blog where he discusses where security did, and didn’t, work with the Christmas underwear bomber incident. As is his usual inclination, he points out that the threat wasn’t new, that security (on the whole) worked, and, of interest to us, that more information would not have helped prevent the attack.
After the fact, it’s easy to point to the bits of evidence and claim that someone should have “connected the dots.” But before the fact, when there are millions of dots – some important but the vast majority unimportant – uncovering plots is a lot harder.
This is a lot like the challenge we’ve been talking about under the banner of The value of information. How do we make sense of the weak, conflicting and voluminous signals we see in the environment outside our business, fuse them with strong signals from data inside the business, and create real insight? Granted, sometimes we’re aware of the signals (or at least the shape of their outline) we need to go looking for, much like Tesco’s decision to integrate weather forecasts with historical till information to predict customer demand. In other circumstances, we’re not so sure what we’re looking for. The business equivalent of predicting (and responding to) the underwear bomber might be managing exceptions in a complex, global supply chain, countering a competitor’s new product launch, or supporting a social case worker dealing with an unexpected crisis in a client’s domestic situation.
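The Tesco case is worth making concrete. A minimal sketch of the idea – fusing an outside signal (a weather forecast) with inside data (historical till sales) to adjust a demand prediction – might look like the following. The figures, the baseline temperature and the linear model are all illustrative assumptions, not Tesco’s actual method.

```python
# Hypothetical sketch: adjust average historical sales (inside signal)
# by a simple temperature effect (outside signal). All numbers invented.

def predict_demand(base_sales, forecast_temp_c, temp_sensitivity):
    """Scale historical average sales by a linear temperature adjustment."""
    baseline_temp_c = 15.0  # assumed seasonal norm
    adjustment = temp_sensitivity * (forecast_temp_c - baseline_temp_c)
    return max(0.0, base_sales * (1.0 + adjustment))

# Ice cream: historical average 200 units/day, demand assumed to rise
# roughly 5% for each degree above the seasonal norm.
hot_day = predict_demand(200, 25, 0.05)   # 10 degrees above norm -> 300.0
cold_day = predict_demand(200, 5, 0.05)   # 10 degrees below norm -> 100.0
print(hot_day, cold_day)
```

The point isn’t the arithmetic; it’s that the shape of the outside signal is known in advance, so it can be wired straight into an operational forecast.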
It’s tempting to create countermeasures – prescriptive workflows designed to resolve a problem – for each of these scenarios on a case-by-case basis. Or even just to throw up our hands and continue with the tribal processes of old. But, as Bruce points out, this doesn’t work. The challenge with taking action against specific threats is that the terrorist will simply use a new tactic next time, or you’ll be confronted with yet another situation. Soon you’ll have overloaded your knowledge workers with exception scenarios which only address yesterday’s problems. You’ve started an arms race which you cannot win.
Bruce’s solution, in the context of security, is to integrate information into an operational decision-making framework which guards against generic attacks.
What we need is security that’s effective even if we can’t guess the next plot: intelligence, investigation and emergency response.
This prompts me to think of two things:
First, we might need to add a third dimension to that figure from Inside vs. Outside: Precision, to complement Inside/Outside and Information Age. (Here, the engineer in me is going to split hairs over the definitions of focus, precise and accurate.) This new dimension captures how precise our need is. The Tesco example from above prefers precise signals: signals which each communicate a single message. The exception manager might require an imprecise signal: a derivative communicating a generic message, generated by correlating a number of (im)precise signals. (A note of caution, though: remember the recent impact of derivatives on the global financial markets.)
Second, we might want to rethink how we conceptualise and use information in our business. We currently have a very linear view, with information generation and consumption tightly coupled to the stages of our value chain. It would be interesting to see how some of the ideas and frameworks behind the value of information could be fused with a decisioning framework like OODA (Observe–Orient–Decide–Act). This would provide a tool to simplify the (potentially too complex) value of information framework, and realise it in operational work practices.
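To make the OODA idea a little less abstract, here is a minimal sketch of one pass through an Observe–Orient–Decide–Act loop applied to business signals. The signal values, the notion of “context”, and the escalate/continue rule are all hypothetical; the only thing the sketch is faithful to is the four-stage shape of Boyd’s loop.

```python
# Minimal sketch of one pass through an OODA loop over business signals.
# Signals, context and decision rules are invented for illustration.

def observe(signals):
    """Gather raw signals from inside and outside the business."""
    return [s for s in signals if s is not None]  # drop missing readings

def orient(observations, context):
    """Fuse observations with context to form a situational picture."""
    return {"anomalies": [o for o in observations if o > context["normal_max"]]}

def decide(picture):
    """Choose a response based on the oriented picture."""
    return "escalate" if picture["anomalies"] else "continue"

def act(decision):
    """Carry out the decision (here, just report it)."""
    return f"action: {decision}"

context = {"normal_max": 10}
signals = [3, None, 14, 7]  # 14 falls outside the assumed normal range
result = act(decide(orient(observe(signals), context)))
print(result)  # action: escalate
```

In practice the loop runs continuously, with each action changing the environment that the next observation samples – which is where the value-of-information framework would plug in, by deciding which signals are worth observing at all.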
I’m not sure about the first point, but I expect the second will be fertile ground for further investigation.
Peter – it seems that they had information on the bomber, but the right people didn’t know at the right time to link it to the execution process that should have stopped him. That seems to me to link back to some of your earlier blogs: it’s not that we lack information – in fact we have too much information – it’s finding ways to reduce the information to levels that allow it to be usable.
I found Bruce Schneier's post interesting due to his response to this problem of finding a needle in a haystack.
What he doesn't recommend is taking a piecewise operational approach (which is something like a conventional BI approach) – characterising the scenario and then formalising a response – as this doesn't scale, nor does it find new threats (disruptions). His solution is more organisational than operational: reorienting the response around a longitudinal (rather than operational) view of what's happening across the value chain, and then looking for relationships between unusual data points.
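To sketch what "looking for relationships between unusual data points" could mean in practice: flag outliers in each data stream independently, then look for unusual events from different streams that co-occur in time. The z-score threshold, the co-occurrence window and the data are all invented for illustration – this is one crude reading of the idea, not Bruce's method.

```python
# Hypothetical sketch: flag unusual points per stream, then relate
# unusual events across streams by co-occurrence in time.
from statistics import mean, stdev

def unusual(points, z_threshold=1.5):
    """Return (time, value) pairs more than z_threshold sigma from the mean."""
    m = mean(v for _, v in points)
    s = stdev(v for _, v in points)
    return [(t, v) for t, v in points if s and abs(v - m) / s > z_threshold]

def related(events_a, events_b, window=2):
    """Pair unusual events from two streams within `window` time steps."""
    return [(ta, tb) for ta, _ in events_a for tb, _ in events_b
            if abs(ta - tb) <= window]

shipments = [(0, 10), (1, 11), (2, 10), (3, 50), (4, 9)]   # spike at t=3
payments = [(0, 5), (1, 4), (2, 5), (4, 30), (5, 6)]       # spike at t=4
print(related(unusual(shipments), unusual(payments)))      # [(3, 4)]
```

Neither spike means much on its own; it's the correlation across the value chain that suggests something worth investigating.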
It's really a question of changing how you view and use the data, which ties back to the whole “art of random” thing; hence my thought on OODA and John Boyd.
r.
PEG