I think we’re at a tipping point with BI. Yes, it makes sense that BI should be the next big thing in the new year, as many pundits predict, driven by the need to make sense of the massive volume of data we’ve accumulated. However, I doubt that BI in its current form is up to the task.
As one of the CEOs Andy Mulholland spoke to put it: “I want to know … when I need to focus in.” The CEO’s problem is not more data, but the right data. As Andy rightly points out in an earlier blog post, we’ve been focused on harvesting the value of our internal, manufactured data, ignoring the latent potential in our unstructured data (let alone the unstructured data we can find outside the enterprise). The challenge is not to find more data, but the right data to drive the CEO’s decision on where to focus.
It’s amazing how little data you need to make an effective decision—if you have the right data. Andrew McAfee wrote a nice blog post a few years ago (The case against the business case is the closest I can find to it), pointing out that the mass of data we pile into a conventional business case just clouds the issue, creating long cause-and-effect chains that make it hard to come to an effective decision. His solution was the one-page business case: capability delivered, (rough) business requirements, solution footprint, and (rough) costing. It might be only one page, but it holds enough information, the right information, to make an effective decision. I’ve used his approach ever since.
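For what it’s worth, the shape of the thing is simple enough to jot down in a few lines of Python (a sketch of mine; the four headings are McAfee’s, the field names and the example are not):

    from dataclasses import dataclass

    # McAfee's one-page business case: four headings and nothing more.
    # (Field names and the example are mine, purely illustrative.)
    @dataclass
    class OnePageBusinessCase:
        capability_delivered: str  # what the business will be able to do
        rough_requirements: str    # (rough) business requirements
        solution_footprint: str    # what gets built or bought, and where it sits
        rough_costing: str         # (rough) order-of-magnitude cost

    case = OnePageBusinessCase(
        capability_delivered="Customers can track their orders online",
        rough_requirements="Expose order status from the ERP to the web",
        solution_footprint="Small web front end plus an ERP integration",
        rough_costing="Roughly three months for one small team",
    )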
Current BI seems to be approaching the problem from the wrong end, much like Andrew’s business case problem. We focus on sifting through all the information we have, trying to glean any trends and correlations which might be useful. This works at small to moderate scales, but once we reach the huge end of the scale it starts to groan under its own weight. It’s the law of diminishing returns—adding more information to the mix will only have a moderate benefit compared to the effort required to integrate and process it.
A more productive method might be to use a hypothesis-driven approach. Rather than look for anything that might be interesting, why not go spelunking for specific features which we know will be interesting? The features we’re looking for in the information are (almost always) there to support a decision. Why not map out that decision, similar to how we map out the requirements for a feedback loop in a control system, and identify the types of features that we need to support the decision we want to make? We can segment our data sets based on the features’ gross characteristics (inside vs. outside, predictive vs. historical …) and then search in the appropriate segments for the features we need. We’ve broken one large problem—find correlations in one massive data set—into a series of much more manageable tasks, as the sketch below illustrates.
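A rough sketch of how this might look in Python (the feature names, segments and data sets are mine, purely illustrative): map the decision to the features it needs, tag each data set with its gross characteristics, and search only the matching segment.

    from dataclasses import dataclass, field

    @dataclass
    class RequiredFeature:
        """A feature mapped out from the decision we want to support."""
        name: str
        source: str   # gross characteristic: "inside" or "outside" the enterprise
        horizon: str  # gross characteristic: "predictive" or "historical"

    @dataclass
    class DataSet:
        name: str
        source: str
        horizon: str
        records: list = field(default_factory=list)

    def matching_segment(data_sets, feature):
        """Narrow the search to the segment sharing the feature's gross
        characteristics, rather than scanning one massive data set."""
        return [ds for ds in data_sets
                if ds.source == feature.source and ds.horizon == feature.horizon]

    # One large problem becomes a series of small, targeted searches:
    required = [RequiredFeature("churn risk", "inside", "predictive"),
                RequiredFeature("competitor pricing", "outside", "historical")]
    data_sets = [DataSet("CRM history", "inside", "historical"),
                 DataSet("sales forecasts", "inside", "predictive"),
                 DataSet("market reports", "outside", "historical")]

    for f in required:
        print(f.name, "->", [ds.name for ds in matching_segment(data_sets, f)])
    # churn risk -> ['sales forecasts']
    # competitor pricing -> ['market reports']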
The information arms race, the race to search through more information for that golden ticket, is just a relic of the lack of information we’ve lived with in the past. In today’s land of plenty, more is not necessarily better. Finding the right features is our real challenge.
[…] Is BI really the next big thing? […]
As ever Peter, a really interesting set of observations. As part of trying to get to grips with the topic of semantics, and more importantly its use and value, I have started to try to see data as a layer between computers and users. The computers reach up into the data layer and require it to be structured for their use, whereas the users reach down into the layer and are looking for it to be organized contextually for their use. Think of it as event-driven context versus report-driven structure, and the question is how to use the same data in both environments. That's where I am hoping to be able to see how to use semantics in a meaningful way, to get the focus and not have to boil the ocean!
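Roughly, I picture the same record wearing two faces (a made-up sketch; all the names are illustrative): a fixed schema the computers can rely on, and a loose bag of contextual tags the users can search.

    from dataclasses import dataclass, field

    @dataclass
    class LayeredRecord:
        # Structured face: fixed, typed fields the computers reach up for.
        order_id: str
        amount: float
        # Contextual face: loose tags and notes the users reach down for.
        context: dict = field(default_factory=dict)

    def machine_view(record):
        """Report-driven structure: stable fields for the computers."""
        return (record.order_id, record.amount)

    def user_view(record, topic):
        """Event-driven context: whatever is organized around the topic."""
        return {k: v for k, v in record.context.items() if topic in k}

    r = LayeredRecord("ORD-17", 1200.0,
                      context={"customer complaint": "late delivery",
                               "complaint status": "escalated"})
    print(machine_view(r))            # ('ORD-17', 1200.0)
    print(user_view(r, "complaint"))  # both contextual entries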
I like the concept of people and computers sitting on either side of the pensieve, drawing from it the thoughts and memories they need. Rather than a formal semantics-driven approach, though, I'm more inclined to use a pattern recognition and/or subsumption approach. The information we're dealing with is too unstructured to sit well with formal ontologies (as they really don't cope with large numbers of exceptions), so it would be more pragmatic to find and extract the features we're after (a process) rather than try to establish a semantic mapping.
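To make that concrete, a rough sketch (the patterns and the invoice example are mine, purely illustrative): rather than mapping every document into an ontology, run a small set of extraction patterns over the text and keep whatever features they yield. A document that matches nothing is an exception we simply tolerate.

    import re

    # Hypothetical extraction patterns: each names a feature and a regex
    # that recognizes it. Exceptions are cheap here: a document that matches
    # no pattern simply contributes no features, rather than breaking a schema.
    PATTERNS = {
        "invoice_amount": re.compile(r"\$\s?(\d[\d,]*\.?\d*)"),
        "due_date": re.compile(r"due\s+(?:on\s+)?(\d{4}-\d{2}-\d{2})", re.I),
    }

    def extract_features(text):
        """Pull out only the features we're after, ignoring everything else."""
        found = {}
        for name, pattern in PATTERNS.items():
            match = pattern.search(text)
            if match:
                found[name] = match.group(1)
        return found

    print(extract_features("Payment of $1,200.00 is due on 2011-02-28."))
    # {'invoice_amount': '1,200.00', 'due_date': '2011-02-28'}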
The challenge is that the term semantics itself isn't well defined, and as such that's not a lot of help! And of course it's a red rag to a bull when introduced into a discussion. What I am finding difficult is to figure out, in a practical sense, how to make this a workable capability, as my fear is that otherwise we will see enterprises trading with two completely different data sets supporting their actions.
[…] pointed out the other day that we seem to be at a tipping point for BI. The quest for more seems to be losing its head of steam, with most decision makers drowning in a […]