Monthly Archives: December 2009

Why scanning more data will not (necessarily) help BI

I pointed out the other day that we seem to be at a tipping point for BI. The quest for more seems to be losing its head of steam, with most decision makers drowning in a sea of massaged and smoothed data. There are some good moves to look beyond our traditional stomping ground of transactional data, but the real challenge is not to consider more data; it’s to consider the right data.

Most interesting business decisions seem to be the result of a synthesis process. We take a handful of data points and fuse them to create an insight. The invention of breath strips is a case in point. We can rarely break our problem down to a single (computed) metric; the world just doesn’t work that way.

Most business decisions rest on a small number of data points. It’s just one of our cognitive limits: our working memory is only large enough to hold (approximately) four things (concepts and/or data points) at once. This is one reason I think Andrew McAfee’s cut-down business case works so well; it works with our human limitations rather than against them.

I was watching an interesting talk the other day in which Peter Norvig offered some gentle suggestions on what features would be beneficial in a language for scientific computing. Somewhere in the middle of the talk he mentioned the curse of dimensionality, something I hadn’t thought about for a while. This is the problem caused by the exponential increase in volume associated with each additional dimension of (mathematical) space.

In terms of the problem we’re considering, this means that if you are looking for n insights in a field of data (the n best data points to drive our decision), then finding them becomes exponentially harder with each data set (dimension) we add. More isn’t necessarily better. While adding new data sets (such as data sourced from social networks) lets us create new correlations, we’re also forced to search an exponentially larger space to find them. It’s the law of diminishing returns.
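
To put rough numbers on that, here is a minimal illustrative sketch in Python, with made-up figures: if each data set (dimension) is carved into just ten “interesting” ranges, the number of regions we have to examine grows tenfold with every data set we add.

```python
# A minimal, illustrative sketch of the curse of dimensionality.
# Assumption: each data set (dimension) is discretised into 10 ranges;
# the figures are made up purely to show how the search space grows.

def regions_to_search(data_sets, bins_per_data_set=10):
    """Number of distinct regions in the combined search space."""
    return bins_per_data_set ** data_sets

for d in range(1, 7):
    print(f"{d} data set(s): {regions_to_search(d):,} regions to examine")

# 1 data set(s): 10 regions to examine
# 2 data set(s): 100 regions to examine
# ...
# 6 data set(s): 1,000,000 regions to examine
#
# With a fixed number of observations, every added data set spreads the
# same data over vastly more regions, so useful correlations become
# exponentially harder to find.
```

The exact numbers don’t matter; the shape of the curve does.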

Our inbuilt cognitive limit only complicates this. When we hit that limit — when n becomes as large as we can usefully use — any additional correlations become a burden rather than a benefit. In today’s rich and varied information environment, the problem isn’t to consider more data, or to find more correlations; it’s to find the best three or four features in the data that will drive our decision in the right direction.

How do we navigate from the outside in, from the decision we need to make to the data that will drive it? This is the problem I hope the Value of Information discussion addresses.

Posted via web from PEG @ Posterous

Innovation and the art of random

A little while ago I was invited to speak at an event, InnoFuture, which, for a mixture of reasons, didn’t end up happening. The theme for the event was Ahead of the trends — the random effect. My take was that innovation is not random; it’s just happening faster than you can process it, and that ideas are commoditized, making synthesis, the creation of new solutions to old problems, what drives innovation. I was pretty happy with the outline I put together for my talk, so rather than let it go to waste I reused the content, breaking it into three blog posts.

Innovation seems to be the topic of the day. Everyone seems to want some, thinking that it’s the secret sauce which will help them (or their company) bubble to the top of the heap. The self-help and consulting communities have responded in force, trying to bottle lightning or package the silver bullet (whichever metaphor you prefer).

It was in this environment that I was quite taken by the topic of a recent InnoFuture event when I was asked to speak.

Ahead of trends — the random effect.
When a concept becomes a trend, you are not the leader. How to tap into valuable ideas for products, services and communication before they are seen as trends, when they are just … random? Albert Einstein said that imagination is more important than knowledge. Let’s open the doors and let the imagination in, for it seems that in the current crisis the right brain is winning and we may be rationalized to death before things get better.

I’ve never seen the random effect, though I have been delightfully surprised when something unexpected pops up. Having been involved in a bunch of companies and projects that, I’m told, were innovative, I’ve always thought innovation was not so much random as the result of obliquity. What makes it seem random is the simple fact that you are not aware of the intervening steps from interesting problem through to novel solution.

I figured I’d mash together a few ideas that capture this thought, and provide some (hopefully) sage advice based on what I do to deal with random. I ended up selecting:

  • John Boyd on why rapidly changing environments are confusing,
  • Peter Drucker’s insight that insight (the tacit application of knowledge) is not a transferable good,
  • the struggle for fluency that we all go through as we learn to read,
  • John Boyd (again, but then he had a lot of good ideas) on the need for synthesis,
  • KK Pang (an old lecturer of mine) on the need to view problems from multiple contexts,
  • the need to follow a consistent theme of interest as the only tractable way of finding interesting problems to solve, and
  • my own experiences in leveraging a network of like and dissimilar minds as a way of effectively outsourcing analysis.

The result was called Of snow mobiles and childhood readers: why random isn’t, and how to make it work for you. I ended up having far too much content to fill my twenty-minute slot, so it’s probably for the better that the event didn’t go ahead, as it would have taken a lot of time to cut it down.

Given that I had a fairly well developed outline, I decided to make it into a series of blog posts (plus my slides these days don’t have a lot of text on them, so if I just dropped the slides online they wouldn’t make any sense). The blog posts ended up breaking down this way:

  1. Innovation should not be the race for the new-new thing.
    Points out that innovation only seems random and unexpected because you don’t see the intervening steps between a problem and its new solution, and that innovation is the result of many small commoditized steps. This ties into one of my earlier posts on dealing with the speed of change.
  2. The role of snowmobiles in innovation.
    Argues that ideas are a common commodity, and that the real challenge with innovation is synthesis rather than ideation.
  3. Childhood readers and the art of random.
    Argues that the key to innovation is to find interesting problems to solve, and suggests that the best approach is to be fluent in a range of domains (sectors, geographies, activities, …) to provide a broader perspective, focus on a line of inquiry to provide some structure, and build a network of people with complementary interests, providing you with the time, space and opportunity to focus on synthesis.

I expect that these are more productive if taken as a whole, rather than as individual posts.

If you look at the path I’ve charted over my career, this is the approach I’ve taken. My topic of choice is how people communicate and decide as a group, which has led me to John Boyd, Cicero, human-computer interaction, agent technology, biology (my thesis mathematically modelled nerves in a cat), and so on.

I still have the slides, so feel free to contact me if you’re interested in my presenting all or part of this topic.

Is BI really the next big thing?

I think we’re at a tipping point with BI. Yes, it makes sense that BI should be the next big thing in the new year, as many pundits are predicting, driven by the need to make sense of the massive volume of data we’ve accumulated. However, I doubt that BI in its current form is up to the task.

As one of the CEOs Andy Mulholland spoke to put it, “I want to know … when I need to focus in.” The CEO’s problem is not more data, but the right data. As Andy rightly points out in an earlier blog post, we’ve been focused on harvesting the value from our internal, manufactured data, ignoring the latent potential in our unstructured data (let alone the unstructured data we can find outside the enterprise). The challenge is not to find more data, but the right data to drive the CEO’s decision on where to focus.

It’s amazing how little data you need to make an effective decision—if you have the right data. Andrew McAfee wrote a nice blog post a few years ago (The case against the business case is the closest I can find to it), pointing out that the mass of data we pile into a conventional business case just clouds the issues, creating long cause-and-effect chains that make it hard to come to an effective decision. His solution was the one page business case: capability delivered, (rough) business requirements, solution footprint, and (rough) costing. It might be one page, but there is enough information, the right information, to make an effective decision. I’ve used his approach ever since.

Current BI seems to be approaching the horse from the wrong direction, much like Andrew’s business case problem. We focus on sifting through all the information we have, trying to glean any trends and correlations that might be useful. This works at small to moderate scales, but once we reach the huge end of the scale it starts to groan under its own weight. It’s the law of diminishing returns—adding more information to the mix has only a moderate benefit compared to the effort required to integrate and process it.

A more productive method might be a hypothesis-driven approach. Rather than look for anything that might be interesting, why not go spelunking for specific features that we know will be interesting? The features we’re looking for are (almost always) there to support a decision. Why not map out that decision, much as we map out the requirements for a feedback loop in a control system, and identify the types of features we need to support the decision we want to make? We can segment our data sets based on the features’ gross characteristics (inside vs. outside, predictive vs. historical …) and then search the appropriate segments for the features we need. We’ve broken one large problem—find correlations in one massive data set—into a series of much more manageable tasks.
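
To make the decomposition concrete, here is a rough Python sketch of what a hypothesis-driven search might look like. All of the data set names, feature names and categories below are hypothetical, invented purely for illustration; the point is that each required feature narrows the search to a small, relevant segment of the catalogue rather than the whole mass of data.

```python
# A rough, purely illustrative sketch of the hypothesis-driven approach:
# start from the decision, list the features it needs, and only then
# search the data segments whose gross characteristics match.
# All names and categories below are hypothetical.

from dataclasses import dataclass

@dataclass
class DataSet:
    name: str
    scope: str    # "inside" or "outside" the enterprise
    horizon: str  # "predictive" or "historical"

CATALOGUE = [
    DataSet("sales_history", scope="inside", horizon="historical"),
    DataSet("order_pipeline", scope="inside", horizon="predictive"),
    DataSet("social_sentiment", scope="outside", horizon="predictive"),
    DataSet("industry_reports", scope="outside", horizon="historical"),
]

# The decision ("where should the CEO focus next quarter?") is mapped to
# the handful of features needed to drive it, each tagged with the
# segment of data it should come from.
REQUIRED_FEATURES = [
    {"feature": "demand_trend", "scope": "outside", "horizon": "predictive"},
    {"feature": "margin_by_region", "scope": "inside", "horizon": "historical"},
]

def candidate_sets(requirement, catalogue):
    """Return only the data sets whose characteristics match the requirement."""
    return [d for d in catalogue
            if d.scope == requirement["scope"]
            and d.horizon == requirement["horizon"]]

for req in REQUIRED_FEATURES:
    matches = candidate_sets(req, CATALOGUE)
    print(req["feature"], "->", [d.name for d in matches])

# Each feature search is now confined to a small, relevant segment rather
# than one undifferentiated mass of data.
```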

The information arms race, the race to search through more information for that golden ticket, is just a relic of the lack of information we’ve lived with in the past. In today’s land of plenty, more is not necessarily better. Finding the right features is our real challenge.

Posted via email from PEG @ Posterous

Innovation [2009-12-14]

Another week and another collection of interesting ideas from around the internet.

As always, thoughts and/or comments are greatly appreciated.

The changing role of Government

Is Government 2.0 (whichever definition you choose) the ultimate aim of government? Government for the people and by the people. Or are we missing the point? We’re not a collection of individuals but a society in which the whole is greater than the parts. Should government’s ultimate aim be to act as the trusted arbiter, bringing society together so that we can govern together, rather than us being disinterested and simply governed, as seems to be the current fashion? In an age when everything is fragmented and we’re all responsible for our own destiny, government is in a unique position to be the body that binds together the life events that bring our society together.

Government 2.0 started with lofty goals: make government more collaborative. As with all definitions though, it seems that the custodians of the definition are swapping goals for means. Pundits are pushing for technology-driven definitions, as Government 2.0 would not be possible without technology (but then, neither would my morning cup of coffee).

Unfortunately Government 2.0 seems to be in danger of becoming “government as a platform”: GaaP or even GaaS (as it were). Entrepreneurs are calling on the government to open up government data, allowing start-ups to remix data to create new services. FixMyStreet might be interesting, and might even tick many of the right technology boxes, but it’s only a small fragment of what is possible.


This approach has resulted in some interesting and worthwhile experiments like GovHack, but it seems to position much of government as a boat anchor to be yanked up with top-down directives, rather than as valued members of society who are trying to do what they think is the right thing. You don’t create peace by starting a war, and nor do you create open and collaborative government through top-down directives. We can do better.

The history of government has been a progression from government by and for the big man, through to today’s push for government for and by the people. Kings and Queens practiced stand-over tactics, going bust every four to seven years from running too many wars that they could not afford, and then leaning on the population to refill their coffers. The various socialist revolutions pushed the big man (or woman) out and replaced them with a bureaucracy intended to provide the population with the services they need. Each of us contributing in line with ability, and taking in line with need. The challenge (and possibly the unsolvable problem) was finding a way to do this in an economically sustainable fashion.

The start of the modern era saw government as border security and global conglomerate. The government was responsible for negotiating your relationship with the rest of the world, and service provision was out-sourced (selling power stations and rail lines). Passports went from a convenient way of identifying yourself when overseas to being the tool of choice for governments to control border movements.

Government 2.0 is just the most recent iteration in this ongoing evolution of government. The initial promise: government for the little man, enabled by Web 2.0.

As with Enterprise 2.0, what we’re getting from the application of Web 2.0 to an organisation is not what we expected. For example, Enterprise 2.0 was seen as a way to empower knowledge workers but instead seems to be resulting in a generation of hollowed-out companies where the C-level and the task workers at the coal face remain, but the knowledge workers have been eliminated. Government 2.0 seems to have devolved into “government as a platform” for similar reasons, driven by a general distrust of government (or, at least, the current government which the other people elected) and a desire to have more influence on how government operates.

Government, The State, has come to be defined as the enemy of the little man: the giant organisation we are largely powerless against (even though we elected it). Government 2.0 is seen as the can opener which can be used to cut the lid off government. Open up government data for consumption and remixing by entrepreneurs. Provide APIs to make this easy. Let us solve your citizens’ problems.

We’re already seeing problems with trust in online commerce due to this sort of fine-grained approach. The rise of online credit card purchases has pulled the credit card fraud rate up with it, resulting in a raft of counter-measures, from fraud detection through to providing consumers with access to their credit reports. Credit reports which, in the U.S., some providers are using as the basis for questionable tactics that scam and extort money from the public.

Has the pendulum swung too far? Or is it The Quiet American all over again?

Gone are the days when we could claim that “The State” is something that doesn’t involve the citizens; someone to blame when things go wrong. We need to accept that now, more than ever, we always elect the government we deserve.

Technology has created a level of transparency and accountability—exemplified by Obama’s campaign—that is breeding a new generation of public servants. Rather than government for, by or of the people, we’re getting government with the people.

This is driving the next generation of government: government as the arbiter of life events. Helping citizens collaborate. Making us take responsibility for our own futures. Supporting us when we face challenges.

Business-technology, a term coined by Forrester, is the trend for companies to exploit the synergies between business and technology to create new solutions to old problems. Technology is also enabling a new approach to government. Rather than deliver IT-government alignment to support an old model of government, the current generation of technologies makes available a new model which harks back to Platonic ideals.

We’ve come a long way from the medieval days when government was (generally) something to be ignored:

  • Government for the man (the kings and queens)
  • Government by the man (we’ll tell you what you need; each according to their need, each …)
  • Government as a conglomerate (everything you need)
  • Government as a corporation (everything you can afford)

The big idea behind Government 2.0 is, at its nub, government together. Erasing the barriers between citizens, between citizens and the government, helping us to take responsibility for our future, and work together to make our world a better place.

Government 2.0 should not be a platform for entrepreneurs to exploit, but a shared framework to help us live together. Transparent development of policy. Provision (though not necessarily ownership) of shared infrastructure. Support when you need it (helping you find the services you need). Involvement in line with the Greek/Roman ideal (though more inclusive, without the exclusion of women or slaves).

Childhood readers and the art of random

Note: This post is part of a larger series on innovation, going under the collective name of Innovation and the art of random.

Innovation can seem random. We’re dealing with so much change in our daily lives that we miss the long and tortuous journey an innovation takes from its first conception through to the delivered solution, so the innovation seems to appear from nowhere. We’re distracted as we try to cope with the huge volume of work our changing environment creates, adjusting to the new normal, while trying to find time to sift through the idea fire hose for that one good idea. However, ideas are common, commoditized even, and our real challenge is to make connections.

As Peter Drucker pointed out, insight, the tacit application of knowledge, is not a transferable good. The value we derive from innovation comes from synthesis, the tacit application of knowledge to create a new solution. The challenge is to find time to pull apart the tools available to us, recombining them to synthesise new (and hopefully innovative) solutions to the problems we’re confronting today.

While ideas may be cheap, the time and space needed to create insight are not. We need to understand our problem from multiple contexts, teasing out the important elements and bringing together ideas to address each element in the synthesis of an original solution. This process takes time, often more time than we can spare, so we need to invest our time wisely. Which steps in this process are the most valuable (or the least transferable), the steps we need to own? Which can we outsource, passing responsibility to partners, or even to our social network? And is it possible to create time, using technology to take some of the load and create the breathing room we need?

Dr. Khee Pang

One of the best pieces of advice I picked up at university was from Dr. K. K. Pang, who unfortunately passed away in March 2009. Dr. Pang taught circuit theory, which can be quite a frustrating subject. It’s common to encounter a problem in circuit theory that you just can’t find a way into, making it seemingly impossible to solve. Dr. Pang’s brilliant, yet simple, advice was: “If you don’t like the problem, then change it to one you do like.” Just start messing with the problem, transforming bits of the circuit at random until you find a problem that you can solve.

Fast forward to my current work, far removed from circuit theory, and I still find myself using this piece of advice at least once a week. It’s not uncommon to come across a problem, often with little direct connection to technology, that needs to be approached from a very different angle. When stuck, take a different angle, make it a different problem, and you might find this new problem more to your liking.

You often bump into the same problem in different contexts as you work across industries and geographies. Different contexts can necessitate a different point of view, making the problem look slightly different. This highlights aspects of the problem that you might not have been aware of before, exposing previously hidden assumptions or connections to other problems. However, while this cross-industry and cross-geography insight is a valuable tool, the time required to go spelunking for insight is prohibitive. We find ourselves spending too much time decoding the new context, and too little teasing out the important elements.

Learning to read, something I expect we all did in our childhood, is a struggle for fluency. We work from the identification of letters and words, through struggling to decode the text, to a level of fluency that enables us to focus on the meaning behind the text. Being fluent means being good enough at identification and decoding that we have the time and space for comprehension.

The ability to change the problem in front of you is really a question of being fluent in a range of environments; of understanding a number of doctrines. These might be different industries (finance, public sector, utilities …), domains (logistics, risk management, military tactics, rhetoric …) or even geographies (APAC, EU, US …), as each has its own approach. We need enough experience in an environment to be able to decode it easily. Generally this means in-the-trenches experience, focused on applying knowledge, allowing us to weed out the commonplace and find the interesting and new. Building fluency takes time, though; we can’t afford to immerse ourselves in every possible environment that might be of interest.

For quite a few years (from back in the day when my email address had a .oz at the end) I’ve been building a network of colleagues. Each of us is inquisitive in our own way, each with our own area of interest or theme, covering a huge, overlapping range of doctrines, while always looking for another idea to add to our toolbox. With the world being small, or even flat, this network of like minds has often been the source of a different point of view, one which solves the problem I’m working on. More recently this network has been migrating to Twitter, making the shared conversation more dynamic and immediate. It’s small networks of like minds such as this that can let us effectively outsource the majority of our analysis, spreading the effort amongst our peers and creating the time and space to focus on synthesis.

Which brings us to the crux of the problem: innovation relies on synthesis, and the key to synthesis is finding interesting problems to solve. An idea, no matter how brilliant, will not go far unless it results in a product or service that people want. Innovation exists out at the edges of our organisations, or at the production coal face. Just as with the breath strips example, interesting problems pop up in the most unexpected places. Our challenge is to prepare ourselves so that we can capitalise on the opportunity a problem represents. As a famous golfer once said:


The more I practice, the luckier I get.
Gary Player

The world around us changes so rapidly that innovation can seem random. The snowmobile was obvious to the people who invented it, as they worked via trial and error from the original problem they wanted to solve through to the completed solution; it didn’t leap from their brow as a fully formed concept. Develop your interests, become fluent in a wide range of relevant topics and environments, use your network to extend your reach even further, and look for interesting problems to solve. In a world awash with good ideas, where innovation relies on your ability to synthesise new solutions by finding a new angle from which to approach old problems (possibly problems so old that people forgot they had them), the key to success is to find your own focus, use your own interests to drive yourself forward, and leverage your network and the resources around you to take as much of the load as possible. Innovation is rarely the result of a brilliant idea, but rather a patient process of finding problems to solve and then solving them, and sometimes we’re surprised by how innovative our solutions can be.

A nice visual argument for the value of mash-ups

As I’ve mentioned before, I would like a nice, clear, crisp definition for mash-up. A definition which captures the benefits that mash-ups can bring, rather than detailing a collection of tools, technologies and standards that we happen to find interesting at the time. For me, this is the TQM argument of fusing data and process to eliminate unnecessary decisions—make-work or swivel chair integration—to create a more efficient and effective work environment.

It’s Just a Bunch of Stuff That Happens has done a brilliant job of capturing this visually (included below). I like the usability aspect this highlights. A mash-up’s focus is cross-application usability—removing the annoyances of dealing with separate information sources. We could simply take these sources and squish them up against the glass, delivering the content into iGoogle or NetVibes gadgets. But what those original push-pins-on-a-map mash-ups did was improve the usability of these information sources by eliminating the decisions required to navigate across them. It’s just as Apple did with the iPod and iPhone, eliminating or fusing functions to remove the (unnecessary) decisions required to navigate the overly complex and confusing interfaces of the mobile phones that came before them.
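
As a toy illustration of what fusing data to eliminate decisions means in practice, here is a small, hypothetical Python sketch: instead of asking the user to cross-reference a directory of branches against a separate geocoding source (swivel chair integration), the mash-up joins the two and hands back ready-to-plot push-pins. The sources, names and coordinates below are all invented for illustration.

```python
# A toy illustration of a push-pins-on-a-map mash-up: fuse two separate
# information sources so the user no longer has to cross-reference them
# by hand. Both "sources" are hypothetical, hard-coded stand-ins for
# whatever feeds a real mash-up would pull from.

# Source 1: a directory of branches (e.g. from an internal system).
branches = [
    {"name": "Carlton branch", "address": "123 Example St, Carlton"},
    {"name": "Fitzroy branch", "address": "45 Sample Rd, Fitzroy"},
]

# Source 2: a geocoding lookup (address -> latitude/longitude).
geocode = {
    "123 Example St, Carlton": (-37.800, 144.967),
    "45 Sample Rd, Fitzroy": (-37.798, 144.978),
}

def push_pins(branches, geocode):
    """Join the two sources into map-ready pins, skipping unknown addresses."""
    pins = []
    for branch in branches:
        location = geocode.get(branch["address"])
        if location is not None:
            pins.append({"label": branch["name"],
                         "lat": location[0],
                         "lon": location[1]})
    return pins

for pin in push_pins(branches, geocode):
    print(pin)
```

The user never has to decide which source to consult next; the fused result answers the question directly.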

iGoogle and NetVibes are the Symbian to a mash-up’s iPhone.

Symplicity

Posted via web from PEG @ Posterous