
Innovation and the art of random

A little while ago I was invited to speak at an event, InnoFuture, which, for a mixture of reasons, didn’t end up happening. The theme for the event was Ahead of the trends — the random effect. My take on it was that innovation is not random, it’s just happening faster than you can process, and that ideas are commoditized, making synthesis, the creation of new solutions to old problems, what really drives innovation. I was happy enough with the outline I put together for my talk that I ended up reusing the content, breaking it into three blog posts rather than letting it go to waste.

Innovation seems to be the topic of the day. Everyone seems to want some, thinking that it’s the secret sauce which will help them (or their company) bubble to the top of the heap. The self-help and consulting communities have responded in force, trying to bottle lightning or package the silver bullet (whichever metaphor you prefer).

It was in this environment that I was quite taken by the topic of the InnoFuture event when I was asked to speak.

Ahead of trends — the random effect.
When a concept becomes a trend, you are not the leader. How to tap into valuable ideas for products, services and communication before they are seen as trends, when they are just … random? Albert Einstein said that imagination is more important than knowledge. Let’s open the doors and let the imagination in for it seems that in the current crisis, the right brain is winning and we may be rationalized to death before things get better.

I’ve never seen the random effect, though I have been delightfully surprised when something unexpected pops up. Having been involved in a bunch of companies and projects that, I’m told, were innovative, I’ve always thought innovation was not so much random as the result of obliquity. What makes it seem random is the simple fact that you are not aware of the intervening steps from interesting problem through to novel solution.

I figured I’d mash together a few ideas that capture this thought, and provide some (hopefully) sage advice based on what I do to deal with random. I ended up selecting:

  • John Boyd on why rapidly changing environments are confusing,
  • Peter Drucker’s insight that insight (the tacit application of knowledge) is not a transferable good,
  • the struggle for fluency that we all go through as we learn to read,
  • John Boyd (again, but then he had a lot of good ideas) on the need for synthesis,
  • KK Pang (an old lecturer of mine) on the need to view problems from multiple contexts,
  • the need to follow a consistent theme of interest as the only tractable way of finding interesting problems to solve, and
  • my own experiences in leveraging a network of like and dissimilar minds as a way of effectively out-sourcing analysis.

The result was called Of snow mobiles and childhood readers: why random isn’t, and how to make it work for you. I ended up having far too much content to fill my twenty-minute slot, so it’s probably for the better that the event didn’t go ahead, as it would have taken a lot of time to cut it down.

Given that I had a fairly well developed outline, I decided to make it into a series of blog posts (plus my slides these days don’t have a lot of text on them, so if I just dropped the slides online they wouldn’t make any sense). The blog posts ended up breaking down this way:

  1. Innovation should not be the race for the new-new thing.
    Points out that innovation only seems random and unexpected because you don’t see the intervening steps between a problem and a new solution, and that innovation is the result of many small commoditized steps. This ties into one of my earlier posts on dealing with the speed of change.
  2. The role of snowmobiles in innovation.
    Argues that ideas are a common commodity, and that the real challenge with innovation is synthesis rather than ideation.
  3. Childhood readers and the art of random.
    Argues that the key to innovation is to find interesting problems to solve, and suggests that the best approach is to be fluent in a range of domains (sectors, geographies, activities, …) to provide a broader perspective, focus on a line of inquiry to provide some structure, and build a network of people with complementary interests, providing you with the time, space and opportunity to focus on synthesis.

I expect that these are more productive if taken as a whole, rather than as individual posts.

If you look at the path I’ve charted over my career, this is the approach I’ve taken: my topic of choice is how people communicate and decide as a group, which has led me to John Boyd, Cicero, human-computer interaction, agent technology, biology (my thesis was mathematically modelling nerves in a cat), and so on.

I still have the slides, so feel free to contact me if you’re interested in my presenting all or part of this topic.

The value of information

We all know that data is valuable; without it, it would be somewhat difficult to bill customers and stay in business. Some companies have accumulated masses of data in a data warehouse which they’ve used to drive organizational efficiencies or performance improvements. But do we ever ask ourselves when the data is most valuable?

Billing is important, but if we get the data earlier then we might be able to deal with a problem—a business exception—more efficiently. Resolving a short pick, for example, before the customer notices. Or perhaps even predicting a stock-out. And in the current hyper-competitive business environment where everyone is good, having data and the insight that comes with it just a little bit sooner might be enough to give us an edge.

A good friend of mine often talks about the value of information in a meter. This makes more sense when you know that he’s a utility/energy guru who’s up to his elbows in the U.S. smart metering rollout. Information is a useful thing when you’re putting together systems to manage distributed networks of assets worth billions of dollars. While the data will still be used to drive billing in the end, the sooner we receive the data the more we can do with it.

One of the factors driving the configuration of smart meter networks is the potential uses for the information the meters will generate. A simple approach is to view smart meters as a way to reduce the cost of meter reading; have meters automatically phone readings home rather than drive a truck past each customer’s premises to eyeball each meter. We might even use this reduced cost to read the meters more frequently, shrinking our billing cycle, and the revenue outstanding with it. However, the information we’re working from will still be months, or even quarters, old.

If we’re smart (and our meters have the right instrumentation) then we will know exactly which, and how many, houses have been affected by a fault. Vegetation management (tree trimming) could become proactive: by analyzing electrical noise on the power lines that the smart meters can see, we can determine where along a power line we need to trim the trees. This lets us go directly to where work needs to be done, rather than driving past every power line on a schedule—a significant cost and time saving, not to mention an opportunity to engage customers more closely and service them better.
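
To make that idea a little more concrete, here is a minimal sketch of the analysis, not a real utility system: it assumes each smart meter periodically reports a broadband line-noise reading tagged with the line segment it sits on (the names segment_id and noise_db, the window size and the threshold are all made up for illustration). Segments whose recent noise rises well above their own historical baseline become candidates for a tree-trimming visit.

```python
from collections import defaultdict
from statistics import mean, stdev

def segments_to_inspect(readings, recent_window=24, threshold_sigma=3.0):
    """readings: iterable of (segment_id, noise_db) tuples, oldest first."""
    by_segment = defaultdict(list)
    for segment_id, noise_db in readings:
        by_segment[segment_id].append(noise_db)

    flagged = []
    for segment_id, series in by_segment.items():
        baseline, recent = series[:-recent_window], series[-recent_window:]
        if len(baseline) < 2 or not recent:
            continue  # not enough history to judge this segment
        mu, sigma = mean(baseline), stdev(baseline)
        # Flag the segment if recent noise sits well above its own baseline.
        if sigma and (mean(recent) - mu) / sigma > threshold_sigma:
            flagged.append(segment_id)
    return flagged
```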

If our information is a bit younger (days or weeks rather than months) then we can use it to schedule just-in-time maintenance. The same meters can watch for power fluctuations coming out of transformers, motors and so on, looking for the tell-tale signs of imminent failure. Teams rush out and replace the asset just before it fails, rather than working to a program of scheduled maintenance (maintenance which might be causing some of the failures).
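
As an illustration only, here is a rough sketch of watching for those tell-tale signs. It assumes we get a regular series of voltage readings per transformer and simply asks whether the fluctuation in those readings is steadily growing; the window size and slope threshold are arbitrary, and no real metering API is implied.

```python
from statistics import pstdev

def needs_maintenance(voltage_readings, window=96, slope_threshold=0.01):
    """voltage_readings: list of floats, oldest first (e.g. one per 15 minutes)."""
    # Rolling measure of fluctuation: standard deviation over each window.
    fluctuation = [
        pstdev(voltage_readings[i:i + window])
        for i in range(0, len(voltage_readings) - window + 1, window)
    ]
    if len(fluctuation) < 2:
        return False  # not enough history to see a trend

    # Simple least-squares slope of the fluctuation series.
    n = len(fluctuation)
    x_mean, y_mean = (n - 1) / 2, sum(fluctuation) / n
    numerator = sum((x - x_mean) * (y - y_mean) for x, y in enumerate(fluctuation))
    denominator = sum((x - x_mean) ** 2 for x in range(n))
    slope = numerator / denominator

    # A steadily growing fluctuation is our (assumed) sign of imminent failure.
    return slope > slope_threshold
```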

When the information is only minutes old we can consider demand shaping. By turning off hot water heaters and letting them coast we can avoid spinning up more generators.

If we get to seconds or below, we can start using the information for load balancing across the network, managing faults and responding to disasters.

I think we, outside the energy industry, are missing a trick. We tend to use a narrow, operational view of the information we can derive from our IT estate. Data is either considered transactional or historical; we’re either using it in an active transaction or we’re using it to generate reports well after the event. We typically don’t consider what other uses we might put the information to if it were available in shorter time frames.

I like to think of information availability in terms of a time continuum, rather than a simple transactional/historical split. The earlier we use the information, the more potential value we can wring from it.

[Figure: The value of data decreases rapidly with age]

There’s no end of useful purposes we can turn our information to between the billing and transactional timeframes. Operational excellence and business intelligence allow us to tune business processes to follow monthly or seasonal cycles. Sales and logistics are tuned on a weekly basis to adjust for the dynamics of the current holiday. Days-old information would allow us to respond in days, calling a client when we haven’t received their regular order (a non-event). Operations can use hours-old information for capacity planning, watching for something trending in the wrong direction and responding before everything falls over.
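
By way of example, a small sketch of spotting that non-event: a regular customer whose usual order hasn’t turned up. It assumes we hold nothing more than a list of order dates per customer; the tolerance factor and the function name are illustrative.

```python
from datetime import date
from statistics import mean

def overdue_customers(order_history, today, tolerance=1.5):
    """order_history: dict of customer -> sorted list of order dates."""
    overdue = []
    for customer, dates in order_history.items():
        if len(dates) < 3:
            continue  # not enough history to call anything "regular"
        # Typical gap between this customer's orders.
        gaps = [(later - earlier).days for earlier, later in zip(dates, dates[1:])]
        typical_gap = mean(gaps)
        # Flag the customer if the current gap is well beyond their usual rhythm.
        if (today - dates[-1]).days > tolerance * typical_gap:
            overdue.append(customer)
    return overdue

# e.g. overdue_customers({"ACME": [date(2009, 5, 1), date(2009, 5, 8), date(2009, 5, 15)]},
#                        today=date(2009, 5, 30))  -> ["ACME"]
```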

If we can use trending data—predicting stock-outs and watching trends in real time—then we can identify opportunities or head off business exceptions before they become exceptional. BAM (business activity monitoring) and real-time data warehouses take on new meaning when viewed in this light.
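
For instance, a hypothetical sketch of predicting a stock-out from trending data: project the recent consumption rate forward and flag any item that runs dry before its next scheduled replenishment. All names and inputs here are invented for the example.

```python
def predicted_stockouts(stock_on_hand, recent_daily_usage, days_to_replenishment):
    """All three arguments are dicts keyed by SKU."""
    at_risk = []
    for sku, on_hand in stock_on_hand.items():
        usage = recent_daily_usage.get(sku, 0.0)
        if usage <= 0:
            continue  # no recent movement, nothing to project
        days_of_cover = on_hand / usage
        # Flag the SKU if cover runs out before the next delivery arrives.
        if days_of_cover < days_to_replenishment.get(sku, 0):
            at_risk.append((sku, round(days_of_cover, 1)))
    return at_risk
```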

In a world where we are all good, being smart about the information we can harvest from our business environment (both inside and outside our organization) has the potential to make us exceptional.

Update: Andy Mulholland has a nice build on this idea over at Capgemini’s CTO blog: Have we really understood what Business Intelligence means?