Tag Archives: Andy Mulholland

Think “in the market,” not “go to market”

A friend of mine{{1}} made an astute comment the other day.

We need to think about “in the market” models, rather than “go to market” models.

[[1]]Andy Mulholland @ Capgemini[[1]]

I think this nicely captures the shift we’re seeing in the market; businesses are moving away from offering products which (hopefully) will sell, and adopting models founded on successful long-term relationships. This is true for both business-to-consumer and business-to-business relationships, as our success increasingly depends on the success of the community we are part of, and on the problems we solve for (our role in) that community.

For a long time we’ve sought that new widget we might offer to the market: the new candy bar everyone wants. It’s the old journey of:

  • find a need,
  • fulfil the need.

Our business models have been built around giving someone something they want, and making a margin on the way through. Sometimes our customers didn’t know that they had the need until we, or their peer group, pointed it out to them, but we were, nevertheless, fulfilling a need.

The last few decades have seen a more sophisticated version of this emerge:

Give them the razor and sell the razor blades{{2}}.

[[2]]Giving away the razor, selling the blades @ Interesting thing of the day[[2]]

which has the added advantage of fulfilling a recurring need. Companies such as HP have made good use of this, more-or-less giving away the printers while pricing printer ink so that it is one of the most expensive substances on the planet (per gram).

Since then, companies (both B2C and B2B) have been working hard to reach customers earlier and earlier in the buying process. Rather than simply responding, after a customer has identified a need, along with the rest of the pack, they want to engage the customer and help the customer shape their need in a way that provides the company with an advantage. A great example of this is the airlines that enable you to buy a short holiday somewhere warm rather than a return trip to some specified destination. The customer gets some help shaping their need (a holiday), while the company has the opportunity to shape the need in a way that favours their products and services (a holiday somewhere that the airline flies to).

The most recent shift has been to flip this approach on its head. Rather than aligning themselves with the needs they fulfil, some companies are starting to align themselves with the problems they solve. Needs, after all, just represent potential solutions to a problem.

Nike is an interesting case study. Back in the day Nike was a (marketing-driven) sports shoe company. If you needed shoes, then they had shoes. Around 2006–2008 Nike started developing a range of complementary products – web sites, sensors integrated into clothing, etc. – and began positioning the company as providing excellence in running, rather than simply fulfilling a need. The company grew 27% in two years as a result.

Rolls-Royce (who I’ve written about before{{3}}) is another good example, this time in a business-to-business setting. They shifted from the need (jet engines) to the problem (moving the plane) with huge success.

[[3]]What I like about jet engines @ PEG[[3]]

While these companies still have product and service catalogues, what’s interesting is the diversity of their catalogues. Rather than structuring their catalogue around an internal capability (their ability to design and manufacture a shoe or jet engine), the focus is on their role in the market and the capabilities required to support this role.

As Andy said, they have an “in the market” model, rather than a “go to market” model.

Danger Will Robinson!

Ack! The scorecard’s gone red!

Andy Mulholland has a nice post over at the Capgemini CTO blog, which points out that we have a strange aversion to the colour red. Having red on your balanced scorecard is not necessarily a bad thing, as it tells you something that you didn’t know before. Insisting on managers delivering a completely green scorecard is just throwing good information away.

Unfortunately something’s wrong with Capgemini’s blogging platform, and it won’t let me post a comment. Go and read the post, and then you can find my comment below.

Economists have a (rather old) saying: “if you don’t fail occasionally, then you’re not optimising (enough)”. We need to consider red squares on the board to be opportunities, just as much as they might be problems. Red just represents “something happened that we didn’t expect”. This might be bad (something broke), or it might be good (an opportunity).

Given the rapid pace of change today, and the high incidence of the unexpected, managing all the red out of your business instantly turns you into a dinosaur.

The IT department we have today is not the IT department we’ll need tomorrow

The IT departments many of us work in today (either as an employee or consultant) are often the result of thirty or more years of diligent labour. These departments are designed, optimised even, to create IT estates populated with large, expensive applications. Unfortunately these departments are also looking a lot like dinosaurs: large, slow and altogether unsuited for the new normal. The challenge is to reconfigure our departments, transforming them from asset management functions into business (or business-technology) optimisation engines. This transformation should be a keen interest for all of us, as it’s going to drive a dramatic change in staffing profiles which will, in turn, affect our own jobs in the not so distant future.

Delivering large IT solutions is a tricky business. They’re big. They’re expensive. And the projects to create them go off the rails more often than we’d like to admit. IT departments have been built to minimise the risks associated with delivering and operating these applications. This means governance, and usually quite a lot of it. Departments which started off as small scale engineering functions soon picked up an administrative layer responsible for the mechanics of governance.

More recently we’ve been confronted with the challenge of managing the dependencies and interactions between IT applications. Initiatives like straight-through processing require us to take a holistic, rather than a pieces-parts, approach, and we’re all dealing with the problem of having one of each application or middleware product, as well as a few we brewed in the back room ourselves. Planning the operation and evolution of the IT estate became more important, and we picked up an enterprise architecture capability to manage the evolution of our IT estate.

It’s common to visualise these various departmental functions and roles as a triangle (or a pyramid, if you prefer). At the bottom we have engineering: the developers and other technical personnel who do the actual work to build and maintain our applications. Next layer up is governance, the project and operational administrators who schedule the work and check that it’s done to spec. Second from the top are the planners, the architects responsible for shaping the work to be done as well as acting as design authority. Capping off the triangle (or pyramid) is the IT leadership team who decide what should be done.

The departmental skills triangle

While specific techniques and technologies might come and go, the overall composition of the triangle has remained the same. From the sixties and seventies through to even quite recently, we’ve staffed our IT departments with many technical doers, fewer administrators, a smaller planning team, and a small IT leadership group. The career path for most of us has been a progression from the bottom layers – when we were fresh out of school – to the highest point in the triangle that we can manage.

The emergence of off-shore and outsourcing put a spanner in the works. We all understand the rationale: migrate the more junior positions – the positions with the least direct (if any) contact with the business proper – to a cheaper country. Many companies under intense cost pressure broke the triangle in two, keeping the upper planning and decision roles, while pushing the majority of the “manage” and all of the “do” roles out of the country, or even out of the company.

Our first attempt at out-sourcing

Ignoring whether or not this drive to externalise the lower roles provided the expected savings, what it did do is break the career ladder for IT staff. Where does your next generation of senior IT personnel come from if you’ve pushed the lower ranks out of the business? Many companies found themselves with an awkward skills shortage a few years into an outsourcing / off-shore arrangement, as they were no longer able to train or promote senior personnel to replace those who were leaving through natural attrition.

The solution to this was to change how we break up the skills triangle; rather than a simple horizontal cut, we took a slice down the side. Retaining a portion of all skills in-house allows companies to provide a career path and on-the-job training for their staff.

A second, improved, go at out-sourcing

Many companies have tweaked this model, adding a bulge in the middle to provide a large enough resource pool to manage both internal projects, as well as those run by out-sourced and off-shore resources.

Factoring in the effort required to manage out-sourced projects

This model is now common in a lot of large companies, and it has served us well. However, the world has a funny habit of changing just when you’ve got everything working smoothly.

The recent global financial crisis has fundamentally changed the business landscape. We are experiencing not merely another turn of the business cycle, but a restructuring of the economic order. Many are even talking about the emergence of a new normal. The impact this will have on how we run our businesses (and our IT departments) is still being discussed, but we can see the outline of this impact already.

Companies are becoming more focused, while leaning more heavily on partners and services companies (BPO, out-sourcers, consultants, and so on) to cover those areas of the business they don’t want to focus on. We can see this from the global companies who have effectively moved to a franchise model, through to the small end of town where startups are using on-line services such as Amazon S3, rather than building internal capabilities. While this trend might have initially started as a cost saving, most of the benefit is in management time saved, which can then be used to focus on more important issues. We’re all finding that the limiting factor in our business is management time, so being able to hand off the management of less important tasks can help provide that edge you need.

We’re also seeing faster business change: what used to take years now takes months, or even weeks. The constant value-chain optimisation we’ve been working on since the 70s has finally culminated in product and regulatory life-cycles that change faster than we can keep up with. Nowhere is this more evident than in the regulated industries (finance, utilities …), where updates in government regulation have changed from a generational to a quarterly occurrence as governments attempt to use regulation change to steer the economic boat.

Money is also becoming (or has become) more expensive, causing companies and deals to operate with less leverage. This means that there is less capital available for major projects, pushing companies to favour renting over buying, as well as creating a preference for smaller, incremental change over the major business transformation of the past.

And finally, companies are starting to take a truly global outlook and operate as one cohesive business across the globe, rather than as a family of cloned businesses which operate more-or-less independently in each region.

We can draw a few general conclusions on the potential impact on IT departments of these trends.

  • The increased reliance on partners, the broader partner ecosystem this implies, and an increasingly global approach to business will create more complex operational environments, increasing the importance of planning the IT estate and steering a company’s IT in the right direction.
  • The need to reduce leverage, and free up working capital, is pushing companies toward BPO and SaaS solutions, rather than traditional on-premises solutions, where the solution provider is paid per-seat, or might even only be paid a success fee.
  • The need for rapid project turn-around is pushing us toward running large portfolios of small projects, rather than a small number of large projects.
  • A lot of the admin work we used to do is now baked into web delivered solutions (BaseCamp et al).

This will trigger us to break up the skills triangle in a different way.

A skills/roles triangle for the new normal

While we’ll still take a slice down the side of the triangle, the bulge will move to the ends of the slice, giving it a skinny waist. The more complex operational environment means that we need to beef up planning (though we don’t want to get all dogmatic about our approach, as existing asset-centric IT planning methodologies won’t work in the new normal). A shift to large numbers of small projects (where the projects are potentially more technically complex) means that we’ll beef up our internal delivery capability, providing team leads with more autonomy. The move to smaller projects also means that we can reduce our administration and governance overhead.

We’ll replace some skills with automated (SaaS) solutions. Tools like BaseCamp will enable us to devolve responsibility for reporting and management to the team at the coalface. It will also reduce the need to develop and maintain infrastructure. Cloud technology is a good example of this, as it takes a lot of the tacit knowledge required to manage a fleet of servers and bakes it into software, placing it in the hands of the developers. Rumor has it that a cloud admin can support 10,000 servers to a more traditional admin’s 500.

And finally, our suppliers act as a layer through the middle, a flex resource for us to call on. They can also provide us with a broader, cross-industry view, of how to best leverage technology.

This thinning out of the middle ranks is part of a trend we’re seeing elsewhere. Web2.0/E2.0/et al are causing organisations to remove knowledge workers – the traditional white-collar middle layers of the organisation – leaving companies with a strategy/leadership group and task workers.

Update: Andy Mulholland has an interesting build on this post over at the Capgemini CTO blog. I particularly like the Holm service launched by Ford and Microsoft, a service that it’s hard to imagine a traditional IT department fielding.

Working from the outside in

We’re drowning in a sea of data and ideas, with huge volumes of untapped information available both inside and outside our organization. There is so much information at our disposal that it’s hard to discern Arthur from Martha, let alone optimize the data set we’re using. How can we make sense of the chaos around us? How can we find the useful signals which will drive us to the next level of business performance, from amongst all this noise?

I’ve spent some time recently thinking about how the decisions our knowledge workers make in planning and managing business exceptions can have a greater impact on our business performance than the logic reified in the applications themselves. And how the quality of information we feed into their decision-making processes can have an even bigger impact, as the data’s impact is effectively amplified by the decision-making process. Not all data is of equal value and, as is often said, if you put rubbish in then you get rubbish out.

Traditional Business Intelligence (BI) tackles this problem by enabling us to mine for correlations in the data tucked away in our data warehouse. These correlations provide us with signals to help drive better decisions. Managing stock levels based on historical trends (Christmas rush, BBQs in summer …) is good, but connecting these trends to local demographic shifts is better.

Unfortunately this approach is inherently limited. No matter how powerful your analytical tools, you can only find correlations within and between the data sets you have in the data warehouse, and this is only a small subset of the total data available to us. We can load additional data sets into the warehouse (such as demographic data bought from a research firm), but in a world awash with (potentially useful) data, the real challenge is deciding which data sets to load, not finding the correlations once they are loaded.

What we really need is a tool to help scan across all available data sets and find the data which will provide the best signals to drive the outcome we’re looking for. An outside-in approach, working from the outcome we want to the data we need, rather than an inside-out approach, working from the data we have to the outcomes it might support. This will provide us with a repeatable method, a system, for finding the signals needed to drive us to the next level of performance, rather than the creative, hit-and-miss approach we currently use. Or, in geekier terms, a methodology which enables us to proactively manage our information portfolio and derive the greatest value from it.

I was doodling on the tram the other day, playing with the figure I created for the Inside vs. Outside post, when I had a thought. The figure was created as a heat map showing how the value of information is modulated by time (new vs. old) and distance (inside vs. outside). What if we used it the other way around? (Kind of obvious in hindsight, I know, but these things usually are.) We might use the figure to map from the type of outcome we’re trying to achieve back to the signals required to drive us to that outcome.

Time and distance drive the value of information

This addresses an interesting comment (in email) by a U.K. colleague of mine. (Jon, stand up and be counted.) As Andy Mulholland pointed out, the upper right represents weak, confusing signals, while the lower left represents strong, coherent signals. Being a delivery guy, Jon’s first thought was how to manage the dangers in excessively focusing on the upper right corner of the figure. Sweeping a plane’s wings forward increases its maneuverability, but at the cost of decreasing its stability. Relying too heavily on external, early signals can, in a similar fashion, push an organization into a danger zone. If we want to use these types of signals to drive crucial business decisions, then we need to understand the tipping point and balance the risks.

My tram-doodle was a simple thing, converting a heat map to a mud map. For a given business decision, such as planning tomorrow’s stock levels for a FMCG category, we can outline the required performance envelope on the figure. This outline shows us the sort of signals we should be looking for (inside the outline good, outside bad), while the shape of the outline provides us with an understanding of (and a way of balancing) the overall maneuverability and stability of the outcome the signals will support. More external predictive scope in the outline (i.e. more area inside the outline in the upper-right quadrant) will provide a more responsive outcome, but at the cost of less stability. Increasing internal scope will provide a more stable outcome, but at the cost of responsiveness. Less stability might translate to more (potentially unnecessary) logistics movements, while more stability would represent missed sales opportunities. (This all creates a little deja vu, with a strong feeling of computing Q values for non-linear control theory back in university, so I’ve started formalizing how to create and measure these outlines, as well as how to determine the relative weights of signals in each area of the map, but that’s another blog post.)

An information performance mud map
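As a purely illustrative sketch of the idea (the region names, weights and numbers below are my own placeholders, not the formalisation mentioned above), one could score an outline by how much of its area falls in each region of the mud map, and read a rough responsiveness/stability balance off the result:

```python
# Toy sketch only: score a performance outline by the share of its area that
# falls in each region of the time/distance mud map. Region names and the
# responsiveness/stability weights are invented placeholders.

REGIONS = {
    "internal-historical": {"responsiveness": 0.1, "stability": 0.9},
    "internal-reactive":   {"responsiveness": 0.4, "stability": 0.7},
    "external-reactive":   {"responsiveness": 0.7, "stability": 0.4},
    "external-predictive": {"responsiveness": 0.9, "stability": 0.1},
}

def score_outline(area_share: dict) -> dict:
    """area_share maps region name -> fraction of the outline's area (sums to 1)."""
    return {
        metric: sum(share * REGIONS[region][metric] for region, share in area_share.items())
        for metric in ("responsiveness", "stability")
    }

# e.g. an outline for tomorrow's FMCG stock levels, weighted toward external signals
print(score_outline({
    "internal-historical": 0.3,
    "internal-reactive":   0.2,
    "external-reactive":   0.3,
    "external-predictive": 0.2,
}))
```

Stretching the outline toward the external-predictive region pushes the responsiveness score up and the stability score down, which is exactly the trade-off the doodle is meant to expose.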

Given a performance outline we can go spelunking for signals which fit inside the outline.

Luckily the mud map provides us with guidance on where to look. An internal-historical signal is, by definition, driven by historical data generated inside the organization. Past till data? An external-reactive signal is, by definition, external and reactive. A short term (i.e. tomorrow’s) weather forecast, perhaps? Casting our net as widely as possible, we can gather all the signals which have the potential to drive us toward the desired outcome.

Next, we balance the information portfolio for this decision, identifying the minimum set of signals required to drive the decision. We can do this by grouping the signals by type (internal-historical, …) and then charting them against cost and value. Cost is the acquisition cost, and might represent a commercial transaction (buying access to another organization’s near-term weather forecast), the development and consulting effort required to create the data set (forming your own weather forecasting function), or a combination of the two, heavily influenced by an architectural view of the solution (as Rod outlined). Value is a measure of the potency and quality of the signal, which will be determined by existing BI analytics methodologies.

Plotting value against cost on a new chart creates a handy tool for finding the data sets to use. We want to pick from the lower right – high value but low cost.

An information mud map
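To make the selection step concrete, here is a minimal sketch (the signal names, costs and values are made up for illustration) of grouping candidate signals and then favouring the high-value, low-cost corner of the chart:

```python
# Toy example: pick the signals that give the most value for the least
# acquisition cost. All names, costs and values are illustrative only.
from dataclasses import dataclass

@dataclass
class Signal:
    name: str
    kind: str     # e.g. "internal-historical", "external-reactive"
    cost: float   # acquisition cost: licence fees, build/consulting effort, ...
    value: float  # potency/quality of the signal, from existing BI analytics

candidates = [
    Signal("past till data",            "internal-historical", cost=1.0, value=5.0),
    Signal("short-term weather feed",   "external-reactive",   cost=3.0, value=7.0),
    Signal("in-house weather bureau",   "external-predictive", cost=9.0, value=8.0),
    Signal("demographic research data", "external-historical", cost=4.0, value=3.0),
]

def pick_portfolio(signals, budget):
    """Greedily take the best value-per-cost signals until the budget is spent."""
    portfolio, spent = [], 0.0
    for s in sorted(signals, key=lambda s: s.value / s.cost, reverse=True):
        if spent + s.cost <= budget:
            portfolio.append(s)
            spent += s.cost
    return portfolio

for s in pick_portfolio(candidates, budget=8.0):
    print(f"{s.name} ({s.kind}): value {s.value}, cost {s.cost}")
```

A real portfolio exercise would also weigh how the signals overlap, but the value-per-cost ranking captures the spirit of the chart.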

It’s interesting to tie this back to the Tesco example. Global warming is making the weather more variable, resulting in unseasonable hot and cold spells. This was, in turn, driving short-term consumer demand in directions not predicted by existing planning models. These changes in demand represented cost, in the form of stock left on the shelves past its use-by date, or missed opportunities, by not being able to service the demand when and where it arose.

The solution was to expand the information footprint, pulling in more predictive signals from outside the business: changing the outline on the mud map to improve closed-loop performance. The decision to create an in-house weather bureau represents a straightforward cost-value trade-off in delivering an operational solution.

These two tools provide us with an interesting approach to tackling a number of challenges I’m seeing inside companies today. We’re a lot more externally driven now than we were even just a few years ago. The challenge is to identify customer problems we can solve and tie them back to what our organization does, rather than trying to conceive offerings in isolation and push them out into the market. These tools enable us to sketch the customer challenges (the decisions our customers need to make) and map them back to the portfolio of signals that we can (or might like to) provide to them. It’s outcome-centric, rather than asset-centric, which provides us with more freedom to be creative in how we approach the market, and has the potential to foster a more intimate approach to serving customer demand.

Inside vs. Outside

As Andy Mulholland pointed out in a recent post, all too often we manage our businesses by looking out the rear window to see where we’ve been, rather than looking forward to see where we’re going. How we use information to drive informed business decisions has a significant impact on our competitiveness.

I’ve made the point previously (which Andy built on) that not all information is of equal value. Success in today’s rapidly changing and uncertain business environment rests on our ability to take timely, appropriate and decisive action in response to new insights. Execution speed or organizational intelligence are not enough on their own: we need an intimate connection to the environment we operate in. Simply collecting more historical data will not solve the problem. If we want to look out the front window and see where we’re going, then we need to consider external market information, and not just internal historical information, or predictions derived from this information.

A little while ago I wrote about the value of information. My main point was that we tend to think of most information in one of two modes—either transactionally, with the information part of current business operations; or historically, when the information represents past business performance—whereas it’s more productive to think of an information-age continuum.

The value of information

Andy Mulholland posted an interesting build on this idea on the Capgemini CTO blog, adding the idea that information from our external environment provides mixed and weak signals, while internal, historical information provides focused and strong signals.

The value of information and internal vs. external drivers

Andy’s major point was that traditional approaches to Business Intelligence (BI) focus on these strong, historical signals, which is much like driving a car by looking out the back window. While this works in a (relatively) unchanging environment (if the road was curving right, then keep turning right), it’s less useful in a rapidly changing environment as we won’t see the unexpected speed bump until we hit it. As Andy commented:

Unfortunately stability and lack of change are two elements that are conspicuously lacking in the global markets of today. Added to which, social and technology changes are creating new ideas, waves, and markets – almost overnight in some cases. These are the ‘opportunities’ to achieve ‘stretch targets’, or even to adjust positioning and the current business plan and budget. But the information is difficult to understand and use, as it is comprised of ‘mixed and weak signals’. As an example, we can look to what signals did the rise of the iPod and iTunes send to the music industry. There were definite signals in the market that change was occurring, but the BI of the music industry was monitoring its sales of CDs and didn’t react until these were impacted, by which point it was probably too late. Too late meaning the market had chosen to change and the new arrival had the strength to fight off the late actions of the previous established players.

We’ve become quite sophisticated at looking out the back window to manage moving forward. A whole class of enterprise applications, Enterprise Performance Management (EPM), has been created to harvest and analyze this data, aligning it with enterprise strategies and targets. With our own quants, we can create sophisticated models of our business, market, competitors and clients to predict where they’ll go next.

Robert K. Merton: Father of Quants

Despite EPM’s impressive theories and product sheets, it cannot, on its own, help us leverage these new market opportunities. These tools simply cannot predict where the speed bumps in the market will be, no matter how sophisticated they are.

There’s a simple thought experiment economists use to show the inherent limitations in using mathematical models to simulate the market. (A topical subject given the recent global financial crisis.) Imagine, for a moment, that you have a perfect model of the market; you can predict when and where the market will move with startling accuracy. However, as Sun likes to point out, statistically the smartest people in your field do not work for your company; the resources in the general market are too big when compared to your company. If you have a perfect model, then you must assume that your competitors also have a perfect model. Assuming you’ll both use these models as triggers for action, you’ll both act earlier, and possibly in the same way, changing the state of the market. The fact that you’ve invented a tool that predicts the speed bumps causes the speed bumps to move. Scary!

Enterprise Performance Management is firmly in the grasp of the law of diminishing returns. Once you have the critical mass of data required to create a reasonable prediction, collecting additional data will have a negligible impact on the quality of this prediction. The harder your quants work, the more sophisticated your models, the larger the volume of data you collect and trawl, the lower the incremental impact will be on your business.

Andy’s point is a big one. It’s not possible to accurately predict future market disruptions based on historical data alone. Real insight depends on data sourced from outside the organization, not inside. This is not to diminish the important role BI and EPM play in modern business management, but to highlight that we need to look outside the organization if we are to deliver the next step change in performance.

Zara, a fashion retailer, is an interesting example of this. Rather than attempt to predict or create demand on a seasonal fashion cycle, and deliver product appropriately (an internally driven approach), Zara tracks customer preferences and trends as they happen in the stores and tries to deliver an appropriate design as rapidly as possible (an externally driven approach). This approach has made Zara the most profitable arm of Inditex, a holding company of eight retail brands, and one of the biggest success stories in Spanish business. You could say that Quants are out, and Blink is in.

At this point we can return to my original goal: creating a simple graphic that captures and communicates what drives the value of information. Building on both my own and Andy’s ideas we can create a new chart. This chart needs to capture how the value of information is affected by age, as well as by whether it is externally or internally sourced. Using these two factors as dimensions, we can create a heat map capturing information value, as shown below.

Time and distance drive the value of information

Vertically we have the divide between inside and outside: from information created internally by our processes; through information at the surface of our organization, sourced from current customers and partners; to information sourced from the general market and environment outside the organization. Horizontally we have information age, from information we obtain proactively (we think that customer might want a product), through reactively (the customer has indicated that they want a product) to historical (we sold a product to a customer). Highest value, in the top right corner, represents the external market disruption that we can tap into. Lowest value (though still important) represents internal transactional processes.
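For those who prefer code to doodles, here is a small sketch of the same two-dimensional view (the numbers are placeholders that simply grow toward the newer, more external corner of the map):

```python
# Toy rendering of the heat map: relative information value by age (columns)
# and distance from the organisation (rows). The numbers are placeholders.
ages = ["historical", "reactive", "proactive"]                 # older -> newer
distances = ["internal", "organisational edge", "external"]    # inside -> outside

print(f"{'':22}" + "".join(f"{a:>14}" for a in ages))
for i, d in enumerate(distances):
    # placeholder values grow as information gets newer and more external
    row = "".join(f"{(i + j + 1) * 10:>14}" for j, _ in enumerate(ages))
    print(f"{d:22}" + row)
```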

As an acid test, I’ve plotted some of the case studies mentioned in the conversation so far on a copy of this diagram.

  • The maintenance story I used in my original post. Internal, historical data lets us do predictive maintenance on equipment, while external data enables us to maintain just before (detected) failure. Note: This also applies to tasks like vegetation management (trimming trees to avoid power lines), as real-time data can be used to determine where vegetation is a problem, rather than simply eyeballing the entire power network.
  • The Walkman and iPod examples from Andy’s follow-up post. Check out Snake Coffee for a discussion on how information drove the evolution of the Walkman.
  • The Walmart Telxon story, using floor staff to capture word of mouth sales.
  • The example from my follow-up (of Andy’s follow-up), of Albert Heijn (a Dutch Supermarket group) lifting the pricing of ice cream and certain drinks when the temperature goes above 25° C.
  • Netflix vs. (traditional) Blockbuster (via Nigel Walsh in the comments), where Netflix helps you maintain a list of films you would like to see, rather than a more traditional brick-and-mortar store which reacts to your desire to see a film.

Send me any examples that you know of (or think of) and I’ll add them to the acid test chart.

An acid test for our chart

An interesting exercise left to the reader is to map Peter Drucker’s Seven Drivers for change onto the same figure.

Update: A discussion with a different take on the value of information is happening over at the Information Architects.

Update: The latest instalment in this thread is Working from the outside in.

Update: MIT Sloan Management Review weighs in with an interesting article on How to make sense of weak signals.

The value of information

We all know that data is valuable; without it, it would be somewhat difficult to bill customers and stay in business. Some companies have accumulated masses of data in a data warehouse which they’ve used to drive organizational efficiencies or performance improvements. But do we ever ask ourselves when the data is most valuable?

Billing is important, but if we get the data earlier then we might be able to deal with a problem—a business exception—more efficiently. Resolving a short pick, for example, before the customer notices. Or perhaps even predicting a stock-out. And in the current hyper-competitive business environment where everyone is good, having data and the insight that comes with it just a little bit sooner might be enough to give us an edge.

A good friend of mine often talks about the value of information in a meter. This makes more sense when you know that he’s a utility/energy guru who’s up to his elbows in the U.S. smart metering roll out. Information is a useful thing when you’re putting together systems to manage distributed networks of assets worth billions of dollars. While the data will still be used to drive billing in the end, the sooner we receive the data the more we can do with it.

One of the factors driving the configuration of smart meter networks is the potential uses for the information the meters will generate. A simple approach is to view smart meters as a way to reduce the cost of meter reading: have meters automatically phone readings home rather than drive past each customer’s premises in a truck and eyeball each meter. We might even use this reduced cost to read the meters more frequently, shrinking our billing cycle, and the revenue outstanding with it. However, the information we’re working from will still be months, or even quarters, old.

If we’re smart (and our meter has the right instrumentation) then we will know exactly which and how many houses have been affected by a fault. Vegetation management (tree trimming) could become proactive: by analyzing electrical noise on the power lines that the smart meters can see, we can determine where along a power line we need to trim the trees. This lets us go directly to where work needs to be done, rather than driving past every power line on a schedule—a significant cost and time saving, not to mention an opportunity to engage customers more closely and service them better.

If our information is a bit younger (days or weeks rather than months) then we can use it to schedule just-in-time maintenance. The same meters can watch for power fluctuations coming out of transformers, motors and so on, looking for the tell-tale signs of imminent failure. Teams rush out and replace the asset just before it fails, rather than working to a program of scheduled maintenance (maintenance which might be causing some of the failures).

When the information is only minutes old we can consider demand shaping. By turning off hot water heaters and letting them coast we can avoid spinning up more generators.

If we get to seconds or below, we can start using the information for load balancing across the network, managing faults and responding to disasters.

I think we, outside the energy industry, are missing a trick. We tend to use a narrow, operational view of the information we can derive from our IT estate. Data is either considered transactional or historical; we’re either using it in an active transaction or we’re using it to generate reports well after the event. We typically don’t consider what other uses we might put the information to if it were available in shorter time frames.

I like to think of information availability in terms of a time continuum, rather than a simple transactional/historical split. The earlier we use the information, the more potential value we can wring from it.

The value of data decreases rapidly with age
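As a back-of-the-envelope illustration only (the starting value and half-life below are invented, not measured), the continuum can be pictured as a value that decays the longer the data sits unused:

```python
# Toy model of the continuum: the usable value of a piece of data decaying
# with age. The initial value and half-life are invented for illustration.
import math

def data_value(age_hours: float, initial: float = 100.0, half_life_hours: float = 24.0) -> float:
    """Exponential decay: the value halves every half_life_hours."""
    return initial * math.exp(-math.log(2) * age_hours / half_life_hours)

for label, hours in [("seconds", 1 / 3600), ("minutes", 5 / 60), ("hours", 6),
                     ("days", 3 * 24), ("weeks", 2 * 7 * 24), ("months", 2 * 30 * 24)]:
    print(f"{label:>8} old: {data_value(hours):6.2f}")
```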

There’s no end of useful purposes we can turn our information to between the billing and transactional timeframes. Operational excellence and business intelligence allow us to tune business processes to follow monthly or seasonal cycles. Sales and logistics are tuned on a weekly basis to adjust for the dynamics of the current holiday. Days-old information would allow us to respond in days, calling a client when we haven’t received their regular order (a non-event). Operations can use hours-old information for capacity planning, watching for something trending in the wrong direction and responding before everything falls over.

If we can use trending data—predicting stock-outs and watching trends in real time—then we can identify opportunities or head off business exceptions before they become exceptional. BAM (business activity monitoring) and real-time data warehouses take on new meaning when viewed in this light.

In a world where we are all good, being smart about the information we can harvest from our business environment (both inside and outside our organization) has the potential to make us exceptional.

Update: Andy Mulholland has a nice build on this idea over at Capgemini’s CTO blog: Have we really understood what Business Intelligence means?