Tag Archives: Business Intelligence

The rules of enterprise IT

As I've pointed out before (possibly as I'm quite fond of games{{1}}) the game of enterprise IT has a long and proud history. I've also pointed out that the rules of this game need to change if enterprise IT — as we know it — is to remain relevant in the future{{2}}. This has triggered a few interesting conversations at the pub on just what the old rules of IT are.

[[1]]Capitalise: A game for the whole company to play![[1]]
[[2]]People don’t like change. (Or do they?)[[2]]

Enterprise IT, as we know it today, is an asset management business, the bastard son of Henry Ford’s moving production line. Enterprise IT takes the raw material of business processes and technology and turns them into automated solutions. From those first card tabulators through to today’s enterprise applications, the focus has been on delivering large IT solutions into the business.

The rules of enterprise IT are therefore the rules of business operations. After a fair amount of coffee and beer with friends, the following 4 ± 2 rules seem to be a fair minimum set (in no particular order).

Keep the lights on. Or, put more gently, the ticket to the strategy table is a smooth running business. Business has become totally reliant on IT, while at the same time IT is still seen as something of a black art run by a collection of unapproachable high priests. The board might complain about the cost and pain of an ERP upgrade, but they know they have to find the money if they want to successfully close the books at the end of the financial year. While this means that the money will usually be found, it also means that the number one rule of being a CIO is to keep the transactions flowing. Orders must be taken, products shipped (or services provided), invoices sent and cash collected. IT is an operational essential, and any CIO who can’t be trusted to keep the lights on won’t even have time to warm up their seat.

Save money. IT started as a cost saving exercise: automatic tabulation machines to replace rooms full of people shuffling papers, networks to eliminate the need to truck paper from one place to another. From those first few systems through to today's modern enterprise solutions, applications have been seen as a tool to save time and money. Understand what the business process or problem is, and then support the heavy information lifting with technology to drive cost savings and reduce cycle time. Business cases are driven by ideas like ROI, capturing these savings over time. Keep pushing the bottom line down. These incremental savings can add up to significant changes, such as Dell's make-to-order solution{{3}} which enabled the company to operate with negative working capital (i.e. they took your cash before they needed to pay their suppliers), but the overall approach is still based on using IT to drive cost savings through the automation of predefined business processes.

[[3]]Dell’s make to order solution leaves competitors in the dust.[[3]]

Build what you need. When applications are rare, building them is an engineering challenge. You can't just go to the store and buy the parts you need; you have to create a lot of the parts yourself in your own machine shop. I remember the large teams (compared to today) from the start of my career. A CORBA project didn't just need a team to implement the business logic, it needed a large infrastructure team (security guy, transaction guy …) as well. Many organisations (and their strong desire to build – or at least heavily customise – solutions) still work under this assumption. IT was the department that marshalled large engineering teams to deliver the industrial-grade solutions which form the backbone of a business.

Ferrero Rocher
Crunch on the outside, soft and chewy in the middle.

Keep the outside outside. It’s common to have what is called a Ferrero Rocher{{4}} approach to IT: crunchy on the outside while soft and chewy in the middle. This applies to both security and data management. We visualise a strong distinction between inside and outside the enterprise. Inside we have our data, processes and people. Outside is everyone else (including our customers and partners). We harvest data from our operations and inject it into business intelligence solutions to create insight (and drive operational savings). We trust whatever’s inside our four walls, while deploying significant security measures to keep the evil outside.

[[4]]Ferrero[[4]]

It's a separate question of whether or not these rules are still relevant in an age when business cycles are measured in weeks rather than years, and SaaS and cloud computing are emerging as the dominant modes of software delivery.

Decisions are more important than data

Names and categories are important. Just look at the challenges faced by the archeology community as DNA evidence forces history to be rewritten when it breaks old understandings, changing how we think and feel in the process. Just who invaded who? Or was related to who?

We have the same problem with (enterprise) technology: how we think about the building blocks of the IT estate has a strong influence on how we approach the problems we need to solve. Unfortunately our current taxonomy has a very functional basis, rooted as it is in the original challenge of creating the major IT assets we have today. This is a problem, as it's preventing us from taking full advantage of the technologies available to us. If we want to move forward, creating solutions that will thrive in a post-GFC world, then we need to think about enterprise IT in a different way.

Enterprise applications – the applications we often know and love (or hate) – fall into a few distinct types. A taxonomy, if you will. This taxonomy has a very functional basis, founded as it is on the challenge of delivering high performance and stable solutions into difficult operational environments. Categories tend to be focused on the technical role a group of assets have in the overall IT estate. We might quibble over the precise number of categories and their makeup, but for the purposes of this argument I’m going to go with three distinct categories (plus another one).

SABER @ American Airlines

First, there are the applications responsible for data storage and coherence: the electronic filing cabinets that replaced rooms full of clerks and accountants back in the day. From the first computerised general ledger through to CRM, their business case is a simple one of automating paper shuffling. Put the data in one place and make access quick and easy; like SABER did, which I've mentioned before.

Next are the data transformation tools: applications which take a bunch of inputs and generate an answer. This might be a plan (production plan, staffing roster, transport planning or supply chain movements …) or a figure (price, tax, overnight interest calculation). State might be stored somewhere else, but these solutions still need some serious computing power to cope with huge bursts in demand.

Third is data presentation: taking corporate information and presenting it in some form that humans can consume (though looking at my latest phone bill, there's no attempt to make the data easy to consume). This might be billing or invoicing engines, application-specific GUIs, or even portals.

We can also typically add one more category – data integration – though this is mainly the domain of data warehouses: solutions that pull together data from multiple sources to create a summary view. This category of solutions wouldn't exist if our operational data management solutions could cope with the additional reporting load. This is also the category for all those XLS spreadsheets that spread through business like a virus, as high integration costs or more important projects prevent us from supporting user requests.

A long time ago we'd bake all these layers into the one solution. SABER, I'm sure, did a bit of everything, though its main focus was data management. Client-server changed things a bit by breaking the user interface away from back-end data management, and then portals took this a step further. Planning tools (and other data transformation tools) started as modules in larger applications, eventually popping out as stand-alone solutions when they grew large enough (and complex enough) to justify their own delivery effort. Now we have separate solutions in each of these categories, and a major integration problem.

This categorisation creates a number of problems for me. First and foremost is the disconnection between what business has become, and what technology is trying to be. Back in the day when “computer” referred to someone sitting at a desk computing ballistics tables, we organised data processing in much the same way that Henry Ford organised his production line. Our current approach to technology is simply the latest step in the automation of this production line.

Computers in the past

Quite a bit has changed since then. We've reconfigured our businesses, we're reconfiguring our IT departments, and we need to reconfigure our approach to IT. Business today is really a network of actors who collaborate to make decisions, with most (if not all) of the heavy data lifting done by technology. Retail chains are trying to reduce the transaction load on their team working the tills so that they can focus on customer relationships. The focus in supply chains is on ensuring that your network of exception managers can work together to effectively manage disruptions. Even head office is focused on understanding and responding to market changes, rather than trying to optimise the business in an unchanging market.

The moving parts of business have changed. Henry Ford focused on mass: the challenge of scaling manufacturing processes to get cost down. We've moved well beyond mass, through velocity, to focus on agility. A modern business is a collection of actors collaborating and making decisions, not a set of statically defined processes backed by technology assets. Trying to force modern business practices into yesterday's IT taxonomy is the source of one of the disconnects between business and IT that we complain so much about.

There's no finer example of this than Sales and Operations Planning (S&OP). What should be a collaborative and fluid process – forward planning among a network of stakeholders – has been shoehorned into a traditional n-tier, database-driven, enterprise solution. While an S&OP solution can provide significant cost savings, many companies find it too hard to fit themselves into the solution. It's not surprising that S&OP has a reputation for being difficult to deploy and use, with many planners preferring to work around the system rather than with it.

I’ve been toying with a new taxonomy for a little while now, one that tries to reflect the decision, actor and collaboration centric nature of modern business. Rather than fit the people to the factory, which was the approach during the industrial revolution, the idea is to fit the factory to the people, which is the approach we use today post LEAN and flexible manufacturing. While it’s a work in progress, it still provides a good starting point for discussions on how we might use technology to support business in the new normal.

In no particular order…

Fusion solutions blend data and process to create a clear and coherent environment to support specific roles and decisions. The idea is to provide the right data and process, at the right time, in a format that is easy to consume and use, to drive the best possible decisions. This might involve blending internal data with externally sourced data (potentially scraped from a competitor's web site); whatever data is required. Providing a clear and consistent knowledge work environment, rather than the siloed and portaled environment we have today, will improve productivity (more time on work that matters, and less time on busy work) and efficiency (fewer mistakes).

Next, decisioning solutions automate key decisions in the enterprise. These decisions might range from mortgage approvals, through office work such as logistics exception management, to supporting knowledge workers in the field. We also need to acknowledge that decisions are often decision-making processes, requiring logic (rules) applied over a number of discrete steps (processes). This should not be seen as replacing knowledge workers; a more productive approach is to view decision automation as a way of amplifying our users' talents.
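To make that idea concrete, here's a minimal sketch of a decision expressed as rules applied over discrete steps. The rules, thresholds and field names are invented for illustration; they're not a real approval policy, and the "refer" outcomes are where the knowledge worker stays in the loop.

```python
# A minimal sketch of a decisioning solution: rules applied over discrete steps.
# All rules, thresholds and field names are hypothetical.

def assess_mortgage(application: dict) -> str:
    # Step 1: basic eligibility rule
    if application["income"] <= 0:
        return "decline"
    # Step 2: affordability rule
    if application["loan"] / application["income"] > 5:
        return "refer to underwriter"  # amplify the knowledge worker, don't replace them
    # Step 3: deposit rule
    if application["deposit"] / application["loan"] < 0.1:
        return "refer to underwriter"
    return "approve"

print(assess_mortgage({"income": 90_000, "loan": 400_000, "deposit": 60_000}))  # approve
```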

Then there are the manufacturing solutions. While we have a lot of information, some information will need to be manufactured ourselves. This might range from simple charts generated from tabular data, through to logistics plans or maintenance scheduling, or even payroll.

Information and process access solutions provide stakeholders (both people and organisations) with access to our corporate services. This is not your traditional portal or web-based GUI, as the focus will be on providing stakeholders with access wherever and whenever they need it, on whatever device they happen to be using. This might mean embedding your content into a Facebook app, rather than investing in a strategic portal infrastructure project. Or it might involve developing a payment gateway.

Finally we have asset management, responsible for managing your data as a corporate asset. This looks beyond the traditional storage and consistency requirements of existing enterprise applications to include the political dimension, accessibility (I can get at my data whenever and wherever I want) and stability (earthquakes, disaster recovery and the like).

It’s interesting to consider the sort of strategy a company might use around each of these categories. Manufacturing solutions – such as crew scheduling – are very transactional. Old data out, new data in. This makes them easily outsourced, or run as a bureau service. Asset management solutions map very well to SaaS: commoditized, simple and cost effective. Access solutions are similar to asset management.

Fusion and decisioning solutions are interesting. The complete solution is difficult to outsource. For many fusion solutions, the data and process set presented to knowledge workers will be unique and will change frequently, while decisioning solutions contain decisions which can represent our competitive advantage. On the other hand, it’s the intellectual content in these solutions, and not the platform, which makes them special. We could sell our platform to our competitors, or even use a commonly available SaaS platform, and still retain our competitive advantage, as the advantage is in the content, while our barrier to competition is the effort required to recreate the content.

This set of categories seems to map better to where we're going with enterprise IT at the moment. Consider the S&OP solution I mentioned before. Rather than construct a large, traditional, data-centric enterprise application and change our work practices to suit, we break the problem into a number of mid-sized components and focus on driving the right decisions: fusion, decisioning, manufacturing, access, and asset management. Our solution strategy becomes more nuanced, as our goal is to blend components from each category to provide planners with the right information at the right time to enable them to make the best possible decision.

After all, when the focus is on business agility, and when we're drowning in a sea of information, decisions are more important than data.

Information overload

We're drowning in information, as I've written about before, both in the context of Business Intelligence and Innovation (whatever that is). An interesting blog post by Tim Kastelle over at his Innovation Leadership Network takes the somewhat contrarian view that we have always had this information overload problem. Quoting Stowe Boyd, he points out:

I suggest we just haven't experimented enough with ways to render information in more usable ways, and once we start to do so, it will likely take 10 years (the 10,000 hour rule again) before anyone demonstrates real mastery of the techniques involved.

The problem is that our current tooling for information processing is not up to the task at hand. Unfortunately Tim, like most of us, is still trying to find the best way to manage the information load pressing down on us.

Any suggestions?


Is BI really the next big thing?

I think we're at a tipping point with BI. Yes, it makes sense that BI should be the next big thing in the new year, as many pundits are predicting, driven by the need to make sense of the massive volume of data we've accumulated. However, I doubt that BI in its current form is up to the task.

As one of the CEOs Andy Mulholland spoke to mentioned “I want to know … when I need to focus in.” The CEO’s problem is not more data, but the right data. As Andy rightfully points out in an earlier blog post, we’ve been focused on harvesting the value from our internal, manufactured data, ignoring the latent potential in our unstructured data (let alone the unstructured data we can find outside the enterprise). The challenge is not to find more data, but the right data to drive the CEO’s decision on where to focus.

It’s amazing how little data you need to make an effective decision—if you have the right data. Andrew McAfee wrote a nice blog post a few years ago (The case against the business case is the closest I can find to it), pointing out that the mass of data we pile into a conventional business case just clouds the issues, creating long cause-and-effect chains that make it hard to come to an effective decision. His solution was the one page business case: capability delivered, (rough) business requirements, solution footprint, and (rough) costing. It might be one page, but there is enough information, the right information, to make an effective decision. I’ve used his approach ever since.

Current BI seems to be approaching the horse from the wrong direction, much like Andrew's business case problem. We focus on sifting through all the information we have, trying to glean any trends and correlations which might be useful. This works at small to moderate scales, but once we reach the huge end of the scale it starts to groan under its own weight. It's the law of diminishing returns—adding more information to the mix will only have a moderate benefit compared to the effort required to integrate and process it.

A more productive method might be to use a hypothesis-driven approach. Rather than look for anything that might be interesting, why not go spelunking for specific features which we know will be interesting? The features we're looking for in the information are (almost always) there to support a decision. Why not map out that decision, similar to how we map out the requirements for a feedback loop in a control system, and identify the types of features that we need to support the decision we want to make? We can segment our data sets based on the features' gross characteristics (inside vs. outside, predictive vs. historical …) and then search in the appropriate segments for the features we need. We've broken one large problem—find correlations in one massive data set—into a series of much more manageable tasks.
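As a rough illustration of that segmentation idea, the sketch below tags each data set with its gross characteristics and only searches the segments a given decision needs before any feature hunting begins. The data set names and tags are hypothetical.

```python
# A sketch of hypothesis-driven feature hunting: tag each data set with its
# gross characteristics, then only search the segments the decision needs.
from dataclasses import dataclass

@dataclass(frozen=True)
class DataSet:
    name: str
    origin: str  # "inside" or "outside" the organisation
    age: str     # "predictive" or "historical"

CATALOGUE = [
    DataSet("till_transactions", "inside", "historical"),
    DataSet("order_pipeline", "inside", "predictive"),
    DataSet("competitor_prices", "outside", "historical"),
    DataSet("weather_forecast", "outside", "predictive"),
]

def candidate_sets(required_segments: set) -> list:
    """Narrow the search to data sets whose characteristics match the
    segments identified when mapping out the decision."""
    return [d.name for d in CATALOGUE if (d.origin, d.age) in required_segments]

# A stock-level decision might need internal history plus external predictions.
print(candidate_sets({("inside", "historical"), ("outside", "predictive")}))
# ['till_transactions', 'weather_forecast']
```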

The information arms race, the race to search through more information for that golden ticket, is just a relic of the lack of information we’ve lived with in the past. In today’s land of plenty, more is not necessarily better. Finding the right features is our real challenge.


Working from the outside in

We’re drowning in a sea of data and ideas, with huge volumes of untapped information available both inside and outside our organization. There is so much information at our disposal that it’s hard to discern Arthur from Martha, let alone optimize the data set we’re using. How can we make sense of the chaos around us? How can we find the useful signals which will drive us to the next level of business performance, from amongst all this noise?

I've spent some time recently thinking about how the decisions our knowledge workers make in planning and managing business exceptions can have a greater impact on our business performance than the logic reified in the applications themselves. And how the quality of information we feed into their decision-making processes can have an even bigger impact, as the data's impact is effectively amplified by the decision-making process. Not all data is of equal value and, as is often said, if you put rubbish in then you get rubbish out.

Traditional Business Intelligence (BI) tackles this problem by enabling us to mine for correlations in the data tucked away in our data warehouse. These correlations provide us with signals to help drive better decisions. Managing stock levels based on historical trends (Christmas rush, BBQs in summer …) is good, but connecting these trends to local demographic shifts is better.

Unfortunately this approach is inherently limited. No matter how powerful your analytical tools, you can only find correlations within and between the data sets you have in the data warehouse, and this is only a small subset of the total data available to us. We can load additional data sets into the warehouse (such as demographic data bought from a research firm), but in a world awash with (potentially useful) data, the real challenge is deciding which data sets to load, not finding the correlations once they are loaded.

What we really need is a tool to help scan across all available data sets and find the data which will provide the best signals to drive the outcome we’re looking for. An outside-in approach, working from the outcome we want to the data we need, rather than an inside-out approach, working from the data we have to the outcomes it might support. This will provide us with a repeatable method, a system, for finding the signals needed to drive us to the next level of performance, rather than the creative, hit-and-miss approach we currently use. Or, in geekier terms, a methodology which enables us to proactively manage our information portfolio and derive the greatest value from it.

I was doodling on the tram the other day, playing with the figure I created for the Inside vs. Outside post, when I had a thought. The figure was created as a heat map showing how the value of information is modulated by time (new vs. old) and distance (inside vs. outside). What if we used it the other way around? (Kind of obvious in hindsight, I know, but these things usually are.) We might use the figure to map from the type of outcome we’re trying to achieve back to the signals required to drive us to that outcome.

Time and distance drive the value of information

This addresses an interesting comment (in email) by a U.K. colleague of mine. (Jon, stand up and be counted.) As Andy Mulholland pointed out, the upper right represents weak, confusing signals, while the lower left represents strong, coherent signals. Being a delivery guy, Jon's first thought was how to manage the dangers of excessively focusing on the upper right corner of the figure. Sweeping a plane's wings forward increases its maneuverability, but at the cost of decreasing its stability. Relying too heavily on external, early signals could, in a similar fashion, push an organization into a danger zone. If we want to use these types of signals to drive crucial business decisions, then we need to understand the tipping point and balance the risks.

My tram-doodle was a simple thing, converting a heat map to a mud map. For a given business decision, such as planning tomorrow's stock levels for an FMCG category, we can outline the required performance envelope on the figure. This outline shows us the sort of signals we should be looking for (inside the outline good, outside it bad), while the shape of the outline provides us with an understanding (and a way of balancing) of the overall maneuverability and stability of the outcome the signals will support. More external predictive scope in the outline (i.e. more area inside the outline in the upper-right quadrant) will provide a more responsive outcome, but at the cost of less stability. Increasing internal scope will provide a more stable outcome, but at the cost of responsiveness. Less stability might translate to more (potentially unnecessary) logistics movements, while more stability would represent missed sales opportunities. (This all creates a little deja vu, with a strong feeling of computing Q values for non-linear control theory back in university, so I've started formalizing how to create and measure these outlines, as well as how to determine the relative weights of signals in each area of the map, but that's another blog post.)

An information performance mud map
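As a rough sketch of that responsiveness/stability balance, and assuming we can place each signal in a quadrant of the mud map and weight it, something like the following gives a first-cut read. The signal names, quadrant labels and weights are illustrative only, not the formal measure hinted at above.

```python
# First-cut read on the manoeuvrability/stability trade-off: the share of a
# decision's signal weight sitting in the external-predictive corner.
signals = [
    # (name, quadrant, weight) -- all values illustrative
    ("last_year_sales", "internal-historical", 0.5),
    ("store_till_feed", "internal-reactive", 0.2),
    ("weather_forecast", "external-predictive", 0.3),
]

total_weight = sum(weight for _, _, weight in signals)
external_predictive = sum(weight for _, quadrant, weight in signals
                          if quadrant == "external-predictive")

responsiveness = external_predictive / total_weight  # leaning towards manoeuvrability
stability = 1 - responsiveness                       # leaning towards predictability

print(f"responsiveness={responsiveness:.2f}, stability={stability:.2f}")
# responsiveness=0.30, stability=0.70
```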

Given a performance outline we can go spelunking for signals which fit inside the outline.

Luckily the mud map provides us with guidance on where to look. An internal-historical signal is, by definition, driven by historical data generated inside the organization. Past till data? An external-reactive signal is, by definition, external and reactive. A short-term (i.e. tomorrow's) weather forecast, perhaps? Casting our net as widely as possible, we can gather all the signals which have the potential to drive us toward the desired outcome.

Next, we balance the information portfolio for this decision, identifying the minimum set of signals required to drive the decision. We can do this by grouping the signals by type (internal-historical, …) and then charting them against cost and value. Cost is the acquisition cost, and might represent a commercial transaction (buying access to another organization's near-term weather forecast), the development and consulting effort required to create the data set (forming your own weather forecasting function), or a combination of the two, heavily influenced by an architectural view of the solution (as Rod outlined). Value is a measure of the potency and quality of the signal, which will be determined by existing BI analytics methodologies.

Plotting value against cost on a new chart creates a handy tool for finding the data sets to use. We want to pick from the lower right – high value but low cost.

An information mud map
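A minimal sketch of that selection step, assuming we've already scored each candidate signal for acquisition cost and value; the signals, scores and thresholds below are made up for illustration, and in practice the value scores would come from the BI analytics mentioned above.

```python
# Keep the signals that land in the "lower right" of the cost/value chart
# (high value, low acquisition cost). All numbers are illustrative.
candidates = [
    # (name, signal type, acquisition cost, signal value) on arbitrary 0-10 scales
    ("past_till_data",       "internal-historical", 1, 7),
    ("bought_weather_feed",  "external-reactive",   3, 8),
    ("in_house_forecasting", "external-reactive",   9, 8),
    ("demographic_study",    "external-historical", 6, 4),
]

MAX_COST, MIN_VALUE = 5, 6  # the corner of the chart we want to pick from

portfolio = [name for name, _, cost, value in candidates
             if cost <= MAX_COST and value >= MIN_VALUE]
print(portfolio)  # ['past_till_data', 'bought_weather_feed']
```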

It's interesting to tie this back to the Tesco example. Global warming is making the weather more variable, resulting in unseasonable hot and cold spells. This was, in turn, driving short-term consumer demand in directions not predicted by existing planning models. These changes in demand represented cost, in the form of stock left on the shelves past its use-by date, or missed opportunities, by not being able to service the demand when and where it arose.

The solution was to expand the information footprint, pulling in more predictive signals from outside the business: changing the outline on the mud map to improve closed-loop performance. The decision to create an in-house weather bureau represents a straightforward cost-value trade-off in delivering an operational solution.

These two tools provide us with an interesting approach to tackling a number of challenges I’m seeing inside companies today. We’re a lot more externally driven now than we were even just a few years ago. The challenge is to identify customer problems we can solve and tie them back to what our organization does, rather than trying to conceive offerings in isolation and push them out into the market. These tools enable us to sketch the customer challenges (the decisions our customers need to make) and map them back to the portfolio of signals that we can (or might like to) provide to them. It’s outcome-centric, rather than asset-centric, which provides us with more freedom to be creative in how we approach the market, and has the potential to foster a more intimate approach to serving customer demand.

Using what you have

All too often companies miss opportunities because they can't make connections between the things they already know. There's a well-travelled story about a clothing company that bounced a customer's request to return an item, because they didn't think it was worth the bother even though the customer had a real complaint, only to find out later that the customer was the wife of the CEO of one of their major partners. She probably spent most of dinner that night complaining about the company's customer service, much to the detriment of the CEO's opinion of the partnership. If they'd just been able to make a couple of connections a little earlier, the outcome might have been a little different.

It's nice to see some companies weeding through the pile of data available to them and making some of the obvious connections. One bloke, after a flight from hell that was delayed due to weather, found that Northwest Airlines had made the obvious connections and solved the problem before he arrived for his connecting flight.

So let me see if I got this right. I don’t need to find a free ground agent to get re-booked. I don’t need to schlep myself and my luggage in line along with 50+ other people who are all mad, tired and missing their families… to get re-ticketed? AND NWA was giving me $50 off another flight and frequent flier miles to boot? Remember this wasn’t their fault, its mother natures gig here. This was some customer service!!! I love it!

Operations knew that the flight was running late, and booking knew of the connection. I spent the Sunday before last standing around Sydney Airport and Virgin couldn’t make the obvious connection. Luckily, he didn’t have the same experience.

How often have you been frustrated because some company you’re dealing with can’t get the left hand to talk to the right?


Inside vs. Outside

As Andy Mulholland pointed out in a recent post, all too often we manage our businesses by looking out the rear window to see where we've been, rather than looking forward to see where we're going. How we use information to drive informed business decisions has a significant impact on our competitiveness.

I've made the point previously (which Andy built on) that not all information is of equal value. Success in today's rapidly changing and uncertain business environment rests on our ability to take timely, appropriate and decisive action in response to new insights. Execution speed or organizational intelligence are not enough on their own: we need an intimate connection to the environment we operate in. Simply collecting more historical data will not solve the problem. If we want to look out the front window and see where we're going, then we need to consider external market information, and not just internal historical information, or predictions derived from it.

A little while ago I wrote about the value of information. My main point was that we tend to think of most information in one of two modes—either transactionally, with the information part of current business operations; or historically, when the information represents past business performance—where it’s more productive to think of an information age continuum.

The value of information

Andy Mulholland posted an interesting build on this idea on the Capgemini CTO blog, adding the idea that information from our external environment provides mixed and weak signals, while internal, historical information provides focused and strong signals.

The value of information and internal vs. external drivers

Andy’s major point was that traditional approaches to Business Intelligence (BI) focus on these strong, historical signals, which is much like driving a car by looking out the back window. While this works in a (relatively) unchanging environment (if the road was curving right, then keep turning right), it’s less useful in a rapidly changing environment as we won’t see the unexpected speed bump until we hit it. As Andy commented:

Unfortunately stability and lack of change are two elements that are conspicuously lacking in the global markets of today. Added to which, social and technology changes are creating new ideas, waves, and markets – almost overnight in some cases. These are the ‘opportunities’ to achieve ‘stretch targets’, or even to adjust positioning and the current business plan and budget. But the information is difficult to understand and use, as it is comprised of ‘mixed and weak signals’. As an example, we can look to what signals did the rise of the iPod and iTunes send to the music industry. There were definite signals in the market that change was occurring, but the BI of the music industry was monitoring its sales of CDs and didn’t react until these were impacted, by which point it was probably too late. Too late meaning the market had chosen to change and the new arrival had the strength to fight off the late actions of the previous established players.

We’ve become quite sophisticated at looking out the back window to manage moving forward. A whole class of enterprise applications, Enterprise Performance Management (EPM), has been created to harvest and analyze this data, aligning it with enterprise strategies and targets. With our own quants, we can create sophisticated models of our business, market, competitors and clients to predict where they’ll go next.

Robert K. Merton: Father of Quants

Despite EPM's impressive theories and product sheets, it cannot, on its own, help us leverage these new market opportunities. These tools simply cannot predict where the speed bumps in the market will be, no matter how sophisticated they are.

There's a simple thought experiment economists use to show the inherent limitations in using mathematical models to simulate the market. (A topical subject given the recent global financial crisis.) Imagine, for a moment, that you have a perfect model of the market; you can predict when and where the market will move with startling accuracy. However, as Sun likes to point out, statistically the smartest people in your field do not work for your company; the resources in the general market are too big compared to your company. If you have a perfect model, then you must assume that your competitors also have a perfect model. Assuming you'll both use these models as triggers for action, you'll both act earlier, and in possibly the same way, changing the state of the market. The fact that you've invented a tool that predicts the speed bumps causes the speed bumps to move. Scary!

Enterprise Performance Management is firmly in the grasp of the law of diminishing returns. Once you have the critical mass of data required to create a reasonable prediction, collecting additional data will have a negligible impact on the quality of this prediction. The harder your quants work, the more sophisticated your models, the larger the volume of data you collect and trawl, the lower the incremental impact will be on your business.

Andy's point is a big one. It's not possible to accurately predict future market disruptions from historical data alone. Real insight is dependent on data sourced from outside the organization, not inside. This is not to diminish the important role BI and EPM play in modern business management, but to highlight that we need to look outside the organization if we are to deliver the next step change in performance.

Zara, a fashion retailer, is an interesting example of this. Rather than attempt to predict or create demand on a seasonal fashion cycle, and deliver product appropriately (an internally driven approach), Zara tracks customer preferences and trends as they happen in the stores and tries to deliver an appropriate design as rapidly as possible (an externally driven approach). This approach has made Zara the most profitable arm of Inditex, a holding company of eight retail brands, and one of the biggest success stories in Spanish business. You could say that Quants are out, and Blink is in.

At this point we can return to my original goal: creating a simple graphic that captures and communicates what drives the value of information. Building on both my own and Andy's ideas we can create a new chart. This chart needs to capture how the value of information is affected by age, as well as by whether it is sourced externally or internally. Using these two factors as dimensions, we can create a heat map capturing information value, as shown below.

Time and distance drive the value of information

Vertically we have the divide between inside and outside: from information created internally by our processes; through information at the surface of our organization, sourced from current customers and partners; to information sourced from the general market and environment outside the organization. Horizontally we have information age, from information we obtain proactively (we think that a customer might want a product), through reactively (the customer has indicated that they want a product), to historical (we sold a product to a customer). The highest value, in the top right corner, represents the external market disruption that we can tap into. The lowest value (though still important) represents internal transactional processes.
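Treating the heat map as a scoring function, a minimal sketch might look like the following. The buckets and weights are illustrative only; the point is simply that value rises as information moves outside the organization and earlier in time.

```python
# The heat map as a scoring function: value rises as information moves from
# internal to external, and from historical to proactive. Weights illustrative.
DISTANCE_SCORE = {"internal process": 0, "organisational surface": 1, "external market": 2}
AGE_SCORE = {"historical": 0, "reactive": 1, "proactive": 2}

def information_value(distance: str, age: str) -> int:
    """Relative value of a piece of information on the time/distance map."""
    return DISTANCE_SCORE[distance] + AGE_SCORE[age]

print(information_value("external market", "proactive"))    # 4: market disruption signals
print(information_value("internal process", "historical"))  # 0: routine transaction records
```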

As an acid test, I've plotted some of the case studies mentioned in the conversation so far on a copy of this diagram.

  • The maintenance story I used in my original post. Internal, historical data lets us do predictive maintenance on equipment, while external data enables us to maintain just before (detected) failure. Note: This also applies to tasks like vegetation management (trimming trees to avoid power lines), as real-time data can be used to determine where vegetation is a problem, rather than simply eyeballing the entire power network.
  • The Walkman and iPod examples from Andy’s follow-up post. Check out Snake Coffee for a discussion on how information drove the evolution of the Walkman.
  • The Walmart Telxon story, using floor staff to capture word-of-mouth sales.
  • The example from my follow-up (of Andy’s follow-up), of Albert Heijn (a Dutch supermarket group) lifting the pricing of ice cream and certain drinks when the temperature goes above 25° C.
  • Netflix vs. (traditional) Blockbuster (via Nigel Walsh in the comments), where Netflix helps you maintain a list of films you would like to see, rather than a more traditional brick-and-mortar store which reacts to your desire to see a film.

Send me any examples that you know of (or think of) and I’ll add them to the acid test chart.

An acid test for our chart

An interesting exercise left to the reader is to map Peter Drucker’s Seven Drivers for change onto the same figure.

Update: A discussion with a different take on the value of information is happening over at the Information Architects.

Update: The latest instalment in this thread is Working from the outside in.

Update: MIT Sloan Management Review weighs in with an interesting article on How to make sense of weak signals.

Have we really understood what Business Intelligence means?

Andy Mulholland has a nice build on my value of information bit over at Capgemini’s CTO blog, flipping the sense of the figure and showing how the time axis also connects to internal vs. external focus, and IT’s shift from cost control to value creation.

The value of information
The value of information and internal vs. external drivers

Check it out.

Update 2: Andy Mulholland came across a nice example:

Albert Heijn, the Dutch supermarket group, lifts the pricing of ice cream and certain drinks when the temperature goes above 25° C

Update 1: I’ve left a comment there building on what Andy has.

BI does seem to be moving in this direction, but still has a long way to go and is too internally focused. Customer Intelligence is moving the enterprise boundary out a little, and does not really address the challenge of integrating external information to create new insight. What about local events, weather, the memes from the social media community, the memes from our competitors customers, or anything else we can think of? The challenge is to fuse internal, customer, competitor, market and even environmental data to create new insight.

For example, consider current approaches to S&OP (sales and operations planning). We've taken what is an inherently unstructured and collaborative activity and shoved it through the process and business intelligence meat grinder to create yet another enterprise application. It's no surprise that S&OP is a challenge to deploy, with few companies realizing (let alone capturing) the promised value. Customer Intelligence adds little to the benefit side of this equation; it would seem impossible to justify CI in terms of cost saving, and challenging to justify it in terms of creating new business.

Imagine a world where we have our S&OP team focused on information synthesis rather than the planning process. They might pluck weather data (it's going to be hot in St Kilda) and couple it with an event (the St Kilda festival) and memes from their customers (and their competitors' customers) plucked from HootSuite, and decide only 24 hours before the event to rapidly deploy a pop-up store. It's this sort of sense-and-respond ability that will drive us to the next level of performance.

One of the best real-world examples of this transition from internal cost control to external value capture has happened around the hand-held stock management devices used in retail. Initially deployed as a cost control measure (i.e. better information on what's on the shelves), they have now become a tool for capturing value. Walmart has been using these devices for some time, devolving buying decisions to the team walking the shop floor and providing them with the information they need to make good buying decisions. As one reporter found:

“We received an inspirational talk on this subject, from an employee who reacted after the store test-marketed tents that could protect cars for people who didn’t have enough garage space. They sold out quickly, and several customers came in asking for more. Clearly this was a singular, exceptional case of word-of-mouth, so he ordered literally a truckload of tent-garages, “Which I shouldn’t have done really without asking someone,” he said with a shrug, “because I hadn’t been working at the store for long.” But the item was a huge success. His VPI was the biggest in store history—and that kind of thing doesn’t go unnoticed in Arkansas.”

Fly on the wall

In BI terms, we’re moving from large, centralized solutions used to drive planning, to distributed peer-to-peer networks focused on supporting local decisions. While corporate data stores will still play an important role, the advantage is moving to our ability to fuse multiple data sources, some which we do not own and some which only have local relevance. The right information, at the right time, in the right place, to empower knowledge workers to make the best possible decisions. Local Intelligence, rather than Business Intelligence.

The value of information

We all know that data is valuable; without it, it would be somewhat difficult to bill customers and stay in business. Some companies have accumulated masses of data in a data warehouse which they've used to drive organizational efficiencies or performance improvements. But do we ever ask ourselves when the data is most valuable?

Billing is important, but if we get the data earlier then we might be able to deal with a problem—a business exception—more efficiently. Resolving a short pick, for example, before the customer notices. Or perhaps even predicting a stock-out. And in the current hyper-competitive business environment where everyone is good, having data and the insight that comes with it just a little bit sooner might be enough to give us an edge.

A good friend of mine often talks about the value of information in a meter. This makes more sense when you know that he’s a utility/energy guru who’s up to his elbows in the U.S. smart metering roll out. Information is a useful thing when you’re putting together systems to manage distributed networks of assets worth billions of dollars. While the data will still be used to drive billing in the end, the sooner we receive the data the more we can do with it.

One of the factors driving the configuration of smart meter networks is the potential uses for the information the meters will generate. A simple approach is to view smart meters as a way to reduce the cost of meter reading: have meters automatically phone readings home rather than drive a truck past each customer's premises and eyeball each meter. We might even use this reduced cost to read the meters more frequently, shrinking our billing cycle, and the revenue outstanding with it. However, the information we're working from will still be months, or even quarters, old.

If we're smart (and our meter has the right instrumentation) then we will know exactly which and how many houses have been affected by a fault. Vegetation management (tree trimming) could become proactive: by analyzing the electrical noise on the power lines that the smart meters can see, we can determine where along a power line we need to trim the trees. This lets us go directly to where work needs to be done, rather than driving past every power line on a schedule—a significant cost and time saving, not to mention an opportunity to engage customers more closely and serve them better.

If our information is a bit younger (days or weeks rather than months) then we can use it to schedule just-in-time maintenance. The same meters can watch for power fluctuations coming out of transformers, motors and so on, looking for the tell-tale signs of imminent failure. Teams rush out and replace the asset just before it fails, rather than working to a program of scheduled maintenance (maintenance which might be causing some of the failures).

When the information is only minutes old we can consider demand shaping. By turning off hot water heaters and letting them coast we can avoid spinning up more generators.

If we get to seconds or below, we can start using the information for load balancing across the network, managing faults and responding to disasters.

I think we, outside the energy industry, are missing a trick. We tend to use a narrow, operational view of the information we can derive from our IT estate. Data is either considered transactional or historical; we’re either using it in an active transaction or we’re using it to generate reports well after the event. We typically don’t consider what other uses we might put the information to if it were available in shorter time frames.

I like to think of information availability in terms of a time continuum, rather than a simple transactional/historical split. The earlier we use the information, the more potential value we can wring from it.

The value of data decreases rapidly with age
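Pulling the smart-meter examples above into a simple age-to-use lookup makes the continuum explicit; the age bands are indicative only.

```python
# The smart-meter examples arranged as an age-to-use lookup (bands indicative).
USES_BY_AGE = [
    # (maximum age of the data, what it can still support)
    ("months",  "billing and revenue collection"),
    ("days",    "just-in-time maintenance of failing assets"),
    ("minutes", "demand shaping (coasting hot water heaters)"),
    ("seconds", "load balancing, fault management, disaster response"),
]

def possible_uses(data_age: str) -> list:
    """Everything data of this age (or fresher) can still support."""
    bands = [age for age, _ in USES_BY_AGE]
    cutoff = bands.index(data_age)
    return [use for age, use in USES_BY_AGE if bands.index(age) <= cutoff]

print(possible_uses("days"))
# ['billing and revenue collection', 'just-in-time maintenance of failing assets']
```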

There's no end of useful purposes we can turn our information to between the billing and transactional timeframes. Operational excellence and business intelligence allow us to tune business processes to follow monthly or seasonal cycles. Sales and logistics are tuned on a weekly basis to adjust for the dynamics of the current holiday. Days-old information would allow us to respond in days, calling a client when we haven't received their regular order (a non-event). Operations can use hours-old information for capacity planning, watching for something trending in the wrong direction and responding before everything falls over.

If we can use trending data—predicting stock-outs and watching trends in real time—then we can identify opportunities or head off business exceptions before they become exceptional. BAM (business activity monitoring) and real-time data warehouses take on new meaning when viewed in this light.

In a world where we are all good, being smart about the information we can harvest from our business environment (both inside and outside our organization) has the potential to make us exceptional.

Update: Andy Mulholland has a nice build on this idea over at Capgemini‘s CTO blog: Have we really understood what Business Intelligence means?

The problems we’re facing

Companies are engaged in an arms race. For years they have been rushing to beat competitors to market with applications designed to automate a previously manual area of the business, making them more efficient and thereby creating a competitive advantage.

Today, enterprise applications are so successful that it is impossible to do business without them. The efficiencies they deliver have irrevocably changed the business environment, with an industry developing around them: a range of vendors providing products to meet most needs. It is even possible to argue that many applications have become a commodity (as Nicholas Carr did in his HBR article “IT Doesn’t Matter”), and in the last couple of years we have seen consolidation in the market as larger vendors snap up smaller niche players to round out their product portfolios.

This has levelled the playing field, and it’s no longer possible to use an application in the same way to create competitive advantage. Now that applications are ubiquitous, they’re simply part of the fabric of business.

Today, how we manage the operation of a business process is becoming more important than the business process itself. Marco Iansiti brought this into sharp relief through his work at Harvard Business Review when he measured the efficiency of IT deployment, rather than its cost, and correlated upper-quartile efficiency with upper-quartile sales revenue growth. Efficiently dealing with business exceptions, optimizing key decisions and ensuring end-to-end consistency and efficiency will have a greater impact than replacing an existing application.

We have finished the big effort: applications are available from multiple vendors to support the majority of a business's supporting functionality. The law of diminishing returns has taken effect, and owning or creating a new IT asset today will not, by itself, confer a competitive advantage. Competitive advantage now lives in the gaps between our applications. Exception handling is becoming increasingly important, as good exception handling can have a dramatic impact on both the bottom and top line. If we can deal with stock-outs more efficiently then we can keep less stock on hand and operate a leaner supply chain. Improving how we determine financial adequacy allows us to hold lower capital reserves, freeing up cash that we can put to other more productive uses. Extending our value chain beyond the confines of our organisation to include partners, suppliers and channels allows us to optimize end-to-end processes. Providing joined-up support for our mortgage product model allows us to put the model directly in the hands of our clients, letting them configure their own, personal, home loan.

Link to the complete article.