Tag Archives: Environmental Issue

Working from the outside in

We’re drowning in a sea of data and ideas, with huge volumes of untapped information available both inside and outside our organization. There is so much information at our disposal that it’s hard to discern Arthur from Martha, let alone optimize the data set we’re using. How can we make sense of the chaos around us? How can we find the useful signals which will drive us to the next level of business performance, from amongst all this noise?

I’ve spent some time recently thinking about how the decisions our knowledge workers make in planning and managing business exceptions can have a greater impact on our business performance than the logic reified in the applications themselves. The quality of the information we feed into their decision-making processes can have an even bigger impact still, as the data’s effect is amplified by the decision-making process. Not all data is of equal value and, as is often said, if you put rubbish in then you get rubbish out.

Traditional Business Intelligence (BI) tackles this problem by enabling us to mine for correlations in the data tucked away in our data warehouse. These correlations provide us with signals to help drive better decisions. Managing stock levels based on historical trends (Christmas rush, BBQs in summer …) is good, but connecting these trends to local demographic shifts is better.

Unfortunately this approach is inherently limited. No matter how powerful your analytical tools, you can only find correlations within and between the data sets you have in the data warehouse, and this is only a small subset of the total data available to us. We can load additional data sets into the warehouse (such as demographic data bought from a research firm), but in a world awash with (potentially useful) data, the real challenge is deciding which data sets to load, not finding the correlations once they are loaded.

What we really need is a tool to help scan across all available data sets and find the data which will provide the best signals to drive the outcome we’re looking for. An outside-in approach, working from the outcome we want to the data we need, rather than an inside-out approach, working from the data we have to the outcomes it might support. This will provide us with a repeatable method, a system, for finding the signals needed to drive us to the next level of performance, rather than the creative, hit-and-miss approach we currently use. Or, in geekier terms, a methodology which enables us to proactively manage our information portfolio and derive the greatest value from it.

I was doodling on the tram the other day, playing with the figure I created for the Inside vs. Outside post, when I had a thought. The figure was created as a heat map showing how the value of information is modulated by time (new vs. old) and distance (inside vs. outside). What if we used it the other way around? (Kind of obvious in hindsight, I know, but these things usually are.) We might use the figure to map from the type of outcome we’re trying to achieve back to the signals required to drive us to that outcome.

Time and distance drive the value of information

This addresses an interesting comment (in email) by a U.K. colleague of mine. (Jon, stand up and be counted.) As Andy Mulholland pointed out, the upper right represents weak, confusing signals, while the lower left represents strong, coherent signals. Being a delivery guy, Jon’s first thought was how to manage the dangers of focusing excessively on the upper-right corner of the figure. Sweeping a plane’s wings forward increases its maneuverability, but at the cost of decreasing its stability. Relying too heavily on external, early signals can, in a similar fashion, push an organization into a danger zone. If we want to use these types of signals to drive crucial business decisions, then we need to understand the tipping point and balance the risks.

My tram-doodle was a simple thing, converting a heat map to a mud map. For a given business decision, such as planning tomorrow’s stock levels for an FMCG category, we can outline the required performance envelope on the figure. This outline shows us the sort of signals we should be looking for (those inside the outline are useful, those outside are not), while its shape provides us with an understanding of, and a way of balancing, the overall maneuverability and stability of the outcome the signals will support. More external-predictive scope in the outline (i.e. more area in the upper-right quadrant) will provide a more responsive outcome, but at the cost of less stability. Increasing internal scope will provide a more stable outcome, but at the cost of responsiveness. Less stability might translate to more (potentially unnecessary) logistics movements, while more stability might mean missed sales opportunities. (This all creates a little déjà vu, with a strong feeling of computing Q values for non-linear control theory back in university, so I’ve started formalizing how to create and measure these outlines, as well as how to determine the relative weights of signals in each area of the map. But that’s another blog post.)

An information performance mud map

Given a performance outline, we can go spelunking for signals which fit inside it.

Luckily the mud map provides us with guidance on where to look. An internal-historical signal is, by definition, driven by historical data generated inside the organization. Past till data? An external-reactive signal is, by definition, external and reactive. A short-term (i.e. tomorrow’s) weather forecast, perhaps? Casting our net as widely as possible, we can gather all the signals which have the potential to drive us toward the desired outcome.
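
To make this a little more concrete, here’s a minimal sketch (in Python) of how signals might be placed on the mud map and checked against a performance outline. The Signal class, the coordinate scales, the region boundaries and the example numbers are all illustrative assumptions of mine, not the formalization mentioned above.

```python
from dataclasses import dataclass

@dataclass
class Signal:
    name: str
    time: float      # 0.0 = historical, 1.0 = predictive (assumed scale)
    distance: float  # 0.0 = inside the organization, 1.0 = outside

def region(s: Signal) -> str:
    """Place a signal in a mud-map region: distance crossed with a time band."""
    band = ("historical" if s.time < 0.33
            else "reactive" if s.time < 0.67
            else "predictive")
    side = "external" if s.distance >= 0.5 else "internal"
    return f"{side}-{band}"

def outline_coverage(signals, outline):
    """Fraction of the outline's regions covered by at least one signal."""
    covered = {region(s) for s in signals} & set(outline)
    return len(covered) / len(outline)

signals = [
    Signal("past till data", time=0.1, distance=0.1),               # internal-historical
    Signal("tomorrow's weather forecast", time=0.5, distance=0.9),  # external-reactive
]
outline = {"internal-historical", "external-reactive", "external-predictive"}
print(outline_coverage(signals, outline))  # ~0.67 with these made-up numbers
```

Weighting the regions by area, rather than simply counting them, would be the obvious next step toward the outline measures I’m formalizing.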

Next, we balance the information portfolio for this decision, identifying the minimum set of signals required to drive it. We can do this by grouping the signals by type (internal-historical, …) and then charting them against cost and value. Cost is the acquisition cost, and might represent a commercial transaction (buying access to another organization’s near-term weather forecast), the development and consulting effort required to create the data set (forming your own weather forecasting function), or a combination of the two, heavily influenced by an architectural view of the solution (as Rod outlined). Value is a measure of the potency and quality of the signal, as determined by existing BI analytics methodologies.

Plotting value against cost on a new chart creates a handy tool for finding the data sets to use. We want to pick from the lower right – high value but low cost.

An information mud map
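
As a toy illustration of this selection step, the sketch below ranks candidate signals by value per unit cost and picks greedily until an acquisition budget is exhausted. The signals, the numbers and the greedy rule are all illustrative assumptions; in practice the value scores would come out of your existing BI analytics, as noted above.

```python
candidates = {
    # signal: (acquisition cost, signal value), both on assumed 0-10 scales
    "past till data":              (1.0, 6.0),
    "bought demographic data":     (4.0, 5.0),
    "short-term weather forecast": (2.0, 8.0),
    "in-house weather bureau":     (9.0, 9.0),
}

def pick_signals(candidates, budget):
    """Greedily pick high-value, low-cost signals until the budget is spent."""
    ranked = sorted(candidates.items(),
                    key=lambda kv: kv[1][1] / kv[1][0],  # value per unit cost
                    reverse=True)
    chosen, spent = [], 0.0
    for name, (cost, value) in ranked:
        if spent + cost <= budget:
            chosen.append(name)
            spent += cost
    return chosen

print(pick_signals(candidates, budget=5.0))
# -> ['past till data', 'short-term weather forecast'] with these made-up numbers
```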

It’s interesting to tie this back to the Tesco example. Global warming is making the weather more variable, resulting in unseasonable hot and cold spells. This was, in turn, driving short-term consumer demand in directions not predicted by existing planning models. These changes in demand represented cost, in the form of stock left on the shelves past its use-by date, or missed opportunities, through not being able to service the demand when and where it arose.

The solution was to expand the information footprint, pulling in more predictive signals from outside the business: changing the outline on the mud map to improve closed-loop performance. The decision to create an in-house weather bureau represents a straightforward cost-value trade-off in delivering an operational solution.

These two tools provide us with an interesting approach to tackling a number of challenges I’m seeing inside companies today. We’re a lot more externally driven now than we were even just a few years ago. The challenge is to identify customer problems we can solve and tie them back to what our organization does, rather than trying to conceive offerings in isolation and push them out into the market. These tools enable us to sketch the customer challenges (the decisions our customers need to make) and map them back to the portfolio of signals that we can (or might like to) provide to them. It’s outcome-centric, rather than asset-centric, which provides us with more freedom to be creative in how we approach the market, and has the potential to foster a more intimate approach to serving customer demand.

Tesco’s looking outside the building to predict customer needs

Tesco is using external weather data to drive sales

Tesco, the UK’s largest retailer, has started using weather forecasts to help determine what to stock in its stores across the UK.

Traditional approaches to stock management use historical buying data to drive stock decisions. This has worked well to date, but the increasing unpredictability of today’s weather patterns, driven by global warming, has presented businesses with both an opportunity and a challenge. An unexpected warm (or cold) spell can create spikes in demand which go unserviced, while existing stock is left on the shelves.

In Tesco’s own words:

In recent years, the unpredictability of the British summer — not to mention the unreliability of British weather forecasters — has caused a massive headache for those in the retail food business deciding exactly which foods to put out on shelves.

The present summer is a perfect example, with the weather changing almost daily and shoppers wanting barbecue and salad foods one day and winter food the next.

Tesco’s solution was to integrate detailed regional weather reports — valuable, external information — with the sales history at each Tesco store. A rise of 10°C, for example, led to a 300% uplift in sales of barbecue meat and a 50% increase in sales of lettuce.

Integrating weather and sales data will enable Tesco to capture these spikes in demand while avoiding waste.
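
As a back-of-the-envelope illustration of what this integration might look like, the sketch below scales a historical baseline by a weather-driven uplift, using the figures quoted above. The linear scaling and the baseline numbers are my own assumptions, not Tesco’s actual model.

```python
def demand_forecast(baseline_units: float, temp_rise_c: float,
                    uplift_per_10c: float) -> float:
    """Scale baseline demand by a weather-driven uplift.

    uplift_per_10c is the fractional uplift for a 10C rise, e.g. 3.0 for
    the 300% barbecue-meat figure quoted above (an assumed linear model).
    """
    return baseline_units * (1.0 + uplift_per_10c * temp_rise_c / 10.0)

# Tomorrow is forecast to be 5C warmer than the seasonal norm:
print(demand_forecast(100, temp_rise_c=5, uplift_per_10c=3.0))  # barbecue meat -> 250.0
print(demand_forecast(100, temp_rise_c=5, uplift_per_10c=0.5))  # lettuce -> 125.0
```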

(Largely adapted from the article in the Times Online.)

There’s more to sustainability than simply using less

I wouldn’t be too surprised if the Australian government passes legislation requiring all residents to shower with a friend in an effort to save water. We’re in a bit of a bind; the longest drought in living memory combined with global warming and climate change means that there is just not enough water to go around.

Energy, water and our population are all interrelated

It’s not just a lack of water causing problems though. Manufacturing more energy (electricity) requires huge amounts of water (for steam), while manufacturing more water requires huge amounts of energy (for desalination). Factor in a growing and increasingly urban population and you quickly realize that washing your car every few weeks and buying energy-efficient appliances just won’t cut it.

Take the electricity industry, for example. Today’s electricity utilities follow a model that is relatively unchanged since Samuel Insull’s day. Electrons are manufactured in large power stations before being trucked down wires to where they are consumed by consumer and industrial devices. Demand dictates supply. Electrons are shared equally among devices, and if we don’t have enough to meet demand then everyone gets less than they need. The result is brownouts: motors fuse, traffic lights dim and people crash. Life generally stops.

Electricity production since Samuel Insull's day

Micro-generation and CHP (combined heat and power) will alleviate the problem somewhat, but if we want an electricity supply for a sustainable future then we need to completely rethink how electricity is managed. We need to move from a demand-driven system to a supply-driven system. Rather than racing to manufacture enough electricity to fulfill demand, the focus would be on effectively using the energy available to us.

The technology required to reinvent electricity distribution is already emerging into the commercial world. The global rollout of smart metering is providing us with the basic infrastructure for a new, smarter energy distribution system. The challenge is to move beyond conventional, centrally run demand-management programmes and adopt more distributed approaches; the first tentative steps on this journey are already visible in the market.

Imagine if we could import the retail electricity spot price into the home or factory via smart metering. We have national energy markets, so why not create an energy market inside our houses? Local generation (solar, wind, CHP) would have a price set based on its required return on investment, while energy is imported (if required) based on the spot price. The decision of if and when to consume electricity is then devolved to the appliances (fridge, air conditioner etc.) by letting them bid for the energy they need.

An internal energy market
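
To make the idea concrete, here’s a minimal sketch of how such a market might clear each interval: appliances bid what the next unit of power is worth to them, local generation is consumed first at its ROI-based price, and imports top up at the spot price. The appliance names, loads, prices and bids are all illustrative assumptions.

```python
def clear_market(bids, local_price, spot_price, local_capacity_kw):
    """Return the appliances that get to draw power this interval.

    bids: {appliance: (bid in $/kWh, load in kW)}. Local generation is
    used first (at its ROI-based price), then imports at the spot price;
    an appliance runs only if its bid meets the price of the energy it
    would actually consume.
    """
    running, local_used = [], 0.0
    for name, (bid, load) in sorted(bids.items(), key=lambda kv: -kv[1][0]):
        price = local_price if local_used + load <= local_capacity_kw else spot_price
        if bid >= price:
            running.append(name)
            if price == local_price:
                local_used += load
    return running

bids = {
    "fridge":          (0.60, 0.2),  # bids high: must always run
    "hot water":       (0.25, 3.0),
    "dishwasher":      (0.10, 1.5),  # happy to wait for cheap power
    "air conditioner": (0.30, 2.0),
}
print(clear_market(bids, local_price=0.15, spot_price=0.28, local_capacity_kw=2.5))
# -> ['fridge', 'air conditioner'] with these made-up numbers
```

With these numbers the hot water heater and dishwasher defer until local generation frees up or the spot price falls, which is exactly the load-shifting behavior we’re after.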

The intelligence to support this complex behavior might be buried inside the appliance itself, or mediated via a smart plug. A hot water heater would trade off electricity price, usage profile and current water temperature to optimize its operation. Air conditioners might let the internal temperature rise a couple of degrees if it’s exceptionally hot outside. A dishwasher might wait until a quiet period late at night, long after you’ve gone to bed, before running its cycle. Lights would always turn on (of course), but would also turn off again if they cannot detect anyone in the room.
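
A hypothetical bid function for that hot water heater might look something like the sketch below; the trade-off between temperature shortfall and usage urgency, and the numbers, are entirely illustrative assumptions.

```python
def water_heater_bid(water_temp_c: float, target_temp_c: float,
                     hours_until_next_use: float, max_bid: float = 0.50) -> float:
    """Bid ($/kWh) rises as the tank cools and the next expected draw nears."""
    shortfall = max(0.0, (target_temp_c - water_temp_c) / target_temp_c)  # 0..1
    urgency = 1.0 / (1.0 + hours_until_next_use)                          # 0..1
    return max_bid * shortfall * urgency

print(water_heater_bid(40, 60, hours_until_next_use=0.5))  # morning shower soon: bids ~0.11
print(water_heater_bid(55, 60, hours_until_next_use=8.0))  # nearly hot, no rush: bids ~0.005
```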

Given an understanding of our usage patterns, a market can be used to turn off appliances we don’t need, harvest power when it is cheap (by using excess solar power to pump water uphill, for instance), or even sell our surplus. Technology enables us to understand our usage patterns and align them with the internal and national energy markets to make the most effective use of the energy available to us.

The new complexity this approach creates could be packaged into solutions. Energy retailers could offer energy plans, analogous to mobile phone plans. Each plan would be tailored to suit different habits and needs. Plans might include value-added solutions, such as installing solar or wind power on premises, integrated into the home market.

In the same way that Threadless and Rolls Royce mined the synergies between business and technology to reinvent their respective industries, some companies might use a supply-driven network to transform their business models. Rather than selling electricity (generating more profit by selling more), they might reconfigure themselves into power-management companies who help you manage your consumption (generating more profit by selling less). This could range from configuring, monitoring and managing your home appliances to match their performance to your needs, through to installing solar panels on your roof, at their own cost, so that they can offer solar power on your internal energy market.

What are the challenges and opportunities created when we move to a supply-driven model? What happens when we have supply-driven homes? Supply-driven communities? Supply-driven regions? Or when entire networks become supply-driven?

What are the challenges and opportunities created when we move to a supply-driven model?

Smart metering and smart plugs are showing us the way. We already have a demand signal, though somewhat delayed, and we can retro-fit appliances with smart plugs to support demand management. The next step is to make this infrastructure a little smarter: upgrading the sensor network to support more distributed command and control, and embedding decision making in the home and, ultimately, in the appliances themselves. This pushes decision making to the edge of the network where it can scale more effectively, gives us a generation of much more efficient appliances, and sets us up for the future.

Innovation [2008-11-03]

Another week and another collection of interesting ideas from around the Internet.

As always, thoughts and/or comments are greatly appreciated.

This issue: