Category Archives: Series

David has the edge on Goliath

David returns triumphant with the head of Goliath (Palazzo Ferrari, Genoa)

Is success in business due to luck or hard work? It used to be that if you worked hard and invested astutely in your business you could expect to be rewarded. Build it and they will come. Times have changed, though, and more and more often it seems that all that hard work goes to waste when an unknown (and previously unseen) competitor emerges from nowhere to steal the market from under your nose. Success has become random, with the business environment perpetually unstable and in constant flux. The market is hit-driven rather than being based on careful investment. Success now depends on coming up with the right product at the right time, and on having a fairly large dose of luck. Business development used to mean investing in your business and building up the assets under its control. Now it means maximising your business’s luck (or minimising the luck of others).

Continue reading David has the edge on Goliath

Dynamic pricing and the race to the bottom

I see that online retailers have been admiring the yield management techniques used by airlines and hotels{{1}}. After all, what’s not to like about profit maximisation? Consumer goods, however, are not a time-sensitive resource whose value crashes to zero after a particular date. Online retailers might just be starting an arms race with customers that they cannot win. The result will be a race to the bottom as mounting pressure compresses already tight margins.

[[1]]Brian Proffitt (September 2012), How much will it cost you? With Dynamic Pricing, online sellers say ‘It depends’, ReadWriteWeb[[1]]

Continue reading Dynamic pricing and the race to the bottom

My book, ‘The New Instability’, is finally available

Cover of The New Instability

After much effort my book, ‘The New Instability’, has finally found its way through the channel and is now available as a paperback, an ePub (iPad, Nook) and a mobi (Kindle).

The nutshell summary:

The uncertain economic and business conditions are not simply a passing phase: a confluence of technologies is changing the business environment, and the old business models and strategies built around acquiring and leveraging assets are breaking down. A new generation of companies is rethinking how business should operate, and discovering business models that are orders of magnitude more efficient than the previous lot. How have they done this? And how can you do the same?

If you would like to know more, you can find an extract from the book on the ‘Extract’ page of the book’s web site.

You can buy it as a paperback, an ePub (iPad, Nook) or a mobi (Kindle).

I’ll update the ‘Where to buy’ page as it finds its way out into the wild.

What recession?

The global financial crisis hit nearly four years ago in 2008, but America and Europe appear to still be stuck in the mud. Even the Asian market has softened. But is this a recession? Or are we seeing a reconfiguration of the economy as the technological seeds laid over the last few generations finally germinate and bear fruit? Prices for manufactured goods are collapsing as the cost of manufacturing has plummeted, while the cost of sourcing and distribution has crashed, caught between globalisation and the Internet. Even innovation, the source of all those sexy new products, has been democratised, with the investment required to develop new products taking a nosedive. Our existing business models were not designed to thrive, or even survive, in this environment. While the current market is a challenge to navigate, a lot of the problems we’re seeing could be the result of a collapse of antiquated business models rather than a collapse in the demand that these businesses are intended to service.

Continue reading What recession?

Why scanning more data will not (necessarily) help BI

I pointed out the other day that we seem to be at a tipping point for BI. The quest for more seems to be losing its head of steam, with most decision makers drowning in a sea of massaged and smoothed data. There are some good moves to look beyond our traditional stomping ground of transactional data, but the real challenge is not to consider more data, but to consider the right data.

Most interesting business decisions seem to be the result of a synthesis process. We take a handful of data points and fuse them to create an insight. The invention of breath strips is a case in point. We can rarely break our problem down to a single (computed) metric; the world just doesn’t work that way.

Most business decisions rest on a small number of data points. It’s just one of our cognitive limits: our working memory is only large enough to hold (approximately) four things (concepts and/or data points) in our head at once. This is one reason that I think Andrew McAfee’s cut-down business case works so well; it works with our human limitations rather than against them.

I was watching an interesting talk the other day — Peter Norvig was providing some gentle suggestions on what features would be beneficial in a language intended to support scientific computing. Somewhere in the middle of the talk he mentioned the curse of dimensionality, which is something I hadn’t thought about for a while. This is the problem caused by the exponential increase in volume associated with each additional dimension of (mathematical) space.

In terms of the problem we’re considering, this means that if you are looking for n insights in a field of data (the n best data points to drive our decision), then finding them becomes exponentially harder with each data set (dimension) we add. More isn’t necessarily better. While the addition of new data sets (such as sourcing data from social networks) enables us to create new correlations, we’re also forced to search an exponentially larger area to find them. It’s the law of diminishing returns.
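To make that scaling concrete, here is a minimal sketch (the ten-bucket discretisation of each data set is my own illustrative assumption, not something from the talk):

```python
# A toy illustration of the curse of dimensionality: if each data set
# (dimension) is discretised into just 10 buckets, the number of cells we
# would have to search grows exponentially with every data set we add.
def search_space(dimensions: int, buckets_per_dimension: int = 10) -> int:
    """Number of cells in a grid covering the search space."""
    return buckets_per_dimension ** dimensions

for d in range(1, 7):
    print(f"{d} data set(s): {search_space(d):>9,} cells to search")
```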

Our inbuilt cognitive limit only complicates this. When we hit that limit — when n becomes as large as we can usefully use — any additional correlations become a burden rather than a benefit. In today’s rich and varied information environment, the problem isn’t to consider more data, or to find more correlations; it’s to find the three or four features in the data which will drive our decision in the right direction.

How do we navigate from the outside in? From the decision we need to make, to the data that will drive it. This is the problem I hope the Value of Information discussion addresses.


Is BI really the next big thing?

I think we’re at a tipping point with BI. Yes, it makes sense that BI should be the next big thing in the new year, as many pundits are predicting, driven by the need to make sense of the massive volume of data we’ve accumulated. However, I doubt that BI in its current form is up to the task.

As one of the CEOs Andy Mulholland spoke to mentioned, “I want to know … when I need to focus in.” The CEO’s problem is not more data, but the right data. As Andy rightfully points out in an earlier blog post, we’ve been focused on harvesting the value from our internal, manufactured data, ignoring the latent potential in our unstructured data (let alone the unstructured data we can find outside the enterprise). The challenge is not to find more data, but the right data to drive the CEO’s decision on where to focus.

It’s amazing how little data you need to make an effective decision—if you have the right data. Andrew McAfee wrote a nice blog post a few years ago (The case against the business case is the closest I can find to it), pointing out that the mass of data we pile into a conventional business case just clouds the issues, creating long cause-and-effect chains that make it hard to come to an effective decision. His solution was the one page business case: capability delivered, (rough) business requirements, solution footprint, and (rough) costing. It might be one page, but there is enough information, the right information, to make an effective decision. I’ve used his approach ever since.
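As a rough sketch of just how little structure that is, the cut-down case boils down to four fields; the field names below are my paraphrase of McAfee’s elements, not his wording:

```python
from dataclasses import dataclass

@dataclass
class OnePageBusinessCase:
    """The four elements of the cut-down business case (names are illustrative)."""
    capability_delivered: str    # what the business will be able to do
    business_requirements: str   # rough requirements, not an exhaustive list
    solution_footprint: str      # the systems, teams and processes touched
    rough_costing: str           # an order-of-magnitude estimate, not a quote
```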

Current BI seems to be approaching the horse from the wrong direction, much like Andrew’s business case problem. We focus on sifting through all the information we have, trying to glean any trends and correlations which might be useful. This works at small to moderate scales, but once we reach the huge end of the scale it starts to groan under its own weight. It’s the law of diminishing returns—adding more information to the mix will only have a moderate benefit compared to the effort required to integrate and process it.

A more productive method might be to use a hypothesis-driven approach. Rather than look for anything that might be interesting, why not go spelunking for specific features which we know will be interesting? The features we’re looking for in the information are (almost always) there to support a decision. Why not map out that decision, similar to how we map out the requirements for a feedback loop in a control system, and identify the types of features that we need to support the decision we want to make? We can segment our data sets based on the features’ gross characteristics (inside vs. outside, predictive vs. historical …) and then search in the appropriate segments for the features we need. We’ve broken one large problem—find correlations in one massive data set—into a series of much more manageable tasks.
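A sketch of what that segmentation might look like in practice; the data-set names and tags are invented for illustration, not drawn from any particular warehouse:

```python
# Hypothesis-driven segmentation: tag each candidate data set with its gross
# characteristics, then search only the segments the decision needs.
candidate_data_sets = [
    {"name": "till transactions",     "origin": "inside",  "horizon": "historical"},
    {"name": "loyalty-card history",  "origin": "inside",  "horizon": "historical"},
    {"name": "short-term weather",    "origin": "outside", "horizon": "predictive"},
    {"name": "social-media mentions", "origin": "outside", "horizon": "predictive"},
]

def segments_for(required_features):
    """Return only the data sets whose gross characteristics match the
    features the decision is hypothesised to need."""
    return [
        ds for ds in candidate_data_sets
        if (ds["origin"], ds["horizon"]) in required_features
    ]

# e.g. planning tomorrow's stock levels: internal history plus external predictions
stock_planning = {("inside", "historical"), ("outside", "predictive")}
for ds in segments_for(stock_planning):
    print(ds["name"])
```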

The information arms race, the race to search through more information for that golden ticket, is just a relic of the lack of information we’ve lived with in the past. In today’s land of plenty, more is not necessarily better. Finding the right features is our real challenge.


Working from the outside in

We’re drowning in a sea of data and ideas, with huge volumes of untapped information available both inside and outside our organization. There is so much information at our disposal that it’s hard to discern Arthur from Martha, let alone optimize the data set we’re using. How can we make sense of the chaos around us? How can we find the useful signals which will drive us to the next level of business performance, from amongst all this noise?

I’ve spent some time recently thinking about how the decisions our knowledge workers make in planning and managing business exceptions can have a greater impact on our business performance than the logic reified in the applications themselves. And how the quality of the information we feed into their decision-making processes can have an even bigger impact, as the data’s impact is effectively amplified by the decision-making process. Not all data is of equal value and, as is often said, if you put rubbish in then you get rubbish out.

Traditional Business Intelligence (BI) tackles this problem by enabling us to mine for correlations in the data tucked away in our data warehouse. These correlations provide us with signals to help drive better decisions. Managing stock levels based on historical trends (Christmas rush, BBQs in summer …) is good, but connecting these trends to local demographic shifts is better.

Unfortunately this approach is inherently limited. No matter how powerful your analytical tools, you can only find correlations within and between the data sets you have in the data warehouse, and this is only a small subset of the total data available to us. We can load additional data sets into the warehouse (such as demographic data bought from a research firm), but in a world awash with (potentially useful) data, the real challenge is deciding which data sets to load, not finding the correlations once they are loaded.

What we really need is a tool to help scan across all available data sets and find the data which will provide the best signals to drive the outcome we’re looking for. An outside-in approach, working from the outcome we want to the data we need, rather than an inside-out approach, working from the data we have to the outcomes it might support. This will provide us with a repeatable method, a system, for finding the signals needed to drive us to the next level of performance, rather than the creative, hit-and-miss approach we currently use. Or, in geekier terms, a methodology which enables us to proactively manage our information portfolio and derive the greatest value from it.

I was doodling on the tram the other day, playing with the figure I created for the Inside vs. Outside post, when I had a thought. The figure was created as a heat map showing how the value of information is modulated by time (new vs. old) and distance (inside vs. outside). What if we used it the other way around? (Kind of obvious in hindsight, I know, but these things usually are.) We might use the figure to map from the type of outcome we’re trying to achieve back to the signals required to drive us to that outcome.

Time and distance drive the value of information

This addresses an interesting comment (in email) from a U.K. colleague of mine. (Jon, stand up and be counted.) As Andy Mulholland pointed out, the upper right represents weak, confusing signals, while the lower left represents strong, coherent signals. Being a delivery guy, Jon’s first thought was how to manage the dangers of focusing excessively on the upper right corner of the figure. Sweeping a plane’s wings forward increases its maneuverability, but at the cost of decreasing its stability. Relying too heavily on external, early signals can, in a similar fashion, push an organization into a danger zone. If we want to use these types of signals to drive crucial business decisions, then we need to understand the tipping point and balance the risks.

My tram-doodle was a simple thing, converting a heat map to a mud map. For a given business decision, such as planning tomorrow’s stock levels for an FMCG category, we can outline the required performance envelope on the figure. This outline shows us the sort of signals we should be looking for (inside good, outside bad), while the shape of the outline provides us with an understanding of (and a way of balancing) the overall maneuverability and stability of the outcome the signals will support. More external, predictive scope in the outline (i.e. more area inside the outline in the upper-right quadrant) will provide a more responsive outcome, but at the cost of less stability. Increasing internal scope will provide a more stable outcome, but at the cost of responsiveness. Less stability might translate to more (potentially unnecessary) logistics movements, while more stability would mean missed sales opportunities. (This all creates a little deja vu, with a strong feeling of computing Q values for non-linear control theory back in university, so I’ve started formalizing how to create and measure these outlines, as well as how to determine the relative weights of signals in each area of the map, but that’s another blog post.)

An information performance mud map
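A back-of-the-envelope sketch of that trade-off, under the assumption that responsiveness and stability are simply proportional to the outline’s area in the outside-predictive and inside-historical quadrants; the quadrant shares are invented, and this is far cruder than the formalism hinted at above:

```python
# Reduce a performance outline to the share of its area in each quadrant of
# the mud map, then read off a rough responsiveness/stability trade-off.
def envelope_scores(outline):
    """outline maps (origin, horizon) quadrants to the fraction of the
    outline's area in that quadrant (fractions sum to 1)."""
    responsiveness = outline.get(("outside", "predictive"), 0.0)
    stability = outline.get(("inside", "historical"), 0.0)
    return responsiveness, stability

# A stock-planning envelope that leans on external, predictive signals.
swept_forward = {
    ("inside", "historical"): 0.3,
    ("inside", "predictive"): 0.2,
    ("outside", "historical"): 0.1,
    ("outside", "predictive"): 0.4,
}
print(envelope_scores(swept_forward))  # (0.4, 0.3): responsive, but less stable
```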

Given a performance outline we can go spelunking for signals which fit inside the outline.

Luckily the mud map provides us with guidance on where to look. An internal-historical signal is, by definition, driven by historical data generated inside the organization. Past till data? An external-reactive signal is, by definition, external and reactive. A short-term (i.e. tomorrow’s) weather forecast, perhaps? Casting our net as widely as possible, we can gather all the signals which have the potential to drive us toward the desired outcome.

Next, we balance the information portfolio for this decision, identifying the minimum set of signals required to drive the decision. We can do this by grouping the signals by type (internal-historical, …) and then charting them against cost and value. Cost is the acquisition cost, which might represent a commercial transaction (buying access to another organization’s near-term weather forecast), the development and consulting effort required to create the data set (forming your own weather forecasting function), or a combination of the two, heavily influenced by an architectural view of the solution (as Rod outlined). Value is a measure of the potency and quality of the signal, which will be determined by existing BI analytics methodologies.

Plotting value against cost on a new chart creates a handy tool for finding the data sets to use. We want to pick from the lower right – high value but low cost.

An information mud map
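A minimal sketch of that selection step, with invented signals, costs and values standing in for the outputs of the BI analytics mentioned above:

```python
# Pick signals from the 'high value, low cost' corner of the cost/value chart.
signals = [
    {"name": "past till data",            "cost": 1, "value": 6},
    {"name": "bought demographic data",   "cost": 4, "value": 5},
    {"name": "in-house weather bureau",   "cost": 8, "value": 9},
    {"name": "licensed weather forecast", "cost": 3, "value": 8},
]

def shortlist(signals, max_cost, min_value):
    """Keep signals under the cost budget and over the value threshold,
    ordered by value per unit of cost."""
    picks = [s for s in signals if s["cost"] <= max_cost and s["value"] >= min_value]
    return sorted(picks, key=lambda s: s["value"] / s["cost"], reverse=True)

for s in shortlist(signals, max_cost=5, min_value=5):
    print(s["name"])
```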

It’s interesting to tie this back to the Tesco example. Global warming is making the weather more variable, resulting in unseasonable hot and cold spells. This was, in turn, driving short-term consumer demand in directions not predicted by existing planning models. These changes in demand represented costs, in the form of stock left on the shelves past its use-by date, or missed opportunities from not being able to service demand when and where it arose.

The solution was to expand the information footprint, pulling in more predictive signals from outside the business: changing the outline on the mud map to improve closed-loop performance. The decision to create an in-house weather bureau represents a straightforward cost-value trade-off in delivering an operational solution.

These two tools provide us with an interesting approach to tackling a number of challenges I’m seeing inside companies today. We’re a lot more externally driven now than we were even just a few years ago. The challenge is to identify customer problems we can solve and tie them back to what our organization does, rather than trying to conceive offerings in isolation and push them out into the market. These tools enable us to sketch the customer challenges (the decisions our customers need to make) and map them back to the portfolio of signals that we can (or might like to) provide to them. It’s outcome-centric, rather than asset-centric, which provides us with more freedom to be creative in how we approach the market, and has the potential to foster a more intimate approach to serving customer demand.