
Winners and losers in retail

There’s a lot of talk in the media at the moment about the soft retail market. Consumer confidence is down (Australian Consumer Confidence, Trading Economics) and we (as we’re all consumers) are not spending like we used to, or at least we’re not spending like the retailers would like us to, and when we do spend we’re running to cheaper online retailers. I’m not sure that this is the whole story though.

With a spare Sunday afternoon on my hands I decided to spend some time trawling through the ABS retail data and take a look beyond the month-on-month trends. Working on an Australian version of the Shift Index (The Shift Index: Measuring the forces of long term change, Deloitte) has nudged me to wonder about the long term trends that are affecting retail.



We saw the future, and there wasn’t an e-wallet to be found

Do NFC payments – with their tap-and-go simplicity – herald a revolution of the shopping experience? Or is NFC just an attempt to force more of our daily transactions onto payments platforms where their owners can claim a usage tax? The sales pitch is a promise of simpler, faster and more secure payments, allowing us to grab our goods and quickly get on with what we were doing. The reality is that the payment is only responsible for a small portion of the time wasted during the buying journey. Other trends we're seeing have much more potential to revolutionise the shopping experience, and they do this by moving the purchase away from the till, allowing consumers to transact wherever and whenever they need to. The huge investment in NFC means we can expect to see NFC terminals at most of the shops we frequent. However, at the same time we can expect NFC to be quickly eclipsed by other solutions which do a much better job of streamlining the buying journey.


How to cope with an IT transformation

Once started, an IT transformation is hard to stop. Such huge efforts – often involving investments of hundreds of millions, or even billions, of dollars – take on a life of their own. Once the requirements have been scoped and IT has started the hard work of development, business’s thoughts often turn from anticipation to dread. How do we understand what we’re getting? How do we cope with business change between when we signed off the requirements and when the solution enters production? Will the solution actually be able to support an operating and constantly evolving business?

Transformations take a lot of time and a huge amount of resources, giving them a life of their own within the business. It’s not uncommon for the team responsible for the transformation to absorb a significant proportion of the people and capital from the main business, often into the double-digit percentages. It’s also not uncommon for the time between kicking off the project and delivery of the first components into the business to be five years or more.

The world can change a lot in five years. Take Apple for example: sixty percent of the products they sell did not exist three years ago{{1}}. It’s not rare for the business to have a little buyer’s remorse once the requirements have been signed off and we sit waiting for the solution to arrive. Did we ask for the right thing? Will the platforms and infrastructure perform as expected? Are our requirements good enough for IT to deliver what we need? Will what we asked for be relevant when it’s delivered?

[[1]]60 percent of Apple’s sales are from products that did not exist three years ago @ asymco.com[[1]]

Apple quarterly sales by product

The business has placed a large bet – often putting the entire company’s life on the line – so it’s understandable to be a little worried, and the investment is usually large enough that the business is committed: there’s no backing out now. However, the decision to undertake the transformation has been made, our bets have been placed, and there’s no point regretting carefully considered decisions made in the past with the best evidence and information we could gather at the time. We should be looking forward, focusing on how we can best leverage this investment once it is delivered.

We can break our concerns into a few distinct groups: completeness, suitability, relevance and adaptability.

First, we tend to worry whether our requirements were complete. Did we give IT the information they need to do their job? Or were there holes and oversights in the requirements which will require interpretation by IT, interpretation which may or may not align with how the business conceived the requirement when we wrote down the bullet points?

Next, we are concerned that we asked for the right thing. I don’t know about you, but I find it hard to imagine a finished solution from tables, bullet points and process diagrams. And I know that if I’m having trouble, then you’re probably imagining a slightly different finished solution than I’m thinking of. And IT probably has a different picture in their heads again. Someone is bound to be disappointed when the final solution is rolled out.

Thirdly, we have relevance. Five years is a long time. Even three years is long, as Apple has shown us. Our requirements were conceived in a very different business environment to the one that the solution will be deployed into. We probably did our best to guess what would change during the delivery journey, but we can also be sure that some of our predictions will be wrong. How accurate our predictions are (which is largely a question of how lucky we were) will determine how relevant the solution will be. If our predictions were off the mark, then we might have a lot of work to do after the final release to bring the solution up to scratch.

Finally, we have adaptability. A business is not a fixed target, as it constantly evolves and adapts in response to the business environment it is situated in. Hopefully we specified the right flex-points – areas in the solution which will need to change rapidly in response to changing business need – back at the start of the journey. We don’t want our transformed IT estate to become instant legacy.

A lot of these concerns have already been addressed by ideas like rapid productionisation{{2}} and (gasp!) agile methodologies, but they’re solving a different problem. Once you have a transformation underway, advice that you should hire lots of Scrum masters will fall on deaf ears. While there’s a lot of good advice in these methodologies, our concern is coping with the transformation we have, not throwing away all the effort to date and trying something different.

[[2]]Rapid productionising @ Shermo[[2]]

So what can we do to help IT ensure that the transformed IT estate is the best that it can be?

We could try to test to success, making IT jump through even more hoops by creating more and increasingly strenuous tests to add to the acceptance criteria, but while faster and more intense might work for George Lucas{{3}}, it doesn’t add a lot of value in this instance. Our concerns are understanding the requirements we have and safeguarding the relevance of our IT estate in a rapidly evolving business environment. We’re less concerned that existing requirements are implemented correctly (we should have already done that work).

[[3]]Fan criticism of George Lucas: Ability as a film director @ Wookieepedia[[3]]

I can see two clear strategies for coping with the IT transformation we have. The first is to create a better shared understanding of what the final solution might look like (shared between business and IT, as well as between business stakeholders). The second is to start understanding how the future IT estate might need to evolve and adapt. Learnings from both of these activities can be fed back into the transformation to help improve the outcomes, as well as providing the business with a platform to communicate the potential scale and impact of the change to the broader user population.

There are a number of light-weight approaches to building and testing new user interfaces and workflows, putting the to-be requirements in the hands of the users in a very real and tactile way which enables them to understand what the world will look like post transformation. This needs to be more than UI wireframes or user storyboards. We need to trial new work practices, process improvements and decisioning logic. The team members at the coalface of the business also need to use these new tools in anger before we really understand their impact. Above all, they need time with these solutions, time to form an opinion, as I’ve written about before{{4}}.

[[4]]I’ve already told you 125% of what I know @ PEG[[4]]

Much like the retail industry, with their trial stores, we can create a trial solution to explore how the final solution should move and act. We’re less worried about the plumbing and infrastructure, as we’re focused on the layout and how the trial store is used. This trial solution can be integrated with existing operations and provided to a small user population – perhaps a branch in a bank, a single operations centre for back-office processing, or a single factory operated by a manufacturer – where we can realise, measure, test and update our understanding of what the to-be solution should look like, bringing our business and technology stakeholders to a single shared understanding of what we’re trying to achieve.

Our trial solution need not be on the production platform, as we’re trying to understand how the final solution should work and be used, not how it should be implemented. Startups are already providing enterprise mash-up platforms{{5}} which let you integrate UI, process and decisioning elements into one coherent user interface, often in weeks rather than months or years. Larger vendors – such as IBM and Oracle – are already integrating these technologies into their platforms. New vendors are also emerging which offer BPM on demand via a SaaS model.

[[5]]Enterprise Mash-Ups defined at Wikipedia[[5]]

Concerns about the scalability and maintainability of these new technologies can be balanced against the limited scale and lifetime of our trial deployment. A trial operations centre in one city often doesn’t need 24×7 support; it can limp along quite happily with a nine-to-five phone number for someone on the development team. We can also always fail back to the older solution if the trial solution isn’t up to scratch.

Our second strategy might be to experiment with new ideas and wholly new models of operation, collecting data and insight on how the transformed IT estate might need to evolve once it becomes operational. This is the disruptive sibling of the incremental improvements in the trial solution. (Indeed, some of the insights from these experiments might even be tested in a trial solution, if feasible.)

In the spirit of experimental scenario planning, a bank might look to Mint{{6}} or Kiva{{7}}, while a retailer might look to Zara{{8}}. Or, even more interesting, you might look across industries, with a bank looking to Zara for inspiration, for example. The scenarios we identify might range from tactical plays through to major disruptions. What would happen if you took a different approach to planning{{9}}, as Tesco did{{10}}, or if you, like Zara, focused on time to market rather than cost, inverting how you think about your supply chain in the process{{11}}?

[[6]]Mint[[6]]
[[7]]Kiva[[7]]
[[8]]Zara[[8]]
[[9]]Inside vs. Outside @ PEG[[9]]
[[10]]Tesco is looking outside the building to predict customer needs @ PEG[[10]]
[[11]]Accelerate along the road to happiness @ PEG[[11]]

We can frame what we learn from these experiments in terms of the business drivers and activities they impact, allowing us to understand how the transformed IT estate would need to change in response. The data we obtain can be compiled and weighted to create a heat map which highlights potential change hotspots in the to-be IT estate, valuable information which can be fed back into the transformation effort, while the (measured, evaluated and updated) scenarios can be compiled into a playbook to prepare us for when the new IT estate goes live.
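
To make that compiling-and-weighting step concrete, here’s a purely illustrative sketch in Python. The scenarios, business capabilities, likelihood weights and impact scores are all invented; the point is only the shape of the calculation, not any particular numbers.

```python
# Compile experiment results into a change heat map for the to-be IT estate.
# Each scenario scores its impact (0-5) on a set of business capabilities and
# carries a likelihood weight. All names and numbers below are illustrative.

scenarios = {
    # scenario name: (likelihood weight, {business capability: impact score 0-5})
    "outside-in demand planning":  (0.6, {"demand planning": 5, "supply chain": 3, "payments": 0}),
    "time-to-market supply chain": (0.4, {"demand planning": 2, "supply chain": 5, "payments": 1}),
    "new payments channel":        (0.3, {"demand planning": 0, "supply chain": 1, "payments": 4}),
}

heat = {}
for weight, impacts in scenarios.values():
    for capability, score in impacts.items():
        heat[capability] = heat.get(capability, 0.0) + weight * score

# The capabilities with the highest weighted scores are the likely change hotspots.
for capability, score in sorted(heat.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{capability:16} {score:.1f}")
```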

Whatever we do, we can’t sit by passively waiting for our new, transformed IT estate to be handed to us. Five years is a very long time in business, and if we want an IT estate that will support us into the future, then we need to start thinking about it now.

Working from the outside in

We’re drowning in a sea of data and ideas, with huge volumes of untapped information available both inside and outside our organization. There is so much information at our disposal that it’s hard to discern Arthur from Martha, let alone optimize the data set we’re using. How can we make sense of the chaos around us? How can we find the useful signals which will drive us to the next level of business performance, from amongst all this noise?

I’ve spent some time recently thinking about how the decisions our knowledge workers make in planning and managing business exceptions can have a greater impact on our business performance than the logic reified in the applications themselves. And how the quality of information we feed into their decision making processes can have an even bigger impact, as the data’s impact is effectively amplified by the decision making process. Not all data is of equal value and, as is often said, if you put rubbish in then you get rubbish out.

Traditional Business Intelligence (BI) tackles this problem by enabling us to mine for correlations in the data tucked away in our data warehouse. These correlations provide us with signals to help drive better decisions. Managing stock levels based on historical trends (Christmas rush, BBQs in summer …) is good, but connecting these trends to local demographic shifts is better.
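
As a toy illustration of that correlation-mining step (every number below is made up), we might check whether per-store BBQ-meat sales track the share of local households with children:

```python
# Does per-store BBQ-meat demand move with a local demographic measure?
# All figures are invented for illustration.

from statistics import correlation  # Pearson's r, Python 3.10+

summer_bbq_sales     = [310, 180, 260, 120, 220]        # units sold, stores A-E
households_with_kids = [0.34, 0.21, 0.30, 0.15, 0.26]   # share of local households

print("correlation:", round(correlation(summer_bbq_sales, households_with_kids), 2))
```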

Unfortunately this approach is inherently limited. No matter how powerful your analytical tools, you can only find correlations within and between the data sets you have in the data warehouse, and this is only a small subset of the total data available to us. We can load additional data sets into the warehouse (such as demographic data bought from a research firm), but in a world awash with (potentially useful) data, the real challenge is deciding which data sets to load, not finding the correlations once they are loaded.

What we really need is a tool to help scan across all available data sets and find the data which will provide the best signals to drive the outcome we’re looking for. An outside-in approach, working from the outcome we want to the data we need, rather than an inside-out approach, working from the data we have to the outcomes it might support. This will provide us with a repeatable method, a system, for finding the signals needed to drive us to the next level of performance, rather than the creative, hit-and-miss approach we currently use. Or, in geekier terms, a methodology which enables us to proactively manage our information portfolio and derive the greatest value from it.

I was doodling on the tram the other day, playing with the figure I created for the Inside vs. Outside post, when I had a thought. The figure was created as a heat map showing how the value of information is modulated by time (new vs. old) and distance (inside vs. outside). What if we used it the other way around? (Kind of obvious in hindsight, I know, but these things usually are.) We might use the figure to map from the type of outcome we’re trying to achieve back to the signals required to drive us to that outcome.

Time and distance drive the value of information

This addresses an interesting comment (in email) by a U.K. colleague of mine. (Jon, stand up and be counted.) As Andy Mulholland pointed out, the upper right represents weak, confusing signals, while the lower left represents strong, coherent signals. Being a delivery guy, Jon’s first thought was how to manage the dangers of focusing excessively on the upper right corner of the figure. Sweeping a plane’s wings forward increases its maneuverability, but at the cost of decreasing its stability. Relying too heavily on external, early signals can, in a similar fashion, push an organization into a danger zone. If we want to use these types of signals to drive crucial business decisions, then we need to understand the tipping point and balance the risks.

My tram-doodle was a simple thing, converting a heat map to a mud map. For a given business decision, such as planning tomorrow’s stock levels for an FMCG category, we can outline the required performance envelope on the figure. This outline shows us the sort of signals we should be looking for (inside the outline good, outside bad), while the shape of the outline provides us with an understanding of (and a way of balancing) the overall maneuverability and stability of the outcome the signals will support. More external predictive scope in the outline (i.e. more area inside the outline in the upper-right quadrant) will provide a more responsive outcome, but at the cost of less stability. Increasing internal scope will provide a more stable outcome, but at the cost of responsiveness. Less stability might translate to more (potentially unnecessary) logistics movements, while more stability would mean missed sales opportunities. (This all creates a little deja vu, with a strong feeling of computing Q values for non-linear control theory back in university, so I’ve started formalizing how to create and measure these outlines, as well as how to determine the relative weights of signals in each area of the map, but that’s another blog post.)

An information performance mud map
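
For those who like to see the moving parts, here’s a rough sketch of how an outline might be scored. This isn’t the formalisation mentioned above, just an illustration under simple assumptions: each signal gets a position on the two axes of the mud map (distance and recency, both scaled 0 to 1), and the responsiveness/stability trade-off is read off as weighted averages toward the upper-right and lower-left corners.

```python
from dataclasses import dataclass

@dataclass
class Signal:
    name: str
    distance: float  # 0.0 = generated inside the organization, 1.0 = external
    recency: float   # 0.0 = historical, 1.0 = new/predictive

def quadrant(sig: Signal) -> str:
    """Place a signal in one of the four regions of the mud map."""
    side = "external" if sig.distance >= 0.5 else "internal"
    age = "predictive" if sig.recency >= 0.5 else "historical"
    return f"{side}-{age}"

def outline_profile(signals: list[Signal]) -> dict[str, float]:
    """Score an outline: signals toward the upper-right (external-predictive)
    add responsiveness, signals toward the lower-left (internal-historical)
    add stability. Both scores are averages over the signals in the outline."""
    n = len(signals) or 1
    responsiveness = sum(s.distance * s.recency for s in signals) / n
    stability = sum((1 - s.distance) * (1 - s.recency) for s in signals) / n
    return {"responsiveness": responsiveness, "stability": stability}

# A hypothetical outline for planning tomorrow's stock levels
outline = [
    Signal("past till data", distance=0.1, recency=0.2),
    Signal("regional weather forecast", distance=0.9, recency=0.9),
    Signal("local demographic trends", distance=0.8, recency=0.3),
]

for s in outline:
    print(s.name, "->", quadrant(s))
print(outline_profile(outline))
```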

Given a performance outline we can go spelunking for signals which fit inside the outline.

Luckily the mud map provides us with guidance on where to look. An internal-historical signal is, by definition, driven by historical data generated inside the organization. Past till data? An external-reactive signal is, by definition, external and reactive. A short term (i.e. tomorrow’s) weather forecast, perhaps? Casting our net as widely as possible, we can gather all the signals which have the potential to drive us toward the desired outcome.

Next, we balance the information portfolio for this decision, identifying the minimum set of signals required to drive the decision. We can do this by grouping the signals by type (internal-historical, …) and then charting them against cost and value. Cost is the acquisition cost, and might represent a commercial transaction (buying access to another organization’s near-term weather forecast), the development and consulting effort required to create the data set (forming your own weather forecasting function), or a combination of the two, heavily influenced by an architectural view of the solution (as Rod outlined). Value is a measure of the potency and quality of the signal, which will be determined by existing BI analytics methodologies.

Plotting value against cost on a new chart creates a handy tool for finding the data sets to use. We want to pick from the lower right – high value but low cost.

An information mud map
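
A back-of-the-envelope sketch of that selection step might look something like the following. Every signal, cost and value here is invented; in practice the values would come out of your BI analytics and the costs from your architects.

```python
# Rank candidate signals by value per unit cost and pick from the "lower
# right" of the chart (high value, low cost) until a budget is spent.
# All names and numbers are illustrative.

signals = [
    # (name, type, acquisition cost, signal value)
    ("past till data",            "internal-historical", 1.0, 6.0),
    ("loyalty-card demographics", "internal-historical", 3.0, 5.0),
    ("bought demographic data",   "external-historical", 4.0, 4.5),
    ("short-term weather feed",   "external-reactive",   2.0, 7.0),
    ("in-house weather bureau",   "external-predictive", 9.0, 8.0),
]

budget, spend, portfolio = 8.0, 0.0, []
for name, kind, cost, value in sorted(signals, key=lambda s: s[3] / s[2], reverse=True):
    if spend + cost <= budget:
        portfolio.append(name)
        spend += cost

print("Signals to acquire:", portfolio, "for a cost of", spend)
```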

It’s interesting to tie this back to the Tesco example. Global warming is making the weather more variable, resulting in unseasonable hot and cold spells. This was, in turn, driving short-term consumer demand in directions not predicted by existing planning models. These changes in demand represented cost, in the form of stock left on the shelves past its use-by date, or missed opportunities, by not being able to service the demand when and where it arose.

The solution was to expand the information footprint, pulling in more predictive signals from outside the business: changing the outline on the mud map to improve closed-loop performance. The decision to create an in-house weather bureau represents a straightforward cost-value trade-off in delivering an operational solution.

These two tools provide us with an interesting approach to tackling a number of challenges I’m seeing inside companies today. We’re a lot more externally driven now than we were even just a few years ago. The challenge is to identify customer problems we can solve and tie them back to what our organization does, rather than trying to conceive offerings in isolation and push them out into the market. These tools enable us to sketch the customer challenges (the decisions our customers need to make) and map them back to the portfolio of signals that we can (or might like to) provide to them. It’s outcome-centric, rather than asset-centric, which provides us with more freedom to be creative in how we approach the market, and has the potential to foster a more intimate approach to serving customer demand.

Tesco’s looking outside the building to predict customer needs

Tesco is using external weather data to drive sales

Tesco, the UK’s largest retailer, has started using weather forecasts to help determine what to stock in its stores across the UK.

Traditional approaches to stock management use historical buying data to drive stock decisions. This has worked well to date, but the increasing unpredictability of today’s weather patterns — driven by global warming — has presented business with both an opportunity and a challenge. An unexpected warm (or cold) spell can create spikes in demand which go unserviced, while existing stock is left on the shelves.

In Tesco’s own words:

In recent years, the unpredictability of the British summer — not to mention the unreliability of British weather forecasters — has caused a massive headache for those in the retail food business deciding exactly which foods to put out on shelves.

The present summer is a perfect example, with the weather changing almost daily and shoppers wanting barbecue and salad foods one day and winter food the next.

Tesco’s solution was to integrate detailed regional weather reports — valuable, external information — with the sales history at each Tesco store. A rise of 10°C, for example, led to a 300% uplift in sales of barbecue meat and a 50% increase in sales of lettuce.

Integrating weather and sales data will enable Tesco to capture these spikes in demand while avoiding waste.
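
To make the mechanics concrete, here’s a minimal sketch. Only the 10°C, 300% and 50% figures come from the article; the product list, baseline volumes and the linear scaling with temperature are my assumptions.

```python
# Scale a store's baseline (historical) demand by an uplift factor keyed to
# the forecast temperature change. The per-10°C uplifts for barbecue meat and
# lettuce are taken from the article; everything else is assumed.

UPLIFT_PER_10C = {"barbecue meat": 3.0, "lettuce": 0.5}  # +300% and +50% per +10°C

def adjusted_order(product: str, baseline_units: float, temp_change_c: float) -> float:
    """Baseline demand from sales history, scaled linearly by the forecast
    temperature change relative to the seasonal norm."""
    uplift = UPLIFT_PER_10C.get(product, 0.0) * (temp_change_c / 10.0)
    return baseline_units * (1.0 + uplift)

# A store expecting a spell 5°C warmer than the seasonal norm
for product, baseline in [("barbecue meat", 200), ("lettuce", 400), ("soup", 150)]:
    print(product, "->", round(adjusted_order(product, baseline, 5.0)), "units")
```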

(Largely adapted from the article in the Times Online.)