Tag Archives: Management

Business is like a train…

The following analogy popped up the other day in an email discussion with a friend.

Running a business is a bit like being the Fat Controller, running his vast train network. We spend our time trying to get the trains to run on time, with the all-too-frequent distraction of digging the Troublesome Trucks out of trouble.

Improvement often means upgrading the tracks to create smoother, straighter lines. After years of doing this, any improvement to the tracks can only provide a minor, incremental benefit.

What we really need is a new signalling system. We need to better utilise the tracks we already have, and this means making better decisions about which trains to run where, and better coordination between the trains. Our tracks are fine (as long as we keep up the scheduled maintenance), but we do need to better manage transit across and between them.

Swap “tracks” for “processes”, and I think this paints quite a nice visual picture.

Years of process improvement (via LEAN, Six Sigma and, more recently, BPM) have straightened and smoothed our processes to the point that any additional investment has hit the law of diminishing returns. Rather than continue to try and improve the processes on my own, I’d outsource process maintenance to a collection of SaaS and BPO providers.

The greater scale of these providers allows them to invest in improvements which I don’t have the time or money for. Handing over responsibility also creates the time and space for me to focus on improving the decisions on which process to run where, and when: my signalling system.

This is especially important in a world where it is becoming rare to even own your processes.

We forget just how important a good signalling system is. Get it right and you get the German or Japanese train networks. Get it wrong and you rapidly descend into the second or third world, regardless of the quality of your tracks.

BPM is not a programming challenge

Get a few beers into a group of developers these days and it’s not uncommon for the complaints to start flowing about BPM (Business Process Management). BPM, they usually conclude, is more pain than it’s worth. I don’t think that BPM is a bad technology, per se, but it does appear to be the wrong tool for the job. The root of the problem is that BPM is a handy tool for programming distributed systems, but the challenge of creating distributed systems is orthogonal to business process execution and management. We’re using a screwdriver to belt in a nail. It’s more productive to think of business process execution and management as a (realtime) planning problem.

Programming is the automation of the known. Take a stable, repeatable process and automate it; bake the process into silicon to make it go fast. This is the same tactic that I was using back in my image processing days (and that was a long time ago). We’d develop the algorithms in C, experiment and tweak until they were right, and once they were stable we’d burn them into an ASIC (Application-Specific Integrated Circuit) to provide a speed boost. The ASICs were a lot faster than the C version: more than an order of magnitude faster.

Programmers, and IT folk in general, have a habit of treating the problems we confront as programming challenges. This has been outstandingly successful to date; just try and find a home appliance or service that doesn’t have a program buried in it somewhere. (It’s not an unmitigated success though: our tumble dryer is driving us nuts with its overly frequent software errors.) It’s not surprising that we chose to treat business process automation and management as a programming problem once it appeared on our radar.

Don’t get me wrong: BPM is a solid technology. A friend of mine once showed me how he’d used his BPM stack to test its BPEL engine. Aside from being a nice example of eating your own dog food, it was a great example of using BPEL as a distributed programming tool to solve a small but complex problem.

So why do we see so many developers complaining about BPM? It’s not the technology itself: the technology works. The issue is that we’re using it to solve problems that it’s not suited for. The most obvious evidence of this is the current poor state of BPM support for business exception management. We’ve deployed a lot of technology to support exception management in business processes without really solving the problem.

Managing business exceptions is driving the developers nuts. I know of one example where managing a couple of not infrequent business exceptions was the major technical problem in a very significant project (well into eight figures). The problem is that business exceptions are not from the same family of beasts as programming exceptions. Programming exceptions are exceptional. Business exceptions are just a (slightly) different way to achieve the same goal. All our compensating actions and exception stacks just get in the way of solving the problem.

On PowerPoint, anything can look achievable. The BPMN diagram we shared with the business was extremely elegant: nice sharp angles and coloured bubbles. Everyone agreed that it was a good representation of what the business does. The devil is in the details though. The development team quickly becomes frustrated as they have to deal with the realities of implementing a dynamic and exception-rich business process. Exceptions pile up on top of exceptions, and soon that BPMN diagram covers a wall, littered as it is with branch and join operations. It’s not a complex process, but we’ve made it incredibly complicated.

Edward Tufte's take on explaining complex concepts with PowerPoint
A military parade explained, a la PowerPoint

We can’t program our way out of this box by piling on more features and patches. We can rip the complications out – simplifying the process to the point that it becomes tractable with our programming tools (which is what happened in my example above). But this removes all the variation which makes the process so valuable. (This, of course, is the dirty secret of LEAN et al: you’re trading flexibility for cost saving, making your processes very efficient but also very fragile.)

Or we can try solving the problem a different way.

Don’t treat the automation of a business process as a programming task (and by this I mean the capture of imperative instructions for a computer to execute, no matter how unstructured or parallel). Programming is the automation of the known. Business processes, however, are the management and anticipation of the unknown. Modelling business processes should be seen as a (realtime) planning problem.

Which comes back to one of my common themes: push vs pull models, or the importance of what over how. Or, as a friend of mine with a better turn of phrase puts it, we need to stop trying to invent new technologies and work out how to use what we already have more effectively. Rather than trying to invent new technologies to solve problems that are already well understood elsewhere, pushing the technology into the problem, a more pragmatic approach is to leverage that existing understanding and then pull in existing technologies as appropriate.

Planning and executing in a rapidly changing environment is a well understood problem. Just ask anyone who’s been involved with the military. If we view the management of a business process as a realtime planning problem, then what were business exceptions become simply alternate routes to the same goal, rather than problems requiring compensating actions.

Battle of Gaugamela (Arbela) (331BC)
Take that hill!

One key principle is to establish a clear goal – Take that hill! or Find that lost shipment! – articulate the tactics, the courses of action we might use to achieve that goal, and then defer decisions on which course of action to take until the decision needs to be made. If we commit to a course of action too early, locking in a decision during design time, then it’s likely that we’ll be forced to manage the exception when we realise that we picked the wrong course of action. It’s better to wait until the moment when all relevant information and options are available to us, and then take decisive action.

From a modelling point of view, we need to establish the key events at which we need to make decisions in line with a larger strategy. The decision at each of these events needs to weigh the available courses of action and select the most appropriate, much like using a set of business rules to identify applicable options. The course of action selected (a scenario or business process fragment) will be semi-independent from the others in the applicable set, as it addresses a different business context. Nor can the scenario we pick be predetermined, as it depends on the business context. Short and sharp, each scenario will be simple, general and flexible, enabling us to configure it for the specific circumstances at hand, as we can’t anticipate all possible scenarios. And finally, we need to ensure that the scenarios we provide cover the situations we can anticipate, including the provision of a manual escape hatch.
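To make this concrete, here’s a minimal sketch (in Python, with purely illustrative names: the shipment context, rules and priorities are all assumptions, not any particular BPM product’s API) of what deferring the decision until runtime might look like. The goal stays fixed, business rules determine which courses of action are viable at the decision event, and the most appropriate process fragment is selected only when the event actually fires.

```python
# A sketch of goal-directed process selection: the choice of course of action
# is deferred until the decision event fires. All names are illustrative.
from dataclasses import dataclass
from typing import Callable

@dataclass
class CourseOfAction:
    name: str
    applies: Callable[[dict], bool]  # business rule: viable in this context?
    priority: int                    # ranking between viable options
    run: Callable[[dict], str]       # the process fragment itself

def query_carrier(ctx: dict) -> str:
    return f"querying carrier {ctx['carrier']} for shipment {ctx['id']}"

def escalate_to_human(ctx: dict) -> str:
    return f"routing shipment {ctx['id']} to the operations desk"

# The goal ("find that lost shipment") is fixed; the applicable set of
# scenarios is evaluated only at the moment of decision, and the manual
# escape hatch is always available as a last resort.
COURSES = [
    CourseOfAction("query-carrier", lambda c: c.get("carrier") is not None, 1, query_carrier),
    CourseOfAction("manual-escape-hatch", lambda c: True, 99, escalate_to_human),
]

def decide(ctx: dict) -> CourseOfAction:
    viable = [c for c in COURSES if c.applies(ctx)]
    return min(viable, key=lambda c: c.priority)  # most appropriate viable option

for ctx in ({"id": "SHP-42", "carrier": "ACME"}, {"id": "SHP-07"}):
    print(decide(ctx).run(ctx))
# -> querying carrier ACME for shipment SHP-42
# -> routing shipment SHP-07 to the operations desk
```

Note that adding a new scenario means adding a rule and a fragment, not rewiring a wall-sized diagram of branches and joins.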

Goals, rules and process: in that order. Integrated, rather than as standalone engines. Pull these established technologies into a single platform and we might just be closer to a BPM solution in line with what we really need. (And we know there is nothing new under the sun, as this is essentially a build on Jim Sinur’s rules-and-process argument, and borrows a lot from STRIPS, PRS, dMARS and even the work I did at Agentis.)

As I mentioned at the start of this missive, BPM as a product category makes sense and the current implementations are capable distributed programming tools. The problem is that business process management is not a distributed programming challenge. Business exceptions are not exceptional. I say steal a page from the military strategy book – they, after all, have been successfully working on this problem for some time – and build our solutions around the ideas the military use to succeed in a rapidly changing environment. Goals, rules and processes. The trick is to be pragmatic, rather than dogmatic, in our implementation, and focus on solving the problem rather than trying to create a new technology.

Danger Will Robinson!

Ack! The scorecard's gone red!

Andy Mulholland has a nice post over at the Capgemini CTO blog, which points out that we have a strange aversion to the colour red. Having red on your balanced scorecard is not necessarily a bad thing, as it tells you something that you didn’t know before. Insisting on managers delivering a completely green scorecard is just throwing good information away.

Unfortunately something’s wrong with Capgemini’s blogging platform, and it won’t let me post a comment. Go and read the post, and then you can find my comment below.

Economists have a (rather old) saying: “if you don’t fail occasionally, then you’re not optimising (enough)”. We need to consider red squares on the board to be opportunities, just as much as they might be problems. Red just represents “something happened that we didn’t expect”. This might be bad (something broke), or it might be good (an opportunity).

Given the rapid pace of change today, and the high incidence of the unexpected, managing all the red out of your business instantly turns you into a dinosaur.

Consulting doesn’t work any more. We need to reinvent it.

What does it mean to be in consulting these days? The consulting model that’s evolved over the last 30 – 50 years seems to be breaking down. The internet and social media have shifted the way business operates, and the consulting industry has failed to move with it. The old tricks that the industry has relied on — the did it, done it stories and the assumption that I know something you don’t — no longer apply. Margins are under pressure and revenue is on the way down (though outsourcing is propping up some) as clients find smarter ways to solve problems, or decide that they can simply do without. The knowledge and resources the consulting industry has been selling are no longer scarce, and we need to sell something else. Rather than seeing this as a problem, I see it as a huge opportunity; an opportunity to establish a more collaborative and productive relationship founded on shared, long term success. Sell outcomes, not scarcity and rationing.

I’m a consultant. I have been for some time too, working in both small and large consultancies. It seems to me that the traditional relationship between consultancy and client is breaking down. This also appears to be true for both flavours of consulting: business and technology. And by consulting I mean everything from the large tier ones down to the brave individuals carving a path for themselves.

Business is down, and the magic number seems to be roughly a 17% decline year-on-year. One possible cause might be that the life blood of the industry — the large multi-year transformation project — has lost a lot of its attraction in recent years. If you dig around in the financials for the large publicly listed consultancies and vendors you’ll find that the revenue from IT estate renewal and transformation (application licenses, application configuration and installation services, change management, and even advisory) is sagging by roughly 17% everywhere around the globe.

SABER @ American Airlines

Large transformation projects have lost much of their attraction. While IBM successfully delivered SABER back in the 60s, providing a heart transplant for American Airlines’ ticketing processes, more recent stabs at similarly sized projects have met with less than stellar results. Many more projects are quietly swept under the carpet, declared a success so that all involved can move on to something else.

The consulting model is a simple one. Consultants work on projects, and the projects translate into billable hours. Consultancies strive to minimise overheads (working on customer premises and minimising support staff), while passing incidental costs through to clients in the form of expenses. Billable hours drive revenue, with lower grades providing higher margins.

This creates a couple of interesting, and predictable, behaviours. First, productivity enhancing tooling is frowned on. It’s better to deploy a graduate with a spreadsheet than a more senior consultant with effective tooling. Second, a small number of large transactions are preferred to a large number of small transactions. A small number of large transactions requires less overhead (sales and back-office infrastructure).

All this drives consultancies to create large, transformational projects. Advisory projects end up developing multi-year (or even multi-decade) roadmaps to consolidate, align and optimise the business. Technology projects deliver large, multi-million dollar, IT assets into the IT estate. These large, business and IT transformation projects provide the growth, revenue and margin targets required to beat the market.

This desire for large projects is packaged up in what is commonly called “best practice”. The consulting industry focuses on did it, done it stories: standard and repeatable projects that minimise risk. The sales pitch is straightforward: “Do you want this thing we did over here?” This might be the development of a global sourcing strategy, an ERP implementation, …

Spencer Tracy & Katharine Hepburn in The Desk Set

This approach has worked for some time, with consultancy and client more-or-less aligned. Back when IBM developed SABER you were forced to build solutions from the tin up, and even small business solutions required significant effort to deliver. In 1957, when Spencer Tracy played a productivity expert in The Desk Set, new IT solutions required very specific skill sets to develop and deploy. These skills were in short supply, making it hard for an organisation to create and maintain a critical mass of in-house expertise.

Rather than attempt to build an internal capability — forcing the organisation on a long learning journey, a journey involving making mistakes to acquire tacit knowledge — a more pragmatic approach is to rent the capability. Using a consultancy provides access to skills and knowledge you can’t get elsewhere, usually packaged up as a formal methodology. It’s a risk management exercise: you get a consultancy to deliver a solution or develop a strategy as they just did one last week and know where all the potholes are. If we were cheeky, we would summarise this by stating that consultancies have a simple value proposition: I know something you don’t!

It’s a model defined by scarcity.

A lot has changed in the last few years; business moves a lot faster and a new generation of technology is starting to take hold. The business and technology environment is changing so fast that we’re struggling to keep up. Technology and business have become so interwoven that we now talk of Business-Technology, and a lot of that scarce knowledge is now easily obtainable.

The Diverging Pulse Rates of Business and Technology
The Diverging Pulse Rates of Business and Technology

The scarce tacit knowledge we used to require is now bundled up in methodologies; methodologies which are trainable, learnable, and scalable. LEAN and Six Sigma are good examples of this, starting as more black art than science, maturing into respected methodologies, to today, where certification is widely available and each methodology has a vibrant community of practitioners spread across both clients and consultancies. The growth of MBA programmes also ensures that this knowledge is spread far and wide.

Technology has followed a similar path, with the detailed knowledge required to develop distributed solutions incrementally reified in methodologies and frameworks. When I started my career XDR and sockets were the networking technologies of the day, and teams often grew to close to one hundred engineers. Today the same solution developed on a modern platform (Java, Ruby, Python …) has a team in the single digits, and takes a fraction of the time. Tacit knowledge has been reified in software platforms and frameworks. SaaS (Software as a Service) takes this to a whole new level by enabling you to avoid software development entirely.

The did it, done it stories that consulting has thrived on in the past are being chewed up and spat out by the business schools, open source, and the platform and SaaS vendors. A casual survey of the market usually finds that SaaS-based solutions require 10% of the installation effort of a traditional on-premises solution. (Yes, that’s 90% less effort.) Less effort means less revenue for the consultancies. It also reduces the need for advisory services, as provisioning a SaaS solution with the corporate credit card should not require a $200,000 project to build a cost-benefit analysis. And gone are the days when you could simply read the latest magazines and articles from the business schools, spouting what you’d read back to a client. Many clients have been on the consulting side of the fence, have a similar education from the business schools, and read all the same articles.

I know something you don’t! no longer works. The world has moved on and the consulting industry needs to adapt. The knowledge and resources the industry has been selling are no longer scarce, and we need to sell something else. I see this as a huge opportunity; an opportunity to establish a more collaborative and productive relationship founded on shared, long term success. As Jeff Jarvis has said: stop selling scarcity, sell outcomes.

Updated: A good friend has pointed out that one area of consulting — one which we might call applied business consulting — resists the trend to be commoditized. This is the old school task of sitting with clients one-on-one, working to understand their enterprise and what makes it special, and then using this understanding to find the next area or opportunity that the enterprise is uniquely qualified to exploit. There are no junior consultants in this area, only old grey-beards who are too expensive to stay in their old jobs, but who are still highly useful to the industry. Unfortunately this model doesn’t scale, forcing most (if not all) consultancies into a more operational knowledge transfer role (think Six Sigma and LEAN) in an attempt to improve revenue and GOP.

Updated: Keith Coleman (global head of public sector at Capgemini Consulting) makes a similar case with Time to sell results, not just advice (via @rpetal27).

Updated: I’ve responded to my own post, tweaking my consulting page to capture my take on what a consultant needs to do in this day and age.

One of the only two sources of sustainable competitive advantage available to us today

I stumbled onto a somewhat interesting post over at HBR, which talks about applying Garry Kasparov’s ideas in the business world. This is actually quite a relevant pairing, though an old one in the tradition of human-computer augmentation.

The idea is a simple one, and takes far fewer words to express than the article took.

Use information technology to augment users, rather than replace them.

IT is good at a lot of tasks, and less good at others. People, too, have their strengths and weaknesses. What’s interesting is that computers are weak where people are strong, and vice-versa. Computers excel as appliers of algorithms with huge memories and an attention to detail; people are powerful, creative problem solvers who have trouble thinking of four things at once and like coffee breaks. Why not pair the two, and get the best of both worlds?

Rather than replace the users, why don’t we use technology to automate the easy (for technology) 80% of what they do? (This is something I’ve written about before.) In the chess example, the easy 80% is providing the user with a chess computer for the commoditized solution space search, allowing them to focus on strategy. The performance improvement this approach provides can create a significant competitive advantage. As Garry Kasparov found, even a weak player with a chess computer can be impossible to defeat, by human or computer.
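As a rough sketch of the pattern (the scoring logic, threshold and case fields here are all hypothetical), the pairing boils down to a simple triage loop: let the machine decide when it is confident, and hand the case to a person when it is not.

```python
# A minimal sketch of human-computer augmentation via triage. The scoring
# logic, threshold and case fields are illustrative assumptions.

def classify(case: dict) -> tuple[str, float]:
    """Stand-in for any model or rules engine: returns (decision, confidence)."""
    confidence = 0.95 if case["amount"] < 1_000 else 0.55
    return "approve", confidence

def handle(case: dict, threshold: float = 0.9) -> str:
    decision, confidence = classify(case)
    if confidence >= threshold:
        return f"auto: {decision}"        # the easy 80%: the machine decides
    return "queued for human review"      # the hard 20%: a person decides

print(handle({"amount": 250}))     # -> auto: approve
print(handle({"amount": 12_000}))  # -> queued for human review
```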

This then provides us with two options:

  1. Take the improvement as a saving by reducing head count.
  2. Reinvest the improvement by providing our users with more time to focus on the hard 20%.

(I must admit, I much prefer the latter.)

If we continue to focus on automating the next easy 80%, we’ve created a platform and process for continual business optimisation. (Improvements in search efficiency would simply be harvested when appropriate to maintain parity.) Interestingly, this is one of only two sources of a sustainable competitive advantage available to us today.

The competitive advantage with this approach rests with the user, in the commonplaces, the strategies, they use to solve problems. By reifying these strategies (the easy 80%) in software (processes and rules) we move some of the competitive advantage into the organisation, where it can be leveraged by other users. By continually attacking the easy 80% of what the users are doing, we are continually improving our competitive position. We could even sell our IT platform (but not the reified problem solving strategies) to our competitors — commoditizing the platform to realise a cost saving — without endangering our competitive position, as they would need to go through the same improvement and learning process that we did, while we continue to race ahead.

Now that’s scary: as long as we keep improving our process, our competitors will never be able to catch us.


Is “agile enterprise IT” an oxymoron?

Have we managed to design agility out of enterprise IT? Are the two now incompatible? Our decision to measure IT purely in terms of cost (ROI) or stability (SLAs) means that we have put aside other desirable characteristics like responsiveness, making our IT estates more like the lumbering airships of the 1920s. While efficient and reliable (once we got the hydrogen out of them), they are neither exciting nor responsive to the business. The business ends up going elsewhere for its thrills. What to do?

LZ-127 Graf Zeppelin

An interesting post on jugaad over at the Capgemini CTO blog got me thinking. The tension between the managed chaos that jugaad seems to represent and the stability we strive for in IT seems to nicely capture the current tensions between business and IT. Business finds that opportunities are blinking in and out of existence faster than ever before, providing dramatically reduced windows of opportunity, leaving IT departments unable to respond in time and prompting the business to look outside the organisation for solutions.

The first rule of CIOs is “you only have a seat at the strategy table if you’re keeping the lights on”. The pressure is on to keep the transactions flowing, and we spend a lot of time and money (usually the vast majority of our budget) ensuring that transactions do indeed flow. We often complain that our entire focus seems to be on cost and operations, when there is so much more we can bring to the leadership team. We forget that all departments labour under a similar rule, and all these rules are really just localised versions of a single overarching rule: the first rule of business, which is to be in business (i.e. remain solvent). Sales needs to sell, manufacturing needs to manufacture, … By devoting so much of our energy to cost and stability, we seem to have dug ourselves into a bit of a hole.

There’s another rule that I like to quote from time-to-time: management is not the art of making the perfect decision, but making a timely decision and then making it work. This seems to be something we’ve forgotten in the West, and particularly in IT. Perfection is an unattainable ideal in the real world, and agility requires a little chaos/instability. What’s interesting about jugaad is the concept’s ability to embrace the chaos required to succeed when resource constraints prevent you from using the perfect (or even simply the best) solution.

Vickers F.B.5 Gunbus

Consider a fighter plane. The other day I was watching a documentary on the history of aircraft, which showed how the evolution of fighters is a progression from stability to instability. The first fighters (and we’re talking the start of WWI here: all fabric and glue) were designed to float above the battlefield, where the pilots could shoot down at soldiers, or even lob bombs at them. They were designed to be very stable, so stable that the pilot could ignore the controls for a while and the plane would fly itself. Or you could shoot out most of the control surfaces and still land safely. (Sounds a bit like a modern, bullet-proof IT application, eh?)

The Red Baron: Manfred von Richthofen

The problem with these planes is that they are very stable. It’s hard to make them turn and dance about, and this makes them easy to shoot down. They needed to be more agile, harder to shoot down, and the solution was to make them less stable. The result, by the end of WWI, was the fairly unstable triplanes we associate with the Red Baron. Yes, this made them harder to fly, and even harder to land, but it also made them harder to hit.

Whizz forward to the modern day, and we find that all modern fighters are unstable by design. They’re so unstable that they’re unflyable without modern fly-by-wire systems. Forget about landing: you couldn’t even get them off the ground without their fancy control systems. The governance of the fly-by-wire systems lets the pilot control the uncontrollable.

The problem with modern IT is that it is too stable. Not the parts, the individual applications, but the IT estate as a whole. We’ve designed agility out of it, focusing on creating a stable and efficient platform for lobbing bombs onto the enemy below. This is great if the landscape below us doesn’t change, and the enemy promises not to move or shoot back, but not so good in today’s rapidly changing business environment. We need to be able to rapidly turn and dance about, both to dodge bullets and pounce on opportunities. We need some instability, as instability means that we’re poised for change.

Jugaad points out that we need to allow in a bit of chaos if we want to bring the agility back in. The chaos jugaad provides is the instability we need. This will require us to update our governance processes, evolving them beyond simply being a tool to stop the bad happening, transforming governance into a tool for harvesting the jugaad where it occurs. After all, the role of enterprise IT is to capture good ideas and automate them, allowing them to be leveraged across the entire enterprise.

Managing chaos has become something of a science in the aircraft world. Tools like Energy-Maneuverability theory are used during aircraft design to make informed tradeoffs between weight, weapons load, amount of wing (i.e. ability to turn), and so on. This goes well beyond most efforts to map and score business processes, which are inherently static, pieces-and-parts, cost-driven approaches. Our focus should be on using different technologies and delivery approaches to modify how our IT estate responds to business change; optimising our IT estate’s dynamic, change-driven characteristics as well as its cost-driven static characteristics.

This might be the root of some of the problems we’re seeing between business and IT. IT’s tendency to measure value in terms of cost and/or stability leads us to create IT estates optimised for a static environment, which are at odds with the dynamic nature of the modern business environment. We should be focusing on the overall dynamic business performance of the IT estate, its energy-maneuverability profile.

We can be our own worst enemy

The only certainties in life are death and taxes, or so we’ve been told on numerous occasions. I’d like to add “change” to the list. Change, be it business change or change in our personal lives, has accelerated to the point that we can expect the environment we inhabit to change significantly in the immediate future, let alone over the length of our careers. If we want our business to remain competitive in this ever evolving landscape, then overcoming our (and our team’s) own resistance to change is our biggest challenge.

The rules we have built our careers on, rules forged back in the industrial revolution, are starting to come apart. Most folk—from the Baby Boomers through to Gen Y—expect the skills they acquired in their formative years to support them well through to retirement. How we conduct business might change radically, driven by technological and societal change, but what we do in business could be assumed to change at a slower than generational pace. We might order over the internet rather than via a physical catalogue, or call a person via a mobile rather than call a place via a landline, but the skills we learnt in our formative years would still serve us well. For example, project managers manage increasingly complex projects over the length of their career, even though how they manage projects has migrated from paper GANTT charts to MS Project, and now onto BaseCamp.

Which is interesting, as it is this what, the doctrine, that most people use to define themselves. A project manager manages projects, and has (most likely) built their career by managing increasingly larger projects and, eventually, programs. Enterprise architects work their way toward managing ever larger transformation programs. Consultants work to become stream leads, team leaders, finally running large teams across entire sectors or geographies. And so on. The length of someone’s career sees them narrowing their focus to specialise in a particular doctrine, while expanding their management responsibilities. It is this doctrine that most people define themselves by, and their career is a constant investment in doctrine to enhance their skills, increasing their value with respect to the doctrine they chose to focus on.

This is fine in a world where the doctrines a business needs to operate change slower than the duration of a typical career. But what happens if the pace of change accelerates? When the length of the average career becomes significantly longer than the useful life of the doctrines the business requires?

We’ve reached an interesting technological inflection point. Information technology to date could be characterized as the race for automation. The vast bulk of enterprise applications have been focused on automating a previously manual task. This might be data management (general ledger, CRM, et al) or transforming data (SAP APO). The applications we developed were designed as bolt-ons to existing business models, much like adding an after-market turbocharger to your faithful steed. Most (if not all) of the doctrines in the technology profession have grown to support this model: the development and deployment of large IT assets to support an existing business.

However, the role of technology in business is changing. The market for enterprise applications has matured to the point where a range of vendors can supply you with applications to automate any area of the business you care to name, making these applications ubiquitous and commoditized. The new, emerging model has us looking beyond business-technology alignment, trying to identify new business models which can exploit synergies between the two. A trend Forrester has termed Business-Technology.

The focus has shifted from asset to outcome, changing the rules we built our careers on. Our tendency to define ourselves by the doctrine we learnt/developed yesterday has become a liability. We focus on how we do something, not why we do it, making it hard to change our habits when the assumptions they are founded on no longer apply. With our old doctrines founded on the development and management of large IT assets, we’re ill-equipped to adapt to the new engagement models Business-Technology requires.

The shift to an outcome focus is part of the acceleration of the pace of business. The winners in this environment are constantly inventing new doctrine as they look for better ways to achieve the same outcome. How we conduct business is changing so rapidly that we can’t expect to be doing the same thing in five years’ time, let alone for the rest of our career. What we learnt to do in our mid-20s is no longer (entirely) relevant, and doesn’t deliver the same outcome as it used to. Isn’t the definition of insanity continuing to do something that we know doesn’t work? So why, then, do we continue to launch major transformation programmes when we know they have a low chance of success in the current business and social environment? Doctrine has become dogma.

We need to (re)define ourselves along the lines of “I solve problems”: identifying with the outcomes we deliver, at both personal and departmental levels. This allows us to consider a range of doctrines/options/alternatives and look for the best path forward. If we adopt “I am a TOGAF enterprise architect” (or Six Sigma black belt, or Prince2, or …) then we will just crank the handle, as the process has become more important than the goal. If we adopt “how can I effectively evolve this IT estate with the tools I have”, then we’ll be more successful.

Rolls-Royce and Craigslist are good examples of organisations using a focus on outcomes to drive their businesses forward. Bruce Lee might even be the poster child for this problem solving mentality. He studied a wide range of fighting doctrines, and designed some of his own, in an attempt to break his habits and find a better way.

The Boundaryless Value-Chain

I’ve uploaded another presentation to SlideShare. (Still trying to work through the backlog.) This is something that I had been presenting to logistics companies and at a few public forums, such as The Open Group.

How real-time computing will transform supply chain decision-making

This presentation will provide a plain-English account of how real-time computing will transform supply chain decision-making and control. Peter Evans-Greenwood will illustrate the emerging leading practices with lessons learned from case studies, featuring clients across the globe.

The biggest challenge for today’s supply chains is to be adaptive. While tremendous gains have been made over the last thirty years, today’s applications are not as flexible as promised. New tools and techniques are required to capture and automate the non-linear, exception-rich, business logic that we currently rely on employees to deliver. Extending the technology stack will allow us to leverage the higher capacity of technology to deliver globally optimal solutions and to introduce innovations such as the moving warehouse into all our supply chains.

Inside vs. Outside

As Andy Mulholland pointed out in a recent post, all too often we manage our businesses by looking out the rear window to see where we’ve been, rather than looking forward to see where we’re going. How we use information to drive informed business decisions has a significant impact on our competitiveness.

I’ve made the point previously (which Andy built on) that not all information is of equal value. Success in today’s rapidly changing and uncertain business environment rests on our ability to take timely, appropriate and decisive action in response to new insights. Execution speed or organizational intelligence are not enough on their own: we need an intimate connection to the environment we operate in. Simply collecting more historical data will not solve the problem. If we want to look out the front window and see where we’re going, then we need to consider external market information, and not just internal historical information, or predictions derived from this information.

A little while ago I wrote about the value of information. My main point was that we tend to think of most information in one of two modes—either transactionally, with the information part of current business operations; or historically, when the information represents past business performance—where it’s more productive to think of an information age continuum.

The value of information

Andy Mulholland posted an interesting build on this idea on the Capgemini CTO blog, adding the idea that information from our external environment provides mixed and weak signals, while internal, historical information provides focused and strong signals.

The value of information and internal vs. external drivers

Andy’s major point was that traditional approaches to Business Intelligence (BI) focus on these strong, historical signals, which is much like driving a car by looking out the back window. While this works in a (relatively) unchanging environment (if the road was curving right, then keep turning right), it’s less useful in a rapidly changing environment as we won’t see the unexpected speed bump until we hit it. As Andy commented:

Unfortunately stability and lack of change are two elements that are conspicuously lacking in the global markets of today. Added to which, social and technology changes are creating new ideas, waves, and markets – almost overnight in some cases. These are the ‘opportunities’ to achieve ‘stretch targets’, or even to adjust positioning and the current business plan and budget. But the information is difficult to understand and use, as it is comprised of ‘mixed and weak signals’. As an example, we can look to what signals did the rise of the iPod and iTunes send to the music industry. There were definite signals in the market that change was occurring, but the BI of the music industry was monitoring its sales of CDs and didn’t react until these were impacted, by which point it was probably too late. Too late meaning the market had chosen to change and the new arrival had the strength to fight off the late actions of the previous established players.

We’ve become quite sophisticated at looking out the back window to manage moving forward. A whole class of enterprise applications, Enterprise Performance Management (EPM), has been created to harvest and analyze this data, aligning it with enterprise strategies and targets. With our own quants, we can create sophisticated models of our business, market, competitors and clients to predict where they’ll go next.

Robert K. Merton: Father of Quants

Despite EPM’s impressive theories and product sheets, it cannot, on its own, help us leverage these new market opportunities. These tools simply cannot predict where the speed bumps in the market will be, no matter how sophisticated they are.

There’s a simple thought experiment economists use to show the inherent limitations in using mathematical models to simulate the market. (A topical subject given the recent global financial crisis.) Imagine, for a moment, that you have a perfect model of the market; you can predict when and where the market will move with startling accuracy. However, as Sun likes to point out, statistically the smartest people in your field do not work for your company; the resources in the general market are too big when compared to your company. If you have a perfect model, then you must assume that your competitors also have a perfect model. Assuming you’ll both use these models as triggers for action, you’ll both act earlier, and possibly in the same way, changing the state of the market. The fact that you’ve invented a tool that predicts the speed bumps causes the speed bumps to move. Scary!

Enterprise Performance Management is firmly in the grasp of the law of diminishing returns. Once you have the critical mass of data required to create a reasonable prediction, collecting additional data will have a negligible impact on the quality of this prediction. The harder your quants work, the more sophisticated your models, the larger the volume of data you collect and trawl, the lower the incremental impact will be on your business.
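To put a rough number on this intuition: for many estimation problems, error shrinks roughly with the square root of the sample size, so each extra order of magnitude of data buys less and less. A toy illustration, assuming the 1/√n error model (an idealisation, not a claim about any particular EPM product):

```python
# A toy illustration of diminishing returns from more data, assuming
# estimation error shrinks like 1/sqrt(n) (a common idealisation).
import math

for n in (1_000, 10_000, 100_000, 1_000_000, 10_000_000):
    print(f"{n:>12,} records -> relative error ~ {1 / math.sqrt(n):.4%}")

# Going from 1M to 10M records only cuts the error from ~0.10% to ~0.03%:
# ten times the data to trawl, for a few hundredths of a percent.
```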

Andy’s point is a big one. It’s not possible to accurately predict future market disruptions with historical data alone. Real insight is dependent on data sourced from outside the organization, not inside. This is not to diminish the important role BI and EPM play in modern business management, but to highlight that we need to look outside the organization if we are to deliver the next step change in performance.

Zara, a fashion retailer, is an interesting example of this. Rather than attempt to predict or create demand on a seasonal fashion cycle, and deliver product appropriately (an internally driven approach), Zara tracks customer preferences and trends as they happen in the stores and tries to deliver an appropriate design as rapidly as possible (an externally driven approach). This approach has made Zara the most profitable arm of Inditex, a holding company of eight retail brands, and one of the biggest success stories in Spanish business. You could say that Quants are out, and Blink is in.

At this point we can return to my original goal: creating a simple graphic that captures and communicates what drives the value of information. Building on both my own and Andy’s ideas we can create a new chart. This chart needs to capture how the value of information is affected by age, as well as the impact of externally vs. internally sourced information. Using these two factors as dimensions, we can create a heat map capturing information value, as shown below.

Time and distance drive the value of information

Vertically we have the divide between inside and outside: from information created internally by our processes; through information at the surface of our organization, sourced from current customers and partners; to information sourced from the general market and environment outside the organization. Horizontally we have information age, from information we obtain proactively (we think that a customer might want a product), through reactively (the customer has indicated that they want a product), to historical (we sold a product to a customer). Highest value, in the top right corner, represents the external market disruption that we can tap into. Lowest value (though still important) represents internal transactional processes.
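For anyone who wants to play with the idea, here is a minimal matplotlib sketch of such a heat map. The cell values and the axis orientation (proactive on the right, outside at the top) are illustrative assumptions, not data taken from the chart above.

```python
# A sketch of the information-value heat map. The values are assumed, purely
# to reproduce the qualitative shape: value rises with externality and recency.
import matplotlib.pyplot as plt
import numpy as np

ages = ["Historical", "Reactive", "Proactive"]   # information age
sources = ["Outside (market)",                   # top row
           "Surface (customers/partners)",
           "Inside (processes)"]                 # bottom row

value = np.array([
    [3, 6, 9],   # outside: peaks at proactive external market signals
    [2, 4, 6],   # surface of the organization
    [1, 2, 3],   # inside: transactional, lowest value but still important
])

fig, ax = plt.subplots()
im = ax.imshow(value, cmap="YlOrRd")
ax.set_xticks(range(len(ages)))
ax.set_xticklabels(ages)
ax.set_yticks(range(len(sources)))
ax.set_yticklabels(sources)
ax.set_title("Time and distance drive the value of information")
fig.colorbar(im, label="relative value")
plt.tight_layout()
plt.show()
```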

As an acid test, I’ve plotted some of the case studies mentioned in the conversation so far on a copy of this diagram.

  • The maintenance story I used in my original post. Internal, historical data lets us do predictive maintenance on equipment, while external data enables us to maintain just before (detected) failure. Note: This also applies to tasks like vegetation management (trimming trees to avoid power lines), as real time data can be used to determine where vegetation is a problem, rather than simply eyeballing the entire power network.
  • The Walkman and iPod examples from Andy’s follow-up post. Check out Snake Coffee for a discussion on how information drove the evolution of the Walkman.
  • The Walmart Telxon story, using floor staff to capture word of mouth sales.
  • The example from my follow-up (of Andy’s follow-up), of Albert Heijn (a Dutch Supermarket group) lifting the pricing of ice cream and certain drinks when the temperature goes above 25° C.
  • Netflix vs. (traditional) Blockbuster (via Nigel Walsh in the comments), where Netflix helps you maintain a list of films you would like to see, rather than a more traditional brick-and-mortar store which reacts to your desire to see a film.

Send me any examples that you know of (or think of) and I’ll add them to the acid test chart.

An acid test for our chart
An acid test for our chart

An interesting exercise left to the reader is to map Peter Drucker’s Seven Drivers for change onto the same figure.

Update: A discussion with a different take on the value of information is happening over at the Information Architects.

Update: The latest instalment in this thread is Working from the outside in.

Update: MIT Sloan Management Review weighs in with an interesting article on How to make sense of weak signals.