Tag Archives: LEAN

Michelangelo’s approach to workflow discovery

Take any existing workflow — any people-driven business process — and I expect that most of the tasks within it could best be described as cruft.

cruft: /kruhft/
[very common; back-formation from crufty]

  1. n. An unpleasant substance. The dust that gathers under your bed is cruft; the TMRC Dictionary correctly noted that attacking it with a broom only produces more.
  2. n. The results of shoddy construction.
  3. vt. [from hand cruft, pun on ‘hand craft’] To write assembler code for something normally (and better) done by a compiler (see hand-hacking).
  4. n. Excess; superfluous junk; used esp. of redundant or superseded code.
  5. [University of Wisconsin] n. Cruft is to hackers as gaggle is to geese; that is, at UW one properly says “a cruft of hackers”.

The Jargon File, v4.4.7

Capturing and improving a workflow (optimising it, even) is a process of removing cruft to identify what really needs to be there. This is remarkably like Michelangelo{{1}}’s approach to carving David{{2}}. When asked how he created such a beautiful sculpture, everything just as it should be, Michelangelo responded (and I’m paraphrasing):

[[1]]Michelangelo Buonarroti[[1]]
[[2]]Michelangelo’s David[[2]]

Michelangelo’s David

David was always there in the marble; I just carved away the bits that weren’t David.

Cruft is the result of the people — the knowledge workers engaged in the process — dealing with the limitations of last decade’s technology. Cruft is the work-arounds and compensating actions for a fragmented and conflicting IT environment, an environment which gets in the way more often than it supports the knowledge workers. Or cruft might be the detritus of quality control and risk management measures put in place some time ago (decades, in many instances) to prevent an expensive mistake that is no longer possible.

Most approaches to workflow automation are based on some sort of process improvement methodology, such as LEAN or Six Sigma. These methods work: I’ve often heard it stated that pointing Six Sigma at a process results in a 30% saving, each and every time. They do this by aggressively removing variation in the process — slicing away unnecessary decisions, as each decision is an opportunity for a mistake. These might be duplicated decisions, redundant process steps, or unnecessarily complicated hand-offs.

There are a couple of problems with this, though, when dealing with workflow. Looking for what’s redundant doesn’t create an explicit link between business objectives and the steps in the workflow — a link that justifies each step’s existence — making it hard to ensure that we’ve caught all the cruft. And the aggressive removal of variation can strip a process’s value along with its cost.

Much of the cruft in a workflow process is there for historical reasons. These reasons can range from “something bad happened a long time in the past” through to “we don’t know why, but if we don’t do that then the whole thing falls over”. A good facilitator will challenge seemingly obsolete steps, identifying those that have served their purpose and should be removed. However, it’s not possible to justify every step without quickly wearing down the subject matter experts. Some obsolete steps will always leak through, no matter how many top-down and bottom-up iterations we do.

We can also reach the end of the process improvement journey only to find that much of the process’s value — the exceptions and variation that make the process valuable — has been cut out to make the process more efficient or easier to implement. In the quest for more science in our processes, we’ve eliminated the art that we relied on.

If business process management isn’t a programming challenge{{3}}, then this holds even truer for human-driven workflow.

[[3]]A business process is not a programming challenge @ PEG[[3]]

What we need is a way to chip away the cruft and establish a clear line of traceability between the goals of each stakeholder involved in the process, and each step and decision in the workflow. And we need to do this in a way that allows us to balance art and science.

I’m pretty sure that Michelangelo had a good idea of what he wanted to create when he started belting on the chisel. He was looking for something in the rock, the natural seams and faults, that would let him find David. He kept the things that supported his grand plan, while chipping away those that didn’t.

For a workflow process, these are the rules, tasks and points of variation that knowledge workers use to navigate their way through the day. Business rules and tasks are the basic stuff of workflow: decisions, data transformation and hand-offs between stakeholders. Points of variation let us identify those places in a workflow where we want to allow variation — alternate ways of achieving the one goal — as a way of balancing art and science.

Rather than focus on programming the steps of the process, worrying about whether we should send an email or a fax, we need to make this (often) tacit knowledge explicit. Working top-down, from the goals of the business owners, and bottom-up, from the hand-offs and touch-points with other stakeholders, we can chip away at the rock. Each rule, task or point of variation we find is measured against our goals to see if we should chip it away, or leave it to become part of the sculpture.
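
To make that traceability concrete, here’s a minimal sketch in Python (all names and structures are hypothetical, not a prescribed tool): every rule, task or point of variation must trace to at least one stakeholder goal, and anything that traces to nothing becomes a cruft candidate for the facilitator to challenge.

```python
# A sketch of goal-to-step traceability: every rule, task, or point of
# variation must justify itself against at least one stakeholder goal,
# or it is flagged as a cruft candidate. All names are hypothetical.
from dataclasses import dataclass, field

@dataclass
class Goal:
    stakeholder: str
    description: str

@dataclass
class Step:  # a rule, task, or point of variation
    name: str
    kind: str  # "rule" | "task" | "variation"
    supports: list[Goal] = field(default_factory=list)

def cruft_candidates(workflow: list[Step]) -> list[Step]:
    """Steps that trace to no goal are candidates to chip away."""
    return [step for step in workflow if not step.supports]

# Build the model top-down from goals and bottom-up from steps, then
# review whatever cruft_candidates() returns with the subject matter experts.
fast_approval = Goal("business owner", "approve mortgages within a day")
workflow = [
    Step("verify identity", "task", [fast_approval]),
    Step("fax copy to branch", "task"),  # traces to no goal: cruft candidate
]
print([s.name for s in cruft_candidates(workflow)])  # ['fax copy to branch']
```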

That which we need stays, that which is unnecessary is chipped away.

Business is like a train…

The following analogy popped up the other day in an email discussion with a friend.

Running a business is a bit like being the Fat Controller, running his vast train network. We spend our time trying to get the trains to run on time with the all too often distraction of digging the Troublesome Trucks out of trouble.

Improvement often means upgrading the tracks to create smoother, straighter lines. After years of doing this, any improvement to the tracks can only provide a minor, incremental benefit.

What we really need is a new signalling system. We need to better utilise the tracks we already have, and this means making better decisions about which trains to run where, and better coordination between the trains. Our tracks are fine (as long as we keep up the scheduled maintenance), but we do need to better manage transit across and between them.

Swap processes for tracks, and I think that this paints quite a nice visual picture.

Years of process improvement (via LEAN, Six Sigma and, more recently, BPM) have straightened and smoothed our processes to the point that any additional investment hits the law of diminishing returns. Rather than continue to try and improve the processes on my own, I’d outsource process maintenance to a collection of SaaS and BPO providers.

The greater scale of these providers allows them to invest in improvements which I don’t have the time or money for. Handing over responsibility also creates the time and space for me to focus on improving the decisions on which process to run where, and when: my signalling system.

This is especially important in a world where it is becoming rare to even own your processes.

We forget just how important a good signalling system is. Get it right and you get the German or Japanese train networks. Get it wrong and you rapidly descend into the second or third world, regardless of the quality of your tracks.

BPM is not a programming challenge

Get a few beers into a group of developers these days and it’s not uncommon for the complaints to start flowing about BPM (Business Process Management). BPM, they usually conclude, is more pain than it’s worth. I don’t think that BPM is a bad technology, per se, but it does appear to be the wrong tool for the job. The root of the problem is that BPM is a handy tool for programming distributed systems, but the challenge of creating distributed systems is orthogonal to business process execution and management. We’re using a screwdriver to belt in a nail. It’s more productive to think of business process execution and management as a (realtime) planning problem.

Programming is the automation of the known. Take a stable, repeatable process and automate it; bake the process into silicon to make it go fast. This is the same tactic I was using back in my image processing days (and that was a long time ago). We’d develop the algorithms in C, experiment and tweak until they were right, and once they were stable we’d burn them into an ASIC (Application-Specific Integrated Circuit) to provide a speed boost. The ASICs were a lot faster than the C version: more than an order of magnitude faster.

Programmers, and IT folk in general, have a habit of treating the problems we confront as programming challenges. This has been outstandingly successful to date; just try to find a home appliance or service that doesn’t have a program buried in it somewhere. (It’s not an unmitigated success though: our tumble dryer is driving us nuts with its overly frequent software errors.) It’s not surprising that we chose to treat business process automation and management as a programming problem once it appeared on our radar.

Don’t get me wrong: BPM is a solid technology. A friend of mine once showed me how he’d used his BPM stack to test its BPEL engine. Aside from being a nice example of eating your own dog food, it was a great example of using BPEL as a distributed programming tool to solve a small but complex problem.

So why do we see so many developers complaining about BPM? It’s not the technology itself: the technology works. The issue is that we’re using it to solve problems that it’s not suited for. The most obvious evidence of this is the current poor state of BPM support for business exception management. We’ve deployed a lot of technology to support exception management in business processes without really solving the problem.

Managing business exceptions is driving the developers nuts. I know of one example where managing a couple of not-infrequent business exceptions was the major technical problem in a very significant project (well into eight figures). The problem is that business exceptions are not from the same family of beasts as programming exceptions. Programming exceptions are exceptional. Business exceptions are just a (slightly) different way to achieve the same goal. All our compensating actions and exception stacks just get in the way of solving the problem.
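
To make the mismatch concrete, here’s a hypothetical sketch (none of these names come from a real system). The first function treats an unavailable courier as a programming exception to be compensated and escalated; the second treats it as what it really is: a slightly different route to the same goal.

```python
# Hypothetical example: shipping an order when the preferred courier
# may be unavailable.
class CourierUnavailable(Exception):
    pass

def book_courier(order: str) -> str:
    raise CourierUnavailable(order)  # simulate the courier being busy

def send_via_post(order: str) -> str:
    return f"{order} sent via post"

# Exception style: the alternate route is modelled as a failure to be
# unwound and escalated, even though nothing exceptional has happened.
def ship_exception_style(order: str) -> str:
    try:
        return book_courier(order)
    except CourierUnavailable:
        # cancel bookings, run compensating actions, raise an alert ...
        raise

# Route style: both branches are normal ways of achieving the goal.
def ship_route_style(order: str) -> str:
    try:
        return book_courier(order)
    except CourierUnavailable:
        return send_via_post(order)  # same goal, different route

print(ship_route_style("order #42"))  # -> 'order #42 sent via post'
```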

On PowerPoint, anything can look achievable. The BPMN diagram we shared with the business was extremely elegant: nice sharp angles and coloured bubbles. Everyone agreed that it was a good representation of what the business does. The devil is in the details though. The development team quickly becomes frustrated as it deals with the realities of implementing a dynamic and exception-rich business process. Exceptions pile up on top of exceptions, and soon that BPMN diagram covers a wall, littered as it is with branch and join operations. It’s not a complex process, but we’ve made it incredibly complicated.

Edward Tufte's take on explaining complex concepts with PowerPoint
A military parade explained, a la PowerPoint

We can’t program our way out of this box by piling on more features and patches. We can rip the complications out – simplifying the process to the point that it becomes tractable with our programming tools (which is what happened in my example above) – but this removes all the variation which makes the process so valuable. (This, of course, is the dirty secret of LEAN et al: you’re trading flexibility for cost savings, making your processes very efficient but also very fragile.)

Or we can try solving the problem a different way.

Don’t treat the automation of a business process as a programming task (and by this I mean the capture of imperative instructions for a computer to execute, no matter how unstructured or parallel). Programming is the automation of the known. Business processes, however, are the management and anticipation of the unknown. Modelling business processes should be seen as a (realtime) planning problem.

Which comes back to one of my common themes: push vs pull models, or the importance of what over how. Or, as a friend of mine with a better turn of phrase puts it, we need to stop trying to invent new technologies and work out how to use what we already have more effectively. Rather than trying to invent new technologies to solve problems that are already well understood elsewhere, pushing the technology into the problem, a more pragmatic approach is to leverage that existing understanding and then pull in existing technologies as appropriate.

Planning and executing in a rapidly changing environment is a well understood problem. Just ask anyone who’s been involved with the military. If we view the management of a business process as a realtime planning problem, then what were business exceptions are reduced to simply alternate routes to the same goal, rather than problems which require compensating actions.

Battle of Gaugamela (Arbela) (331BC)
Take that hill!

One key principle is to establish a clear goal – Take that hill!, or Find that lost shipment! – articulate the tactics, the courses of action we might use to achieve that goal, and then defer decisions on which course of action to take until the decision needs to be made. If we commit to a course of action too early, locking in a decision during design time, then it’s likely that we’ll be forced to manage the exception when we realise that we picked the wrong course of action. It’s better to wait until the moment when all relevant information and options are available to us, and then take decisive action.

From a modelling point of view, we need to establish the key events at which we must make decisions in line with a larger strategy. The decision at each of these events needs to weigh the available courses of action and select the most appropriate, much like using a set of business rules to identify the applicable options. The course of action selected, a scenario or business process fragment, will be semi-independent from the others in the applicable set, as it addresses a different business context. Nor can the scenario we pick be predetermined, as it depends on the business context. Short and sharp, each scenario will be simple, general and flexible, enabling us to configure it for the specific circumstances at hand, as we can’t anticipate all possible scenarios. And finally, we need to ensure that the scenarios we provide cover the situations we can anticipate, including the provision of a manual escape hatch.
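
As a sketch of what this might look like (hypothetical names throughout, not a product API): the goal and the candidate courses of action are fixed at design time, but the choice between them is deferred until the decision event fires and the full business context is in hand, with a manual escape hatch as the always-applicable fallback.

```python
# Late-bound course-of-action selection. Each course of action pairs an
# applicability rule with a process fragment; the decision is made only
# when the event fires and the business context is known.
from typing import Callable

Context = dict  # whatever the business context looks like at runtime

def deliver_by_courier(ctx: Context) -> str:
    return f"courier booked for {ctx['shipment']}"

def reroute_via_depot(ctx: Context) -> str:
    return f"{ctx['shipment']} rerouted via depot"

def escalate_to_human(ctx: Context) -> str:
    return f"{ctx['shipment']} queued for manual handling"  # escape hatch

courses_of_action: list[tuple[Callable[[Context], bool],
                              Callable[[Context], str]]] = [
    (lambda ctx: ctx["urgent"] and ctx["courier_available"], deliver_by_courier),
    (lambda ctx: not ctx["urgent"], reroute_via_depot),
    (lambda ctx: True, escalate_to_human),  # always-applicable fallback
]

def decide(ctx: Context) -> str:
    """At the decision event, weigh the applicable options and act."""
    for applicable, act in courses_of_action:
        if applicable(ctx):
            return act(ctx)

print(decide({"shipment": "#1234", "urgent": True, "courier_available": False}))
# -> '#1234 queued for manual handling'
```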

Goals, rules and process: in that order. Integrated, rather than as standalone engines. Pull these established technologies into a single platform and we might just be closer to a BPM solution in line with what we really need. (And we know there is nothing new under the sun, as this is essentially a build on Jim Sinur’s rules-and-process argument, and borrows a lot from STRIPS, PRS, dMARS and even the work I did at Agentis.)

As I mentioned at the start of this missive, BPM as a product category makes sense, and the current implementations are capable distributed programming tools. The problem is that business process management is not a distributed programming challenge. Business exceptions are not exceptional. I say steal a page from the military strategy book – they, after all, have been successfully working on this problem for some time – and build our solutions around the ideas the military use to succeed in a rapidly changing environment. Goals, rules and processes. The trick is to be pragmatic, rather than dogmatic, in our implementation, and to focus on solving the problem rather than trying to create a new technology.

Decisions are more important than data

Names and categories are important. Just look at the challenges faced by the archaeology community as DNA evidence forces history to be rewritten, breaking old understandings and changing how we think and feel in the process. Just who invaded whom? Or was related to whom?

We have the same problem with (enterprise) technology; how we think about the building blocks of the IT estate has a strong influence on how we approach the problems we need to solve. Unfortunately our current taxonomy has a very functional basis, rooted as it is in the original challenge of creating the major IT assets we have today. This is a problem, as it’s preventing us from taking full advantage of the technologies available to us. If we want to move forward, creating solutions that will thrive in a post-GFC world, then we need to think about enterprise IT in a different way.

Enterprise applications – the applications we often know and love (or hate) – fall into a few distinct types. A taxonomy, if you will. This taxonomy has a very functional basis, founded as it is on the challenge of delivering high performance and stable solutions into difficult operational environments. Categories tend to be focused on the technical role a group of assets plays in the overall IT estate. We might quibble over the precise number of categories and their makeup, but for the purposes of this argument I’m going to go with three distinct categories (plus another one).

SABER @ American Airlines

First, there are the applications responsible for data storage and coherence: the electronic filing cabinets that replaced rooms full of clerks and accountants back in the day. From the first computerised general ledger through to CRM, their business case is a simple one of automating paper shuffling: put the data in one place and make access quick and easy, like SABER did, which I’ve mentioned before.

Next are the data transformation tools: applications which take a bunch of inputs and generate an answer. This might be a plan (production plan, staffing roster, transport planning or supply chain movements …) or a figure (price, tax, overnight interest calculation). State might be stored somewhere else, but these solutions still need some serious computing power to cope with huge bursts in demand.

Third is data presentation: taking corporate information and presenting it in some form that humans can consume (though looking at my latest phone bill, there’s no attempt to make the data easy to consume). This might be billing or invoicing engines, application-specific GUIs, or even portals.

We can also typically add one more category – data integration – though this is mainly the domain of data warehouses: solutions that pull together data from multiple sources to create a summary view. This category of solutions wouldn’t exist were it not for the fact that our operational data management solutions can’t cope with an additional reporting load. This is also the category for all those XLS spreadsheets that spread through business like a virus, as high integration costs or more important projects prevent us from supporting user requests.

A long time ago we’d bake all these layers into the one solution. SABER, I’m sure, did a bit of everything, though its main focus was data management. Client-server changed things a bit by breaking the user interface away from back-end data management, and then portals took this a step further. Planning tools (and other data transformation tools) started as modules in larger applications, eventually popping out as stand-alone solutions when they grew large enough (and complex enough) to justify their own delivery effort. Now we have separate solutions in each of these categories, and a major integration problem.

This categorisation creates a number of problems for me. First and foremost is the disconnection between what business has become, and what technology is trying to be. Back in the day when “computer” referred to someone sitting at a desk computing ballistics tables, we organised data processing in much the same way that Henry Ford organised his production line. Our current approach to technology is simply the latest step in the automation of this production line.

Computers in the past

Quite a bit has changed since then. We’ve reconfigured our businesses, we’re reconfiguring our IT departments, and we need to reconfigure our approach to IT. Business today is really a network of actors who collaborate to make decisions, with most (if not all) of the heavy data lifting done by technology. Retail chains are trying to reduce the transaction load on their team working the tills so that they can focus on customer relationships. The focus in supply chains is on ensuring that your network of exception managers can work together to effectively manage disruptions in the supply chain. Even head office is focused on understanding and responding to market changes, rather than trying to optimise the business for an unchanging market.

The moving parts of business have changed. Henry Ford focused on mass: the challenge of scaling manufacturing processes to get cost down. We’ve moved well beyond mass, through velocity, to focus on agility. A modern business is a collection of actors collaborating and making decisions, not a set of statically defined processes backed by technology assets. Trying to force modern business practices into yesterday’s IT taxonomy is the source of one of the disconnects between business and IT that we complain so much about.

There’s no finer example of this than Sales and Operations Planning (S&OP). What should be a collaborative and fluid process – forward planning among a network of stakeholders – has been shoehorned into a traditional n-tier, database driven, enterprise solution. While an S&OP solution can provide significant cost savings, many companies find it too hard to fit themselves into the solution. It’s not surprising that S&OP has a reputation for being difficult to deploy and use, with many planners preferring to work around the system rather than with it.

I’ve been toying with a new taxonomy for a little while now, one that tries to reflect the decision, actor and collaboration centric nature of modern business. Rather than fit the people to the factory, which was the approach during the industrial revolution, the idea is to fit the factory to the people, which is the approach we use today post LEAN and flexible manufacturing. While it’s a work in progress, it still provides a good starting point for discussions on how we might use technology to support business in the new normal.

In no particular order…

Fusion solutions blend data and process to create a clear and coherent environment to support specific roles and decisions. The idea is to provide the right data and process, at the right time, in a format that is easy to consume and use, to drive the best possible decisions. This might involve blending internal data with externally sourced data (potentially scraped from a competitor’s web site); whatever data is required. Providing a clear and consistent knowledge work environment, rather than the siloed and portaled environment we have today, will improve productivity (more time on work that matters, and less time on busy work) and efficiency (fewer mistakes).

Next, decisioning solutions automate key decisions in the enterprise. These decisions might range from mortgage approvals, through office work such as logistics exception management, to supporting knowledge workers in the field. We also need to acknowledge that decisions are often decision-making processes, requiring logic (rules) applied over a number of discrete steps (processes). This should not be seen as replacing knowledge workers; a more productive approach is to view decision automation as a way of amplifying our users’ talents.

Then there are manufacturing solutions: while we have a lot of information, some information will need to be manufactured ourselves. This might range from simple charts generated from tabular data, through to logistics plans or maintenance schedules, or even payroll.

Information and process access solutions provide stakeholders (both people and organisations) with access to our corporate services. This is not your traditional portal or web-based GUI, as the focus is on providing stakeholders with access wherever and whenever they need it, on whatever device they happen to be using. This might mean embedding your content into a Facebook app, rather than investing in a strategic portal infrastructure project. Or it might involve developing a payment gateway.

Finally we have asset management, responsible for managing your data as a corporate asset. This looks beyond the traditional storage and consistency requirements of existing enterprise applications to include the political dimension, accessibility (I can get at my data whenever and wherever I want to) and stability (earthquakes, disaster recovery and the like).

It’s interesting to consider the sort of strategy a company might use around each of these categories. Manufacturing solutions – such as crew scheduling – are very transactional. Old data out, new data in. This makes them easily outsourced, or run as a bureau service. Asset management solutions map very well to SaaS: commoditized, simple and cost effective. Access solutions are similar to asset management.

Fusion and decisioning solutions are interesting. The complete solution is difficult to outsource. For many fusion solutions, the data and process set presented to knowledge workers will be unique and will change frequently, while decisioning solutions contain decisions which can represent our competitive advantage. On the other hand, it’s the intellectual content in these solutions, and not the platform, which makes them special. We could sell our platform to our competitors, or even use a commonly available SaaS platform, and still retain our competitive advantage, as the advantage is in the content, while our barrier to competition is the effort required to recreate the content.

This set of categories seems to map better to where we’re going with enterprise IT at the moment. Consider the S&OP solution I mentioned before. Rather than construct a large, traditional, data-centric enterprise application and change our work practices to suit, we break the problem into a number of mid-sized components and focus on driving the right decisions: fusion, decisioning, manufacturing, access and asset management. Our solution strategy becomes more nuanced, as our goal is to blend components from each category to provide planners with the right information at the right time, enabling them to make the best possible decision.

After all, when the focus is on business agility, and when we’re drowning in a sea of information, decisions are more important than data.

Consulting doesn’t work any more. We need to reinvent it.

What does it mean to be in consulting these days? The consulting model that’s evolved over the last 30 – 50 years seems to be breaking down. The internet and social media have shifted the way business operates, and the consulting industry has failed to move with it. The old tricks that the industry has relied on — the did it, done it stories and the assumption that I know something you don’t — no longer apply. Margins are under pressure and revenue is on the way down (though outsourcing is propping up some) as clients find smarter ways to solve problems, or decide that they can simply do without. The knowledge and resources the consulting industry has been selling are no longer scarce, and we need to sell something else. Rather than seeing this as a problem, I see it as a huge opportunity; an opportunity to establish a more collaborative and productive relationship founded on shared, long term success. Sell outcomes, not scarcity and rationing.

I’m a consultant. I have been for some time too, working in both small and large consultancies. It seems to me that the traditional relationship between consultancy and client is breaking down. This also appears to be true for both flavours of consulting: business and technology. And by consulting I mean everything from the large tier ones down to the brave individuals carving a path for themselves.

Business is down, and the magic number seems to be roughly a 17% decline year-on-year. One possible cause might be that the life blood of the industry — the large multi-year transformation project — has lost a lot of its attraction in recent years. If you dig around in the financials for the large publicly listed consultancies and vendors you’ll find that the revenue from IT estate renewal and transformation (application licenses, application configuration and installation services, change management, and even advisory) is sagging by roughly 17% everywhere around the globe.

SABER @ American Airlines

Large transformation projects have lost much of their attraction. While IBM successfully delivered SABER back in the 60s, providing a heart transplant for American Airlines’ ticketing processes, more recent stabs at similarly sized projects have met with less than stellar results. Many more projects are quietly swept under the carpet, declared a success so that those involved can move on to something else.

The consulting model is a simple one. Consultants work on projects, and the projects translate into billable hours. Consultancies strive to minimise overheads (working on customer premises and minimising support staff), while passing incidental costs through to clients in the form of expenses. Billable hours drive revenue, with lower grades providing higher margins.

This creates a couple of interesting, and predictable, behaviours. First, productivity-enhancing tooling is frowned upon: it’s better to deploy a graduate with a spreadsheet than a more senior consultant with effective tooling. Second, a small number of large transactions is preferred to a large number of small transactions, as it requires less overhead (sales and back-office infrastructure).

All this drives consultancies to create large, transformational projects. Advisory projects end up developing multi-year (or even multi-decade) roadmaps to consolidate, align and optimise the business. Technology projects deliver large, multi-million dollar, IT assets into the IT estate. These large, business and IT transformation projects provide the growth, revenue and margin targets required to beat the market.

This desire for large projects is packaged up in what is commonly called “best practice”. The consulting industry focuses on did it, done it stories, standard and repeatable projects to minimise risk. The sales pitch is straight-forward: “Do you want this thing we did over here?” This might be the development of a global sourcing strategy, an ERP implementation, …

Spencer Tracy & Katharine Hepburn in The Desk Set

This approach has worked for some time, with consultancy and client more-or-less aligned. Back when IBM developed SABER you were forced to build solutions from the tin up, and even small business solutions required significant effort to deliver. In 1957, when Spencer Tracy played a productivity expert in The Desk Set, new IT solutions required very specific skill sets to develop and deploy. These skills were in short supply, making it hard for an organisation to create and maintain a critical mass of in-house expertise.

Rather than attempt to build an internal capability — forcing the organisation on a long learning journey, a journey involving making mistakes to acquire tacit knowledge — a more pragmatic approach is to rent the capability. Using a consultancy provides access to skills and knowledge you can’t get elsewhere, usually packaged up as a formal methodology. It’s a risk management exercise: you get a consultancy to deliver a solution or develop a strategy as they just did one last week and know where all the potholes are. If we were cheeky, we would summarise this by stating that consultancies have a simple value proposition: I know something you don’t!

It’s a model defined by scarcity.

A lot has changed in the last few years; business moves a lot faster and a new generation of technology is starting to take hold. The business and technology environment is changing so fast that we’re struggling to keep up. Technology and business have become so interwoven that we now talk of Business-Technology, and a lot of that scarce knowledge is now easily obtainable.

The Diverging Pulse Rates of Business and Technology

The scarce tacit knowledge we used to require is now bundled up in methodologies; methodologies which are trainable, learnable and scalable. LEAN and Six Sigma are good examples of this, starting as more black art than science, maturing into respected methodologies, to today, where certification is widely available and each methodology has a vibrant community of practitioners spread across both clients and consultancies. The growth of MBA programmes also ensures that this knowledge is spread far and wide.

Technology has followed a similar path, with the detailed knowledge required to develop distributed solutions incrementally reified in methodologies and frameworks. When I started my career, XDR and sockets were the networking technologies of the day, and teams often grew to close to one hundred engineers. Today the same solution developed on a modern platform (Java, Ruby, Python …) has a team in the single digits, and takes a fraction of the time. Tacit knowledge has been reified in software platforms and frameworks. SaaS (Software as a Service) takes this to a whole new level by enabling you to avoid software development entirely.

The did it, done it stories that consulting has thrived on in the past are being chewed up and spat out by the business schools, open source, and the platform and SaaS vendors. A casual survey of the market usually finds that SaaS-based solutions require 10% of the installation effort of a traditional on-premises solution. (Yes, that’s 90% less effort.) Less effort means less revenue for the consultancies. It also reduces the need for advisory services, as provisioning a SaaS solution with the corporate credit card should not require a $200,000 project to build a cost-benefit analysis. And gone are the days when you could simply read the latest magazines and articles from the business schools, spouting what you’d read back to a client. Many clients have been on the consulting side of the fence, have a similar education from the business schools, and read all the same articles.

I know something you don’t! no longer works. The world has moved on and the consulting industry needs to adapt. The knowledge and resources the industry has been selling are no longer scarce, and we need to sell something else. I see this as a huge opportunity; an opportunity to establish a more collaborative and productive relationship founded on shared, long term success. As Jeff Jarvis has said: stop selling scarcity, sell outcomes.

Updated: A good friend has pointed out that one area of consulting — one which we might call applied business consulting — resists the trend to be commoditized. This is the old-school task of sitting with clients one-on-one, working to understand their enterprise and what makes it special, and then using this understanding to find the next area or opportunity that the enterprise is uniquely qualified to exploit. There are no junior consultants in this area, only old grey-beards who are too expensive to stay in their old jobs, but who are still highly useful to the industry. Unfortunately this model doesn’t scale, forcing most (if not all) consultancies into a more operational knowledge transfer role (think Six Sigma and LEAN) in an attempt to improve revenue and GOP.

Updated: Keith Coleman (global head of public sector at Capgemini Consulting) makes a similar case with Time to sell results, not just advice (via @rpetal27).

Updated: I’ve responded to my own post, tweaking my consulting page to capture my take on what a consultant needs to do in this day and age.

The rise of task worker 2.0

Companies are delayering (again) and pushing decisions to the surface of the organisation, where there is direct contact with customers and partners, in order to be more responsive. Some companies, Zara for example, are making this into a science as they re-engineer their organisations to maximise agility. To do this companies are empowering the people working at the customer and partner interface to solve the problems in front of them, without intervention from head office or middle management.

One interesting effect of this is a shift in the coalface of Enterprise 2.0 adoption. We’ve been focused on the white collar, office bound knowledge worker as the adopter of Web 2.0 tools in the enterprise, with mobility limited to the ability to work from a local coffee shop or an executive tweeting from the airport lounge. However, with decisions devolving to the customer and partner interface we are finding that the middle layers of our organisations are being trimmed, and their responsibilities transferred to the people with direct customer or operational contact. Knowledge workers are being superseded by task workers: people focused on consuming information in the field to solve operational or customer problems.

Think about how Toyota structures production lines—the whole LEAN story—empowering the people on the shop floor (traditional task workers) to solve problems. Or the utility field worker on maintenance, who used to work under instruction from the depot but is now mobile, working remotely. Or the transactional shop assistant whose focus is shifting from the financial transaction to customer management. And so on.

To a certain extent, Web 2.0 and Enterprise 2.0’s traditional target, the white collar knowledge worker, is being eliminated by the very technology that was intended to empower them. And their replacements, the situated task workers, have been ignored by the Enterprise 2.0 rollout. Or, even worse, we’ve deliberately locked down their computing environment to prevent them going off task.

This creates an interesting challenge: how do we move beyond our early adopters and use our new collaboration tools and techniques to support (and not distract) these task workers, situated in a challenging operational environment?


We need a better definition for “mash-up”

Mash-up no longer seems to mean what we thought it meant. The term has been claimed by the analysts and platform vendors as shorthand for the current collection of hot product features, and no longer represents the goals and benefits of those original mash-ups that drew our interest. If we want to avoid the hype, firmly tying mash-up to the benefits we saw in those first solutions, then we need to reclaim the term, basing its definition on the outcomes those first mash-up solutions delivered, rather than the (fairly) conventional means used to deliver them.

Definitions are a good thing, as they help keep us all on the same page and make conversations easier. However, what often starts out as a powerful concept—with a clear value proposition—is rapidly diluted as the original definition gets pulled in different directions.

Over time, the foundation of a term’s definition moves from the outcome it represents (and the benefits this outcome provides) to rest on the means by which the original outcome was delivered, driven by everyone’s desire to define what they are doing in relation to the current hot topic. Next, the people who consider it to be just a means often start redefining the meaning to make it more inclusive, while continuing to claim the original benefits. We end up selling the new hype as either means or goals, or any half-hearted solution in between – and missing the original outcome almost completely.

The original mash-ups were simple things: pulling together data from two or more sources to create a new consolidated view. Think push-pins on a map. Previously I would have had to access these data sources separately—find, select, remember, find, select correlation, click. With the mash-up this multi-step, multi-decision workflow is reduced to a single look, select, click. Many decisions became one, and I was no longer forced to remember intermediate steps or data.

It was this elimination of unnecessary decisions that first attracted many of us to the idea of a mash-up. As TQM, LEAN, et al tell us, unnecessary decisions are a source of errors. If we want to deliver high quality at a low cost (i.e. efficient and effective knowledge workers) then we need to eliminate these decisions. This helps us become more productive by spending a greater proportion of our time on the decisions that really matter, rather than on messy busy work. Fewer decisions also mean fewer chances for mistakes.

Since those original mash-up solutions, our definition of mash-up has evolved. Today’s definitions are founded on the tools and techniques used to deliver a modern web-based GUI. These definitions focus on technology standards, on where the data is processed (client vs. server), on APIs, and even on the application architectures used. Rarely do they talk about the outcome delivered, or the benefits this brings.

There’s little difference, for example, between some mash-ups and a modern portal. We can debate the differences between aggregating data on the client vs. the server, but does it really matter if it doesn’t change the outcome, and the difference is invisible to the user? The same can be said for the use of standards, APIs used, user configuration options, differing solution architectures and so on.

The shift to a feature-function based definition has allowed the product vendors and analysts to seize control of our definition, and apply it to the next generation of products they would like us to buy. This has diluted the term to the point that it seems to cover much of what we’ve been doing for the last decade, and many of the benefits ascribed to the original mash-ups don’t apply to solutions which fit under this new, broader church.

Modern consumer home pages, such as iGoogle and NetVibes, do allow us to use desk and screen real estate more effectively–providing a small productivity boost–but they don’t address the root of the problem. Putting two gadgets on a page does little to fuse the data. The user is still required to scan the CRM and order management gadgets separately, fusing the data in their head. Find, select, remember, find, select correlation, click rather than a single look, select, click.

The gadgets might be visually proximate, but we could do that with two browser windows. Or two green screens side-by-side. The user is still required to look at both, and establish the correlation themselves. The chair might not swivel as much as with old-school portlets, but eyeballs still do, and we are still forcing the user to make unnecessary decisions about data correlation. These pages don’t deliver the elimination of unnecessary decisions that first attracted us to mash-ups.

The gold standard we need to measure potential mash-ups against is the melding of data to eliminate unnecessary decisions. This might be something visual, like push-pins on a map or markup on an x-ray. Or it might cover tabular data, where different cells in the table are sourced from different back-end systems (a single customer view generated at the user interface). If we fuse the data, building new gadgets which pull data attributes and function into one consistent view, then we eliminate these decisions. We can even extend this to function, allowing the user to trigger a workflow or process that makes sense in the view they are presented with, but with no knowledge of what or where implements the workflow.
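
A minimal sketch of that gold standard, with invented systems and fields: two back-end sources each contribute cells to a single customer view, so the correlation happens in the gadget rather than in the user’s head.

```python
# Hypothetical back-end systems, keyed by customer id.
crm = {"C042": {"name": "Acme Pty Ltd", "segment": "wholesale"}}
orders = {"C042": {"open_orders": 3, "last_delivery": "2010-06-14"}}

def single_customer_view(customer_id: str) -> dict:
    """One row per customer; different cells sourced from different systems."""
    return {"id": customer_id,
            **crm.get(customer_id, {}),
            **orders.get(customer_id, {})}

# The user gets a single look-select-click view instead of correlating
# two gadgets (or two green screens) by eye.
print(single_customer_view("C042"))
# {'id': 'C042', 'name': 'Acme Pty Ltd', 'segment': 'wholesale',
#  'open_orders': 3, 'last_delivery': '2010-06-14'}
```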

We need a definition for mash-ups that captures this outcome. Something like:

A mash-up is a user interface, or user interface element, that melds data and function from multiple sources to create one single, seamless view of a topic, eliminating unnecessary decisions and actions.

This v0.1 definition provides a nice, terse, strong definition for mash-up which we can hang a number of concrete benefits from.

  • More productive knowledge workers. Our knowledge workers only spend time on the decisions that really matter, rather than on messy busy work, making them more productive.
  • More effective knowledge workers. Fewer decisions mean fewer chances for mistakes, reducing the cost of error recovery and rework, resulting in more effective knowledge workers.


What are the benefits of a mash-up?

The original mash-ups were simple things. Solutions like the Chicago Crime and AlertMap pulled together data from two or more sources (maps and crime databases, in the case of Chicago Crime) to create one single view. Previously I would have had to access these data sources separately–find, select, remember, find, correlate, click. With the mash-up this multi-step and multi-decision workflow is reduced to a single look, select, click. Many decisions became one, and I was no longer forced to remember intermediate data.

TQM, LEAN, et al tell us that unnecessary decisions are a source of errors. If we want to deliver high quality at a low cost (i.e. efficient and effective knowledge workers) then we need to eliminate these decisions. This brings a few immediate benefits:

  • More productive knowledge workers. Our knowledge workers only spend time on the decisions that really matter, rather than on messy busy work.
  • More effective knowledge workers. Fewer decisions mean fewer chances for mistakes.

If we were to use mash-ups in this way to simplify key call centre processes (for example), then we can translate these two points directly into business benefits:

  • Reduced staff on-boarding costs, cutting training time and reducing time to competency, by providing a simpler and more direct workflow, one which leads the call centre operator through the call.
  • Reduced call servicing costs, including fewer escalations and improved first call resolution, by avoiding mistakes and ensuring that the operator has all the information required to solve the customer’s problem on hand.
  • Improved staff retention, by allowing operators to focus on the customer engagement, rather than on soul-destroying swivel-chair integration.

With a typical call centre agent using six applications per call, this represents a drastic simplification of the call centre work environment.

A third benefit is the decoupling a mash-up creates between presentation and back-end applications. As all user interaction is mediated by the mash-up, there is no direct connection between the data and function provided by a single application and the work surface the knowledge worker interacts with. This enables us to evolve the UI and back end separately, keeping the user interface in sync with business demands while continuing to pursue a separate, longer-cycle effort to consolidate back-end systems and reduce operational costs.

It’s easy to extrapolate these (potential) benefits to other solutions. My favourite is human services, where providing a case worker with the right information at the right time, and removing unnecessary distractions, will result in a material difference in the quality of life for the people under their care. However, these benefits can easily be applied to any high value knowledge work process, such as logistics exception management, utility field work, sales, and so on.


Innovation should not be the race for the new-new thing

Note: This post is part of a larger series on innovation, going under the collective name of Innovation and Art of Random.

We’re all searching for the new-new thing. Be it a product or a method, we’re looking for the innovation that will let us stand out from the pack, because in a world where we are all good, we need to be original. If an idea becomes a trend before we’re involved, we are not a leader. When we’re first to market, if we capture first mover advantage, then we can define the rules of the game. But how can we tap into valuable ideas for products, services or methods before they are seen as trends, when they are just … random?

In today’s hyper-competitive business environment being good, being operationally efficient, has become the price of entry. We’ve leveraged methodologies like TQM, Six Sigma, LEAN to optimize our businesses, and while we might carry some baggage from our past, we are good at what we do. In this environment, it’s the ability to be original, the ability to innovate, that will let us stand out from the crowd. Innovation, though, is random. At least it often seems that way. A chance connection or unlikely insight takes someone on a journey to create something new. New developments, new product and services based on original ideas, seem to come out of the blue.

A product which created its own product category

Think of the first time you saw breath strips; small, minty strips that dissolve on your tongue, eliminating pre-meeting (or pre-date) bad breath. Where did they come from? Most of us can’t quite put our finger on their origin. We heard about them one day, and the next they seemed to be in every shop we walked into, anywhere around the world. A new market segment had been created, and its creator had captured most of the value.

The race for the new-new thing seems to have created an innovation arms race. We want to be the first to find an idea, nurture it, and turn it into a competitive advantage. This has made innovation—the search for new opportunities—into a race for more. More ideas, more connections, more investment, more involvement. If we can see more ideas, get access to more content, get more of our team involved, if we can get it earlier in its lifecycle, then we might be the ones with first mover advantage.

We’re starting to take this to extremes, industrialising the quest for more. Conferences (some of which are rapidly becoming media empires in their own right), such as TED, are creating idea smorgasbords for us to graze on. The industrialization of ideas has us all drinking from the same (soda) fountain. This is driving incremental improvement in our businesses by sharing best practice, which is a good thing, but it’s not going to help us find the new-new thing, the innovative product that will help us stand out from the crowd.

The challenge when managing innovation is not in capturing ideas before they develop into market shaping innovations. If we see an innovative idea outside our organization, then we must assume that we’re not the first to see it, and ideas are easily copied. If innovation were a transferable good, then we’d all have the latest version.

New ideas rarely just pop into existence though; technology, the development of ideas, is an evolutionary process. New, novel ideas are simply combinations of existing ones, driven by someone’s desire to solve a problem. Breath strips, for example, were the chance connection between mouth wash, a Japanese trend for dissolving sweets, and our (western) desire for fresh breath, a connection made by a western executive on a business trip to Japan. As new ideas are simple combinations of existing ones, the technology we thought of yesterday might be more valuable tomorrow, as the key component in a new solution.

Each small step of innovation is the result of someone, somewhere bringing together a collection of previously unconnected ideas to solve a problem. This is a pull, rather than a push process. Solutions are not created in search of a problem, but in response to a problem. A new idea is the result of a series of small, incremental steps from the ideas we have to the idea we need. The net result of this incremental development is huge. What makes innovation surprising, and seemingly random, is the fact that we often only see the end result, and not the journey.

Innovation, the ability to be original, comes from inside, not outside of our organizations. The real challenge is synthesis: understanding what problems are interesting, selecting the ideas which bring value to a solution (as not all ideas are created equal), and then bringing together these ideas to create something new. How do we create space and time to help our team synthesize these new, innovative ideas when presented with a challenge?

Accelerate along the road to happiness

Our ability to effectively manage time is central to success in today’s hyper-competitive business environment. The streamlined and high velocity value-chains we’ve created are designed to invest as little time (and money) as possible in unproductive business activities. However, being fast, being good at optimizing our day-to-day operations, is no longer enough. We’ve reached a point where managing the acceleration of our business—the ability to change direction, redeploying resources to meet new opportunities more rapidly than our competition—is the driver for best-in-category performance. If we can react faster than our competition then we can capitalize on a business opportunity (or disruption, as they are often the same) and harvest any value the opportunity created.

Time is our overarching business driver at the moment. We hope to be the first to approve a mortgage, capturing the customer before our competitors have even responded to the original application. We strive to be first to market with a new portable music device (Walkman or iPod), establishing early mover advantage and taking the dominant position in the market. Or we might simply want to quickly restore essential services—power, gas or water—to our customers, as they have become intensely dependent upon them. Globalization has leveled the playing field, as we’re all working from the same play book and leveraging the same resources. The most significant factor for success in this environment is the ability to execute faster than our competition—harvesting the value in an opportunity before they can.

This focus on time is a recent phenomenon. Not long ago, no further back than the early nineties, we were more concerned with mass. The challenge was to get the job done. Keep the wheels turning in the factories. Keep the workers busy in their cubicles. Time is money, so we’re told, and we need to ensure that we don’t waste money by lying idle. Mass was the key to success—ensuring that we had enough work to do, enough raw materials to work on, to keep our business busy and productive.

When mass is the focus, bigger is better. This is a world where global conglomerates rule, as size is the driver for success. Supply chains were designed so that enough stuff was available right next to the factory, where supply could be ensured, so that the factory would never run out of raw materials and grind to a halt. Whether shuffling paperwork or shifting widgets, the ability to move more stuff around the business was always seen as an improvement.

This is also the world that created a pile of shipping containers to behold in the Persian Gulf during the Gulf War in the early nineties. With no known destination, some containers couldn’t be delivered. Without a clear understanding of where they came from, others couldn’t be returned. A few of these orphaned containers were opened in an attempt to determine their destination or origin; however, the sweltering Arabian sun was not kind to their contents, which included items such as raw poultry, so a stop was soon put to that. The containers just kept piling up. 22,000 of 50,000 containers simply became invisible, collecting in a pile that went by the jaunty name of Iron Mountain.

Iron Mountain: 22,000 containers that became invisible

Our answer was to stop focusing on mass, on having enough stuff on hand to keep the wheels of industry turning. We have to admit that Iron Mountain proves that we could move sufficient mass. The next challenge was to ensure that materials arrived at just the right time for them to be consumed by the business. We moved from worrying about mass, to managing velocity.

Total quality management and process improvement efforts finally found their niche. LEAN and Six Sigma rolled through the business landscape, ripping cost out of businesses wherever they went. Equipped with books on Toyota’s Production System and kanban cards, we ripped excess material from the supply chain. Raw materials arrive just-in-time, and we avoid the costs associated with storing and handling vast warehouses of material, as well as the working capital tied up in the stored material itself. Quality went up, process cycle times shrank, and the pace of business accelerated. Much like the tea clippers from China in the 1800s, with their annual race to get the first crop back to London for the maximum profit (with skippers paid a profit share as an incentive along with their salary), we’re focused on cranking the handle of business as fast as possible.

Zara, a fashion retailer, is the poster child for this generation of business. The fashion industry is built around a value-chain that tries to push out regular product updates, drumming up demand via runway shows and media coverage to support a seasonal marketing cycle. Zara takes a different approach, tracking customer preferences and trends as they happen in the stores and trying to deliver an appropriate design as rapidly as possible, allowing customer demand to pull fashion. By focusing on responding to customer demand, wherever it is, Zara has built an organization designed to minimize the time from design to marketed product. For example, onshore, high-tech, agile production is preferred to low-tech but low-cost offshore production, which involves long production delays. Zara takes two weeks to take a product to market, where the industry average is six months; the lifetime of Zara’s products is measured in weeks, rather than months; and the products offered in each store are tailored to the interests of the community it serves rather than a long term marketing plan.

The change in product life-cycle has created a material change in customer buying habits. Traditionally, customers visit a fashion store a few times a year to see what the new season brings. There is no real pressure to buy on any particular visit, as they know they can return to buy the same garment later. Zara, however, with its dramatically shortened product cycles, drives different behavior. Customers visit more often, as they can expect to see a new range each visit. They are also more likely to buy, as they know that there is little chance of the same garment being available the next time. This approach has made Zara the most profitable arm of Inditex, a holding company of eight retail brands, and one of the biggest success stories in Spanish business.

The dirty secret of high velocity, lean businesses is that they are fragile: small disturbances can create massive knock-on effects. As we’ve ripped fat from the value chain, we’ve also weakened its ability to react to, and resolve, disruptions. A stockout can now flow all the way back along the supply chain to the literal coal face, stalling the entire business value-chain. Restoring an essential service is delayed while we scramble to procure the vital missing part. Mortgage approvals are deferred while we try to reallocate the workload of a valuer dealing with a personal emergency. Or our carefully synchronized product launch falls apart for what seems like a trivial reason somewhere on the other side of the globe.

Our most powerful tools in creating today’s high velocity businesses—tools like straight-through processing, LEAN and Six Sigma—worked by removing variation from business processes to increase throughput. The same tools prevent us from effectively responding to these disruptions.

Opportunities today are more frequent, but disruptive and fleeting. An open air festival in the country might represent an opportunity for a tolling operator to manage parking in an adjacent field, if the solution can be deployed at sufficient scale rapidly enough. Or the current trend for pop-up retail stores (if new products rapidly come and go, then why not stores?) could be moved from an exceptional, special-occasion marketing tool into the mainstream, as a means to optimize sales day-by-day. Responding to these opportunities implies reconfiguring our business on the fly—rapidly integrating business exceptions into the core of our business. This might range from reconfiguring our carefully designed global supply chain, through changing core mortgage approval criteria and processes, to modifying category management strategies in (near) real time.

Sam: Waiting while his bank sorts itself out

We’re entering a time when our ability to change direction, adapting to and leveraging changes in the commercial environment as they occur, will drive our success. If we can react faster than the competition then we can capitalize on a business opportunity and harvest any value the opportunity creates. Our focus will become acceleration: working to build businesses with the flexibility and spare energy required to turn and respond rapidly. These businesses will be the F1 cars of business, providing a massive step up in performance over more conventional organizations. And, just like F1, they will also require a new level of performance from our knowledge workers. If acceleration is our focus, then our biggest challenge will be creating the time and space required by our knowledge workers to identify these opportunities, turn the steering wheel, and leverage them as they occur.

Update: A friend of mine just pointed out that the logical progression of mass → velocity → acceleration naturally leads to jerk, the informal name for the third derivative of position.
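
For the record, the chain behind the joke: velocity, acceleration and jerk are the first three derivatives of position.

```latex
v = \frac{dx}{dt}, \qquad
a = \frac{dv}{dt} = \frac{d^2x}{dt^2}, \qquad
j = \frac{da}{dt} = \frac{d^3x}{dt^3}
```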