Tag Archives: Capgemini

Think “in the market,” not “go to market”

A friend of mine{{1}} made an astute comment the other day.

We need to think about “in the market” models, rather than “go to market” models.

[[1]]Andy Mulholland @ Capgemini[[1]]

I think this nicely captures the shift we’re seeing in the market; businesses are moving away from offering products which (hopefully) will sell, and adopting models founded on successful long term relationships. This is true for both business-to-consumer and business-to-business relationships, as our success is increasingly dependent on the success of the community we are a part of and the problems that we solve for (our role in) this community.

For a long time we’ve sought that new widget we might offer to the market: the new candy bar everyone wants. It’s the old journey of:

  • find a need,
  • fulfil the need.

Our business models have been built around giving someone something they want, and making a margin on the way through. Sometimes our customers didn’t know that they had the need until we, or their peer group, pointed it out to them, but we were, nevertheless, fulfilling a need.

The last few decades have seen a more sophisticated version of this emerge:

Give them the razor and sell the razor blades{{2}}.

[[2]]Giving away the razor, selling the blades @ Interesting thing of the day[[2]]

which has the added advantage of fulfilling a recurring need. Companies such as HP have made good use of this, more-or-less giving away the printers while pricing printer ink so that it is one of the most expensive substances on the planet (per gram).

Since then, companies (both B2C and B2B) have been working hard to reach customers earlier and earlier in the buying process. Rather than simply responding, after a customer has identified a need, along with the rest of the pack, they want to engage the customer and help the customer shape their need in a way that provides the company with an advantage. A great example of this is the airlines which enable you to buy a short holiday somewhere warm rather than a return trip to some specified destination. The customer gets some help shaping their need (a holiday), while the company has the opportunity to shape the need in a way that favours their products and services (a holiday somewhere that the airline flies to).

The most recent shift has been to flip this approach on its head. Rather than aligning themselves with the needs they fulfil, some companies are starting to align themselves with the problems they solve. Needs, after all, just represent potential solutions to a problem.

Nike is an interesting case study. Back in the day Nike was a (marketing driven) sports shoe company. If you needed shoes, then they had shoes. Around 2006—2008 Nike started developing a range of complementary products – web sites, sensors integrated into clothing, etc. – and began positioning the company as providing excellence in running, rather than simply fulfilling a need. The company grew 27% in two years as a result.

Rolls Royce (who I’ve written about before{{3}}) are another good example, this time business-to-business. They shifted from the need (jet engines) to the problem (moving the plane) with huge success.

[[3]]What I like about jet engines @ PEG[[3]]

While these companies still have product and service catalogues, what’s interesting is the diversity of their catalogues. Rather than structuring their catalogue around an internal capability (their ability to design and manufacture a shoe or jet engine), the focus is on their role in the market and the capabilities required to support this role.

As Andy said, they have an “in the market” model, rather than a “go to market” model.

Danger Will Robinson!

Ack! The scorecard's gone red!

Andy Mulholland has a nice post over at the Capgemini CTO blog, which points out that we have a strange aversion to the colour red. Having red on your balanced scorecard is not necessarily a bad thing, as it tells you something that you didn’t know before. Insisting on managers delivering a completely green scorecard is just throwing good information away.

Unfortunately something’s wrong with Capgemini’s blogging platform, and it won’t let me post a comment. Go and read the post, and then you can find my comment below.

Economists have a (rather old) saying: “if you don’t fail occasionally, then you’re not optimising (enough)”. We need to consider red squares on the board to be opportunities, just as much as they might be problems. Red just represents “something happened that we didn’t expect”. This might be bad (something broke), or it might be good (an opportunity).

Given the rapid pace of change today, and the high incidence of the unexpected, managing all the red out of your business instantly turns you into a dinosaur.

The IT department we have today is not the IT department we’ll need tomorrow

The IT departments many of us work in today (either as an employee or consultant) are often the result of thirty or more years of diligent labour. These departments are designed, optimised even, to create IT estates populated with large, expensive applications. Unfortunately these departments are also looking a lot like dinosaurs: large, slow and altogether unsuited for the new normal. The challenge is to reconfigure our departments, transforming them from asset management functions into business (or business-technology) optimisation engines. This transformation should be of keen interest to all of us, as it’s going to drive a dramatic change in staffing profiles which will, in turn, affect our own jobs in the not so distant future.

Delivering large IT solutions is a tricky business. They’re big. They’re expensive. And the projects to create them go off the rails more often than we’d like to admit. IT departments have been built to minimise the risks associated with delivering and operating these applications. This means governance, and usually quite a lot of it. Departments which started off as small scale engineering functions soon picked up an administrative layer responsible for the mechanics of governance.

More recently we’ve been confronted with the challenge with managing the dependancies and interactions between IT applications. Initiatives like straight-through processing require us to take a holistic, rather than a pieces-parts, approach, and we’re all dealing with the problem of having one of each application or middleware product, as well as a few we brewed in the back room ourselves. Planning the operation and evolution of the IT estate became more important, and we picked up an enterprise architecture capability to manage the evolution of our IT estate.

It’s common to visualise these various departmental functions and roles as a triangle (or a pyramid, if you prefer). At the bottom we have engineering: the developers and other technical personnel who do the actual work to build and maintain our applications. Next layer up is governance, the project and operational administrators who schedule the work and check that it’s done to spec. Second from the top are the planners, the architects responsible for shaping the work to be done as well as acting as design authority. Capping of the triangle (or pyramid) is the IT leadership team who decide what should be done.

The departmental skills triangle

While specific techniques and technologies might come and go, the overall composition of the triangle has remained the same. From the sixties and seventies through to even quite recently, we’ve staffed our IT departments with many technical doers, somewhat fewer administrators, a smaller planning team, and a small IT leadership group. The career path for most of us has been a progression from the bottom layers – when we were fresh out of school – to the highest point in the triangle that we can manage.

The emergence of off-shore and outsourcing put a spanner in the works. We all understand the rationale: migrate the more junior positions – the positions with the least direct (if any) contact with the business proper – to a cheaper country. Many companies under intense cost pressure broke the triangle in two, keeping the upper planning and decision roles, while pushing the majority of the manage roles and all of the do roles out of the country, or even out of the company.

Our first attempt at out-sourcing

Ignoring whether or not this drive to externalise the lower roles provided the expected savings, what it did do is break the career ladder for IT staff. Where does your next generation of senior IT personnel come from if you’ve pushed the lower ranks out of the business? Many companies found themselves with an awkward skills shortage a few years into an outsourcing / off-shore arrangement, as they were no longer able to train or promote senior personnel to replace those who were leaving through natural attrition.

The solution to this was to change how we break up the skills triangle; rather than a simple horizontal cut, we took a slice down the side. Retaining a portion of all skills in-house allows companies to provide a career path and on-the-job training for their staff.

A second, improved, go at out-sourcing

Many companies have tweaked this model, adding a bulge in the middle to provide a large enough resource pool to manage both internal projects, as well as those run by out-sourced and off-shore resources.

Factoring in the effort required to manage out-sourced projects

This model is now common in a lot of large companies, and it has served us well. However, the world has a funny habit of changing just when you’ve got everything working smoothly.

The recent global financial crisis has fundamentally changed the business landscape. We are experiencing not merely another turn of the business cycle, but a restructuring of the economic order. Many are even talking about the emergence of a new normal. The impact this will have on how we run our businesses (and our IT departments) is still being discussed, but we can see the outline of this impact already.

Companies are becoming more focused, while leaning more heavily on partners and services companies (BPO, out-sourcers, consultants, and so on) to cover those areas of the business they don’t want to focus on. We can see this from the global companies who have effectively moved to a franchise model, through to the small end of town where startups are using on-line services such as Amazon S3, rather than building internal capabilities. While this trend might have initially started as a cost saving, most of the benefit is in management time saved, which can then be used to focus on more important issues. We’re all finding that the limiting factor in our business is management time, so being able to hand off the management of less important tasks can help provide that edge you need.

We’re also seeing faster business change: what used to take years now takes months, or even weeks. The constant value-chain optimisation we’ve been working on since the 70s has finally cumulated in product and regulatory life-cycles that change faster than we can keep up. Nowhere is this more evident than the regulated industries (finance, utilities …), where updates in government regulation has changed from a generational to a quarterly occurrence as governments attempt to use regulation change to steer the economic boat.

Money is also becoming (or has become) more expensive, causing companies and deals to operate with less leverage. This means that there is less capital available for major projects, pushing companies to favour renting over buying, as well as creating a preference for smaller, incremental change over the major business transformation of the past.

And finally, companies are starting to take a truly global outlook and operate as one cohesive business across the globe, rather than as a family of cloned businesses which operate more-or-less independently in each region.

We can draw a few general conclusions on the potential impact of these trends on IT departments.

  • The increased reliance on partners, the broader partner ecosystem this implies, and an increasingly global approach to business will create more complex operational environments, increasing the importance of planning the IT estate and steering a company’s IT in the right direction.
  • The need to reduce leverage, and free up working capital, is pushing companies toward BPO and SaaS solutions, rather than the traditional on-premises solutions, where the solution provider is paid per-seat, or might even only be paid a success fee.
  • The need for rapid project turn-around is pushing us toward running large portfolios of small projects, rather than a small number of large projects.
  • A lot of the admin work we used to do is now baked into web delivered solutions (BaseCamp et al).

This will trigger us to break up the skills triangle in a different way.

A skills/roles triangle for the new normal

While we’ll still take a slice down the side of the triangle, the buldge will move to the ends of the slice, giving it a skinny waist. The more complex operational environment means that we need to beef up planning (though we don’t want to get all dogmatic about our approach, as existing asset-centric IT planning methodologies won’t work in the new normal). A shift to large numbers of small projects (where the projects are potentially more technically complex) means that we’ll beef up our internal delivery capability, providing team leads with more autonomy. The move to smaller projects also means that we can reduce our administration and governance overhead.

We’ll replace some skills with automated (SaaS) solutions. Tools like BaseCamp will enable us to devolve responsibility for reporting and management to the team at the coalface. It will also reduce the need to develop and maintain infrastructure. Cloud technology is a good example of this, as it takes a lot of the tacit knowledge required to manage a fleet of servers and bakes it into software, placing it in the hands of the developers. Rumor has it that that a cloud admin can support 10,000 servers to a more traditional admin’s 500.

And finally, our suppliers act as a layer through the middle, a flex resource for us to call on. They can also provide us with a broader, cross-industry view of how best to leverage technology.

This thinning out of the middle ranks is part of a trend we’re seeing elsewhere. Web2.0/E2.0/et al are causing organisations to remove knowledge workers – the traditional white collar middle layers of the organisation – leaving companies with a strategy/leadership group and task workers.

Update: Andy Mulholland has an interesting build on this post over at the Capgemini CTO blog. I particularly like the Holm service launched by Ford and Microsoft, a service that it’s hard to imagine a traditional IT department fielding.

Consulting doesn’t work any more. We need to reinvent it.

What does it mean to be in consulting these days? The consulting model that’s evolved over the last 30 – 50 years seems to be breaking down. The internet and social media have shifted the way business operates, and the consulting industry has failed to move with it. The old tricks that the industry has relied on — the did it, done it stories and the assumption that I know something you don’t — no longer apply. Margins are under pressure and revenue is on the way down (though outsourcing is propping up some) as clients find smarter ways to solve problems, or decide that they can simply do without. The knowledge and resources the consulting industry has been selling are no longer scarce, and we need to sell something else. Rather than seeing this as a problem, I see it as a huge opportunity; an opportunity to establish a more collaborative and productive relationship founded on shared, long term success. Sell outcomes, not scarcity and rationing.

I’m a consultant. I have been for some time too, working in both small and large consultancies. It seems to me that the traditional relationship between consultancy and client is breaking down. This also appears to be true for both flavours of consulting: business and technology. And by consulting I mean everything from the large tier ones down to the brave individuals carving a path for themselves.

Business is down, and the magic number seems to be roughly a 17% decline year-on-year. One possible cause might be that the life blood of the industry — the large multi-year transformation project — has lost a lot of its attraction in recent years. If you dig around in the financials for the large publicly listed consultancies and vendors you’ll find that the revenue from IT estate renewal and transformation (application licenses, application configuration and installation services, change management, and even advisory) is sagging by roughly 17% everywhere around the globe.

SABER @ American Airlines

Large transformation projects have lost much of their attraction. While IBM successfully delivered SABER back in the 60s, providing a heart transplant for American Airlines’ ticketing processes, more recent stabs at similarly sized projects have met with less than stellar results. Many more projects are quietly swept under the carpet, declared a success so that those involved can move on to something else.

The consulting model is a simple one. Consultants work on projects, and the projects translate into billable hours. Consultancies strive to minimise overheads (working on customer premises and minimising support staff), while passing incidental costs through to clients in the form of expenses. Billable hours drive revenue, with lower grades providing higher margins.

This creates a couple of interesting, and predictable, behaviours. First, productivity enhancing tooling is frowned on. It’s better to deploy a graduate with a spreadsheet than a more senior consultant with effective tooling. Second, a small number of large transactions are preferred to a large number of small transactions. A small number of large transactions requires less overhead (sales and back-office infrastructure).

All this drives consultancies to create large, transformational projects. Advisory projects end up developing multi-year (or even multi-decade) roadmaps to consolidate, align and optimise the business. Technology projects deliver large, multi-million dollar, IT assets into the IT estate. These large, business and IT transformation projects provide the growth, revenue and margin targets required to beat the market.

This desire for large projects is packaged up in what is commonly called “best practice”. The consulting industry focuses on did it, done it stories, standard and repeatable projects to minimise risk. The sales pitch is straight-forward: “Do you want this thing we did over here?” This might be the development of a global sourcing strategy, an ERP implementation, …

Spencer Tracy & Katharine Hepburn in The Desk Set

This approach has worked for some time, with consultancy and client more-or-less aligned. Back when IBM developed SABER you were forced to build solutions from the tin up, and even small business solutions required significant effort to deliver. In 1957, when Spencer Tracy played a productivity expert in The Desk Set, new IT solutions required very specific skill sets to develop and deploy. These skills were in short supply, making it hard for an organisation to create and maintain a critical mass of in-house expertise.

Rather than attempt to build an internal capability — forcing the organisation on a long learning journey, a journey involving making mistakes to acquire tacit knowledge — a more pragmatic approach is to rent the capability. Using a consultancy provides access to skills and knowledge you can’t get elsewhere, usually packaged up as a formal methodology. It’s a risk management exercise: you get a consultancy to deliver a solution or develop a strategy as they just did one last week and know where all the potholes are. If we were cheeky, then we would summarise this by stating that consultancies have a simple value proposition: I know something you don’t!

It’s a model defined by scarcity.

A lot has changed in the last few years; business moves a lot faster and a new generation of technology is starting to take hold. The business and technology environment is changing so fast that we’re struggling to keep up. Technology and business have become so interwoven that we now talk of Business-Technology, and a lot of that scarce knowledge is now easily obtainable.

The Diverging Pulse Rates of Business and Technology

The scarce tacit knowledge we used to require is now bundled up in methodologies; methodologies which are trainable, learnable, and scalable. LEAN and Six Sigma are good examples of this, starting as more black art than science, maturing into respected methodologies, to today where certification is widely available and each methodology has a vibrant community of practitioners spread across both clients and consultancies. The growth of MBA programmes also ensures that this knowledge is spread far and wide.

Technology has followed a similar path, with the detailed knowledge required to develop distributed solutions incrementally reified in methodologies and frameworks. When I started my career XDR and sockets were the networking technologies of the day, and teams often grew to close to one hundred engineers. Today the same solution developed on a modern platform (Java, Ruby, Python …) has a team in the single digits, and takes a fraction of the time. Tacit knowledge has been reified in software platforms and frameworks. SaaS (Software as a Service) takes this to a whole new level by enabling you to avoid software development entirely.

The did it, done it stories that consulting has thrived on in the past are being chewed up and spat out by the business schools, open source, and the platform and SaaS vendors. A casual survey of the market usually finds that SaaS-based solutions require 10% of the installation effort of a traditional on-premises solution. (Yes, that’s 90% less effort.) Less effort means less revenue for the consultancies. It also reduces the need for advisory services, as provisioning a SaaS solution with the corporate credit card should not require a $200,000 project to build a cost-benefit analysis. And gone are the days when you could simply read the latest magazines and articles from the business schools, spouting what you’d read back to a client. Many clients have been on the consulting side of the fence, have a similar education from the business schools, and read all the same articles.

I know and you don’t! no longer works. The world has moved on and the consulting industry needs to adapt. The knowledge and resources the industry has been selling are no longer scarce, and we need to sell something else. I see this is a huge opportunity; an opportunity to establish a more collaborative and productive relationship founded on shared, long term success. As Jeff Jarvis has said: stop selling scarcity, sell outcomes.

Updated: A good friend has pointed out that one area of consulting — one which we might call applied business consulting — resists the trend to be commoditized. This is the old school task of sitting with clients one-on-one, working to understand their enterprise and what makes it special, and then using this understanding to find the next area or opportunity that the enterprise is uniquely qualified to exploit. There are no junior consultants in this area, only old grey-beards who are too expensive to stay in their old jobs, but who are still highly useful to the industry. Unfortunately this model doesn’t scale, forcing most (if not all) consultancies into a more operational knowledge transfer role (think Six Sigma and LEAN) in an attempt to improve revenue and GOP.

Updated: Keith Coleman (global head of public sector at Capgemini Consulting) makes a similar case with Time to sell results, not just advice (via @rpetal27).

Updated: I’ve responded to my own post, tweaking my consulting page to capture my take on what a consultant needs to do in this day and age.

Is “agile enterprise IT” an oxymoron?

Have we managed to design agility out of enterprise IT? Are the two now incompatible? Our decision to measure IT purely in terms of cost (ROI) or stability (SLAs) means that we have put aside other desirable characteristics like responsiveness, making our IT estates more like the lumbering airships of the 1920s. While efficient and reliable (once we got the hydrogen out of them), they are neither exciting nor responsive to the business. The business ends up going elsewhere for their thrills. What to do?

LZ-127 Graf Zeppelin

An interesting post on jugaad over at the Capgemini CTO blog got me thinking. The tension between the managed chaos that jugaad seems to represent and the stability we strive for in IT seems to nicely capture the current tensions between business and IT. Business finds that opportunities are blinking in and out of existence faster than ever before, providing dramatically reduced windows of opportunity. IT departments are unable to respond in time, prompting the business to look outside the organisation for solutions.

The first rule of CIOs is “you only have a seat at the strategy table if you’re keeping the lights on”. The pressure is on to keep the transactions flowing, and we spend a lot of time and money (usually the vast majority of our budget) ensuring that transactions do indeed flow. We often complain that our entire focus seems to be on cost and operations, when there is so much more we can bring to the leadership team. We forget that all departments labour under a similar rule, and all these rules are really just localised versions of a single overarching rule: the first rule of business, which is to be in business (i.e. remain solvent). Sales needs to sell, manufacturing needs to manufacture, … By devoting so much of our energy to cost and stability, we seem to have dug ourselves into a bit of a hole.

There’s another rule that I like to quote from time-to-time: management is not the art of making the perfect decision, but making a timely decision and then making it work. This seems to be something we’ve forgotten in the West, and particularly in IT. Perfection is an unattainable ideal in the real world, and agility requires a little chaos/instability. What’s interesting about jugaad is the concept’s ability to embrace the chaos required to succeed when resource constraints prevent you for using the perfect (or even simply the best) solution.

Vickers F.B.5 Gunbus

Consider a fighter plane. The other day I was watching a documentary on the history of aircraft which showed how the evolution of fighters is a progression from stability to instability. The first fighters (and we’re talking the start of WWI here – all fabric and glue) were designed to float above the battlefield where the pilots could shoot down at soldiers, or even lob bombs at them. They were designed to be very stable, so stable that the pilot could ignore the controls for a while and the plane would fly itself. Or you could shoot out most of the control surfaces and still land safely. (Sounds a bit like a modern, bullet proof, IT application, eh?)

The Red Baron: Manfred von Richthofen

The problem with these planes is that they are very stable. It’s hard to make them turn and dance about, and this makes them easy to shoot down. They needed to be more agile, harder to shoot down, and the solution was to make them less stable. The result, by the end of WWI, was the fairly unstable triplanes we associate with the Red Baron. Yes, this made them harder to fly, and even harder to land, but it also made them harder to hit.

Whizz forward to the modern day, and we find that all modern fighters are unstable by design. They’re so unstable that they’re unflyable without modern fly-by-wire systems. Forget about landing: you couldn’t even get them off the ground without their fancy control systems. The governance of the fly-by-wire systems lets the pilot control the uncontrollable.

The problem with modern IT is that it is too stable. Not the parts, the individual applications, but the IT estate as a whole. We’ve designed agility out of it, focusing on creating a stable and efficient platform for lobbing bombs onto the enemy below. This is great if the landscape below us doesn’t change, and the enemy promises not to move or shoot back, but not so good in today’s rapidly changing business environment. We need to be able to rapidly turn and dance about, both to dodge bullets and pounce on opportunities. We need some instability, as instability means that we’re poised for change.

Jugaad points out that we need to allow in a bit of chaos if we want to bring the agility back in. The chaos jugaad provides is the instability we need. This will require us to update our governance processes, evolving them beyond simply being a tool to stop the bad happening, transforming governance into a tool for harvesting the jugaad where it occurs. After all, the role of enterprise IT is to capture good ideas and automate them, allowing them to be leveraged across the entire enterprise.

Managing chaos has become something of a science in the aircraft world. Tools like Energy-Maneuverability theory are used during aircraft design to make informed tradeoffs between weight, weapons load, amount of wing (i.e. ability to turn), and so on. This goes well beyond most efforts to map and score business processes, which are inherently static, pieces/parts and cost driven approaches. Our focus should be on using different technologies and delivery approaches to modify how our IT estate responds to business change; optimising our IT estate’s dynamic, change-driven characteristics as well as its cost-driven static characteristics.
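For readers who haven’t met E-M theory, its core quantity is specific excess power, and a tiny worked example shows the kind of tradeoff it exposes. The formula below is the standard one; the aircraft numbers are invented purely for illustration, and the reading back onto IT estates is mine rather than anything from a design manual.

    def specific_excess_power(thrust_n: float, drag_n: float,
                              velocity_ms: float, weight_n: float) -> float:
        """Specific excess power P_s = V * (T - D) / W, in metres per second.

        Positive P_s is energy available to climb, accelerate or sustain a turn;
        negative P_s means the aircraft is bleeding energy. E-M theory compares
        P_s across the flight envelope to expose trade-offs between weight,
        thrust and wing.
        """
        return velocity_ms * (thrust_n - drag_n) / weight_n


    # Illustrative numbers only: adding weight with the same thrust and drag
    # eats directly into the energy available to "turn and dance about".
    print(specific_excess_power(60_000, 40_000, 250, 100_000))  # 50.0 m/s
    print(specific_excess_power(60_000, 40_000, 250, 140_000))  # ~35.7 m/s

The analogy to IT is loose but useful: every kilogram of governance you bolt on has to be paid for in manoeuvrability somewhere else, and it helps to measure that cost rather than assume it away.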

This might be the root of some of the problems we’re seeing between business and IT. IT’s tendency to measure value in terms of cost and/or stability leads us to create IT estates optimised for a static environment, which are at odds with the dynamic nature of the modern business environment. We should be focusing on the overall dynamic business performance of the IT estate, its energy-maneuverability profile.

Innovation [2010-01-18]

Another week and another collection of interesting ideas from around the internet.

As always, thoughts and/or comments are greatly appreciated.

Is BI really the next big thing?

I think we’re at a tipping point with BI. Yes, it makes sense that BI should be the next big thing in the new year, as many pundits are predicting, driven by the need to make sense of the massive volume of data we’re accumulated. However, I doubt that BI in its current form is up to the task.

As one of the CEOs Andy Mulholland spoke to mentioned, “I want to know … when I need to focus in.” The CEO’s problem is not more data, but the right data. As Andy rightly points out in an earlier blog post, we’ve been focused on harvesting the value from our internal, manufactured data, ignoring the latent potential in our unstructured data (let alone the unstructured data we can find outside the enterprise). The challenge is not to find more data, but the right data to drive the CEO’s decision on where to focus.

It’s amazing how little data you need to make an effective decision—if you have the right data. Andrew McAfee wrote a nice blog post a few years ago (The case against the business case is the closest I can find to it), pointing out that the mass of data we pile into a conventional business case just clouds the issues, creating long cause-and-effect chains that make it hard to come to an effective decision. His solution was the one page business case: capability delivered, (rough) business requirements, solution footprint, and (rough) costing. It might be one page, but there is enough information, the right information, to make an effective decision. I’ve used his approach ever since.

Current BI seems to be approaching the horse from the wrong direction, much like Andrew’s business case problem. We focus on sifting through all the information we have, trying to glean any trends and correlations which might be useful. This works at small to moderate scales, but once we reach the huge end of the scale it starts to groan under its own weight. It’s the law of diminishing returns—adding more information to the mix will only have a moderate benefit compared to the effort required to integrate and process it.

A more productive method might be to use a hypothesis-driven approach. Rather than look for anything that might be interesting, why not go spelunking for specific features which we know will be interesting? The features we’re looking for in the information are (almost always) there to support a decision. Why not map out that decision, similar to how we map out the requirements for a feedback loop in a control system, and identify the types of features that we need to support the decision we want to make? We can segment our data sets based on the features’ gross characteristics (inside vs. outside, predictive vs. historical …) and then search in the appropriate segments for the features we need. We’ve broken one large problem—find correlations in one massive data set—into a series of much more manageable tasks.
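As a rough illustration, here is a minimal Python sketch of that segment-then-search idea. The segment labels (inside vs. outside, predictive vs. historical) come from the discussion above; the record shape, the example decision and the feature test are all invented for the purpose of the sketch.

    from typing import Callable, Iterable

    # Bucket the data by its gross characteristics first, then only search the
    # segments relevant to the decision at hand, rather than trawling everything
    # for any correlation that happens to turn up.

    Record = dict  # a single observation; the shape here is illustrative only

    def segment(records: Iterable[Record]) -> dict:
        """Bucket records by (source, age) so later searches stay small."""
        buckets: dict = {}
        for r in records:
            buckets.setdefault((r["source"], r["age"]), []).append(r)
        return buckets

    def find_features(buckets: dict, wanted_segments: list,
                      test: Callable[[Record], bool]) -> list:
        """Search only the segments a given decision cares about."""
        hits = []
        for key in wanted_segments:
            hits.extend(r for r in buckets.get(key, []) if test(r))
        return hits

    # Decision: "where should the CEO focus next quarter?" -> we care about
    # external, predictive signals, not internal history.
    data = [
        {"source": "outside", "age": "predictive", "signal": "competitor price cut", "strength": 0.7},
        {"source": "inside", "age": "historical", "signal": "Q3 sales", "strength": 0.9},
    ]
    buckets = segment(data)
    focus_candidates = find_features(buckets, [("outside", "predictive")],
                                     test=lambda r: r["strength"] > 0.5)
    print(focus_candidates)

The point of the sketch is the order of operations: the decision defines which segments matter before any searching starts, so the expensive correlation hunting only ever runs over a small, relevant slice of the data.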

The information arms race, the race to search through more information for that golden ticket, is just a relic of the lack of information we’ve lived with in the past. In today’s land of plenty, more is not necessarily better. Finding the right features is our real challenge.


Inside vs. Outside

As Andy Mulholland pointed out in a recent post, all too often we manage our businesses by looking out the rear window to see where we’ve been, rather than looking forward to see where we’re going. How we use information to drive informed business decisions has a significant impact on our competitiveness.

I’ve made the point previously (which Andy built on) that not all information is of equal value. Success in today’s rapidly changing and uncertain business environment rests on our ability to make timely, appropriate and decisive action in response to new insights. Execution speed or organizational intelligence are not enough on their own: we need an intimate connection to the environment we operate in. Simply collecting more historical data will not solve the problem. If we want to look out the front window and see where we’re going, then we need to consider external market information, and not just internal historical information, or predictions derived from this information.

A little while ago I wrote about the value of information. My main point was that we tend to think of most information in one of two modes—either transactionally, with the information part of current business operations; or historically, when the information represents past business performance—whereas it’s more productive to think of an information age continuum.

The value of information

Andy Mulholland posted an interesting build on this idea on the Capgemini CTO blog, adding the idea that information from our external environment provides mixed and weak signals, while internal, historical information provides focused and strong signals.

The value of information and internal vs. external drivers

Andy’s major point was that traditional approaches to Business Intelligence (BI) focus on these strong, historical signals, which is much like driving a car by looking out the back window. While this works in a (relatively) unchanging environment (if the road was curving right, then keep turning right), it’s less useful in a rapidly changing environment as we won’t see the unexpected speed bump until we hit it. As Andy commented:

Unfortunately stability and lack of change are two elements that are conspicuously lacking in the global markets of today. Added to which, social and technology changes are creating new ideas, waves, and markets – almost overnight in some cases. These are the ‘opportunities’ to achieve ‘stretch targets’, or even to adjust positioning and the current business plan and budget. But the information is difficult to understand and use, as it is comprised of ‘mixed and weak signals’. As an example, we can look to what signals did the rise of the iPod and iTunes send to the music industry. There were definite signals in the market that change was occurring, but the BI of the music industry was monitoring its sales of CDs and didn’t react until these were impacted, by which point it was probably too late. Too late meaning the market had chosen to change and the new arrival had the strength to fight off the late actions of the previous established players.

We’ve become quite sophisticated at looking out the back window to manage moving forward. A whole class of enterprise applications, Enterprise Performance Management (EPM), has been created to harvest and analyze this data, aligning it with enterprise strategies and targets. With our own quants, we can create sophisticated models of our business, market, competitors and clients to predict where they’ll go next.

Robert K. Merton: Father of Quants

Despite EPM’s impressive theories and product sheets, it cannot, on its own, help us leverage these new market opportunities. These tools simply cannot predict where the speed bumps in the market, no matter how sophisticated they are.

There’s a simple thought experiment economists use to show the inherent limitations in using mathematical models to simulate the market. (A topical subject given the recent global financial crisis.) Imagine, for a moment, that you have a perfect model of the market; you can predict when and where the market will move with startling accuracy. However, as Sun likes to point out, statistically, the smartest people in your field do not work for your company; the resources in the general market are too big when compared to your company. If you have a perfect model, then you must assume that your competitors also have a perfect model. Assuming you’ll both use these models as triggers for action, you’ll both act earlier, and in possibly the same way, changing the state of the market. The fact that you’ve invented a tool to predicts the speed bumps causes the speed bumps to move. Scary!

Enterprise Performance Management is firmly in the grasp of the law of diminishing returns. Once you have the critical mass of data required to create a reasonable prediction, collecting additional data will have a negligible impact on the quality of this prediction. The harder your quants work, the more sophisticated your models, the larger the volume of data you collect and trawl, the lower the incremental impact will be on your business.

Andy’s point is a big one. It’s not possible to accurately predict future market disruptions with on historical data alone. Real insight is dependent on data sourced from outside the organization, not inside. This is not to diminish the important role BI and EPM play in modern business management, but to highlight that we need to look outside the organization if we are to deliver the next step change in performance.

Zara, a fashion retailer, is an interesting example of this. Rather than attempt to predict or create demand on a seasonal fashion cycle, and deliver product appropriately (an internally driven approach), Zara tracks customer preferences and trends as they happen in the stores and tries to deliver an appropriate design as rapidly as possible (an externally driven approach). This approach has made Zara the most profitable arm of Inditex, a holding company of eight retail brands, and one of the biggest success stories in Spanish business. You could say that Quants are out, and Blink is in.

At this point we can return to my original goal: creating a simple graphic that captures and communicates what drives the value of information. Building on both my own and Andy’s ideas we can create a new chart. This chart needs to capture how the value of information is affected by age, as well as the impact of externally vs. internally sourced information. Using these two factors as dimensions, we can create a heat map capturing information value, as shown below.

Time and distance drive the value of information

Vertically we have the divide between inside and outside: from information created internally by our processes; through information at the surface of our organization, sourced from current customers and partners; to information sourced from the general market and environment outside the organization. Horizontally we have information age, from information we obtain proactively (we think that customer might want a product), through reactively (the customer has indicated that they want a product) to historical (we sold a product to a customer). Highest value, in the top right corner, represents the external market disruption that we can tap into. Lowest value (though still important) represents internal transactional processes.
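To make the heat map concrete, here is a toy scoring function over those two dimensions. The axes are the ones described above; the numbers are arbitrary placeholders whose only job is to preserve the ordering argued for in this post, with external, forward-looking information scoring highest and internal, historical transactions lowest.

    # Toy value scores over the two dimensions of the heat map. Only the
    # ordering matters: the further outside the organisation and the earlier
    # in the information's life, the higher the score.
    SOURCE = ["internal process", "organisational surface", "external market"]
    AGE = ["historical", "reactive", "proactive"]

    def information_value(source: str, age: str) -> int:
        """Arbitrary score: (how far outside) x (how early in the information's life)."""
        return (SOURCE.index(source) + 1) * (AGE.index(age) + 1)

    # Print the grid, most external source first, to mimic the heat map layout.
    for source in reversed(SOURCE):
        row = "  ".join(str(information_value(source, age)) for age in AGE)
        print(f"{source:>22}: {row}")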

As an acid test, I’ve plotted some of the case studies mentioned in the conversation so far on a copy of this diagram.

  • The maintenance story I used in my original post. Internal, historical data lets us do predictive maintenance on equipment, while external data enables us to maintain just before (detected) failure. Note: This also applies to tasks like vegetation management (trimming trees to avoid power lines), as real time data can be used to determine where vegetation is a problem, rather than simply eyeballing the entire power network.
  • The Walkman and iPod examples from Andy’s follow-up post. Check out Snake Coffee for a discussion on how information drove the evolution of the Walkman.
  • The Walmart Telxon story, using floor staff to capture word of mouth sales.
  • The example from my follow-up (of Andy’s follow-up), of Albert Heijn (a Dutch Supermarket group) lifting the pricing of ice cream and certain drinks when the temperature goes above 25° C.
  • Netflix vs. (traditional) Blockbuster (via Nigel Walsh in the comments), where Netflix helps you maintain a list of films you would like to see, rather than a more traditional brick-and-mortar store which reacts to your desire to see a film.

Send me any examples that you know of (or think of) and I’ll add them to the acid test chart.

An acid test for our chart

An interesting exercise left to the reader is to map Peter Drucker’s Seven Drivers for change onto the same figure.

Update: A discussion with a different take on the value of information is happening over at the Information Architects.

Update: The latest instalment in this thread is Working from the outside in.

Update: MIT Sloan Management Review weighs in with an interesting article on How to make sense of weak signals.

Have we really understood what Business Intelligence means?

Andy Mulholland has a nice build on my value of information bit over at Capgemini’s CTO blog, flipping the sense of the figure and showing how the time axis also connects to internal vs. external focus, and IT’s shift from cost control to value creation.

The value of information and internal vs. external drivers

Check it out.

Update 2: Andy Mulholland came across a nice example:

Albert Heijn the Dutch Supermarket group lifts the pricing of ice cream and certain drinks when the temperature goes above 25° C

Update 1: I’ve left a comment there building on what Andy has.

BI does seem to be moving in this direction, but it still has a long way to go and is too internally focused. Customer Intelligence is moving the enterprise boundary out a little, but does not really address the challenge of integrating external information to create new insight. What about local events, weather, the memes from the social media community, the memes from our competitors’ customers, or anything else we can think of? The challenge is to fuse internal, customer, competitor, market and even environmental data to create new insight.

For example, consider current approaches to S&OP (sales and operations planning). We’ve taken what is an inherently unstructured and collaborative activity and shoved it through the process and business intelligence meat grinder to create yet another enterprise application. It’s no surprise that S&OP is a challenge to deploy, with few companies realizing (let alone capturing) the promised value. Customer Intelligence adds little to the benefit side of this equation; it would seem impossible to justify CI in terms of cost saving, and challenging to justify it in terms of creating new business.

Imagine a world where we have our S&OP team focused on information synthesis rather than the planning process. They might pluck weather data (it’s going to be hot in St Kilda) and couple it with an event (the St Kilda festival), memes from their customers (and their competitors’ customers) plucked from HootSuite, and decide only 24 hours before the event to rapidly deploy a pop-up store. It’s this sort of sense-and-respond ability that will drive us to the next level of performance.
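A minimal sketch of what such a fused, last-minute decision rule might look like, assuming feeds of the kind mentioned above. The signal names echo the St Kilda example; the thresholds, the Signals structure and the decide_popup_store function are all hypothetical.

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class Signals:
        forecast_max_c: float        # tomorrow's forecast, e.g. from a weather feed
        local_event: Optional[str]   # e.g. "St Kilda festival", or None
        positive_mentions: int       # brand chatter pulled from a social dashboard

    def decide_popup_store(s: Signals) -> bool:
        """Fuse external signals into a single go/no-go call, made locally and late.

        The thresholds are arbitrary; the point is that the decision combines
        weather, event and social data from outside the enterprise, made hours
        before the opportunity rather than baked into a quarterly plan.
        """
        hot_day = s.forecast_max_c >= 30
        crowd_expected = s.local_event is not None
        buzz = s.positive_mentions > 200
        return hot_day and crowd_expected and buzz

    # 24 hours out: hot day, festival on, chatter building -> deploy the pop-up.
    print(decide_popup_store(Signals(34.0, "St Kilda festival", 450)))  # True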

One of the best real world examples of this transition from internal-cost-control to external-value-capture has happened around the hand-held stock management devices used in retail. Initially deployed as a cost control measure (i.e. better information on what’s on the shelves), they have now become a tool for capturing value. Walmart has been using these devices for some time, devolving buying decisions to the team walking the shop floor and providing them with the information they need to make good buying decisions. As one reporter found:

“We received an inspirational talk on this subject, from an employee who reacted after the store test-marketed tents that could protect cars for people who didn’t have enough garage space. They sold out quickly, and several customers came in asking for more. Clearly this was a singular, exceptional case of word-of-mouth, so he ordered literally a truckload of tent-garages, “Which I shouldn’t have done really without asking someone,” he said with a shrug, “because I hadn’t been working at the store for long.” But the item was a huge success. His VPI was the biggest in store history—and that kind of thing doesn’t go unnoticed in Arkansas.”

Fly on the wall

In BI terms, we’re moving from large, centralized solutions used to drive planning, to distributed peer-to-peer networks focused on supporting local decisions. While corporate data stores will still play an important role, the advantage is moving to our ability to fuse multiple data sources, some which we do not own and some which only have local relevance. The right information, at the right time, in the right place, to empower knowledge workers to make the best possible decisions. Local Intelligence, rather than Business Intelligence.

From doctrine to dogma: when did a good idea become the only idea

When does a good method become the only method? The one true approach to solving a problem; the approach which will bind them all. The last few decades have seen radical change in our social and business environments, while the practice of business seems to have changed relatively little since the birth of the corporation. The problem of running a business, the problem we work every day to solve, has changed so much that the best practice of yesterday has become an albatross. The methods and practices that have brought us to the current level of performance are also one of the larger impediments to achieving the next level. When did yesterday’s doctrine become today’s dogma? And what can we do about it?

Our methodologies and practices have been carefully designed to help steer our leviathan ships of industry, tuning their performance with five and three year plans. The newspapers of today, for example, bear a marked resemblance to the newspapers of 100 years ago, structured as large content factories churning out stories with some ads slapped on the page next to them.

The best practices evident in companies today represent the culmination of generations of effort in building, running and improving our businesses. The doctrine embodied in each industry is a huge, immensely valuable body of knowledge, tuned to solving the problem of business as we know it.

doctrine |ˈdäktrin|
noun
a belief or set of beliefs held and taught by a church, political party, or other group : the doctrine of predestination.
• a stated principle of government policy, mainly in foreign or military affairs: the Monroe Doctrine.
ORIGIN late Middle English : from Old French, from Latin doctrina ‘teaching, learning,’ from doctor ‘teacher,’ from docere ‘teach.’

OS X Dictionary, © Apple 2007

However, a number of fundamental changes have taken hold in recent years. The pace of business has increased markedly; what used to take years now takes months, or even weeks. The role of technology in business has changed as applications have become ubiquitous and commoditized. The assumptions which existing doctrine were developed under no longer hold.

Today, most (if not all) newspapers are watching their revenue being eroded by the likes of Craigslist, who have used modern web technology to come up with a new take on the decades (if not centuries) old classified ad.

Let’s look at Craiglist. I’ve heard people estimate that they are doing close to $100mm in annual revenues at this point. Many say, “they could be doing so much more”. But the Craigslist profit equation is interesting. They apparently have less than 30 employees. That’s about $4mm/year in employee costs. Let’s assume that they spend another $6mm per year on hosting and bandwidth costs and other costs. So it’s very possible that Craigslist’s annual costs are around $10mm/year. Their value equation then is 10 x (100-10) = $900mm. That’s almost a billion dollars in value for a company with only 30 employees.

Fred Wilson, A VC

Craigslist has taken a fresh look at what it means to be in the business of classified ads, and used technology in a new way to help create business value, rather than restricting it to controlling costs and delivering process efficiencies; an approach Forrester have labeled Business-Technology.

The challenge is to acknowledge that the rules of business have changed, and modify our best practices to suit the new business environment because, as Albert Einstein pointed out “insanity is doing the same thing over and over again and expecting different results.” If we can’t change our best practices to suit, then our valuable doctrine has become worthless dogma.

dogma |ˈdôgmə|
noun
a principle or set of principles laid down by an authority as incontrovertibly true: the Christian dogma of the Trinity | the rejection of political dogma.
ORIGIN mid 16th cent.: via late Latin from Greek dogma ‘opinion,’ from dokein ‘seem good, think.’

OS X Dictionary, © Apple 2007

Enterprise architecture (EA) is a prime example. As a doctrine, enterprise architecture has a proud history stretching all the way back to John Zachman’s work in the 70s and the architecture framework which carries his name. EA has leveraged large, multi-year transformation programs to deliver huge operational efficiencies into the business. These programs have delivered a level of business performance unimaginable just a generation ago.

The pace of business has accelerated so much in recent years that the multiyear engagement model these transformations imply is no longer appropriate. What use is a five or three year plan in a world that changes every quarter? Transformation projects have been struggling recently. Some recent transformations edge across the line, at which point everyone moves onto the next project exhausted, and the promised benefits are neither identified nor realized. Some transformations are simply declared a success after an appropriate effort has been applied, allowing the team to move on. A few explode, often quite publicly.

This approach made sense a decade or more ago, when IT was focused on delivering the next big IT asset into the enterprise. It’s application strategy, rather than technology strategy. However, the business and technology environment has changed radically since the emergence of the Internet as a public utility. The IT departments we’ve created as application factories have become an albatross for the business, making us incapable of engaging in anything but a multiyear project worth tens of millions of dollars. They actively prevent the business from leveraging innovative solutions or business opportunities, even when there is a compelling reason to do so.

Simply put, the value created by enterprise architecture has moved, and the doctrine, or at least our approach to applying it, hasn’t kept up. For example, a common practice when establishing a new EA team seems to involve hiring architects to fill each role defined in TOGAF’s IT Architecture Role and Skill Definitions to provide us with complete skills coverage. Driving this is a desire to align ourselves with best practice, and ensure we do the job properly.

Some of TOGAF's IT Architecture Role and Skill Definitions

Most companies don’t need, nor can they can afford, a complete toolbox of enterprise architecture skills inside the business. A strict approach to the the doctrine will result in a larger EA team than the company can sustain. A smarter approach is to balance the demands and available resources of the company against the skill requirements and possible outcomes. We can tune our approach by aligning it with new techniques, tools and capabilities, or integrating elements from other doctrines—agile or business planning techniques, for example—to create a broader pallet of tools to solve our problem with. This might involve new engagement models. We can buy some skills while renting others. Some skills might be sustainable at a lower levels. It is also possible multi-skill, playing the role of both enterprise and solution architect. Similarly, leveraging software as a service (SaaS) solutions can also force changes in our engagement model, as a methodology suitable for scoping a three year and $50 million investment in on-premises CRM might not be appropriate for a SaaS solution which only requires 10% of the effort and investment as the on-premises solution.

Treating doctrine as prescriptive converts it into dogma. As John Boyd pointed out, we should assume that all doctrine is not right—that it’s incomplete or incorrect to some extent. You need to challenge all assumptions and look outside your own doctrine for new ideas.

Our own, personal resistance to change is the strongest thing holding us back. It seems that we learn something in our early to mid twenties, and then spend the rest of our career happily doing the same thing over and over again. We define ourselves in terms of what we did yesterday. If we create an environment where we define ourselves in terms of how we will help the organization evolve, rather than in terms of the assets we manage or doctrine we apply, then we can convert change from an enemy into an opportunity.

There is light at the end of the tunnel. For all the talk of the end of newspapers, some journalists are banding together to create new business models which can hold their own in a post-Craigslist world. Some old school journalists have taken a fresh look at what it means to be a newspaper. Young but growing strong and profitable, Politico has a newsroom 100 strong, with more people in its White House bureau than any other brand.

As TechCrunch pointed out:

Journalists still matter. A lot. Especially the good ones.

The challenge is to focus on what really matters: get close to your customers and find what really drives your business, question all the common sense (which is neither common nor sensible in many cases) in your industry’s doctrine, look into the doctrines of other industries to see what they are doing that you can use, and use technology to create a business which more traditional competitors will find impossible to compete against.