The following analogy popped up the other day in an email discussion with a friend.
Running a business is a bit like being the Fat Controller, running his vast train network. We spend our time trying to get the trains to run on time, with the all-too-frequent distraction of digging the Troublesome Trucks out of trouble.
Improvement often means upgrading the tracks to create smoother, straighter lines. After years of doing this, any improvement to the tracks can only provide a minor, incremental benefit.
What we really need is a new signalling system. We need to better utilise the tracks we already have, and this means making better decisions about which trains to run where, and better coordination between the trains. Our tracks are fine (as long as we keep up the scheduled maintenance), but we do need to better manage transit across and between them.
Swap processes for tracks, and I think that this paints quite a nice visual picture.
Years of process improvement (via LEAN, Six Sigma and, more recently, BPM) have straightened and smoothed our processes to the point that any additional investment has hit the law of diminishing returns. Rather than continue to try and improve the processes on my own, I’d outsource process maintenance to a collection of SaaS and BPO providers.
The greater scale of these providers allows them to invest in improvements which I don’t have the time or money for. Handing over responsibility also creates the time and space for me to focus on improving the decisions on which process to run where, and when: my signalling system.
This is especially important in a world where it is becoming rare to even own the processes.
We forget just how important a good signalling system is. Get it right and you get the German or Japanese train networks. Get it wrong and you rapidly descend into the second or third world, regardless of the quality of your tracks.
Here’s an interesting and topical question: is the market for enterprise IT services (SI, BPO, advisory et al) growing or shrinking? I’m doing the rounds at the moment to see where the market is going (a side effect of moving on), and different folks seem to have quite different views.
It’s shrinking as the new normal is squeezing budgets and OPEX is the new CAPEX.
It’s growing as companies are externalising more functions than ever before as they attempt to create a laser-like focus on their core business.
It’s shrinking as the transition from on-premises applications to SaaS implies a dramatic reduction (some folk are saying around 80–90%) in the effort required to deploy and maintain a solution.
It’s growing as the mid market is becoming a lot more sophisticated and starting to spend a lot more on enterprise software (witness Microsoft Dynamics’ huge market share).
It’s shrinking as SaaS is replacing BPO, in effect replacing people with cheaper software solutions. (Remember when TrueAdvantage, an Indian BPO, laid off all 150 of its workers after being purchased by InsideView?)
It’s growing as the need for more mobility solutions, and the massive growth in the mobile web, is driving us to create a new generation of enterprise solutions.
It’s shrinking as cloud computing and netbooks remove what little margin was left in infrastructure services.
It’s growing as investment in IT is a bit like gas, and tends to expand until it consumes all available funds. (Remember integration? As the cost of integration went down, we just found more integration projects to fill the gap.)
Does location matter? Or, put another way, is the world no longer flat? Many cloud and SaaS providers work under the assumption that we store data wherever is most efficient from an application performance point of view, ignoring political considerations. This runs counter to the many companies and governments who care greatly where their data is stored. Have we entered a time where location does matter, not for technical reasons, but for political reasons? Is globalisation (as a political thing) finally starting to impact IT architecture and strategy?
Thomas Friedman’s book, The World is Flat, contained a number of stories which were real eye-openers. The one I remember the most was the McDonald’s drive-through. The idea was simple: once you’ve removed direct physical contact from the ordering process, then it’s more efficient to accept orders from a contact centre than from within the restaurant itself. We could even locate that contact centre in a cheaper geography, such as another state, or even another country.
Telecommunications made the world flat, as cheap telecommunications allows us to locate work wherever it is cheapest. The opportunity for labour arbitrage this created drove offshoring through the late nineties and into the new millennium. Everything from call centres to tax returns and medical image diagnosis started to migrate to cheaper geographies. Competition to be the cheapest and most efficient service provider, rather than location, would determine who did the work. The entire world would compete on a level playing field.
In the background, whilst this was happening, enterprise applications went from common to ubiquitous. Adoption was driven by the productivity benefits the applications brought, which started off as a source of differentiation, but has now become one of the many requirements of being in business. SaaS and cloud are the most recent step in this evolution, leveraging the global market to create solutions operating at such a massive scale that they can provide price points and service levels which are hard, if not impossible, for most companies to achieve internally.
Where we store our assets is important. Organisations want to put their assets somewhere safe, because without its assets an organisation doesn’t amount to much. Companies want to keep their information — their confidential trade secrets — hidden from prying eyes. Governments need to ensure they have the trust of their citizens by respecting their privacy. (Not to mention the skullduggery that is international relations.) While communications technology has made it incredibly easy to move this information around and keep it secure, it has yet to solve the political problem of ensuring that we can trust the people responsible for safeguarding our assets. And all these applications we have created — the traditional on-premises and hosted versions, as well as the SaaS and cloud ones — are really just asset management tools.
We’ve reached a point where one of the larger hidden assumptions of enterprise applications has been exposed. Each application was designed to live and operate within a single organisation. This organisation might be a company, or it might be a country, or it might be some combination of the two. The application you select to manage your data determines the political boundary it lives within. If you use any U.S. SaaS or cloud solution provider to manage your data, then your data falls under U.S. judicial discovery laws, regardless of where you yourself are located. If your data transits through the U.S., then assume that the U.S. government has a copy. The world might be flat, but where you store your assets and where you send them still matters.
We can already see some moves by the vendors to address this problem. Microsoft, for example, has developed a dedicated cloud for the U.S. government, known as BPOS Federal, which is designed to meet the government’s stringent security and privacy standards. Amazon has also taken a portion of the cloud it runs and dedicated it to, and located it in, the EU, for similar reasons.
If we consider enterprise applications to be asset management tools rather than productivity tools, then ideas like private clouds start to make a lot of sense. Cloud technology reifies a lot of the knowledge required to configure and manage a virtualised environment in software, eliminating the data centre voodoo and empowering the development teams to manage the solutions themselves. This makes cloud technology simply a better asset management tool, but we need the freedom to locate the data (and therefore the application) where it makes the most sense from an asset management point of view. Sometimes this might imply a large, location agnostic, public cloud. Other times it might require a much smaller private cloud located within a specific political boundary. (And the need to prevent some data even transiting through a few specific geographies — requiring us to move the code to the data, rather than the data to the code — might be the killer application that mobile agents have been waiting for.)
What we really need are meta-clouds: clouds created by aggregating a number of different clouds, just as the Internet is a network of separate networks. While the clouds would all be technically similar, each would be located in a different political geography. This might be inside vs. outside the organisation, or in different states, or even different countries. The data would be stored and maintained where it made the most sense from an asset management point of view, with few technical considerations, the meta-cloud providing a consistent approach to locating and moving our assets within and across individual clouds as we see fit.
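To make the meta-cloud idea concrete, here is a minimal sketch of policy-driven placement: given a set of technically similar clouds sitting in different political geographies, pick the one that satisfies the asset-management constraints rather than the one that is merely cheapest or fastest. All cloud names, jurisdictions and policy rules below are hypothetical illustrations, not any vendor’s actual API.

```python
from dataclasses import dataclass

@dataclass
class Cloud:
    name: str
    jurisdiction: str  # the political boundary the cloud sits within
    public: bool       # public service, or private cloud under our control

# The meta-cloud: an aggregation of technically similar clouds,
# each located in a different political geography.
META_CLOUD = [
    Cloud("internal-dc", jurisdiction="AU", public=False),
    Cloud("vendor-eu",   jurisdiction="EU", public=True),
    Cloud("vendor-us",   jurisdiction="US", public=True),
]

def place(required_jurisdiction: str, confidential: bool) -> Cloud:
    """Pick a cloud on political, not technical, grounds:
    data stays inside its required boundary, and confidential
    data never lands on a public cloud."""
    for cloud in META_CLOUD:
        if cloud.jurisdiction != required_jurisdiction:
            continue
        if confidential and cloud.public:
            continue
        return cloud
    raise ValueError("no cloud satisfies the placement policy")
```

The point of the sketch is that the placement decision is expressed once, against the meta-cloud as a whole, while the individual clouds remain interchangeable at the technical level.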
What does it mean to be in consulting these days? The consulting model that’s evolved over the last 30 – 50 years seems to be breaking down. The internet and social media have shifted the way business operates, and the consulting industry has failed to move with it. The old tricks that the industry has relied on — the did it, done it stories and the assumption that I know something you don’t — no longer apply. Margins are under pressure and revenue is on the way down (though outsourcing is propping up some) as clients find smarter ways to solve problems, or decide that they can simply do without. The knowledge and resources the consulting industry has been selling are no longer scarce, and we need to sell something else. Rather than seeing this as a problem, I see it as a huge opportunity; an opportunity to establish a more collaborative and productive relationship founded on shared, long term success. Sell outcomes, not scarcity and rationing.
I’m a consultant. I have been for some time now, working in both small and large consultancies. It seems to me that the traditional relationship between consultancy and client is breaking down. This also appears to be true for both flavours of consulting: business and technology. And by consulting I mean everything from the large tier ones down to the brave individuals carving a path for themselves.
Business is down, and the magic number seems to be roughly a 17% decline year-on-year. One possible cause might be that the lifeblood of the industry — the large multi-year transformation project — has lost a lot of its attraction in recent years. If you dig around in the financials for the large publicly listed consultancies and vendors you’ll find that the revenue from IT estate renewal and transformation (application licenses, application configuration and installation services, change management, and even advisory) is sagging by roughly 17% everywhere around the globe.
Large transformation projects have lost much of their attraction. While IBM successfully delivered SABER back in the 60s, providing a heart transplant for American Airlines’ ticketing processes, more recent stabs at similarly sized projects have met with less than stellar results. Many more projects are quietly swept under the carpet, declared a success so that those involved can move on to something else.
The consulting model is a simple one. Consultants work on projects, and the projects translate into billable hours. Consultancies strive to minimise overheads (working on customer premises and minimising support staff), while passing incidental costs through to clients in the form of expenses. Billable hours drive revenue, with lower grades providing higher margins.
This creates a couple of interesting, and predictable, behaviours. First, productivity enhancing tooling is frowned on. It’s better to deploy a graduate with a spreadsheet than a more senior consultant with effective tooling. Second, a small number of large transactions are preferred to a large number of small transactions. A small number of large transactions requires less overhead (sales and back-office infrastructure).
All this drives consultancies to create large, transformational projects. Advisory projects end up developing multi-year (or even multi-decade) roadmaps to consolidate, align and optimise the business. Technology projects deliver large, multi-million dollar, IT assets into the IT estate. These large, business and IT transformation projects provide the growth, revenue and margin targets required to beat the market.
This desire for large projects is packaged up in what is commonly called “best practice”. The consulting industry focuses on did it, done it stories, standard and repeatable projects to minimise risk. The sales pitch is straightforward: “Do you want this thing we did over here?” This might be the development of a global sourcing strategy, an ERP implementation, …
This approach has worked for some time, with consultancy and client more-or-less aligned. Back when IBM developed SABER you were forced to build solutions from the tin up, and even small business solutions required significant effort to deliver. In 1957, when Spencer Tracy played a productivity expert in The Desk Set, new IT solutions required very specific skill sets to develop and deploy. These skills were in short supply, making it hard for an organisation to create and maintain a critical mass of in-house expertise.
Rather than attempt to build an internal capability — forcing the organisation on a long learning journey, a journey involving making mistakes to acquire tacit knowledge — a more pragmatic approach is to rent the capability. Using a consultancy provides access to skills and knowledge you can’t get elsewhere, usually packaged up as a formal methodology. It’s a risk management exercise: you get a consultancy to deliver a solution or develop a strategy as they just did one last week and know where all the potholes are. If we were cheeky, then we would summarise this by stating that consultancies have a simple value proposition: I know something you don’t!
It’s a model defined by scarcity.
A lot has changed in the last few years; business moves a lot faster and a new generation of technology is starting to take hold. The business and technology environment is changing so fast that we’re struggling to keep up. Technology and business have become so interwoven that we now talk of Business-Technology, and a lot of that scarce knowledge is now easily obtainable.
The scarce tacit knowledge we used to require is now bundled up in methodologies; methodologies which are trainable, learnable, and scalable. LEAN and Six Sigma are good examples of this, starting as more black art than science, maturing into respected methodologies, to today where certification is widely available and each methodology has a vibrant community of practitioners spread across both clients and consultancies. The growth of MBA programmes also ensures that this knowledge is spread far and wide.
Technology has followed a similar path, with the detailed knowledge required to develop distributed solutions incrementally reified in methodologies and frameworks. When I started my career XDR and sockets were the networking technologies of the day, and teams often grew to close to one hundred engineers. Today the same solution developed on a modern platform (Java, Ruby, Python …) has a team in the single digits, and takes a fraction of the time. Tacit knowledge has been reified in software platforms and frameworks. SaaS (Software as a Service) takes this to a whole new level by enabling you to avoid software development entirely.
The did it, done it stories that consulting has thrived on in the past are being chewed up and spat out by the business schools, open source, and the platform and SaaS vendors. A casual survey of the market usually finds that SaaS-based solutions require 10% of the installation effort of a traditional on-premises solution. (Yes, that’s 90% less effort.) Less effort means less revenue for the consultancies. It also reduces the need for advisory services, as provisioning a SaaS solution with the corporate credit card should not require a $200,000 project to build a cost-benefit analysis. And gone are the days when you could simply read the latest magazines and articles from the business schools, spouting what you’d read back to a client. Many clients have been on the consulting side of the fence, have a similar education from the business schools, and read all the same articles.
“I know something you don’t!” no longer works. The world has moved on and the consulting industry needs to adapt. The knowledge and resources the industry has been selling are no longer scarce, and we need to sell something else. I see this as a huge opportunity; an opportunity to establish a more collaborative and productive relationship founded on shared, long term success. As Jeff Jarvis has said: stop selling scarcity, sell outcomes.
Updated: A good friend has pointed out that one area of consulting — one which we might call applied business consulting — resists the trend to be commoditized. This is the old school task of sitting with clients one-on-one, working to understand their enterprise and what makes it special, and then using this understanding to find the next area or opportunity that the enterprise is uniquely qualified to exploit. There are no junior consultants in this area, only old grey-beards who are too expensive to stay in their old jobs, but who are still highly useful to the industry. Unfortunately this model doesn’t scale, forcing most (if not all) consultancies into a more operational knowledge transfer role (think Six Sigma and LEAN) in an attempt to improve revenue and GOP.
The wisdom of the crowd seems to have decided that both cloud computing and its sibling SaaS are cost plays. You engage a cloud or SaaS vendor to reduce costs, as their software utility has the scale to deliver the same functionality at a lower price point than you could do yourself.
I think this misses some of the potential benefits that these new delivery models can provide, from reducing your management overhead, allowing you to focus on more important or pressing problems, through to acting as a large flex resource or providing you with a testbed for innovation. In an environment where we’re all racing to keep up, the time and space we can create through intelligently leveraging cloud and SaaS solutions could provide us with the competitive advantage we need.
Cloud and SaaS are going to take over the world, or so I hear. And it increasingly looks that way, from Nicholas Carr’s entertaining stories about Samuel Insull through to Salesforce.com, Google and Amazon’s attempts to box up SaaS and cloud for easy consumption. These companies’ massive economies of scale enable them to deliver commoditized functionality at a dramatically lower price point than most companies could achieve with even the best on-premises applications.
This simple fact causes many analysts to point out the folly of creating a private cloud. While a private cloud enables a company to avoid the security and ownership issues associated with a public service, it will never be able to realise the same economies of scale as its public brethren. It’s these economies of scale that enable companies like Google to devote significant time and effort to finding new and ever more creative techniques to extract every last drop of efficiency from their data centres, techniques which give them a competitive advantage.
I’ve always had problems with this point of view, as it ignores one important fact: a modern IT estate must deliver more than efficiency. Constant and dramatic business change means that our IT estate must be able to be rapidly reconfigured to support an ever evolving business environment. This might be as simple as scaling up and down, inline with changing transaction volumes, but it might also involve rewriting business rules and processes as the organisation enters and leaves countries with differing regulation regimes, as well as adapting to mergers, acquisitions and divestments.
Once we look beyond cost, a few interesting potential uses for cloud and SaaS emerge.
First, we can use cloud as a tool to increase the flexibility of our IT estate. Using a standard cloud platform, such as an Amazon Machine Image, provides us with more deployment options than more traditional approaches. Development and testing can be streamlined, compressing delivery time, while deployed applications can be migrated to the cloud instance which makes the most sense. We might choose to use public cloud for development and testing, while deploying to a private cloud under our own control to address privacy or political concerns. We might develop, test and deploy all into the public cloud. Or we might even use a hybrid strategy, retaining some business functionality in a private cloud, while using one or more public clouds as a flex resource to cope with peak loads.
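The hybrid flex-resource strategy mentioned above can be sketched in a few lines: keep the steady-state load on the private cloud, and burst only the overflow to a public provider. The capacity figure here is a hypothetical placeholder, not a recommendation.

```python
# Instances our (hypothetical) private cloud can host at steady state.
PRIVATE_CAPACITY = 10

def plan_deployment(required_instances: int) -> dict:
    """Fill the private cloud first; burst the remainder to a
    public cloud acting as a flex resource for peak load."""
    private = min(required_instances, PRIVATE_CAPACITY)
    public = required_instances - private
    return {"private": private, "public": public}
```

Under normal load everything stays in-house (`plan_deployment(4)` places nothing in the public cloud), while a seasonal peak spills the excess outward without any new capital investment.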
Second, we can use cloud and SaaS as tools to increase the agility of our IT estate. By externalising the management of our infrastructure (via cloud), or even the management of entire applications (via SaaS), we can create time and space to worry about more important problems. This enables us to focus on what needs to happen, rather than how to make it happen, and rely on the greater scale of our SaaS or cloud provider to respond more rapidly than we could if we were maintaining a traditional on-premises solution.
And finally, we can use cloud as the basis of an incubator strategy where an organisation may test a new idea using externalised resources, proving the business case before (potentially) moving to a more traditional internal deployment model.
We seem to be torn between two masters. On one hand we’re driven to renew our IT estate, consolidating solutions to deliver long term efficiency and cost savings. On the other hand, the business wants us to deliver new, end user functionality (new consumer kiosks, workforce automation and operational excellence solutions …) to support tactical needs. But how do we balance these conflicting demands, when our vertically integrated solutions tightly bind user interaction to the backend business systems and their multi-year life-cycle? We need to decouple the two, breaking the strong connection between business system and user interface. This will enable us to evolve them separately, delivering long term savings while meeting short term needs.
Business software’s proud history is the story of managing the things we know. From the first tabulation systems through enterprise applications to modern SaaS solutions, the majority of our efforts have been focused on data: capturing or manufacturing facts, and pumping them around the enterprise.
We’ve become so adept at delivering these IT assets into the business, that most companies’ IT estates are populated with an overabundance of solutions. Many good solutions, some not so good, and many redundant or overlapping. Gardening our IT estate has become a major preoccupation, as we work to simplify and streamline our collection of applications to deliver cost savings and operational improvements. These efforts are often significant undertakings, with numbers like “5 years” and “$50 million” not uncommon.
While we’ve become quite sophisticated at delivering modular business functionality (via methods such as SOA), our approach to supporting users is still dominated by a focus on isolated solutions. Most user interfaces are slapped on almost as an afterthought, providing stakeholders with a means to interact with the vast, data processing monsters we create. Tightly coupled to the business system (or systems) they are deployed with, these user interfaces are restricted to evolving at a similar pace.
Business has changed while we’ve been honing our application development skills. What used to take years, now takes months, if not weeks. What used to make sense now seems confusing. Business is often left waiting while we catch up, working to improve our IT estate to the point that we can support their demands for new consumer kiosks, solutions to support operational excellence, and so on.
What was one problem has now become two. We solved the first order challenge of managing the vast volumes of data an enterprise contains, only to unearth a second challenge: delivering the right information, at the right time, to users so that they can make the best possible decision. Tying user interaction to the back end business systems forces our solutions for these two problems to evolve at a similar pace. If we break this connection, we can evolve user interfaces at a more rapid pace. A pace more in line with business demand.
We’ve been chipping away at this second problem for quite a while. Our first green screen and client-server solutions were overtaken by portals, which promised to solve the problem of swivel-chair integration. However, portals seem to have been defeated by browser tabs. While portals allowed us to bring together the screens from a collection of applications, providing a productivity boost by reducing the number of interfaces a user interacted with, they didn’t break the user interface’s explicit dependency on the back end business systems.
We need to create a modular approach to composing new, task-focused user interfaces, doing for user interfaces what SOA has done for back-end business functionality. The view users see should be focused on supporting the decision they are making: data and functionality sourced from multiple back-end systems, broken into reusable modules and mashed together, creating an enterprise mash-up. A mash-up spanning multiple screens to fuse both data and process.
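The modular composition described above can be sketched very simply: each reusable module fronts one back-end system, and the task-focused view is just the composition of whichever modules the decision at hand requires. The back-end stubs, field names and customer data below are all hypothetical, standing in for calls to real CRM, supply chain and billing systems.

```python
def crm_module(customer_id: str) -> dict:
    # Stub: in practice this would call the CRM system's service interface.
    return {"name": "Acme Pty Ltd", "segment": "wholesale"}

def logistics_module(customer_id: str) -> dict:
    # Stub: ...and this the supply chain management system.
    return {"open_shipments": 3, "delayed": 1}

def billing_module(customer_id: str) -> dict:
    # Stub: ...and this the billing system.
    return {"outstanding": 1250.00}

def exception_manager_view(customer_id: str) -> dict:
    """Mash the modules together into the single, task-focused view a
    logistics exception manager needs, instead of swivel-chairing
    between three separate applications."""
    view: dict = {}
    for module in (crm_module, logistics_module, billing_module):
        view.update(module(customer_id))
    return view
```

Because each module only depends on its own back-end system, the composed view can be rearranged or extended at the pace the business demands, while the back-end systems evolve on their own multi-year life-cycle.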
Some users will find little need for an enterprise mash-up—typically users who spend the vast majority of their time working within a single application. Others, who work between applications, will see a dramatic benefit. These users typically include the knowledge rich workers who drive the majority of value in a modern enterprise. These users are the logistics exception managers, who can make the difference between a “best of breed” supply chain and a category leading one. They are the call centre operators, whose focus should be on solving the caller’s problem, and not worrying about which backend system might have the data they need. Or they could be field personnel (sales, repairs …), working between a range of systems as they engage with your customers or repair your infrastructure.
By reducing the number of ancillary decisions required, and thereby reducing the number of mistakes made, enterprise mash-ups make knowledge workers more effective. By reducing the need to manually synchronise applications, copying data between them, we make them more efficient.
But more importantly, enterprise mash-ups enable us to decouple development of user interfaces from the evolution of the backend systems. This enables us to evolve the two at different rates, delivering long term savings while meeting short term need, and mitigating one of the biggest risks confronting IT departments today: the risk of becoming irrelevant to the business.
Being involved in enterprise IT, we tend to think that the applications we build, install and maintain will provide a competitive advantage to the companies we work for.
Take Walmart, for example. During the early 80s, Walmart invested heavily in creating a data warehouse to help it analyze its end-to-end supply chain. The data was used to statically optimize Walmart’s supply chain, creating the most efficient, lowest cost supply chain in the world at the time. Half the savings were passed on to Walmart’s customers, half went directly to the bottom line, and the rest is history. The IT asset, the data warehouse, enabled Walmart to differentiate, while the investment and time required to develop the data warehouse created a barrier to competition. Unfortunately this approach doesn’t work anymore.
Fast forward to the recent past. The market for enterprise applications has grown tremendously since Walmart first brought that data warehouse online. Today, applications providing solutions to most business problems are available from a range of vendors, and at a fraction of the cost required for the first bespoke solutions that blazed the enterprise application trail. Walmart even replaced that original bespoke supply chain data warehouse, which had become something of an expensive albatross, with an off-the-rack solution. How is it possible for enterprise applications to provide a competitive advantage if we’re all buying from the same vendors?
One argument is that differentiation rests in how we use enterprise applications, rather than in the applications themselves. Think of the manufacturing industries (to use a popular analogy at the moment). If two companies have access to identical factories, then they can still make different, and differentiated, products. Now think of enterprise applications as business process factories. Instead of turning out products, we use these factories to turn out business processes. These digital process factories are very flexible. Even if we all start with the same basic functionality, if I’m smarter at configuring the factory, then I’ll get ahead over time and create a competitive advantage.
This analogy is so general that it’s hard to disagree with. Yes, enterprise applications are (mostly) commodities so any differentiation they might provide now rests in how you use them. However, this is not a simple question of configuration and customization. The problem is a bit more nuanced than that.
Many companies make the mistake of thinking that customizing (code changes etc.) their unique business processes into an application will provide them with a competitive advantage. Unfortunately the economics of the enterprise software market mean that they are more likely to have created an albatross for their enterprise than provided a competitive advantage.
Applications are typically parameterized bespoke solutions. (Many of the early enterprise applications were bespoke COBOL solutions where some of the static information—from company name through shop floor configuration—has been pushed into databases as configuration parameters.) The more configuration parameters provided by the vendor, the more you can bend the application to a shape that suits you.
Each of these configuration parameters requires an investment of time and effort to develop and maintain. They complicate the solution, pushing up its maintenance cost. This leads vendors to try and minimize the number of configuration points they provide to a set that will meet most, but not all, customers’ needs. In practical terms, it is not possible to configure an application to let you differentiate in a meaningful way. The configuration space is simply too small.
Some companies resort to customizing the application—changing its code—to get their “IP” in. While this might give you a solution reflecting how your business runs today, every customization takes you further from a packaged solution (low cost, easy to maintain, relatively straightforward to upgrade …) and closer to a bespoke solution (high cost, expensive to maintain, difficult or impossible to upgrade). I’ve worked with a number of companies where an application is so heavily customized that it is impossible to deploy vendor patches and/or upgrades. The application that was supposed to help them differentiate had become an expensive burden.
Any advantage to be wrung from enterprise IT now comes from the gaps between applications, not from the applications themselves. Take supply chain for example. Most large businesses have deployed planning and supply chain management solutions, and have been on either the LEAN or Six Sigma journey. Configuring your planning solution slightly differently to your competitors is not going to provide much of an edge, as we’re all using the same algorithms, data models and planning drivers to operate our planning process.
Most of the potential for differentiation now lies in the messier parts of the process, such as exception management (the people who deal with stock-outs and lost or delayed shipments). If I can bring together a work environment that makes my exception managers more productive than yours—responding more rapidly and accurately to exceptions—then I’ve created a competitive advantage, as my supply chain is now more agile than yours. If I can capture what my exception managers do—their non-linear and creative problem-solving process—automate it, and use this to create time and space for them to continuously improve how supply chain disruptions are handled, then I’ve created a sustainable competitive advantage. (This is why Enterprise 2.0 is so exciting: much of the IP in this space is tacit knowledge or collaboration.)
Simply configuring an application with today’s best practice—how your company currently does stuff—doesn’t cut it. You need to understand the synergies between your business and the technologies available, and find ways to exploit them. The trick is to understand the 5% that really makes your company different, and then reconfigure both the business and the technology to amplify this advantage while commoditizing the other 95%. Rolls-Royce (appears to be) a great example of getting this right. Starting life as a manufacturer of aircraft engines, Rolls-Royce has leveraged its deep understanding of how aircraft engines work (from design through operation and maintenance), reifying this knowledge in a business and IT estate that can provide clients with a service to keep their aircraft moving.
We’re getting it all wrong—we focus on managing the technology delivery process rather than the technology itself. Where do business process outsourcing (BPO), software as a service (SaaS), Web 2.0 and partner organisations sit in our IT strategy? All too often we concentrate on the delivery of large IT assets into our enterprise, missing the opportunity to leverage leaner, disruptive solutions that could provide a significantly better outcome for the business.
IT departments are, by tradition, inward-looking asset management functions. Initially this was a response to the huge investment and effort required to operate early mainframe computers; more recently it has been driven by the effort required to develop and maintain increasingly complex enterprise applications. We’ve organised our IT departments around the activities we see as key to being a successful asset manager: business analysis, software development & integration, infrastructure & facilities, and project or programme management. The result is a generation of IT departments closely aligned with the enterprise application development value chain, focused on managing the delivery of large IT assets into the enterprise.
Building our IT departments as enterprise application factories has been very successful, but the maturation of applications over the last decade and the recent emergence of approaches like SaaS mean that it has distinct limitations today. An IT department that defines itself in terms of managing the delivery of large technology assets tends to see a large technology asset as the solution to every problem. Want to support a new pricing strategy? Need to improve cross-sell and up-sell? Looking for ways to support the sales force in the field? Upgrade to the latest and greatest CRM solution from your vendor of choice. The investment required is grossly out of proportion to the business benefit it will bring, making it difficult to engage with the rest of the business, which views IT as a cost centre rather than an enabler.
Unfortunately the structure of many of our IT departments—optimised to create large IT assets—actively prohibits any other approach. More incremental or organic approaches to meeting business needs are stopped before they even get started, killed by an organisation structure and processes that impose more overhead than they can tolerate.
Applications were rare and expensive during most of enterprise IT’s history, but today they are plentiful and (comparatively) cheap. Software as a Service (SaaS) is also emerging to provide best-of-breed functionality with a utility delivery model: leveraging an externally managed service and paying per use, rather than requiring capital investment in an IT asset to provide the service internally. Our focus is increasingly turning to ensuring that business processes and activities are supported with an appropriate level of technology, leveraging solutions from traditional enterprise applications through to SaaS, outsourced solutions or even bespoke elements where we see fit. We need to focus on managing technology enablement, rather than IT assets, and many IT departments are responding by reorganising their operations to explore new strategies for managing IT.
Central to this new generation of IT departments is a sound understanding of how the business needs to operate—what it wants to be famous for. The old technology-centric departmental roles are being deprecated, replaced with business-centric roles. One strategy is to focus on Operational Excellence, Technology Enablement and Contract Management. A number of Chief Process Officer (CPO) roles are created as part of the Operational Excellence team, each focusing on optimising one or more end-to-end processes. The role is defined and measured by the business outcomes it will deliver rather than by the technology delivery process. CPOs are also integrating themselves with organisation-wide business improvement and operational excellence initiatives, taking a proactive stance with the business instead of reactively waiting for the business to identify a need.
The Technology Enablement team works with Operational Excellence to deliver the right level of technology required to support the business. Where Operational Excellence looks out into the business to gain a better understanding of how the business functions, Technology Enablement looks out into the technology community to understand which technologies and approaches can be leveraged to create the most suitable solution (as opposed to the traditional, inward-focused IT department concerned with developing and managing IT assets). These solutions can range from SaaS through to BPO, AM (application management), custom development or traditional on-premises applications. The mix of solutions used will change over time, however, as we move from today’s application-centric enterprise IT to new process-driven approaches. Solutions today are dominated by enterprise applications (most likely via BPO or AM), but will increasingly shift to utility models such as SaaS as these offerings mature.
Finally, the Contract Management team is responsible for managing the contractual & financial obligations, and the service level agreements, between the organisation and its suppliers.
One pronounced effect of a strongly business-focused IT organisation is the externalisation of many asset management activities. Rather than trying to be good at everything needed to deliver a world-class IT estate, and ending up being good at nothing, the department focuses its energies on only those activities that will have the greatest impact on the business. Other activities are supported by a broad partner ecosystem: systems integrators to install applications, outsourcers for application management and business process outsourcing, and so on. Rather than ramping up for a once-in-four-years application renewal—an infrequent task for which the department has trouble retaining expertise—the partner ecosystem ensures that the IT department has access to organisations whose core focus is installing and running applications, and who have been solving this problem every year for the last four.
This approach allows the IT department to concentrate on what really matters for the business to succeed. Its focus and expertise are firmly on the activities that will have the greatest impact on the business, while a broad partner ecosystem provides world-class support for the activities in which it cannot afford to develop world-class expertise. Rather than representing a cost centre, the IT department can be seen as an enabler, working with the rest of the business to leverage new ideas and capabilities and drive the enterprise forward.