Tag Archives: Information technology management

The rules of enterprise IT

As I’ve pointed out before (possibly as I’m quite fond of games{{1}}) the game of enterprise IT has a long and proud history. I’ve also pointed out that the rules of this game need to change if enterprise IT — as we know it — is to remain relevant in the future{{2}}. This has triggered a few interesting conversations at the pub on just what the old rules of IT are.

[[1]]Capitalise: A game for the whole company to play![[1]]
[[2]]People don’t like change. (Or do they?)[[2]]

Enterprise IT, as we know it today, is an asset management business, the bastard son of Henry Ford’s moving production line. Enterprise IT takes the raw material of business processes and technology and turns them into automated solutions. From those first card tabulators through to today’s enterprise applications, the focus has been on delivering large IT solutions into the business.

The rules of enterprise IT are therefore the rules of business operations. After a fair amount of coffee and beer with friends, the following 4 ± 2 rules seem to be a fair minimum set (in no particular order).

Keep the lights on. Or, put more gently, the ticket to the strategy table is a smooth running business. Business has become totally reliant on IT, while at the same time IT is still seen as something of a black art run by a collection of unapproachable high priests. The board might complain about the cost and pain of an ERP upgrade, but they know they have to find the money if they want to successfully close the books at the end of the financial year. While this means that the money will usually be found, it also means that the number one rule of being a CIO is to keep the transactions flowing. Orders must be taken, products shipped (or services provided), invoices sent and cash collected. IT is an operational essential, and any CIO who can’t be trusted to keep the lights on won’t even have time to warm up their seat.

Save money. IT started as a cost saving exercise: automatic tabulation machines to replace rooms full of people shuffling papers, networks to eliminate the need to truck paper from one place to another. From those first few systems through to today’s modern enterprise solutions, applications have been seen as a tool to save time and money. Understand what the business process or problem is, and then support the heavy information lifting with technology to drive cost savings and reduce cycle time. Business cases are driven by ideas like ROI, capturing these savings over time. Keep pushing the bottom line down. These incremental savings can add up to significant changes, such as Dell’s make-to-order solution{{3}} which enabled the company to operate with negative working capital (i.e. they took your cash before they needed to pay their suppliers), but the overall approach is still based on using IT to drive cost savings through the automation of predefined business processes.

[[3]]Dell’s make to order solution leaves competitors in the dust.[[3]]

Build what you need. When applications are rare, building them is an engineering challenge. You can’t just go to the store and buy the parts you need; you have to create a lot of the parts yourself in your own machine shop. I remember the large teams (compared to today) from the start of my career. A CORBA project didn’t just need a team to implement the business logic, it needed a large infrastructure team (security guy, transaction guy …) as well. Many organisations (and their strong desire to build – or at least heavily customise – solutions) still work under this assumption. IT was the department that marshalled large engineering teams to deliver the industrial-grade solutions that form the backbone of a business.

Ferrero Rocher
Crunchy on the outside, soft and chewy in the middle.

Keep the outside outside. It’s common to have what is called a Ferrero Rocher{{4}} approach to IT: crunchy on the outside while soft and chewy in the middle. This applies to both security and data management. We visualise a strong distinction between inside and outside the enterprise. Inside we have our data, processes and people. Outside is everyone else (including our customers and partners). We harvest data from our operations and inject it into business intelligence solutions to create insight (and drive operational savings). We trust whatever’s inside our four walls, while deploying significant security measures to keep the evil outside.

[[4]]Ferrero[[4]]

It’s a separate question whether or not these rules are still relevant in an age when business cycles are measured in weeks rather than years, and SaaS and cloud computing are emerging as the dominant modes of software delivery.

BPM is not a programming challenge

Get a few beers into a group of developers these days and it’s not uncommon for the complaints to start flowing about BPM (Business Process Management). BPM, they usually conclude, is more pain than it’s worth. I don’t think that BPM is a bad technology, per se, but it does appear to be the wrong tool for the job. The root of the problem is that BPM is a handy tool for programming distributed systems, but the challenge of creating distributed systems is orthogonal to business process execution and management. We’re using a screwdriver to belt in a nail. It’s more productive to think of business process execution and management as a (realtime) planning problem.

Programming is the automation of the known. Take a stable, repeatable process and automate it; bake the process into silicon to make it go fast. This is the same tactic that I was using back in my image processing days (and that was a long time ago). We’d develop the algorithms in C, experiment and tweak until they were right, and once they were stable we’d burn them into an ASIC (Application-Specific Integrated Circuit) to provide a speed boost. The ASICs were a lot faster than the C version: more than an order of magnitude faster.

Programmers, and IT folk in general, have a habit of treating the problems we confront as programming challenges. This has been outstandingly successful to date; just try to find a home appliance or service that doesn’t have a program buried in it somewhere. (It’s not an unmitigated success though: our tumble dryer is driving us nuts with its overly frequent software errors.) It’s not surprising that we chose to treat business process automation and management as a programming problem once it appeared on our radar.

Don’t get me wrong: BPM is a solid technology. A friend of mine once showed me how he’d used his BPM stack to test its BPEL engine. Aside from being a nice example of eating your own dog food, it was a great example of using BPEL as a distributed programming tool to solve a small but complex problem.

So why do we see so many developers complaining about BPM? It’s not the technology itself: the technology works. The issue is that we’re using it to solve problems that it’s not suited for. The most obvious evidence of this is the current poor state of BPM support for business exception management. We’ve deployed a lot of technology to support exception management in business processes without really solving the problem.

Managing business exceptions is driving the developers nuts. I know of one example where managing a couple of not infrequent business exceptions was the major technical problem in a very significant project (well into eight figures). The problem is that business exceptions are not from the same family of beasts as programming exceptions. Programming exceptions are exceptional. Business exceptions are just a (slightly) different way to achieve the same goal. All our compensating actions and exception stacks just get in the way of solving the problem.
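To make the contrast concrete, here’s a minimal Python sketch (all names and rules hypothetical, not any particular BPM product’s API). The first style treats the variation as a failure to compensate for; the second treats it as one of several routes to the same goal:

```python
# Hypothetical names throughout; a sketch of the contrast, not a real API.

class CourierUnavailable(Exception):
    """A programming exception: something broke, so unwind and compensate."""

def ship_via_courier(order):
    raise CourierUnavailable()  # the courier has no capacity today

def cancel_courier_booking(order):
    print("compensating: unpicking the failed courier booking")

order = {"id": 42, "urgent": False}

# Style 1: the variation is a failure, handled with compensating actions.
try:
    ship_via_courier(order)
except CourierUnavailable:
    cancel_courier_booking(order)  # compensation logic piles up here

# Style 2: the variation is just another route to the same goal.
routes = [
    ("ship via courier", lambda o: False),         # not applicable today
    ("ship via post", lambda o: not o["urgent"]),
    ("hold for customer pickup", lambda o: True),  # always applicable
]
chosen = next(name for name, applicable in routes if applicable(order))
print("taking route:", chosen)
```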

On PowerPoint, anything can look achievable. The BPMN diagram we shared with the business was extremely elegant: nice sharp angles and coloured bubbles. Everyone agreed that it was a good representation of what the business does. The devil is in the details though. The development team quickly becomes frustrated as they have to deal with the realities of implementing a dynamic and exception-rich business process. Exceptions pile up on top of exceptions, and soon that BPMN diagram covers a wall, littered with branch and join operations. It’s not a complex process, but we’ve made it incredibly complicated.

Edward Tufte's take on explaining complex concepts with PowerPoint
A military parade explained, a la PowerPoint

We can’t program our way out of this box by trying to pile on more features and patches. We can rip the complications out – simplifying the process to the point that it becomes tractable with our programming tools (which is what happened in my example above) – but this removes all the variation which makes the process so valuable. (This, of course, is the dirty secret of LEAN et al.: you’re trading flexibility for cost savings, making your processes very efficient but also very fragile.)

Or we can try solving the problem a different way.

Don’t treat the automation of a business process as a programming task (and by this I mean the capture of imperative instructions for a computer to execute, no matter how unstructured or parallel). Programming is the automation of the known. Business processes, however, are the management and anticipation of the unknown. Modelling business processes should be seen as a (realtime) planning problem.

Which comes back to one of my common themes: push vs pull models, or the importance of what over how. Or, as a friend of mine with a better turn of phrase puts it, we need to stop trying to invent new technologies and work out how to use what we already have more effectively. Rather than trying to invent new technologies to solve problems that are already well understood elsewhere, pushing the technology into the problem, a more pragmatic approach is to leverage that existing understanding and then pull in existing technologies as appropriate.

Planning and executing in a rapidly changing environment is a well understood problem. Just ask anyone who’s been involved with the military. If we view the management of a business process as a realtime planning problem, then what were business exceptions are reduced to simply alternate routes to the same goal, rather than problems which require compensating actions.

Battle of Gaugamela (Arbela) (331BC)
Take that hill!

One key principle is to establish a clear goal – Take that hill!, or Find that lost shipment! – articulate the tactics, the courses of action we might use to achieve that goal, and then defer decisions on which course of action to take until the decision needs to be made. If we commit to a course of action too early, locking in a decision during design time, then it’s likely that we’ll be forced to manage the exception when we realise that we picked the wrong course of action. It’s better to wait until the moment when all relevant information and options are available to us, and then take decisive action.

From a modelling point of view, we need to establish the key events at which decisions must be made in line with a larger strategy. The decision at each of these events needs to weigh the available courses of action and select the most appropriate, much like using a set of business rules to identify applicable options. Each course of action – a scenario or business process fragment – will be semi-independent from the others in the applicable set, as it addresses a different business context. Nor can the scenario we pick be predetermined, as it depends on the business context. Short and sharp, each scenario will be simple, general and flexible, enabling us to configure it for the specific circumstances at hand, as we can’t anticipate all possible scenarios. And finally, we need to ensure that the scenarios we provide cover the situations we can anticipate, including the provision of a manual escape hatch.
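As a sketch of what this might look like in code, here is a toy goal-directed engine in Python (names and scenarios invented for illustration), in the spirit of course-of-action selection rather than any existing BPM product:

```python
# A toy goal-directed process engine; hypothetical names, a sketch only.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Scenario:
    name: str
    applicable: Callable[[dict], bool]  # the business rules for this option
    execute: Callable[[dict], None]     # a short, sharp process fragment

@dataclass
class Goal:
    name: str        # e.g. "Find that lost shipment!"
    scenarios: list  # candidate courses of action, escape hatch last

def on_decision_event(goal: Goal, context: dict) -> None:
    """Weigh the courses of action now, with all current information."""
    for scenario in goal.scenarios:
        if scenario.applicable(context):
            scenario.execute(context)
            return

goal = Goal("find lost shipment", scenarios=[
    Scenario("query the carrier",
             applicable=lambda c: c["carrier_online"],
             execute=lambda c: print("asking the carrier's systems")),
    Scenario("check the last known depot",
             applicable=lambda c: c["last_depot"] is not None,
             execute=lambda c: print("calling depot:", c["last_depot"])),
    Scenario("escalate to a human",  # the manual escape hatch
             applicable=lambda c: True,
             execute=lambda c: print("raising a task for the ops team")),
])

# The choice is made when the event fires, not at design time.
on_decision_event(goal, {"carrier_online": False, "last_depot": "Geelong"})
```

The point is the shape: decisions live at the events, each scenario stays small and general, and the escape hatch guarantees coverage when no automated course of action applies.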

Goals, rules and process: in that order, integrated rather than as standalone engines. Pull these established technologies into a single platform and we might just be closer to a BPM solution in line with what we really need. (And we know there is nothing new under the sun, as this essentially builds on Jim Sinur’s rules-and-process argument, and borrows a lot from STRIPS, PRS, dMARS and even the work I did at Agentis.)

As I mentioned at the start of this missive, BPM as a product category makes sense, and the current implementations are capable distributed programming tools. The problem is that business process management is not a distributed programming challenge. Business exceptions are not exceptional. I say steal a page from the military strategy book – they, after all, have been successfully working on this problem for some time – and build our solutions around the ideas the military use to succeed in a rapidly changing environment. Goals, rules and processes. The trick is to be pragmatic, rather than dogmatic, in our implementation, and to focus on solving the problem rather than trying to create a new technology.

Decisions are more important than data

Names and categories are important. Just look at the challenges faced by the archaeology community as DNA evidence forces history to be rewritten when it breaks old understandings, changing how we think and feel in the process. Just who invaded whom? Or was related to whom?

We have the same problem with (enterprise) technology; how we think about the building blocks of the IT estate has a strong influence on how we approach the problems we need to solve. Unfortunately our current taxonomy has a very functional basis, rooted as it is in the original challenge of creating the major IT assets we have today. This is a problem, as it’s preventing us from taking full advantage of the technologies available to us. If we want to move forward, creating solutions that will thrive in a post-GFC world, then we need to think about enterprise IT in a different way.

Enterprise applications – the applications we often know and love (or hate) – fall into a few distinct types. A taxonomy, if you will. This taxonomy has a very functional basis, founded as it is on the challenge of delivering high-performance and stable solutions into difficult operational environments. Categories tend to be focused on the technical role a group of assets has in the overall IT estate. We might quibble over the precise number of categories and their makeup, but for the purposes of this argument I’m going to go with three distinct categories (plus another one).

SABER
SABER @ American Airlines

First, there are the applications responsible for data storage and coherence: the electronic filing cabinets that replaced rooms full of clerks and accountants back in the day. From the first computerised general ledger through to CRM, their business case is a simple one of automating paper shuffling: put the data in one place and make access quick and easy, like SABER did, which I’ve mentioned before.

Next are the data transformation tools: applications which take a bunch of inputs and generate an answer. This might be a plan (production plan, staffing roster, transport plan or supply chain movements …) or a figure (price, tax, overnight interest calculation). State might be stored somewhere else, but these solutions still need some serious computing power to cope with huge bursts in demand.

Third is data presentation: taking corporate information and presenting it in some form that humans can consume (though looking at my latest phone bill, there’s been no attempt to make the data easy to consume). This might be billing or invoicing engines, application-specific GUIs, or even portals.

We can also typically add one more category – data integration – though this is mainly the domain of data warehouses: solutions that pull together data from multiple sources to create a summary view. This category wouldn’t exist but for the fact that our operational data management solutions can’t cope with an additional reporting load. This is also the category for all those XLS spreadsheets that spread through the business like a virus, as high integration costs or more important projects prevent us from supporting user requests.

A long time ago we’d bake all these layers into the one solution. SABER, I’m sure, did a bit of everything, though its main focus was data management. Client-server changed things a bit by separating the user interface from back-end data management, and then portals took this a step further. Planning tools (and other data transformation tools) started as modules in larger applications, eventually popping out as standalone solutions when they grew large enough (and complex enough) to justify their own delivery effort. Now we have separate solutions in each of these categories, and a major integration problem.

This categorisation creates a number of problems for me. First and foremost is the disconnection between what business has become, and what technology is trying to be. Back in the day when “computer” referred to someone sitting at a desk computing ballistics tables, we organised data processing in much the same way that Henry Ford organised his production line. Our current approach to technology is simply the latest step in the automation of this production line.

Computers in the past

Quite a bit has changed since then. We’ve reconfigured our businesses, we’re reconfiguring our IT departments, and we need to reconfigure our approach to IT. Business today is really a network of actors who collaborate to make decisions, with most (if not all) of the heavy data lifting done by technology. Retail chains are trying to reduce the transaction load on their teams working the tills so that they can focus on customer relationships. The focus in supply chains is on ensuring that your network of exception managers can work together to effectively manage disruptions in the supply chain. Even head office is focused on understanding and responding to market changes, rather than trying to optimise the business for an unchanging market.

The moving parts of business have changed. Henry Ford focused on mass: the challenge of scaling manufacturing processes to get costs down. We’ve moved well beyond mass, through velocity, to focus on agility. A modern business is a collection of actors collaborating and making decisions, not a set of statically defined processes backed by technology assets. Trying to force modern business practices into yesterday’s IT taxonomy is the source of one of the disconnects between business and IT that we complain so much about.

There’s no finer example of this than Sales and Operations Planning (S&OP). What should be a collaborative and fluid process – forward planning among a network of stakeholders – has been shoehorned into a traditional n-tier, database driven, enterprise solution. While an S&OP solution can provide significant cost savings, many companies find it too hard to fit themselves into the solution. It’s not surprising that S&OP has a reputation for being difficult to deploy and use, with many planners preferring to work around the system rather than with it.

I’ve been toying with a new taxonomy for a little while now, one that tries to reflect the decision, actor and collaboration centric nature of modern business. Rather than fit the people to the factory, which was the approach during the industrial revolution, the idea is to fit the factory to the people, which is the approach we use today post LEAN and flexible manufacturing. While it’s a work in progress, it still provides a good starting point for discussions on how we might use technology to support business in the new normal.

In no particular order…

Fusion solutions blend data and process to create a clear and coherent environment to support specific roles and decisions. The idea is to provide the right data and process, at the right time, in a format that is easy to consume and use, to drive the best possible decisions. This might involve blending internal data with externally sourced data (potentially scraped from a competitor’s web site); whatever data is required. Providing a clear and consistent knowledge-work environment, rather than the siloed and portaled environment we have today, will improve productivity (more time on work that matters, less time on busy work) and efficiency (fewer mistakes).

Next, decisioning solutions automate key decisions in the enterprise. These decisions might range from mortgage approvals through office work, such as logistics exception management, to supporting knowledge workers in the field. We also need to acknowledge that decisions are often decision-making processes which require logic (rules) applied over a number of discrete steps (processes). This should not be seen as replacing knowledge workers; a more productive approach is to view decision automation as a way of amplifying our users’ talents.
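A trivial Python sketch of the idea (thresholds and names invented for illustration): rules applied over discrete steps, with the knowledge worker kept in the loop for the cases that matter:

```python
# A toy decision process: rules applied over discrete steps (hypothetical
# thresholds), amplifying rather than replacing the knowledge worker.

def credit_check(app):
    return "refer" if app["credit_score"] < 600 else "pass"

def serviceability_check(app):
    return "refer" if app["repayment"] > 0.35 * app["income"] else "pass"

def approve_mortgage(application):
    for step in (credit_check, serviceability_check):  # the process
        if step(application) == "refer":               # the rules
            return "refer to a human underwriter"      # amplify, don't replace
    return "auto-approve"

print(approve_mortgage({"credit_score": 720, "income": 9000, "repayment": 2800}))
```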

While we have a lot of information, some of it we will need to manufacture ourselves. These manufacturing solutions might range from simple charts generated from tabular data, through to logistics plans, maintenance schedules, or even payroll.

Information and process access provides stakeholders (both people and organisations) with access to our corporate services. This is not your traditional portal or web-based GUI, as the focus will be on providing stakeholders with access wherever and whenever they need it, on whatever device they happen to be using. This might mean embedding your content into a Facebook app, rather than investing in a strategic portal infrastructure project. Or it might involve developing a payment gateway.

Finally we have asset management, responsible for managing your data as a corporate asset. This looks beyond the traditional storage and consistency requirements of existing enterprise applications to include the political dimension, accessibility (I can get at my data whenever and wherever I want) and stability (earthquakes, disaster recovery and the like).

It’s interesting to consider the sort of strategy a company might use around each of these categories. Manufacturing solutions – such as crew scheduling – are very transactional. Old data out, new data in. This makes them easily outsourced, or run as a bureau service. Asset management solutions map very well to SaaS: commoditized, simple and cost effective. Access solutions are similar to asset management.

Fusion and decisioning solutions are interesting. The complete solution is difficult to outsource. For many fusion solutions, the data and process set presented to knowledge workers will be unique and will change frequently, while decisioning solutions contain decisions which can represent our competitive advantage. On the other hand, it’s the intellectual content in these solutions, and not the platform, which makes them special. We could sell our platform to our competitors, or even use a commonly available SaaS platform, and still retain our competitive advantage, as the advantage is in the content, while our barrier to competition is the effort required to recreate the content.

This set of categories seems to map better to where we’re going with enterprise IT at the moment. Consider the S&OP solution I mentioned before. Rather than construct a large, traditional, data-centric enterprise application and change our work practices to suit, we break the problem into a number of mid-sized components and focus on driving the right decisions: fusion, decisioning, manufacturing, access, and asset management. Our solution strategy becomes more nuanced, as our goal is to blend components from each category to provide planners with the right information at the right time, enabling them to make the best possible decision.

After all, when the focus is on business agility, and when we’re drowning in a sea of information, decisions are more important than data.

Having too much SOA is a bad thing (and what we might do about it)

SOA enablement projects (like a lot of IT projects) have a bad name. An initiative that starts as a good idea to create a bit more flexibility in the IT estate often seems to end up mired in its own complexity. The problem is usually too much flexibility, as flexibility creates complexity, and complexity exponentially increases the effort required to manage and deliver the software. Without any solid guidance on how much flexibility to create (and where to create it), most SOA initiatives simply keep creating flexibility until either the project collapses under its own weight, or the projected development work to create all the services exceeds the available CAPEX budget. A little flexibility is good, but too much is bad. How can we scope the flexibility, pointing it where it’s most needed while preventing it from becoming a burden?

The challenge with SOA enablement is in determining how much flexibility to build into the IT estate. Some flexibility is good – especially if it’s focused on where the business needs it the most – but too much flexibility is simply another unnecessary cost. The last decade or so is littered with stories of companies whose SOA initiatives were either brought to an early close or canned as they had consumed all the cash the business was prepared to invest in a major infrastructure project. Finance and telecoms seem particularly prone to creating these gold-plated SOA initiatives. (How many shelfware SDFs – service delivery frameworks – do you know of?)

The problem seems to be a lack of guidance on how much flexibility to build, or where to put it. We sold the business on the idea that a flexible, service-oriented IT estate would be better than the evil monolithic applications of old, but the details of just how flexible the new estate would be were a little fuzzy. Surely these details can be sorted out in service discovery? And governance should keep service discovery on track! We set ourselves up by over-promising and under-delivering.

Mario Batali: Too much is never enough!
Mario Batali

This much was clear: the business wanted agility, and agility requires flexibility. As flexibility comes from having more moving parts (services), we figured that creating more moving parts would create more agility. Service discovery rapidly became a process of identifying every bit of (reusable) functionality that we could pack into a service. More is better, or, as the man with the loud shoes says:

Too much is never enough!
Mario Batali

The problem with this approach is that it confuses flexibility and agility. It’s possible to be very flexible without being agile, and vice versa. Think of a Formula One car: they’re fast and they’re agile (which is why driving them tends to be a young man’s game), yet they’re very stiff. Agility comes from keeping the weight down and being prepared to act quickly. This means keeping things simple, ensuring that we have the minimum set of moving parts required. They might have an eye for detail, such as nitrogen in the tyres, but unnecessary moving parts that might reduce reliability or performance are eliminated.

This gold-plated approach to SOA creates a lot of unrequired flexibility; the additional flexibility increases complexity, and the complexity becomes the boat anchor that slows you down and stops you from being agile. Turning the car is no longer a simple matter of tugging on the steering wheel, as we need governance to stop us from pulling the wrong lever in the bank of 500 identical levers in front of us.

It's really that simple!

We’ve made everything too complicated. Mario was wrong: too much is too much.

What we need is some guidance – a way of scoping and directing the flexibility we’re going to create. Governance isn’t enough, as governance is focused on stopping bad things from happening. We have a scoping problem. Our challenge is to understand what flexibility will be required in the future, and agreeing on the best way to support it.

To date I’ve been using a very fuzzy “business interest” metric for this, where services are decomposed until the business is no longer interested. The rationale is that we put the flexibility only where the business thinks it needs to focus. This approach works fairly well, but it relies too much on the tacit judgement of a few skilled business analysts and architects, making it too opaque and hard to understand for people not involved in the decision-making process. It’s also hard to scale. We need something more deterministic and repeatable.

Which brings me to a friend’s MBA thesis, which he passed to me the other week. It’s an interesting approach to building business cases for IT solutions, one based on real options.

The problem with the usual approaches to building a business case, using tools like net present value (NPV) and discounted cash flow, is that they assume the world doesn’t change after the decision to build the solution (or not). They don’t factor in the need to change a solution once it’s in the field, or even during development.

The world doesn’t work this way: the solution you approved in yesterday’s business environment will be deployed into a radically different business environment tomorrow. This makes it hard to justify the additional investment required for a more flexible SOA based solution, when compared to a conventional monolithic solution. The business case doesn’t include flexibility as a factor, so more flexible (and therefore complex and expensive) solutions lose to the cheaper, monolithic approach.

Real options address this by pushing you down a scenario-planning-based approach. You estimate the future events that you want to guard against, and their probabilities, creating a set of possible futures. Each event presents you with options to take action. The action, for example, might be to change, update or replace components in the solution to bring them in line with evolving business realities. The options are – in effect – flex points that we might design into our solution’s SOA. The real options methodology enables us to ascribe costs to these future events and then create a decision tree that captures the benefits of investing in specific flex points, all in a clear and easily understandable chain of reasoning.

The decision tree and options provide us with a way to map out where to place flex points in the SOA solution. They also provide us with strong guidance on how much flexibility to introduce, which is the part I found really interesting about the approach. It also provides us with a nice framework to govern the evolution of the SOA solution, as changes are (generally) only made when an option is taken: when its business case is triggered.
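To illustrate the mechanics, here is a toy calculation in Python (all figures invented): the option to swap a component later is only worth designing in when its expected saving exceeds its upfront premium:

```python
# A toy real-options comparison (hypothetical figures, in $M): a cheap rigid
# build versus a dearer build with a designed-in flex point.

p_change = 0.6          # probability the business rules change post go-live
rigid_build = 1.0       # build cost of the rigid solution
flexible_build = 1.3    # build cost including the flex point premium
rework_rigid = 0.9      # cost of reworking the rigid solution if change hits
exercise_option = 0.1   # cost of exercising the option (swap the component)

expected_rigid = rigid_build + p_change * rework_rigid          # 1.54
expected_flexible = flexible_build + p_change * exercise_option  # 1.36

print(f"rigid:    {expected_rigid:.2f}M expected")
print(f"flexible: {expected_flexible:.2f}M expected")
# Flex points that fail this expected-value test are gold plating.
```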

It’s a bit like those Formula One cars. A friend of mine used to work for one F1 manufacturer designing and testing camshafts. These camshafts had to fall within a 100,000-revolution lifetime window: an over-engineered camshaft was unnecessary weight, while an under-engineered one meant that you wouldn’t win (or possibly even finish) the race. Work it out: 100,000 revolutions is a tiny window for an F1 car, given the length of a race.

An approach like real options helps us ensure that we only have the flexibility required in the solution, and that it is exactly where it is required. Not too much, and not too little. Just enough to help us win the race.

We’re making our lives too complicated

Has SOA (Service Oriented Architecture) finally jumped the shark? After years of hype and failed promises, SOA seems to be in trouble. In a few short months it’s gone from IT’s great saviour to something some people think is better forgotten.

The great promise of SOA was to deliver an IT estate which is more agile and cost effective than was possible with other, more conventional, approaches. By breaking our large problems into a set of much smaller ones, we would see more agility and a lower total cost of ownership. The agility would come from the more flexible architecture provided by SOA’s many moving parts, while the lower cost of ownership would come from the reuse of many of those parts. Many companies bought into this promise, and started major SOA transformation programs to “SOA enable” their business. Once the program of work was delivered they would have a shiny new, flexible and cost-effective IT estate. The business would be thrilled, and the old tensions between business and IT would just melt away. More often than not, the business wasn’t thrilled, as the program failed to deliver the promised benefits.

The problem, it seems, is that we’re focused on creating cathedrals of technology. Cathedrals were the result of large bespoke development efforts. The plans often consisted of only a rough sketch on a scrap of paper, before a large number of skilled craftsmen were engaged. The craftsmen broke the problem into many small parts that were then laboriously assembled into the final structure, often adjusting the parts to fit in the process. While this process created a number of spectacular buildings, the journey from initial conception to completed build was long and challenging.

The lack of engineering pragmatism frequently resulted in cathedrals collapsing before they were finished, often multiple times. The reason we knew a flying buttress worked was that it hadn’t failed yet. People died when a structure collapsed, and there was no way of telling if the latest version of the structure was about to collapse. The lengthy development process often lasted generations, passing through the stewardship of multiple architects with no clear end in sight. Many cathedrals, such as the one in New York, are still considered unfinished.

A lot of SOA projects give off a strong smell of cathedral. They are constantly re-architected—while still in development—to cope with the latest business exception or demand. When they’re introduced to the hard reality of supporting real solutions, bits of them collapse and need to be rebuilt to support our new (and improved) understanding of what will be demanded of them. And, finally, many of them are declared “finished” even though they were never fully baked, just so we can close that chapter in our company’s history and move on to the next problem.

Modern approaches to building construction take a different approach. A high-level plan is created to capture the overall design for the building. The design is then broken into a small number of components, with the intention of using bespoke craftsmanship for the fine details that will make the building different, while leveraging large, commoditized, pre-fabricated components for the supporting structures that form the majority of the building. Construction follows a clear timetable, with each component—from the largest pre-fabricated panel through to the smallest detail—integrated into the end-to-end solution as it is delivered. Complexity and detail are added only where needed, with cost-effective commoditized approaches minimizing complexity elsewhere. A clear focus on the end goal is maintained throughout the effort, while clear work practices focused on delivering to the deadline ensure that the process is carried out with a minimum of fuss (and no loss of life).

The problem, it seems, is that we’re confusing agility with flexibility. The business is asking for agility; the world is changing faster than ever and the business needs to be in a position to react to these changes. Agility, or so our thinking goes, requires flexibility, so to provide a lot of agility we need to provide a lot of flexibility. Very soon we find ourselves breaking the IT estate (via our favorite domain model) into a large number of small services. These small parts will provide a huge amount of flexibility, therefore problem solved!

This misses the point. Atomizing the business in this way creates overhead, and the overhead soon swamps any benefit. The effort to define all these services is huge. Add a governance process—since we need governance to manage all the complexity we created—and we just amplify the effect of this overhead. Our technically pure approach to flexibility is creating too much complexity, and our approach to managing this complexity is just making the problem worse.

We need to think more like the architect of the modern prefabricated building. Have a clear understanding of how the business will use our building. Leverage prefabricated components (applications or SaaS) where appropriate; applications are still the most efficient means of delivering large, undifferentiated slabs of functionality. And add complexity only in those differentiating areas where it is justified, providing flexibility only where the business needs it. In the end, creating good software is about keeping it simple. If it’s simple, it gets done quickly and can be maintained more readily.

Above all, favor architectural pragmatism over architectural purity. The point of the architecture is to support the business, not to be an object of beauty.

Managing technology, not applications

We’re getting it all wrong: we’ve focused on managing the technology delivery process rather than the technology itself. Where do business process outsourcing (BPO), software as a service (SaaS), Web 2.0 and partner organisations sit in our IT strategy? All too often we focus on the delivery of large IT assets into our enterprise, missing the opportunity to leverage leaner, disruptive solutions that could provide a significantly better outcome for the business.

IT departments are, by tradition, inward looking asset management functions. Initially this was a response to the huge investment and effort required to operate early mainframe computers, while more recently it has been driven by the effort required to develop and maintain increasingly complex enterprise applications. We’ve organised our IT departments around the activities we see as key to being a successful asset manager: business analysis, software development & integration, infrastructure & facilities, and project or programme management. The result is a generation of IT departments closely aligned with the enterprise application development value-chain, as we focus on managing the delivery of large IT assets into the enterprise.

Building our IT departments as enterprise application factories has been very successful, but the maturation of applications over the last decade and recent emergence of approaches like SaaS means that it has some distinct limitations today. An IT department that defines itself in terms of managing the delivery of large technology assets tends to see a large technology asset as the solution to every problem. Want to support a new pricing strategy? Need to improve cross-sell and up-sell? Looking for ways to support the sales force while in the field? Upgrade to the latest and greatest CRM solution from your vendor of choice. The investment required is grossly out of proportion with the business benefit it will bring, making it difficult to engage with the rest of the business who view IT as a cost centre rather than an enabler.



A typical IT department value-chain

Unfortunately the structure of many of our IT departments—optimised to create large IT assets—actively prohibits any other approach. More incremental or organic approaches to meeting business needs are stopped before they even get started, killed by an organisation structure and processes that impose more overhead than they can tolerate.

Applications were rare and expensive during most of enterprise IT’s history, but today they are plentiful and (comparatively) cheap. Software as a Service (SaaS) is also emerging to provide best-of-breed functionality with a utility delivery model: leveraging an externally managed service and paying per use, rather than requiring capital investment in an IT asset to provide the service internally. Our focus is increasingly turning to ensuring that business processes and activities are supported with an appropriate level of technology, leveraging solutions from traditional enterprise applications through to SaaS, outsourced solutions or even bespoke elements where we see fit. We need to be focused on managing technology enablement, rather than IT assets, and many IT departments are responding to this by reorganising their operations to explore new strategies for managing IT.

Central to this new generation of IT departments is a sound understanding of how the business needs to operate—what it wants to be famous for. The old technology centric departmental roles are being deprecated, replaced with business centric roles. One strategy is to focus on Operational Excellence, Technology Enablement and Contract Management. A number of Chief Process Officer (CPO) roles are created as part of the Operational Excellence team, each focusing on optimising one or more end-to-end processes. The role is defined and measured by the business outcomes it will deliver rather than by the technology delivery process. CPOs are also integrating themselves with organisation wide business improvement and operational excellence initiatives, taking a proactive stance with the business instead of reactively waiting for the business to identify a need.



Managing technology, not applications

The Technology Enablement team works with Operational Excellence to deliver the right level of technology required to support the business. Where Operational Excellence looks out into the business to gain a better understanding of how the business functions, Technology Enablement looks out into the technology community to understand what technologies and approaches can be leveraged to create the most suitable solution. (As opposed to the traditional, inward-focused IT department concerned with developing and managing IT assets.) These solutions can range from SaaS through to BPO, AM (application management), custom development or traditional on-premises applications. The mix of solutions used will change over time as we move from today’s application-centric enterprise IT to new process-driven approaches: solutions today are dominated by enterprise applications (most likely via BPO or AM), but will increasingly shift to utility models such as SaaS as these offerings mature.

Finally, a contract management team is responsible for managing the contractual and financial obligations, and the service level agreements, between the organisation and its suppliers.

One pronounced effect of a strongly business-focused IT organisation is the externalisation of many asset management activities. Rather than trying to be good at everything needed to deliver a world-class IT estate, and ending up being good at nothing, the department focuses its energies on only those activities that will have the greatest impact on the business. Other activities are supported by a broad partner ecosystem: systems integrators to install applications, outsourcers for application management and business process outsourcing, and so on. Rather than ramping up for a once-in-four-year application renewal—an infrequent task for which the department has trouble retaining expertise—the partner ecosystem ensures that the IT department has access to organisations whose core focus is installing and running applications, and who have been solving this problem every year for the last four years.

This approach allows the IT department to concentrate on what really matters for the business to succeed. Its focus and expertise are firmly on the activities that will have the greatest impact on the business, while a broad partner ecosystem provides world-class support for the activities in which it cannot afford to develop world-class expertise. Rather than representing a cost centre, the IT department can be seen as an enabler, working with the rest of the business to leverage new ideas and capabilities and drive the enterprise forward.