Category Archives: Strategy

Complexity isn’t (at least in enterprise IT)

There’s a lot of talk these days about complexity in enterprise IT. The heterogeneous solutions we’re building today seem more complex than the monolithic solutions of the past. But are they really? I’ll grant you that a lot of what is being built at the moment is complicated. Complex though? I don’t think so. The problem is that we’re building new things in old ways when we need to be building new things in new ways.

I’ve always used a simple rule of thumb when thinking about complexity. Some folk like to get fancy with two- and three-dimensional models that enable us to ascribe degrees of complexity to problems. While I find these models interesting, my focus has always been on how to solve the problem in front of me. What is the insight that will make the hard easy? For me, one simple distinction seems to provide the information I need. Is a solution complex? Or is it complicated?

Something is complicated if the model we use to understand the problem requires patches and exceptions to make it work. The model might be simple and well understood, but we’re forced to patch the model for it to succeed when confronted by the real world; we’re adding epicycles. It’s not a complex system, but it is complicated.

Adding epicycles didn't manage to keep the earth at the centre of the universe

On the other hand, something is complex if it’s difficult to develop a consistent model for the problem. While we might have a well understood model, it’s definitely not simple, requiring a great deal of academic and tacit knowledge to understand. There are no epicycles, but there are a large number of variables involved, and their interactions are often non-linear.

While this binary separation might not be strictly true (the complicated can sometimes be complex), I find that the truly complex problems are rare enough that the rule of thumb is useful most of the time. After all, that’s what a rule of thumb is. The few times that it breaks down, experience comes to the rescue.

Distinguishing between the complex and the complicated is not hard; just look for the epicycles. Planning engines – such as material planning or crew scheduling – are a good example of complex solutions. Business process management is a good example of a complicated solution. Pi calculus – the model at the heart of a modern BPM engine – is well understood, and BPM engines work as described. However, managing business exceptions is a mess, as support for them is tacked on rather than being an inherent part of the model. Smashing together pi calculus, transactions and a number of other models has resulted in epicycles.

Most of the problems we’re seeing in enterprise IT are complicated, but not complex. Take the current efforts to create IT estates integrating SaaS and public cloud with more traditional enterprise IT tools, such as on-premises applications and BPO. Conventional approaches to understanding and planning IT estates are struggling. The model – the enterprise integration patterns – which we’ve used for so long is well understood, but it’s creaking at the seams as we bolt on more epicycles to cope with exceptions as they arise.

Dr. Khee Pang

A great piece of advice from a former lecturer of mine always comes to mind in situations like this. As I’ve mentioned before:

If you don’t like a problem, then change it.
KK Pang

Our solution is complicated because we’re trying to solve the wrong problem. We need to change it.

The problem with BPM is that business exceptions are not exceptional, but are simply alternative ways of achieving the same goals. To resolve the epicycles we need to shift the problem’s centre of gravity, moving the earth from the centre of the universe to a more stable orbit. If business exceptions are not exceptional, then we should simply treat them as different business scenarios, and use a scenario-based approach to capturing business processes. The epicycles then melt away.
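
To make that concrete, here is a minimal sketch (in Python; the scenario names, guard conditions and dispatch function are all invented for illustration, not any particular BPM product) of what scenario-based capture might look like: each business "exception" is just another peer scenario with its own guard condition, not a patch bolted onto a single happy path.

```python
# A hypothetical sketch: business "exceptions" modelled as peer scenarios
# of the same goal, rather than patches on a happy path.

from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Scenario:
    name: str
    applies: Callable[[dict], bool]   # guard condition over the case data
    steps: List[str]                  # the activities for this scenario

FULFIL_ORDER = [
    Scenario("standard",       lambda case: case["in_stock"] and case["payment_ok"],
             ["pick", "pack", "ship", "invoice"]),
    Scenario("back_order",     lambda case: not case["in_stock"],
             ["notify_customer", "reorder_stock", "pick", "pack", "ship", "invoice"]),
    Scenario("payment_failed", lambda case: not case["payment_ok"],
             ["hold_order", "request_new_payment", "pick", "pack", "ship", "invoice"]),
]

def select_scenario(case: dict) -> Scenario:
    # Every path is a first-class scenario; none is an "exception".
    return next(s for s in FULFIL_ORDER if s.applies(case))

print(select_scenario({"in_stock": False, "payment_ok": True}).name)  # back_order
```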

I think we can use a similar approach to help us with the challenges we’re seeing in today’s IT estates, the same challenges that are triggering some of the discussion on complexity. The current approach to planning and provisioning IT is data centric; most applications are, after all, just large data management engines. This data-centric approach is forcing us to create epicycles as we attempt to integrate SaaS, cloud, and a whole raft of new tools and techniques. The solution is to move from a data-centric approach to a decision-centric one. But that’s a different blog post.

Having too much SOA is a bad thing (and what we might do about it)

SOA enablement projects (like a lot of IT projects) have a bad name. An initiative that starts as a good idea to create a bit more flexibility in the IT estate often seems to end up mired in its own complexity. The problem is usually too much flexibility, as flexibility creates complexity, and complexity exponentially increases the effort required to manage and deliver the software. Without any solid guidance on how much flexibility to create (and where to create it), most SOA initiatives simply keep creating flexibility until either the project collapses under its own weight, or the projected development work to create all the services exceeds the available CAPEX budget. A little flexibility is good, but too much is bad. How can we scope the flexibility, pointing it where it’s most needed while preventing it from becoming a burden?

The challenge with SOA enablement is in determining how much flexibility to build into the IT estate. Some flexibility is good – especially if it’s focused on where the business needs it the most – but too much flexibility is simply another unnecessary cost. The last decade or so is littered with stories of companies whose SOA initiatives were either brought to an early close or canned because they had consumed all the cash the business was prepared to invest in a major infrastructure project. Finance and telecoms seem particularly prone to creating these gold-plated SOA initiatives. (How many shelf-ware SDFs – service delivery frameworks – do you know of?)

The problem seems to be a lack of guidance on how much flexibility to build, or where to put it. We sold the business on the idea that a flexible, service-oriented IT estate would be better than the evil monolithic applications of old, but the details of just how flexible the new estate would be were a little fuzzy. Surely these details can be sorted out in service discovery? And governance should keep service discovery on track! We set ourselves up by over-promising and under-delivering.

Mario Batali: Too much is never enough!

This much was clear: the business wanted agility, and agility requires flexibility. As flexibility comes from having more moving parts (services), we figured that creating more moving parts would create more agility. Service discovery rapidly became a process of identifying every bit of (reusable) functionality that we could pack into a service. More is better, or, as the man with the loud shoes says:

Too much is never enough!
Mario Batali

The problem with this approach is that it confuses flexibility and agility. It’s possible to be very flexible without being agile, and vice versa. Think of a Formula One car: they’re fast and they’re agile (which is why driving them tends to be a young man’s game), and they’re very stiff. Agility comes from keeping the weight down and being prepared to act quickly. This means keeping things simple, ensuring that we have the minimum set of moving parts required. The teams might have an eye for detail, such as nitrogen in the tyres, but unnecessary moving parts that might reduce reliability or performance are eliminated.

This gold-plated approach to SOA creates a lot of unrequired flexibility, this additional flexibility increases complexity, and the complexity becomes the boat anchor that slows you down and stops you from being agile. Turning the car is no longer a simple matter of tugging on the steering wheel, as we need governance to stop us from pulling the wrong lever in the bank of 500 identical levers in front of us.

It's really that simple!

We’ve made everything too complicated. Mario was wrong: too much is too much.

What we need is some guidance – a way of scoping and directing the flexibility we’re going to create. Governance isn’t enough, as governance is focused on stopping bad things from happening. We have a scoping problem. Our challenge is to understand what flexibility will be required in the future, and agreeing on the best way to support it.

To date I’ve been using a very fuzzy “business interest” metric for this, where services are decomposed until the business is no longer interested. The rationale is that we put the flexibility only where the business thinks it needs to focus. This approach works fairly well, but it relies too much on the tacit judgement of a few skilled business analysts and architects, making it too opaque and hard to understand for the people not involved in the decision making process. It’s also hard to scale. We need something more deterministic and repeatable.

Which brings me to a friend’s MBA thesis, which he passed to me the other week. It’s an interesting approach to building business cases for IT solutions, one based on real options.

The problem with the usual approaches to building a business case, using tools like net present value (NPV) and discounted cash flow, is that they assume the world doesn’t change after the decision to build the solution (or not). They don’t factor in the need to change a solution once it’s in the field, or even during development.

The world doesn’t work this way: the solution you approved in yesterday’s business environment will be deployed into a radically different business environment tomorrow. This makes it hard to justify the additional investment required for a more flexible SOA based solution, when compared to a conventional monolithic solution. The business case doesn’t include flexibility as a factor, so more flexible (and therefore complex and expensive) solutions lose to the cheaper, monolithic approach.

Real options address this by pushing you down a scenario-planning-based approach. You estimate the future events that you want to guard against, and their probabilities, creating a set of possible futures. Each event presents you with options to take action. The action, for example, might be to change, update or replace components in the solution to bring them in line with evolving business realities. The options are – in effect – flex points that we might design into our solution’s SOA. The real options methodology enables us to ascribe costs to these future events and then create a decision tree that captures the benefits of investing in specific flex points, all in a clear and easily understandable chain of reasoning.
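
As a rough illustration of the arithmetic involved (all the events, probabilities and costs below are invented), the case for a flex point is simply that the expected cost of changing a rigid solution exceeds the cost of building the flex point plus the expected cost of exercising it:

```python
# A hypothetical, back-of-the-envelope real-options comparison.
# All figures are invented for illustration.

future_events = [
    # (event, probability, cost to change a rigid solution, cost to change via the flex point)
    ("new regulatory regime",   0.6,   900_000, 150_000),
    ("acquisition integration", 0.3, 1_200_000, 200_000),
    ("new sales channel",       0.5,   400_000,  80_000),
]

cost_of_flex_point = 350_000  # up-front cost of designing in the flexibility

expected_cost_rigid = sum(p * rigid for _, p, rigid, _ in future_events)
expected_cost_flexible = cost_of_flex_point + sum(p * flex for _, p, _, flex in future_events)

print(f"expected cost of change without the flex point: {expected_cost_rigid:,.0f}")
print(f"expected cost of change with the flex point:    {expected_cost_flexible:,.0f}")
print("build the flex point" if expected_cost_flexible < expected_cost_rigid else "skip it")
```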

The decision tree and options provide us with a way to map out where to place flex points in the SOA solution. They also provide us with strong guidance on how much flexibility to introduce. And this is the part I found really interesting about the approach. It also provides us with a nice framework to govern the evolution of the SOA solution, as changes are (generally) only made when an option is taken: when its business case is triggered.

It’s a bit like those Formula One cars. A friend of mine used to work for one F1 manufacturer designing and testing camshafts. These camshafts had to fall within a 100,000-revolution lifetime window. An over-designed camshaft was unnecessary weight, while an under-designed one meant that you wouldn’t win (or possibly even finish) the race. Work it out: 100,000 revolutions is a tiny window for an F1 car, given the length of a race.
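
As a rough back-of-the-envelope check (the figures below are my assumptions, not my friend’s: an average engine speed, a race length, and the fact that a four-stroke camshaft turns at half crankshaft speed):

```python
# Rough, illustrative numbers only: assume an average of ~12,000 rpm over
# a ~100 minute race, with the camshaft turning at half crankshaft speed.
avg_engine_rpm = 12_000
race_minutes = 100

crank_revs = avg_engine_rpm * race_minutes   # ~1,200,000 crankshaft revolutions
cam_revs = crank_revs / 2                    # ~600,000 camshaft revolutions
window = 100_000

print(f"camshaft revolutions per race: ~{cam_revs:,.0f}")
print(f"design window as a share of one race: ~{window / cam_revs:.0%}")
```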

An approach like real options helps us ensure that we only have the flexibility required in the solution, and that it is exactly where it is required. Not too much, and not too little. Just enough to help us win the race.

The IT department we have today is not the IT department we’ll need tomorrow

The IT departments many of us work in today (either as an employee or consultant) are often the result of thirty or more years of diligent labour. These departments are designed, optimised even, to create IT estates populated with large, expensive applications. Unfortunately these departments are also looking a lot like dinosaurs: large, slow and altogether unsuited to the new normal. The challenge is to reconfigure our departments, transforming them from asset management functions into business (or business-technology) optimisation engines. This transformation should be of keen interest to all of us, as it’s going to drive a dramatic change in staffing profiles which will, in turn, affect our own jobs in the not so distant future.

Delivering large IT solutions is a tricky business. They’re big. They’re expensive. And the projects to create them go off the rails more often than we’d like to admit. IT departments have been built to minimise the risks associated with delivering and operating these applications. This means governance, and usually quite a lot of it. Departments which started off as small-scale engineering functions soon picked up an administrative layer responsible for the mechanics of governance.

More recently we’ve been confronted with the challenge of managing the dependencies and interactions between IT applications. Initiatives like straight-through processing require us to take a holistic, rather than a pieces-parts, approach, and we’re all dealing with the problem of having one of each application or middleware product, as well as a few we brewed in the back room ourselves. Planning the operation and evolution of the IT estate became more important, and we picked up an enterprise architecture capability to manage it.

It’s common to visualise these various departmental functions and roles as a triangle (or a pyramid, if you prefer). At the bottom we have engineering: the developers and other technical personnel who do the actual work to build and maintain our applications. The next layer up is governance, the project and operational administrators who schedule the work and check that it’s done to spec. Second from the top are the planners, the architects responsible for shaping the work to be done as well as acting as design authority. Capping off the triangle (or pyramid) is the IT leadership team who decide what should be done.

The departmental skills triangle

While specific techniques and technologies might come and go, the overall composition of the triangle has remained the same. From the sixties and seventies through to even quite recently, we’ve staffed our IT departments with many technical doers, somewhat fewer administrators, a smaller planning team, and a small IT leadership group. The career path for most of us has been a progression from the bottom layers – when we were fresh out of school – to the highest point in the triangle that we can manage.

The emergence of off-shoring and outsourcing put a spanner in the works. We all understand the rationale: migrate the more junior positions – the positions with the least direct (if any) contact with the business proper – to a cheaper country. Many companies under intense cost pressure broke the triangle in two, keeping the upper planning and decision roles while pushing the majority of the management roles and all of the doing roles out of the country, or even out of the company.

Our first attempt at out-sourcing

Ignoring whether or not this drive to externalise the lower roles provided the expected savings, what it did do is break the career ladder for IT staff. Where does your next generation of senior IT personnel come from if you’ve pushed the lower ranks out of the business? Many companies found themselves with an awkward skills shortage a few years into an outsourcing / off-shore arrangement, as they were no longer able to train or promote senior personnel to replace those who were leaving through natural attrition.

The solution to this was to change how we break up the skills triangle; rather than a simple horizontal cut, we took a slice down the side. Retaining a portion of all skills in-house allows companies to provide a career path and on-the-job training for their staff.

A second, improved, go at out-sourcing

Many companies have tweaked this model, adding a bulge in the middle to provide a large enough resource pool to manage both internal projects, as well as those run by out-sourced and off-shore resources.

Factoring in the effort required to manage out-sourced projects

This model is now common in a lot of large companies, and it has served us well. However, the world has a funny habit of changing just when you’ve got everything working smoothly.

The recent global financial crisis has fundamentally changed the business landscape. We are experiencing not merely another turn of the business cycle, but a restructuring of the economic order. Many are even talking about the emergence of a new normal. The impact this will have on how we run our businesses (and our IT departments) is still being discussed, but we can see the outline of this impact already.

Companies are becoming more focused, while leaning more heavily on partners and services companies (BPO, out-sourcers, consultants, and so on) to cover those areas of the business they don’t want to focus on. We can see this from the global companies who have effectively moved to a franchise model, through to the small end of town where startups are using on-line services such as Amazon S3 rather than building internal capabilities. While this trend might have initially started as a cost saving, most of the benefit is in management time saved, which can then be used to focus on more important issues. We’re all finding that the limiting factor in our business is management time, so being able to hand off the management of less important tasks can help provide that edge you need.

We’re also seeing faster business change: what used to take years now takes months, or even weeks. The constant value-chain optimisation we’ve been working on since the 70s has finally culminated in product and regulatory life-cycles that change faster than we can keep up with. Nowhere is this more evident than in the regulated industries (finance, utilities …), where updates in government regulation have changed from a generational to a quarterly occurrence as governments attempt to use regulatory change to steer the economic boat.

Money is also becoming (or has become) more expensive, causing companies and deals to operate with less leverage. This means that there is less capital available for major projects, pushing companies to favour renting over buying, as well as creating a preference for smaller, incremental change over the major business transformation of the past.

And finally, companies are starting to take a truly global outlook and operate as one cohesive business across the globe, rather than as a family of cloned businesses that operate more-or-less independently in each region.

We can draw a few general conclusions about the potential impact of these trends on IT departments.

  • The increased reliance on partners, the broader partner ecosystem this implies, and an increasingly global approach to business will create more complex operational environments, increasing the importance of planning the IT estate and steering a company’s IT in the right direction.
  • The need to reduce leverage, and free up working capital, is pushing companies toward BPO and SaaS solutions, rather than traditional on-premises solutions, where the solution provider is paid per-seat, or might even be paid only a success fee.
  • The need for rapid project turn-around is pushing us toward running large portfolios of small projects, rather than a small number of large projects.
  • A lot of the admin work we used to do is now baked into web delivered solutions (BaseCamp et al).

This will trigger us to break up the skills triangle in a different way.

A skills/roles triangle for the new normal

While we’ll still take a slice down the side of the triangle, the bulge will move to the ends of the slice, giving it a skinny waist. The more complex operational environment means that we need to beef up planning (though we don’t want to get all dogmatic about our approach, as existing asset-centric IT planning methodologies won’t work in the new normal). A shift to large numbers of small projects (where the projects are potentially more technically complex) means that we’ll beef up our internal delivery capability, providing team leads with more autonomy. The move to smaller projects also means that we can reduce our administration and governance overhead.

We’ll replace some skills with automated (SaaS) solutions. Tools like BaseCamp will enable us to devolve responsibility for reporting and management to the team at the coalface. It will also reduce the need to develop and maintain infrastructure. Cloud technology is a good example of this, as it takes a lot of the tacit knowledge required to manage a fleet of servers and bakes it into software, placing it in the hands of the developers. Rumour has it that a cloud admin can support 10,000 servers to a more traditional admin’s 500.

And finally, our suppliers act as a layer through the middle, a flex resource for us to call on. They can also provide us with a broader, cross-industry view, of how to best leverage technology.

This thinning out of the middle ranks is part of a trend we’re seeing elsewhere. Web 2.0/E2.0/et al are causing organisations to remove knowledge workers – the traditional white-collar middle layers of the organisation – leaving companies with a strategy/leadership group and task workers.

Update: Andy Mulholland has an interesting build on this post over at the Capgemini CTO blog. I particularly like the Holm service launched by Ford and Microsoft, a service that it’s hard to imagine a traditional IT department fielding.

With cloud computing, the world is not flat

Does location matter? Or, put another way, is the world no longer flat? Many cloud and SaaS providers work under the assumption that we store data where it is most efficient from an application performance point of view, ignoring political considerations. This runs counter to the many companies and governments who care greatly where their data is stored. Have we entered a time where location does matter, not for technical reasons, but for political ones? Is globalisation (as a political thing) finally starting to impact IT architecture and strategy?

Just who is taking your order?

Thomas Friedman‘s book, The World is Flat, contained a number of stories which were real eye openers. The one I remember the most was the McDonald’s drive-through. The idea was simple: once you’ve removed direct physical contact from the ordering process, then it’s more efficient to accept orders from a contact centre than from within the restaurant itself. We could even locate that contact centre in a cheaper geography, such as another state, or even another country.

Telecommunications made the world flat, as cheap telecommunications allows us to locate work wherever it is cheapest. The opportunity for labour arbitrage this created drove offshoring through the late nineties and into the new millennium. Everything from call centres to tax returns and medical image diagnosis started to migrate to cheaper geographies. Competition to be the cheapest and most efficient service provider, rather than location, would determine who does the work. The entire world would compete on a level playing field.

In the background, whilst this was happening, enterprise applications went from common to ubiquitous. Adoption was driven by the productivity benefits the applications brought, which started off as a source of differentiation but has now become one of the many requirements of being in business. SaaS and cloud are the most recent step in this evolution, leveraging the global market to create solutions operating at such a massive scale that they can provide price points and service levels which are hard, if not impossible, for most companies to achieve internally.

The growth of the U.S. enterprise application market (via INPUT)

Despite the world being laser levelled within an inch of its life, many companies are finding it difficult to move their operations to the cost-effective nirvana that is cloud and SaaS services. Location matters, it seems. Not for technical reasons, but for political ones.

Where we store our assets is important. Organisations want to put their assets somewhere safe, because without assets the organisations don’t amount to much. Companies want to keep their information — their confidential trade secrets — hidden from prying eyes. Governments need to ensure they have the trust of their citizens by respecting their privacy. (Not to mention the skullduggery that is international relations.) While communications technology has made it incredibly easy to move this information around and keep it secure, it has yet to solve the political problem of ensuring that we can trust the people responsible for safeguarding our assets. And all these applications we have created — whether traditional on-premises, hosted, SaaS or cloud — are really just asset management tools.

We’ve reached a point where one of the larger hidden assumptions of enterprise applications has been exposed. Each application was designed to live and operate within a single organisation. This organisation might be a company, or it might be a country, or it might be some combination of the two. The application you select to manage your data determines the political boundary it lives within. If you use any U.S. SaaS or cloud solution provider to manage your data, then your data falls under U.S. judicial discovery laws, regardless of where you yourself are located. If your data transits through the U.S., then assume that the U.S. government has a copy. The world might be flat, but where you store your assets and where you send them still matters.

Country-specific regulations governing privacy and data protection vary greatly.
Global data protection heat map (via Forrester)

We can already see some moves by the vendors to address this problem. Microsoft, for example, has developed a dedicated cloud for the U.S. government, known as BPOS Federal, which is designed to meet the government’s stringent security and privacy standards. For similar reasons, Amazon has dedicated a portion of its cloud to the EU, and located it there.

If we consider enterprise applications to be asset management tools rather than productivity tools, then ideas like private clouds start to make a lot of sense. Cloud technology reifies a lot of the knowledge required to configure and manage a virtualised environment in software, eliminating the data centre voodoo and empowering the development teams to manage the solutions themselves. This makes cloud technology simply a better asset management tool, but we need the freedom to locate the data (and therefore the application) where it makes the most sense from an asset management point of view. Sometimes this might imply a large, location-agnostic, public cloud. Other times it might require a much smaller private cloud located within a specific political boundary. (And the need to prevent some data even transiting through a few specific geographies – requiring us to move the code to the data, rather than the data to the code – might be the killer application that mobile agents have been waiting for.)

What we really need are meta-clouds: clouds created by aggregating a number of different clouds, just as the Internet is a network of separate networks. While the clouds would all be technically similar, each would be located in a different political geography. This might be inside vs. outside the organisation, or in different states, or even different countries. The data would be stored and maintained where it made the most sense from an asset management point of view, with few technical considerations, the meta-cloud providing a consistent approach to locating and moving our assets within and across individual clouds as we see fit.

One of the only two sources of sustainable competitive advantage available to us today

I stumbled onto a somewhat interesting post over at HBR, which talks about applying Garry Kasparov’s ideas to the business world. This is actually quite a relevant pairing, though an old one in the tradition of human-computer augmentation.

The idea is a simple one, and it takes far fewer words to express than the article took.

Use information technology to augment users, rather than replace them.

IT is good at a lot of tasks, and less good at others. People, too, have their strengths and weaknesses. What’s interesting is that computers are weak where people are strong, and vice-versa. Computers excel as appliers of algorithms with huge memories and an attention to detail; people are powerful, creative problem solvers who have trouble thinking of four things at once and like coffee breaks. Why not pair the two, and get the best of both worlds?

Rather than replace the users, why don’t we use technology to automate the easy (for technology) 80% of what they do? (This is something I’ve written about before.) In the chess example, the easy 80% is providing the user with a chess computer for the commoditized solution-space search, allowing them to focus on strategy. The performance improvement this approach provides can create a significant competitive advantage. As Garry Kasparov found, even a weak user with a chess computer can be impossible to defeat, by human or computer.
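
A minimal sketch of the pattern (the candidate generator and scoring function below are invented stand-ins, not any particular engine): the machine does the exhaustive search and ranking, and the person makes the final call from a shortlist.

```python
# A hypothetical human-in-the-loop sketch: the machine enumerates and scores
# candidate actions (the easy, commoditised 80%); the person applies judgement
# to the shortlist (the hard 20%). Both functions are stand-ins, not a real engine.

def generate_candidates(state):
    # Stand-in for exhaustive search over the possible actions.
    return ["renew contract", "renegotiate terms", "switch supplier", "defer decision"]

def score(state, candidate):
    # Stand-in for an evaluation function; a real system encodes domain rules here.
    weights = {"renew contract": 0.6, "renegotiate terms": 0.8,
               "switch supplier": 0.7, "defer decision": 0.2}
    return weights[candidate]

def assist(state, top_n=3):
    ranked = sorted(generate_candidates(state), key=lambda c: score(state, c), reverse=True)
    return ranked[:top_n]  # the shortlist handed to the person

print("machine shortlist:", assist(state={}))
# The person, not the machine, makes the final strategic call from the shortlist.
```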

This then provides us with two options:

  1. Take the improvement as a saving by reducing head count.
  2. Reinvest the improvement by providing our users with more time to focus on the hard 20%.

(I must admit, I much prefer the latter.)

If we continue to focus on automating the next easy 80%, we’ve created a platform and process for continual business optimisation. (Improvements in search efficiency would simply be harvested when appropriate to maintain parity.) Interestingly, this is one of only two sources of a sustainable competitive advantage available to us today.

The competitive advantage with this approach rests with the user, in the commonplaces, the strategies, they use to solve problems. By reifying the easy 80% of these strategies in software (processes and rules) we are moving some of the competitive advantage into the organisation, where it can be leveraged by other users. By continually attacking the easy 80% of what the users are doing, we are continually improving our competitive position. We could even sell our IT platform (but not the reified problem-solving strategies) to our competitors — commoditizing the platform to realise a cost saving — without endangering our competitive position, as they would need to go through the same improvement and learning process that we did, while we continue to race ahead.

Now that’s scary: as long as we keep improving our process, our competitors will never be able to catch us.


The benefits of SaaS (beyond low cost)

I’ve already written about why I think private clouds can be a good idea. Similar arguments can be made for SaaS, and then some. A friend and I did the email-ping-pong thing and ended up with a (shortish) list of reasons to go with a SaaS solution over a traditional on-premises solution.

  • OPEX rather than CAPEX cost. The CAPEX gulp is minimised, and the ongoing costs are tied to your own operational cost (head count, etc.). (See the rough comparison sketch after this list.)
  • Faster provisioning. SaaS can be up to 90% faster to deploy than on-premises solutions. (Weeks/months rather than months/years.)
  • No more upgrades. You’re always on the latest version, and new features are rolled out organically rather than every few years as part of a change management process.
  • More focused vendor and community support. As there is only a single version in play, support efforts from the vendor and user community are focused on the version that you’re using. This also avoids the problem of getting left behind on a stale and unsupported platform (been there, done that, and have the scars to prove it).
  • SaaS provides a platform that scales organically with your organization. You’re not required to invest in additional hardware, software, and provisioning processes, letting your business focus on the business.
  • Reduced IT involvement. IT resources can focus on specific business problems rather than the care and feeding of the system.
  • Try before you buy. Instead of a traditional big license gulp at risk, sign up for a handful of SaaS seats for a few weeks and try it out. (From @shermo1.)
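
On the first point, here is a rough sketch of how the cash profiles differ (every figure below is invented; a real comparison would also discount the cash flows and include migration and exit costs):

```python
# Invented figures for illustration: compare the cash profile of an up-front
# licence-and-hardware purchase against a per-seat SaaS subscription over five years.

years = 5
seats = 200

# Traditional on-premises: big CAPEX gulp plus annual maintenance.
capex = 600_000
annual_maintenance = 0.20 * capex

# SaaS: no CAPEX, a per-seat-per-month subscription that scales with head count.
saas_per_seat_per_month = 55

on_prem_total = capex + annual_maintenance * years
saas_total = saas_per_seat_per_month * 12 * seats * years

print(f"on-premises, 5-year cash out: {on_prem_total:,.0f}")
print(f"SaaS,        5-year cash out: {saas_total:,.0f}")
print(f"SaaS, year-1 cash out:        {saas_per_seat_per_month * 12 * seats:,.0f}")
```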

Any more?


Private clouds are (not) the future

Amazon (well, James Hamilton) has weighed in on the question of private clouds. As expected from a large cloud provider, James takes the position that private clouds make no sense. His reasoning is straightforward: private clouds will never have the scale of public clouds, therefore private clouds can never achieve the same price point as their public brethren. Ergo, there’s no point in building private clouds.

As I’ve pointed out before, there’s a lot more to cloud than simply reducing costs. The biggest benefit is probably the agility that cloud can bring to your IT estate, leveraging a cloud platform’s ability to codify and automate many of the management practices and create a target platform that can work across a range of deployment options, as well as streamlining hardware provisioning. Companies are also increasingly having to deal with the realities of political boundaries, a situation where the best technical solution might not be acceptable due to legal requirements (such as privacy legislation). Developing a private cloud can be a sensible move in this context.

Of course, if you want to compete purely on cost then private cloud will never hit the same price point as public cloud. But this misses the point that for many companies IT flexibility/agility is more important than cost.

Note: I was going to post this as a comment on James’ post, but comments appear to be broken.


Is “agile enterprise IT” an oxymoron?

Have we managed to design agility out of enterprise IT? Are the two now incompatible? Our decision to measure IT purely in terms of cost (ROI) or stability (SLAs) means that we have put aside other desirable characteristics like responsiveness, making our IT estates more like the lumbering airships of the 1920s. While efficient and reliable (once we got the hydrogen out of them), they are neither exciting nor responsive to the business. The business ends up going elsewhere for its thrills. What to do?

LZ-127 Graf Zeppelin

An interesting post on jugaad over at the Capgemini CTO blog got me thinking. The tension between the managed chaos that jugaad seems to represent and the stability we strive for in IT seems to nicely capture the current tensions between business and IT. Business finds that opportunities are blinking in and out of existence faster than ever before, providing dramatically reduced windows of opportunity. IT departments are unable to respond in time, prompting the business to look outside the organisation for solutions.

The first rule of CIOs is “you only have a seat at the strategy table if you’re keeping the lights on”. The pressure is on to keep the transactions flowing, and we spend a lot of time and money (usually the vast majority of our budget) ensuring that transactions do indeed flow. We often complain that our entire focus seems to be on cost and operations, when there is so much more we can bring to the leadership team. We forget that all departments labour under a similar rule, and all these rules are really just localised versions of a single overarching rule: the first rule of business, which is to be in business (i.e. remain solvent). Sales needs to sell, manufacturing needs to manufacture, and so on. By devoting so much of our energy to cost and stability, we seem to have dug ourselves into a bit of a hole.

There’s another rule that I like to quote from time to time: management is not the art of making the perfect decision, but of making a timely decision and then making it work. This seems to be something we’ve forgotten in the West, and particularly in IT. Perfection is an unattainable ideal in the real world, and agility requires a little chaos/instability. What’s interesting about jugaad is the concept’s ability to embrace the chaos required to succeed when resource constraints prevent you from using the perfect (or even simply the best) solution.

Vickers F.B.5 Gunbus

Consider a fighter plane. The other day I was watching a documentary on the history of aircraft which showed how the evolution of fighters is a progression from stability to instability. The first fighters (and we’re talking the start of WWI here – all fabric and glue) were designed to float above the battlefield where the pilots could shoot down at soldiers, or even lob bombs at them. They were designed to be very stable, so stable that the pilot could ignore the controls for a while and the plane would fly itself. Or you could shoot out most of the control surfaces and still land safely. (Sounds a bit like a modern, bullet-proof, IT application, eh?)

The Red Baron: Manfred von Richthofen

The problem with these planes is that they are very stable. It’s hard to make them turn and dance about, and this makes them easy to shoot down. They needed to be more agile, harder to shoot down, and the solution was to make them less stable. The result, by the end of WWI, was the fairly unstable tri-planes we associate with the Red Baron. Yes, this made them harder to fly, and even harder to land, but it also made them harder to hit.

Whizz forward to the modern day, and we find that all modern fighters are unstable by design. They’re so unstable that they’re unflyable without modern fly-by-wire systems. Forget about landing: you couldn’t even get them off the ground without their fancy control systems. The governance of the fly-by-wire systems lets the pilot control the uncontrollable.

The problem with modern IT is that it is too stable. Not the parts, the individual applications, but the IT estate as a whole. We’ve designed agility out of it, focusing on creating a stable and efficient platform for lobbing bombs onto the enemy below. This is great if the landscape below us doesn’t change, and the enemy promises not to move or shoot back, but not so good in today’s rapidly changing business environment. We need to be able to rapidly turn and dance about, both to dodge bullets and pounce on opportunities. We need some instability, as instability means that we’re poised for change.

Jugaad points out that we need to allow in a bit of chaos if we want to bring the agility back in. The chaos jugaad provides is the instability we need. This will require us to update our governance processes, evolving them beyond simply being a tool to stop the bad happening, transforming governance into a tool for harvesting the jugaad where it occurs. After all, the role of enterprise IT is to capture good ideas and automate them, allowing them to be leveraged across the entire enterprise.

Managing chaos has become something of a science in the aircraft world. Tools like Energy-Maneuverability theory are used during aircraft design to make informed tradeoffs between weight, weapons load, amount of wing (i.e. ability to turn), and so on. This goes well beyond most efforts to map and score business processes, which are inherently static, pieces-and-parts, cost-driven approaches. Our focus should be on using different technologies and delivery approaches to modify how our IT estate responds to business change; optimising our IT estate’s dynamic, change-driven characteristics as well as its cost-driven static characteristics.

This might be the root of some of the problems we’re seeing between business and IT. IT’s tendency to measure value in terms of cost and/or stability leads us to create IT estates optimised for a static environment, which are at odds with the dynamic nature of the modern business environment. We should be focusing on the overall dynamic business performance of the IT estate, its energy-maneuverability profile.

Reducing costs is not the only benefit of cloud computing & SaaS

The wisdom of the crowd seems to have decided that both cloud computing and its sibling SaaS are cost plays. You engage a cloud or SaaS vendor to reduce costs, as their software utility has the scale to deliver the same functionality at a lower price point than you could do yourself.

I think this misses some of the potential benefits that these new delivery models can provide, from reducing your management overhead, allowing you to focus on more important or pressing problems, through to acting as a large flex resource or providing you with a testbed for innovation. In an environment where we’re all racing to keep up, the time and space we can create through intelligently leveraging cloud and SaaS solutions could provide us with the competitive advantage we need.

Samuel Insull

Cloud and SaaS are going to take over the world, or so I hear. And it increasingly looks that way, from Nicholas Carr‘s entertaining stories about Samuel Insull through to Salesforce.com, Google and Amazon‘s attempts to box up SaaS and cloud for easy consumption. These companies’ massive economies of scale enable them to deliver commoditized functionality at a dramatically lower price point than most companies could achieve with even the best on-premises applications.

This simple fact causes many analysts to point out the folly of creating a private cloud. While a private cloud enables a company to avoid the security and ownership issues associated with a public service, it will never be able to realise the same economies of scale as its public brethren. It’s these economies of scale that enable companies like Google to devote significant time and effort to finding new and ever more creative techniques to extract every last drop of efficiency from their data centres, techniques which give them a competitive advantage.

I’ve always had problems with this point of view, as it ignores one important fact: a modern IT estate must deliver more than efficiency. Constant and dramatic business change means that our IT estate must be able to be rapidly reconfigured to support an ever-evolving business environment. This might be as simple as scaling up and down, in line with changing transaction volumes, but it might also involve rewriting business rules and processes as the organisation enters and leaves countries with differing regulation regimes, as well as adapting to mergers, acquisitions and divestments.

Once we look beyond cost, a few interesting potential uses for cloud and SaaS emerge.

First, we can use cloud as a tool to increase the flexibility of our IT estate. Using a standard cloud platform, such as an Amazon Machine Image, provides us with more deployment options than more traditional approaches. Development and testing can be streamlined, compressing development and testing time, while deployed applications can be migrated to the cloud instance which makes the most sense. We might choose to use public cloud for development and testing, while deploying to a private cloud under our own control to address privacy or political concerns. We might develop, test and deploy all into the public cloud. Or we might even use a hybrid strategy, retaining some business functionality in a private cloud, while using one or more public clouds as a flex resource to cope with peak loads.
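
As a concrete (and entirely hypothetical) illustration of that kind of mapping, here is a sketch in which the same standard image is targeted at different clouds per environment, with a public cloud held in reserve as the flex resource; all names and thresholds are invented:

```python
# A hypothetical deployment map: one standard image, several possible targets.
# Names and thresholds are invented for illustration.

DEPLOYMENT_MAP = {
    "development": {"cloud": "public",  "region": "anywhere-cheap"},
    "testing":     {"cloud": "public",  "region": "anywhere-cheap"},
    "production":  {"cloud": "private", "region": "on-shore"},  # privacy / political constraints
}

PEAK_FLEX = {"cloud": "public", "region": "nearest", "use_when_load_above": 0.8}

def placement(environment: str, current_load: float = 0.0) -> dict:
    target = dict(DEPLOYMENT_MAP[environment])
    if environment == "production" and current_load > PEAK_FLEX["use_when_load_above"]:
        # Burst only the overflow to the public cloud; keep the core private.
        target["burst_to"] = {"cloud": PEAK_FLEX["cloud"], "region": PEAK_FLEX["region"]}
    return target

print(placement("development"))
print(placement("production", current_load=0.9))
```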

Second, we can use cloud and SaaS as tools to increase the agility of our IT estate. By externalising the management of our infrastructure (via cloud), or even the management of entire applications (via SaaS), we can create time and space to worry about more important problems. This enables us to focus on what needs to happen, rather than how to make it happen, and rely on the greater scale of our SaaS or cloud provider to respond more rapidly than we could if we were maintaining a traditional on-premises solution.

And finally, we can use cloud as the basis of an incubator strategy where an organisation may test a new idea using externalised resources, proving the business case before (potentially) moving to a more traditional internal deployment model.

One problem I’ve been thinking about recently is how to make our incredibly stable and reliable IT estates respond better to business change. Cloud and SaaS, with the ability to shape the flexibility and agility of our IT estate to meet what the business needs, might just be the tools we need to do this.

The price of regret

I learnt a new term at lunch the other day: regret cost. Apparently this is the cost incurred to re-platform or replace a tactical solution when it can no longer scale to support current demand. If we’d just built the big one in the first place, then we wouldn’t need to write off the investment in the tactical solution. An investment we now regret, apparently.

This attitude completely misses the point. The art of business is not to take the time to make a perfect decision, but to make a timely decision and make it work. Business opportunities are often only accessible in a narrow time window. If we miss the window then we can’t harvest the opportunity, and we might as well have not bothered.

Building the big, scalable, perfect solution in the first place might be more efficient from an engineering point of view. However, if we make the delivery effort so large that we miss the window of opportunity, then we’ve just killed any chance of helping the business to capitalise on the opportunity. IT has positioned itself as the department that says no, which does little to support a productive relationship with the business.
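
A rough sketch of that trade-off with invented numbers: even after writing off the tactical build and paying the so-called regret cost to re-platform, the value captured inside the window can dwarf what the “perfect” solution would have delivered.

```python
# Invented figures: compare shipping a tactical solution inside the opportunity
# window (and later re-platforming) against waiting for the strategic build.

opportunity_value_per_month = 100_000
window_months = 18                  # how long the opportunity lasts

tactical_build_cost = 150_000
tactical_months_to_deliver = 2
replatform_cost = 250_000           # the so-called "regret cost"

strategic_build_cost = 400_000
strategic_months_to_deliver = 12

def net_value(build_cost, months_to_deliver, extra_cost=0):
    months_in_market = max(0, window_months - months_to_deliver)
    return opportunity_value_per_month * months_in_market - build_cost - extra_cost

print(f"tactical then re-platform: {net_value(tactical_build_cost, tactical_months_to_deliver, replatform_cost):,.0f}")
print(f"strategic from day one:    {net_value(strategic_build_cost, strategic_months_to_deliver):,.0f}")
```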

Size the solution to match the business opportunity, and accept that there may need to be some rework in the future. Make the potential need for rework clear to the business so that there are no surprises. Don’t use potential rework in the future as a reason to do nothing. Or to force approval of a strategic infrastructure project which will deliver sometime in the distant future, a future which may never come.

While rework is annoying and, in an ideal world, a cost to be avoided, sometimes the right thing to do is to build a tactical solution that will need to be replaced. After all, the driver for replacing it is the value it’s generating for the business. What is there to regret? That we helped the business be successful? Or that we’re about to help the business be even more successful?
