Tag Archives: IT Strategy

How to cope with an IT transformation

Once started, an IT transformation is hard to stop. Such huge efforts – often involving investments of hundreds of millions, or even billions of dollars – take on a life of their own. Once the requirements have been scoped and IT has started the hard work of development, business’s thoughts often turn from anticipation to dread. How do we understand what we’re getting? How do we cope with business change between signing off the requirements and the solution entering production? Will the solution actually be able to support an operating and constantly evolving business?

Transformations take a lot of time and a huge amount of resources, giving them a life of their own within the business. It’s not uncommon for the team responsible for the transformation to absorb a significant proportion of the people and capital from the main business, often into the double-digit percentages. It’s also not uncommon for the time between kicking off the project and delivery of the first components into the business to be five years or more.

The world can change a lot in five years. Take Apple for example: sixty percent of the products they sell did not exist three years ago{{1}}. It’s not rare for the business to have a little buyer’s remorse once the requirements have been signed off and we sit waiting for the solution to arrive. Did we ask for the right thing? Will the platforms and infrastructure perform as expected? Are our requirements good enough for IT to deliver what we need? Will what we asked for be relevant when it’s delivered?

[[1]]60 percent of Apple’s sales are from products that did not exist three years ago @ asymco.com[[1]]

Apple quarterly sales by product

The business has placed a large bet – often putting the entire company’s life on the line – so it’s understandable to be a little worried, and the investment is usually large enough that the business is committed: there’s no backing out now. The decision to undertake the transformation has been made, our bets have been placed, and there’s no point regretting carefully considered decisions made in the past with the best evidence and information we could gather at the time. We should be looking forward, focusing on how we can best leverage this investment once it is delivered.

We can break our concerns into a few distinct groups: completeness, suitability, relevance and adaptability.

First, we tend to worry whether our requirements were complete. Did we give IT the information they need to do their job? Or were there holes and oversights in the requirements which will require interpretation by IT – interpretation which may or may not align with how the business conceived the requirement when we wrote down the bullet points?

Next, we are concerned that we asked for the right thing. I don’t know about you, but I find it hard to imagine a finished solution from tables, bullet points and process diagrams. And I know that if I’m having trouble, then you’re probably imagining a slightly different finished solution than I’m thinking of. And IT probably has a different picture in their heads again. Someone is bound to be disappointed when the final solution is rolled out.

Thirdly, we have relevance. Five years is a long time. Even three years is long, as Apple has shown us. Our requirements were conceived in a very different business environment to the one that the solution will be deployed into. While we probably did our best to guess what would change during the delivery journey, we can also be sure that some of our predictions will be wrong. How accurate our predictions are (which is largely a question of how lucky we were) will determine how relevant the solution will be. If our predictions were off the mark, then we might have a lot of work to do after the final release to bring the solution up to scratch.

Finally, we have adaptability. A business is not a fixed target, as it constantly evolves and adapts in response to the business environment it is situated in. Hopefully we specified the right flex-points – areas in the solution which will need to change rapidly in response to changing business need – back at the start of the journey. We don’t want our transformed IT estate to become instant legacy.

A lot of these concerns have already been addressed by ideas like rapid productionisation{{2}} and (gasp!) agile methodologies, but they’re solving a different problem. Once you have a transformation underway, advice that you should hire lots of Scrum masters will fall on deaf ears. While there’s a lot of good advice in these methodologies, our concern is coping with the transformation we have, not throwing away all effort to date to try something different.

[[2]]Rapid productionising @ Shermo[[2]]

So what can we do to help IT ensure that the transformed IT estate is the best that it can be?

We could try to test to success, making IT jump through even more hoops by creating more, and increasingly strenuous, tests to add to the acceptance criteria, but while faster and more intense might work for George Lucas{{3}}, it doesn’t add a lot of value in this instance. Our concerns are understanding the requirements we have and safeguarding the relevance of our IT estate in a rapidly evolving business environment. We’re less concerned that existing requirements are implemented correctly (we should have already done that work).

[[3]]Fan criticism of George Lucas: Ability as a film director @ Wookieepedia[[3]]

I can see two clear strategies for coping with the IT transformation we have. The first is to create a better shared understanding of what the final solution might look like (shared between business and IT, as well as between business stakeholders). The second is to start understanding how the future IT estate might need to evolve and adapt. Learnings from both of these activities can be fed back into the transformation to help improve the outcomes, as well as providing the business with a platform to communicate the potential scale and impact of the change to the broader user population.

There are a number of light-weight approaches to building and testing new user interfaces and workflows, putting the to-be requirements in the hands of the users in a very real and tactile way which enables them to understand what the world will look like post transformation. This needs to be more than UI wireframes or user storyboards. We need to trial new work practices, process improvements and decisioning logic. The team members at the coalface of the business also need to use these new tools in anger before we really understand their impact. Above all, they need time with these solutions, time to form an opinion, as I’ve written about before{{4}}.

[[4]]I’ve already told you 125% of what I know @ PEG[[4]]

Much like the retail industry, with its trial stores, we can create a trial solution to explore how the final solution should move and act. We’re less worried about the plumbing and infrastructure, as we’re focused on the layout and how the trial store is used. This trial solution can be integrated with existing operations and provided to a small user population – perhaps a branch in a bank, a single operations centre for back-office processing, or one factory operated by a manufacturer – where we can realise, measure, test and update our understanding of what the to-be solution should look like, bringing our business and technology stakeholders to a single shared understanding of what we’re trying to achieve.

Our trial solution need not be on the production platform, as we’re trying to understand how the final solution should work and be used, not how it should be implemented. Startups are already providing enterprise mash-up platforms{{5}} which let you integrate UI, process and decisioning elements into one coherent user interface, often in weeks rather than months or years. Larger vendors – such as IBM and Oracle – are already integrating these technologies into their platforms. New vendors are also emerging which offer BPM on demand via a SaaS model.

[[5]]Enterprise Mash-Ups defined at Wikipedia[[5]]

Concerns about the scalability and maintainability of these new technologies can be balanced against the limited scale and lifetime of our trial deployment. A trial operations centre in one city often doesn’t require 24×7 support; it’s perfectly capable of limping along with a nine-to-five phone number for someone on the development team. We can also always fail back to the older solution if the trial solution isn’t up to scratch.

Our second strategy might be to experiment with new ideas and wholly new models of operation, collecting data and insight on how the transformed IT estate might need to evolve once it becomes operational. This is the disruptive sibling of the incremental improvements in the trial solution. (Indeed, some of the insights from these experiments might even be tested in a trial solution, if feasible.)

In the spirit of experimental scenario planning, a bank might look to Mint{{6}} or Kiva{{7}}, while a retailer might look to Zara{{8}}. Or, even more interesting, you might look across industries, with a bank looking to Zara for inspiration, for example. The scenarios we identify might range from tactical plays through to major disruptions. What would happen if we took a different approach to planning{{9}}, as Tesco did{{10}}? Or if we, like Zara, focused on time to market rather than cost, and inverted how we think about our supply chain in the process{{11}}?

[[6]]Mint[[6]]
[[7]]Kiva[[7]]
[[8]]Zara[[8]]
[[9]]Inside vs. Outside @ PEG[[9]]
[[10]]Tesco is looking outside the building to predict customer needs @ PEG[[10]]
[[11]]Accelerate along the road to happiness @ PEG[[11]]

We can frame what we learn from these experiments in terms of the business drivers and activities they impact, allowing us to understand how the transformed IT estate would need to change in response. The data we obtain can be compiled and weighted to create a heat map which highlights potential change hotspots in the to-be IT estate – valuable information which can be fed back into the transformation effort – while the (measured, evaluated and updated) scenarios can be compiled into a playbook to prepare us for when the new IT estate goes live.
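To make that compilation concrete, here’s a minimal sketch of the weighting step – the scenarios, likelihoods and impact scores below are invented for illustration, not data from any real transformation:

```python
from collections import defaultdict

# Hypothetical scenario experiments: each scores the business activities
# it would impact (0-5), weighted by the scenario's estimated likelihood.
scenarios = [
    {"name": "quarterly regulation change", "likelihood": 0.6,
     "impacts": {"client onboarding": 3, "compliance reporting": 5}},
    {"name": "new direct-to-consumer channel", "likelihood": 0.3,
     "impacts": {"client onboarding": 4, "pricing": 2}},
]

# Compile the heat map: expected change pressure per business activity.
heat = defaultdict(float)
for scenario in scenarios:
    for activity, impact in scenario["impacts"].items():
        heat[activity] += scenario["likelihood"] * impact

# Hotspots first - the areas of the to-be estate most likely to need
# change after go-live, and so the flex points worth feeding back.
for activity, score in sorted(heat.items(), key=lambda kv: -kv[1]):
    print(f"{activity}: {score:.1f}")
```

The arithmetic is crude, but it turns a pile of scenario write-ups into a ranked list of change hotspots that business and IT can argue about together.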

Whatever we do, we can’t sit passively by, waiting for our new, transformed IT estate to be handed to us. Five years is a very long time in business, and if we want an IT estate that will support us into the future, then we need to start thinking about it now.

A prediction: many companies will start shedding IT architects in the next six to eighteen months

Business is intensely competitive these days. Under such intense pressure strategy usually breaks down into two things: do more of whatever is creating value, and do less of anything that doesn’t add value. This has put IT architecture in the firing line, as there seems to be a strong trend for architects to focus on technology and transformation, rather than business outcomes. If architects are not seen as taking responsibility for delivering a business outcome, then why does the business need them? I predict that business will start shedding the majority of their architects, just as they did in the eighties. Let’s say in six to eighteen months.

I heard a fascinating distinction the other day at breakfast. It’s the difference between “Architects” and “architects”. (That’s one with a little “a”, and the other with a large one.) It seems that some organisations have two flavours of architect. Those with the big “A” do the big thinking and the long meetings, they worry about the Enterprise, Application and Technology Architectures, and are skilled in the use of whiteboards. And those with the little “a” do the documenting and some implementation work, with Microsoft Visio and Word their tool of choice.

When did we start trying to define an “Architect” as someone who doesn’t have some responsibility for execution? That’s a new idea for me. I thought that this Architect-architect split was a nice nutshell definition of what seems to be wrong with IT architecture at the moment.

We know that the best architects engage directly with the business and take accountability for providing solutions and outcomes the business cares about. However, splitting accountability between “Architects” and “architects” creates a structure and operation we know is potentially inefficient and disconnected from what’s really important. If the business sees architects (with either a big or little “a”) as not being responsible for delivering an outcome, then why does the business need them?

There’s a lot of hand-wringing in the IT architecture community as proponents try to explain the benefits of architecture, and then communicate these benefits to the business. More often than not these efforts fall flat, with abstract arguments about governance, efficiency and business-technology alignment failing to resonate with the business.

“Better communication” might be pragmatic advice, but it ignores the fact that you need to be communicating something the audience cares about. And the business doesn’t care about governance, efficiency of the IT estate or business-technology alignment. You might: they don’t.

In my experience there are only three things that business does care about (and I generally work for the business these days).

  • Create a new product, service or market
  • Change the cost of operations or production
  • Create new interactions between customers and the company

And this seems to be the root of the problem. Neither IT efficiency, nor governance, nor business-technology alignment is on that list. Gartner even highlighted this in a recent survey when they queried more than 1,500 business and technology executives to find out their priorities going forward.

Top 10 Business and Technology Priorities in 2010

Businesses need their applications — and are willing to admit this — but do they need better technical infrastructure or SOA (whatever that is)? How does that relate to workforce effectiveness? Will it help sell more product? Eventually the business will reach a point where doing nothing with IT seems like the most pragmatic option.

There are a few classic examples of companies that get by while completely ignoring the IT estate. They happily continue using decades-old applications, tweaking operational costs or worrying about M&A, and making healthy profits all the while. Their IT systems are good enough and fully depreciated, so why bother doing anything?

So what is the cost of doing nothing? Will the business suffer if the EA team just up and left? Or if the business let the entire architecture team go? The business will only invest in an architecture function if having one provides a better outcome than doing nothing. The challenge is that architecture has become largely detached from the business it is supposed to support. Architects have forgotten that they work for a logistics company, a bank or a government department, and not for “IT”. The tail is trying to wag the dog.

Defining Architecture (that’s the one with a big “A”) as a group who think the big technological thoughts, and who attend the long and very senior IT vendor meetings, just compounds the problem. It sends a strong message to the business that architecture is not interested in helping the business with the problems it is facing. Technology and transformation are seen as more important.

It also seems that the business is starting to hear this message, which means that action can’t be far behind. Unless the architecture community wakes up and reorganises around what’s really important — the things that business cares about — then we shouldn’t be surprised if business starts shedding the IT architecture functions it sees as adding no value. I give it six to eighteen months.

Having too much SOA is a bad thing (and what we might do about it)

SOA enablement projects (like a lot of IT projects) have a bad name. An initiative that starts as a good idea to create a bit more flexibility in the IT estate often seems to end up mired in its own complexity. The problem is usually too much flexibility, as flexibility creates complexity, and complexity exponentially increases the effort required to manage and deliver the software. Without any solid guidance on how much flexibility to create (and where to create it), most SOA initiatives simply keep creating flexibility until either the project collapses under its own weight, or the projected development work to create all the services exceeds the available CAPEX budget. A little flexibility is good, but too much is bad. How can we scope the flexibility, pointing it where it’s most needed while preventing it from becoming a burden?

The challenge with SOA enablement is in determining how much flexibility to build into the IT estate. Some flexibility is good – especially if it’s focused on where the business needs it the most – but too much flexibility is simply another unnecessary cost. The last decade or so is littered with stories of companies whose SOA initiatives were either brought to an early close or canned as they had consumed all the cash the business was prepared to invest in a major infrastructure project. Finance and telecoms seem particularly prone to creating these gold-plated SOA initiatives. (How many shelf-ware SDFs – service delivery frameworks – do you know of?)

The problem seems to be a lack of guidance on how much flexibility to build, or where to put it. We sold the business on the idea that a flexible, service-oriented IT estate would be better than the evil monolithic applications of old, but the details of just how flexible the new estate would be were a little fuzzy. Surely these details can be sorted out in service discovery? And governance should keep service discovery on track! We set ourselves up by over-promising and under-delivering.

Mario Batali: Too much is never enough!

This much was clear: the business wanted agility, and agility requires flexibility. As flexibility comes from having more moving parts (services), we figured that creating more moving parts would create more agility. Service discovery rapidly became a process of identifying every bit of (reusable) functionality that we could pack into a service. More is better, or, as the man with the loud shoes says:

Too much is never enough!
Mario Batali

The problem with this approach is that it confuses flexibility and agility. It’s possible to be very flexible without being agile, and vice versa. Think of a formula one car: they’re fast and they’re agile (which is why driving them tends to be a young man’s game), yet they’re very stiff. Agility comes from keeping the weight down and being prepared to act quickly. This means keeping things simple, ensuring that we have the minimum set of moving parts required. They might have an eye for detail, such as nitrogen in the tyres, but unnecessary moving parts that might reduce reliability or performance are eliminated.

This gold-plated approach to SOA creates a lot of unrequired flexibility; this additional flexibility increases complexity, and the complexity becomes the boat anchor that slows you down and stops you from being agile. Turning the car is no longer a simple matter of tugging on the steering wheel, as we need governance to stop us from pulling the wrong lever in the bank of 500 identical levers in front of us.

It's really that simple!

We’ve made everything too complicated. Mario was wrong: too much is too much.

What we need is some guidance – a way of scoping and directing the flexibility we’re going to create. Governance isn’t enough, as governance is focused on stopping bad things from happening. We have a scoping problem. Our challenge is to understand what flexibility will be required in the future, and agreeing on the best way to support it.

To date I’ve been using a very fuzzy “business interest” metric for this, where services are decomposed until the business is no longer interested. The rationale is that we put the flexibility only where the business thinks it needs to focus. This approach works fairly well, but it relies too much on the tacit judgement of a few skilled business analysts and architects, making it too opaque and hard to understand for the people not involved in the decision-making process. It’s also hard to scale. We need something more deterministic and repeatable.

Which brings me to a friend’s MBA thesis, which he passed to me the other week. It’s an interesting approach to building business cases for IT solutions, one based on real options.

The problem with the usual approaches to building a business case, using tools like net present value (NPV) and discounted cash flow, is that we assume the world doesn’t change after the decision to build the solution (or not) has been made. They don’t factor in the need to change a solution once it’s in the field, or even during development.

The world doesn’t work this way: the solution you approved in yesterday’s business environment will be deployed into a radically different business environment tomorrow. This makes it hard to justify the additional investment required for a more flexible SOA based solution, when compared to a conventional monolithic solution. The business case doesn’t include flexibility as a factor, so more flexible (and therefore complex and expensive) solutions lose to the cheaper, monolithic approach.

Real options address this by pushing you down a scenario-planning-based approach. You estimate the future events that you want to guard against, and their probabilities, creating a set of possible futures. Each event presents you with options to take action. The action, for example, might be to change, update or replace components in the solution to bring them in line with evolving business realities. The options are – in effect – flex-points that we might design into our solution’s SOA. The real options methodology enables us to ascribe costs to these future events and create a decision tree that captures the benefits of investing in specific flex points, all in a clear and easily understandable chain of reasoning.

The decision tree and options provide us with a way to map out where to place flex points in the SOA solution. They also provide us with strong guidance on how much flexibility to introduce. And this is the part I found really interesting about the approach: it also provides us with a nice framework to govern the evolution of the SOA solution, as changes are (generally) only made when an option is taken – when its business case is triggered.
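As a toy illustration of how one branch of that decision tree might be valued – all the probabilities and costs below are made up, and a real analysis would also discount future cash flows – consider a single flex point evaluated as an option:

```python
# One future event: a regulatory change forces the pricing engine to change.
p_event = 0.4               # estimated probability over the solution's life
cost_rigid_rework = 5.0e6   # rework cost if the estate has no flex point
cost_flex_point = 1.0e6     # upfront cost of designing in the flex point
cost_exercise = 0.5e6       # cost to exercise the option (swap the component)

# Expected cost without the flex point vs. with it.
ev_rigid = p_event * cost_rigid_rework
ev_flex = cost_flex_point + p_event * cost_exercise

print(f"Expected cost, rigid:    ${ev_rigid:,.0f}")   # $2,000,000
print(f"Expected cost, flexible: ${ev_flex:,.0f}")    # $1,200,000

# Build the flex point only where ev_flex < ev_rigid. Repeat for each
# event in the decision tree and you have a defensible scope for the
# SOA's flexibility, rather than gold-plating everything.
```

Drop the probability to 0.1 and the rigid solution wins, which is exactly the point: the business case, not technical purity, decides where the flexibility goes.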

It’s a bit like those formula one cars. A friend of mine used to work for one F1 manufacturer designing and testing camshafts. These camshafts had to fall within a 100,000 lifetime revolution window. An over-designed camshaft was unnecessary weight, while an under-designed one meant that you wouldn’t win (or possibly even finish) the race. Work it out: 100,000 revolutions is a tiny window for an F1 car, given the length of a race.
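A back-of-envelope check, using my own assumed numbers rather than anything from that engineering team – say an 18,000 rpm engine, a camshaft turning at half crank speed, and a 90-minute race:

```python
crank_rpm = 18_000        # assumed peak engine speed
cam_rpm = crank_rpm / 2   # a camshaft turns at half crankshaft speed
race_minutes = 90         # assumed race duration

cam_revs_per_race = cam_rpm * race_minutes
print(f"Camshaft revolutions per race: {cam_revs_per_race:,.0f}")  # 810,000

# A 100,000 revolution design window is roughly an eighth of a single
# race distance - not much margin between too heavy and did-not-finish.
```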

An approach like real options helps us ensure that we only have the flexibility required in the solution, and that it is exactly where it is required. Not too much, and not too little. Just enough to help us win the race.

The IT department we have today is not the IT department we’ll need tomorrow

The IT departments many of us work in today (either as an employee or consultant) are often the result of thirty or more years of diligent labour. These departments are designed, optimised even, to create IT estates populated with large, expensive applications. Unfortunately these departments are also looking a lot like dinosaurs: large, slow and altogether unsuited to the new normal. The challenge is to reconfigure our departments, transforming them from asset management functions into business (or business-technology) optimisation engines. This transformation should be of keen interest to all of us, as it’s going to drive a dramatic change in staffing profiles which will, in turn, affect our own jobs in the not so distant future.

Delivering large IT solutions is a tricky business. They’re big. They’re expensive. And the projects to create them go off the rails more often than we’d like to admit. IT departments have been built to minimise the risks associated with delivering and operating these applications. This means governance, and usually quite a lot of it. Departments which started off as small-scale engineering functions soon picked up an administrative layer responsible for the mechanics of governance.

More recently we’ve been confronted with the challenge of managing the dependencies and interactions between IT applications. Initiatives like straight-through processing require us to take a holistic, rather than a pieces-parts, approach, and we’re all dealing with the problem of having one of each application or middleware product, as well as a few we brewed in the back room ourselves. Planning the operation and evolution of the IT estate became more important, and we picked up an enterprise architecture capability to manage the evolution of our IT estate.

It’s common to visualise these various departmental functions and roles as a triangle (or a pyramid, if you prefer). At the bottom we have engineering: the developers and other technical personnel who do the actual work to build and maintain our applications. The next layer up is governance, the project and operational administrators who schedule the work and check that it’s done to spec. Second from the top are the planners, the architects responsible for shaping the work to be done as well as acting as design authority. Capping off the triangle (or pyramid) is the IT leadership team who decide what should be done.

The departmental skills triangle

While specific techniques and technologies might come and go, the overall composition of the triangle has remained the same. From the sixties and seventies through to even quite recently, we’ve staffed our IT departments with many technical doers, somewhat fewer administrators, a smaller planning team, and a small IT leadership group. The career path for most of us has been a progression from the bottom layers – when we were fresh out of school – to the highest point in the triangle that we can manage.

The emergence of off-shore and outsourcing put a spanner in the works. We all understand the rationale: migrate the more junior positions – the positions with the least direct (if any) contact with the business proper – to a cheaper country. Many companies under intense cost pressure broke the triangle in two, keeping the upper planning and decision roles, while pushing the majority of the manage roles and all the do roles out of the country, or even out of the company.

Our first attempt at out-sourcing

Ignoring whether or not this drive to externalise the lower roles provided the expected savings, what it did do is break the career ladder for IT staff. Where does your next generation of senior IT personnel come from if you’ve pushed the lower ranks out of the business? Many companies found themselves with an awkward skills shortage a few years into an outsourcing / off-shore arrangement, as they were no longer able to train or promote senior personnel to replace those who were leaving through natural attrition.

The solution to this was to change how we break up the skills triangle; rather than a simple horizontal cut, we took a slice down the side. Retaining a portion of all skills in-house allows companies to provide a career path and on-the-job training for their staff.

A second, improved, go at out-sourcing

Many companies have tweaked this model, adding a bulge in the middle to provide a large enough resource pool to manage both internal projects, as well as those run by out-sourced and off-shore resources.

Factoring in the effort required to manage out-sourced projects

This model is now common in a lot of large companies, and it has served us well. However, the world has a funny habit of changing just when you’ve got everything working smoothly.

The recent global financial crisis has fundamentally changed the business landscape. We are experiencing not merely another turn of the business cycle, but a restructuring of the economic order. Many are even talking about the emergence of a new normal. The impact this will have on how we run our businesses (and our IT departments) is still being discussed, but we can see the outline of this impact already.

Companies are becoming more focused, while leaning more heavily on partners and services companies (BPO, out-sourcers, consultants, and so on) to cover those areas of the business they don’t want to focus on. We can see this from the global companies who have effectively moved to a franchise model, through to the small end of town where startups are using on-line services such as Amazon S3, rather than building internal capabilities. While this trend might have initially started as a cost saving, most of the benefit is in management time saved, which can then be used to focus on more important issues. We’re all finding that the limiting factor in our business is management time, so being able to hand off the management of less important tasks can help provide that edge you need.

We’re also seeing faster business change: what used to take years now takes months, or even weeks. The constant value-chain optimisation we’ve been working on since the 70s has finally culminated in product and regulatory life-cycles that change faster than we can keep up. Nowhere is this more evident than in the regulated industries (finance, utilities …), where updates in government regulation have changed from a generational to a quarterly occurrence as governments attempt to use regulation change to steer the economic boat.

Money is also becoming (or has become) more expensive, causing companies and deals to operate with less leverage. This means that there is less capital available for major projects, pushing companies to favour renting over buying, as well as creating a preference for smaller, incremental change over the major business transformation of the past.

And finally, companies are starting to take a truly global outlook and operate as one cohesive business across the globe, rather than as a family of cloned businesses which operate more-or-less independently in each region.

We can draw a few general conclusions on the potential impact on IT departments of these trends.

  • The increased reliance on partners, the broader partner ecosystem this implies, and an increasingly global approach to business will create more complex operational environments, increasing the importance of planning the IT estate and steering a company’s IT in the right direction.
  • The need to reduce leverage, and free up working capital, is pushing companies toward BPO and SaaS solutions, rather than the traditional on-premises solutions, where the solution provider is paid per-seat, or might even be only paid a success fee.
  • The need for rapid project turn-around is pushing us toward running large portfolios of small projects, rather than a small number of large projects.
  • A lot of the admin work we used to do is now baked into web delivered solutions (BaseCamp et al).

This will trigger us to break up the skills triangle in a different way.

A skills/roles triangle for the new normal

While we’ll still take a slice down the side of the triangle, the bulge will move to the ends of the slice, giving it a skinny waist. The more complex operational environment means that we need to beef up planning (though we don’t want to get all dogmatic about our approach, as existing asset-centric IT planning methodologies won’t work in the new normal). A shift to large numbers of small projects (where the projects are potentially more technically complex) means that we’ll beef up our internal delivery capability, providing team leads with more autonomy. The move to smaller projects also means that we can reduce our administration and governance overhead.

We’ll replace some skills with automated (SaaS) solutions. Tools like BaseCamp will enable us to devolve responsibility for reporting and management to the team at the coalface. It will also reduce the need to develop and maintain infrastructure. Cloud technology is a good example of this, as it takes a lot of the tacit knowledge required to manage a fleet of servers and bakes it into software, placing it in the hands of the developers. Rumor has it that a cloud admin can support 10,000 servers to a more traditional admin’s 500.

And finally, our suppliers act as a layer through the middle, a flex resource for us to call on. They can also provide us with a broader, cross-industry view, of how to best leverage technology.

This thinning out of the middle ranks is part of a trend we’re seeing elsewhere. Web 2.0/E2.0 et al are causing organisations to remove knowledge workers — the traditional white-collar middle layers of the organisation — leaving companies with a strategy/leadership group and task workers.

Update: Andy Mulholland has an interesting build on this post over at the Capgemini CTO blog. I particularly like the Holm service launched by Ford and Microsoft, a service that it’s hard to imagine a traditional IT department fielding.

With cloud computing, the world is not flat

Does location matter? Or, put another way, is the world no longer flat? Many cloud and SaaS providers work under the assumption that data should be stored wherever is most efficient from an application performance point of view, ignoring political considerations. This runs counter to the many companies and governments who care greatly where their data is stored. Have we entered a time where location does matter, not for technical reasons, but for political ones? Is globalisation (as a political thing) finally starting to impact IT architecture and strategy?

Just who is taking your order?

Thomas Friedman‘s book, The World is Flat, contained a number of stories which were real eye-openers. The one I remember the most was the McDonald’s drive-through. The idea was simple: once you’ve removed direct physical contact from the ordering process, then it’s more efficient to accept orders from a contact centre than from within the restaurant itself. We could even locate that contact centre in a cheaper geography, such as another state, or even another country.

Telecommunications made the world flat, as cheap telecommunications allows us to locate work wherever it is cheapest. The opportunity for labour arbitrage this created drove offshoring through the late nineties and into the new millennium. Everything from call centres to tax returns and medical image diagnosis started to migrate to cheaper geographies. Competition to be the cheapest and most efficient service provider, rather than location, would determine who did the work. The entire world would compete on a level playing field.

In the background, whilst this was happening, enterprise applications went from common to ubiquitous. Adoption was driven by the productivity benefits the applications brought, which started off as a source of differentiation, but has now become one of the many requirements of being in business. SaaS and cloud are the most recent step in this evolution, leveraging the global market to create solutions operating at such a massive scale that they can provide price points and service levels which are hard, if not impossible, for most companies to achieve internally.

The growth of the U.S. enterprise application market (via INPUT)

Despite the world being laser levelled within an inch of its life, many companies are finding it difficult to move their operations to the cost-effective nirvana that is cloud and SaaS services. Location matters, it seems. Not for technical reasons, but for political ones.

Where we store our assets is important. Organisations want to put their assets somewhere safe, because without assets these organisations don’t amount to much. Companies want to keep their information — their confidential trade secrets — hidden from prying eyes. Governments need to ensure they have the trust of their citizens by respecting their privacy. (Not to mention the skullduggery that is international relations.) While communications technology has made it incredibly easy to move this information around and keep it secure, it has yet to solve the political problem of ensuring that we can trust the people responsible for safeguarding our assets. And all these applications we have created — the traditional on-premises applications, as well as the hosted, SaaS and cloud versions — are really just asset management tools.

We’ve reached a point where one of the larger hidden assumptions of enterprise applications has been exposed. Each application was designed to live and operate within a single organisation. This organisation might be a company, or it might be a country, or it might be some combination of the two. The application you select to manage your data determines the political boundary it lives within. If you use any U.S. SaaS or cloud solution provider to manage your data, then your data falls under U.S. judicial discovery laws, regardless of where you yourself are located. If your data transits through the U.S., then assume that the U.S. government has a copy. The world might be flat, but where you store your assets and where you send them still matters.

Country-specific regulations governing privacy and data protection vary greatly.
Global data protection heat map (via Forrester)

We can already see some moves by the vendors to address this problem. Microsoft, for example, has developed a dedicated cloud for the U.S. government, known as BPOS Federal, which is designed to meet the government’s stringent security and privacy standards. Amazon has also dedicated a portion of the cloud it runs to the EU, locating it there for similar reasons.

If we consider enterprise applications to be asset management tools rather than productivity tools, then ideas like private clouds start to make a lot of sense. Cloud technology reifies a lot of the knowledge required to configure and manage a virtualised environment in software, eliminating the data centre voodoo and empowering the development teams to manage the solutions themselves. This makes cloud technology simply a better asset management tool, but we need the freedom to locate the data (and therefore the application) where it makes the most sense from an asset management point of view. Sometimes this might imply a large, location-agnostic, public cloud. Other times it might require a much smaller private cloud located within a specific political boundary. (And the need to prevent some data even transiting through a few specific geographies – requiring us to move the code to the data, rather than the data to the code – might be the killer application that mobile agents have been waiting for.)

What we really need are meta-clouds: clouds created by aggregating a number of different clouds, just as the Internet is a network of separate networks. While the clouds would all be technically similar, each would be located in a different political geography. This might be inside vs. outside the organisation, or in different states, or even different countries. The data would be stored and maintained where it made the most sense from an asset management point of view, with few technical considerations, the meta-cloud providing a consistent approach to locating and moving our assets within and across individual clouds as we see fit.
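A sketch of what a meta-cloud’s placement policy might look like – the member clouds, jurisdictions and policy rules here are invented to illustrate the idea, not a real API:

```python
from dataclasses import dataclass

@dataclass
class Cloud:
    name: str
    jurisdiction: str   # the political boundary the cloud sits inside
    public: bool

# Hypothetical member clouds of a meta-cloud.
clouds = [
    Cloud("big-public-us", jurisdiction="US", public=True),
    Cloud("regional-eu", jurisdiction="EU", public=True),
    Cloud("on-premises-au", jurisdiction="AU", public=False),
]

def place(data_jurisdiction: str, must_stay_private: bool) -> Cloud:
    """Pick a member cloud where the data may live, on political
    rather than technical grounds."""
    for cloud in clouds:
        if cloud.jurisdiction != data_jurisdiction:
            continue
        if must_stay_private and cloud.public:
            continue
        return cloud
    raise LookupError("no member cloud satisfies the placement policy")

# EU customer records stay inside the EU; trade secrets stay in-house.
print(place("EU", must_stay_private=False).name)  # regional-eu
print(place("AU", must_stay_private=True).name)   # on-premises-au
```

The technology is consistent across the members; only the geography, and therefore the politics, changes.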

Reducing costs is not the only benefit of cloud computing & SaaS

The wisdom of the crowd seems to have decided that both cloud computing and its sibling SaaS are cost plays. You engage a cloud or SaaS vendor to reduce costs, as their software utility has the scale to deliver the same functionality at a lower price point than you could do yourself.

I think this misses some of the potential benefits that these new delivery models can provide, from reducing your management overhead, allowing you to focus on more important or pressing problems, through to acting as a large flex resource or providing you with a testbed for innovation. In an environment where we’re all racing to keep up, the time and space we can create through intelligently leveraging cloud and SaaS solutions could provide us with the competitive advantage we need.

Samuel Insull

Cloud and SaaS are going to take over the world, or so I hear. And it increasingly looks that way, from Nicholas Carr‘s entertaining stories about Samuel Insull through to Salesforce.com, Google and Amazon‘s attempts to box up SaaS and cloud for easy consumption. These companies’ massive economies of scale enable them to deliver commoditized functionality at a dramatically lower price point than most companies could achieve with even the best on-premises applications.

This simple fact causes many analysts to point out the folly of creating a private cloud. While a private cloud enables a company to avoid the security and ownership issues associated with a public service, it will never be able to realise the same economies of scale as its public brethren. It’s these economies of scale that enable companies like Google to devote significant time and effort into finding new and ever more creative techniques to extract every last drop of efficiency from their data centres, techniques which give them a competitive advantage.

I’ve always had problems with this point of view, as it ignores one important fact: a modern IT estate must deliver more than efficiency. Constant and dramatic business change means that our IT estate must be able to be rapidly reconfigured to support an ever-evolving business environment. This might be as simple as scaling up and down, in line with changing transaction volumes, but it might also involve rewriting business rules and processes as the organisation enters and leaves countries with differing regulation regimes, as well as adapting to mergers, acquisitions and divestments.

Once we look beyond cost, a few interesting potential uses for cloud and SaaS emerge.

First, we can use cloud as a tool to increase the flexibility of our IT estate. Using a standard cloud platform, such as an Amazon Machine Image, provides us with more deployment options than more traditional approaches. Development and testing can be streamlined, compressing development and testing time, while deployed applications can be migrated to the cloud instance which makes the most sense. We might choose to use public cloud for development and testing, while deploying to a private cloud under our own control to address privacy or political concerns. We might develop, test and deploy all into the public cloud. Or we might even use a hybrid strategy, retaining some business functionality in a private cloud, while using one or more public clouds as a flex resource to cope with peak loads.
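The flex-resource idea in that last option is simple enough to sketch – the capacities and names here are, again, just illustrative:

```python
def plan_capacity(expected_load: int, private_capacity: int) -> dict:
    """Hybrid strategy sketch: the private cloud carries the base load,
    and anything beyond its capacity bursts to a public cloud."""
    private = min(expected_load, private_capacity)
    public_burst = max(0, expected_load - private_capacity)
    return {"private": private, "public_burst": public_burst}

# A normal trading day: everything fits on the private cloud.
print(plan_capacity(expected_load=800, private_capacity=1_000))
# {'private': 800, 'public_burst': 0}

# A peak period: the overflow is rented from the public cloud, then released.
print(plan_capacity(expected_load=1_400, private_capacity=1_000))
# {'private': 1000, 'public_burst': 400}
```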

Second, we can use cloud and SaaS as tools to increase the agility of our IT estate. By externalising the management of our infrastructure (via cloud), or even the management of entire applications (via SaaS), we can create time and space to worry about more important problems. This enables us to focus on what needs to happen, rather than how to make it happen, and rely on the greater scale of our SaaS or cloud provider to respond more rapidly than we could if we were maintaining a traditional on-premises solution.

And finally, we can use cloud as the basis of an incubator strategy where an organisation may test a new idea using externalised resources, proving the business case before (potentially) moving to a more traditional internal deployment model.

One problem I’ve been thinking about recently is how to make our incredibly stable and reliable IT estates respond better to business change. Cloud and SaaS, with the ability to shape the flexibility and agility of our IT estate to meet what the business needs, might just be the tools we need to do this.

The price of regret

I learnt a new term at lunch the other day: regret cost. Apparently this is the cost incurred to re-platform or replace a tactical solution when it can no longer scale to support current demand. If we’d just built the big one in the first place, then we wouldn’t need to write off the investment in the tactical solution. An investment we now regret, apparently.

This attitude completely misses the point. The art of business is not to take the time to make a perfect decision, but to make a timely decision and make it work. Business opportunities are often only accessible in a narrow time window. If we miss the window then we can’t harvest the opportunity, and we might as well have not bothered.

Building the big, scalable, perfect solution in the first place might be more efficient from an engineering point of view. However, if we make the delivery effort so large that we miss the window of opportunity, then we’ve just killed any chance of helping the business to capitalise on the opportunity. IT has positioned itself as the department that says no, which does little to support a productive relationship with the business.

Size the solution to match the business opportunity, and accept that there may need to be some rework in the future. Make the potential need for rework clear to the business so that there are no surprises. Don’t use potential rework in the future as a reason to do nothing. Or to force approval of a strategic infrastructure project which will deliver sometime in the distant future, a future which may never come.

While rework is annoying and, in an ideal world, a cost to be avoided, sometimes the right thing to do is to build a tactical solution that will need to be replaced. After all, the driver for replacing it is the value it’s generating for the business. What is there to regret? That we helped the business be successful? Or that we’re about to help the business be even more successful?


Extreme Competition

I’ve uploaded another presentation to SlideShare. (Still trying to work through the backlog.) This is something that I had been doing for banks and insurance companies as part of their “thought leadership” sessions.

A new company enters the market in late 2008, LGM Wealth Management, which has found a new way of spinning existing solutions and technologies to provide it with capabilities an order of magnitude better than anyone else’s.

  • Time to Revenue < 5 days
  • Cost to Serve < ½ industry average
  • New Product Introduction < 5 days
  • Infinite customization

How do you react?

What we’re doing today is not what we did yesterday

Telxon hand unit

The business of IT has changed radically in the last few years. Take Walmart for example. In the 80s Walmart laid the foundations for its future growth by fielding a supply chain data warehouse. The insight the data warehouse provided fueled their amazing growth to become the largest retailer in the world. However, our focus has moved on from developing applications. More recently Walmart fielded the Telxon, a barcode scanner with a wireless link to the corporate back-end. This device is the front end of a distributed solution which has let Walmart devolve buying decisions to the team walking the shop floor.

For a long time IT departments have defined themselves by their ability to deliver major applications into the enterprise. CRM, MRP, even ERP; all the three letter acronyms. For a long time this has been the right thing to do. Walmart’s data warehouse, to return to our example, was a large application which was a significant driver in the company’s outlier performance for the next couple of decades.

The world has changed a lot since that data warehouse went operational. First the market for enterprise applications grew into the mature market we see today. If you have a well defined problem—an unsupported business activity—then a range of vendors will line up to provide you with off-the-shelf solutions. Next we saw a range of non-technology options emerge, from business process outsourcing (BPO) and leveraging partnerships, through to emerging software-as-a-service (SaaS) solutions.

What used to be a big problem—fielding a large bespoke (or even off-the-shelf) application—has become a (relatively) small one. Take CRM (customer relationship management) as one example. What was a multi-year project requiring an investment of tens of millions of dollars to deploy a best-of-breed on-premises solution has become a few million dollars and a matter of months to field a SaaS solution. And the SaaS solutions seem to be pulling ahead in the feature-function war; Salesforce.com (one of the early SaaS CRM solutions) is now seen as the market leader (check with your favorite analyst).

Nor has business been standing still while technology has been marching forward. The productivity improvements provided by the last generation of enterprise applications have created the time and space for business stakeholders to solve more difficult problems. That supply chain solution Walmart deployed was the first of many, automating most (if not all) of the mundane tasks across the supply chain. Business process methodologies such as LEAN (derived from the Toyota Production System) and Six Sigma (from GE) then rolled through the business, ripping all the fat from our supply chains as they went past. The latest focus has been category management: managing groups of products as separate businesses and, in many cases, handing responsibility for managing the category back to the supplier.

Which brings us back to the Telxon. If we’ve all been on the same journey—fielding a complete set of applications, optimizing our business processes, and deploying the latest, best practice, management techniques—then how do we differentiate? Walmart realized that, all things being equal, it was their ability to respond to supply chain exceptions that would provide them with an edge. As a retailer, this means responding to stock-outs on the shop floor. The only way to do this in a timely manner is to empower the people walking the floor to make a procurement decision when they see fit. Walmart’s solution was the Telxon.

The Telxon is an interesting device as it reveals an astonishing amount of information: the quantity that should be on the shelf, the availability from the nearest warehouse, the retail price, and even the markup. It also empowers the employee to place an order for anything from a pallet to a truckload.

Writer Charles Platt during his stint as a Wal-Mart employee in Flagstaff, Ariz.

As one journalist found:

We received an inspirational talk on this subject, from an employee who reacted after the store test-marketed tents that could protect cars for people who didn’t have enough garage space. They sold out quickly, and several customers came in asking for more. Clearly this was a singular, exceptional case of word-of-mouth, so he ordered literally a truckload of tent-garages, “Which I shouldn’t have done really without asking someone,” he said with a shrug, “because I hadn’t been working at the store for long.” But the item was a huge success. His VPI was the biggest in store history—and that kind of thing doesn’t go unnoticed in Arkansas.

Charles Platt, Fly on the Wall (7th Feb 2009), New York Post

Clearly the IT world has moved on since that first data warehouse went live in Arkansas. Enterprise applications have been transformed from generators of competitive advantage into efficient sources of commodity functionality. Technology’s ability to create value should be focused on how we effectively support knowledge workers and the differentiation they create. These solutions only have a passing resemblance to the application monoliths of the past. They’re distributed, rather than centralized, pulling information from a range of sources, including partner and public sources. They’re increasingly real time, in the Twitter sense of the term, pulling current transactional data in as needed rather than working from historical data and relying on overnight ETLs. They’re heterogeneous, integrating a range of technologies as well as changes in business processes and employee workplace agreements, all brought together for delivery of the final solution. And, most importantly, they’re not standalone n-tier applications like we built in the past.

But while the IT world has moved on, it seems that many of our IT departments haven’t. Our heritage as application factories has us focused on managing applications, rather than technology, actively preventing us from creating this new generation of solutions. This behavior is ingrained in our organizations, with a large number of people—from architects through project managers to senior management—measuring their worth by the size of the project (in terms of CAPEX and OPEX required, or head count) that they are involved in, with the counterproductive behavior that this creates.

In a world where solutions are shrinking and becoming more heterogeneous (even to the extent of becoming increasingly cross-discipline), our inability to change ourselves is the biggest thing holding us back.

We’re making our lives too complicated

Has SOA (Service Oriented Architecture) finally jumped the shark? After years of hype and failed promises, SOA seems to be in trouble. In a few short months it’s gone from IT’s great saviour to something some people think is better forgotten.

The great promise of SOA was to deliver an IT estate which is more agile and cost effective than was possible with other, more conventional, approaches. By breaking our large problems into a set of much smaller ones, we would see more agility and a lower total cost of ownership. The agility would come from the more flexible architecture provided by SOA’s many moving parts. The lower cost of ownership would come from reuse of many of these moving parts. Many companies bought into this promise, and started major SOA transformation programs to “SOA enable their business”. Once the program of work was delivered they would have a shiny new, flexible and cost-effective IT estate. The business would be thrilled, and the old tensions between business and IT would just melt away. More often than not, though, the business wasn’t thrilled, as the program failed to deliver the promised benefits.

The problem, it seems, is that we’re focused on creating cathedrals of technology. Cathedrals were the result of large bespoke development efforts. The plans often consisted of only a rough sketch on a scrap of paper, before a large number of skilled craftsmen were engaged. The craftsmen broke the problem into many small parts that were then laboriously assembled into the final structure, often adjusting the parts to fit in the process. While this process created a number of spectacular buildings, the journey from initial conception to completed build was long and challenging.

The lack of engineering pragmatism frequently resulted in cathedrals collapsing before they were finished, often multiple times. The reason we know that a flying buttress worked was because it hadn’t failed, yet. People died when a structure collapsed, and there was no way of telling if the latest version of the structure was about to collapse. The lengthy development process often lasted generations, passing through the stewardship of multiple architects with no clear end in sight. Many cathedrals, such as the one in New York, are still considered unfinished.

A lot of SOA projects give off a strong smell of cathedral. They are being constantly re-architected—while still in development—to cope with the latest business exception or demand. When they’re introduced to the hard reality of supporting real solutions, bits of them collapse and need to be rebuilt to support our new (and improved) understanding of what will be demanded of them. And, finally, many of them are declared “finished” even though they are never fully baked, just so we can close that chapter in our company’s history and move on to the next problem.

Modern approaches to building construction take a different approach. A high-level plan is created to capture the overall design for the building. The design is then broken into a small number of components, with the intention of using bespoke craftsmen for the fine details that will make the building different, while leveraging large, commoditized, pre-fabricated components for the supporting structures that form the majority of the building. Construction follows a clear timetable, with each component—from the largest pre-fabricated panel through to the smallest detail—integrated into the end-to-end solution as it is delivered. Complexity and detail are added only where needed, with cost-effective commoditized approaches minimizing complexity elsewhere. A clear focus on the end goal is maintained throughout the effort, while clear work practices focused on delivering to the deadline ensure that the process is carried out with a minimum of fuss (and no loss of life).

The problem, it seems, is that we’re confusing agility with flexibility. The business is asking for agility; the world is changing faster than ever and the business needs to be in a position to react to these changes. Agility, or so our thinking goes, requires flexibility, so to provide a lot of agility we need to provide a lot of flexibility. Very soon we find ourselves breaking the IT estate (via our favorite domain model) into a large number of small services. These small parts will provide a huge amount of flexibility, therefore problem solved!

This misses the point. Atomizing the business in this way creates overhead, and the overhead soon swamps any benefit. The effort to define all these services is huge. Add a governance process—since we need governance to manage all the complexity we created—and we just amplify the effect of this overhead. Our technically pure approach to flexibility is creating too much complexity, and our approach to managing this complexity is just making the problem worse.

We need to think more like the architect of the modern prefabricated building. Have a clear understanding of how the business will use our building. Leverage prefabricated components (applications or SaaS) where appropriate; applications are still the most efficient means of delivering large, undifferentiated slabs of functionality. And add complexity only in those differentiating areas where it is justified, providing flexibility only where the business needs it. In the end, creating good software is about keeping it simple. If it’s simple, it gets done quickly and can be maintained more readily.

Above all, favor architectural pragmatism over architectural purity. The point of the architecture is to support the business, not to be an object of beauty.