
Having too much SOA is a bad thing (and what we might do about it)

SOA enablement projects (like a lot of IT projects) have a bad name. An initiative that starts as a good idea to create a bit more flexibility in the IT estate often seems to end up mired in its own complexity. The problem is usually too much flexibility, as flexibility creates complexity, and complexity exponentially increases the effort required to manage and deliver the software. Without any solid guidance on how much flexibility to create (and where to create it), most SOA initiatives simply keep creating flexibility until either the project collapses under its own weight, or the projected development work to create all the services exceeds the available CAPEX budget. A little flexibility is good, but too much is bad. How can we scope the flexibility, pointing it where it’s most needed while preventing it from becoming a burden?

The challenge with SOA enablement is in determining how much flexibility to build into the IT estate. Some flexibility is good – especially if it’s focused on where the business needs it the most – but too much flexibility is simply another unnecessary cost. The last decade or so is littered with stories of companies whose SOA initiatives were either brought to an early close or canned once they had consumed all the cash the business was prepared to invest in a major infrastructure project. Finance and telecoms seem particularly prone to creating these gold-plated SOA initiatives. (How many shelf-ware SDFs – service delivery frameworks – do you know of?)

The problem seems to be a lack of guidance on how much flexibility to build, or where to put it. We sold the business on the idea that a flexible, service-oriented IT estate would be better than the evil monolithic applications of old, but the details of just how flexible the new estate would be were a little fuzzy. Surely these details can be sorted out in service discovery? And governance should keep service discovery on track! We set ourselves up by over-promising and under-delivering.

This much was clear: the business wanted agility, and agility requires flexibility. As flexibility comes from having more moving parts (services), we figured that creating more moving parts would create more agility. Service discovery rapidly became a process of identifying every bit of (reusable) functionality that we could pack into a service. More is better, or, as the man with the loud shoes says:

Too much is never enough!
Mario Batali

The problem with this approach is that it confuses flexibility with agility. It’s possible to be very flexible without being agile, and vice versa. Think of a Formula One car: they’re fast and they’re agile (which is why driving them tends to be a young man’s game), yet they’re also very stiff. Agility comes from keeping the weight down and being prepared to act quickly. This means keeping things simple, ensuring that we have the minimum set of moving parts required. The teams have an eye for detail, such as nitrogen in the tyres, but any unnecessary moving part that might reduce reliability or performance is eliminated.

This gold-plated approach to SOA creates a lot of unrequired flexibility; the additional flexibility increases complexity, and the complexity becomes the boat anchor that slows you down and stops you from being agile. Turning the car is no longer a simple matter of tugging on the steering wheel, as we need governance to stop us from pulling the wrong lever in the bank of 500 identical levers in front of us.

It's really that simple!

We’ve made everything too complicated. Mario was wrong: too much is too much.

What we need is some guidance – a way of scoping and directing the flexibility we’re going to create. Governance isn’t enough, as governance is focused on stopping bad things from happening. We have a scoping problem. Our challenge is to understand what flexibility will be required in the future, and to agree on the best way to support it.

To date I’ve been using a very fuzzy “business interest” metric for this, where services are decomposed until the business is no longer interested. The rationale is that we put the flexibility only where the business thinks it needs to focus. This approach works fairly well, but it relies too much on the tacit judgement of a few skilled business analysts and architects, making it opaque and hard to understand for the people not involved in the decision making process. It’s also hard to scale. We need something more deterministic and repeatable.

Which brings me to a friend’s MBA thesis, which he passed to me the other week. It’s an interesting approach to building business cases for IT solutions, one based on real options.

The problem with the usual approaches to building a business case, using tools like net present value (NPV) and discounted cash flow, is that they assume the world doesn’t change after the decision to build the solution (or not) has been made. They don’t factor in the need to change a solution once it’s in the field, or even during development.

The world doesn’t work this way: the solution you approved in yesterday’s business environment will be deployed into a radically different business environment tomorrow. This makes it hard to justify the additional investment required for a more flexible SOA based solution, when compared to a conventional monolithic solution. The business case doesn’t include flexibility as a factor, so more flexible (and therefore complex and expensive) solutions lose to the cheaper, monolithic approach.
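To make that concrete, here’s a minimal sketch (in Python, with invented cash flows and a made-up discount rate, since the real numbers depend entirely on your shop) of how the conventional NPV comparison plays out. The flexible solution carries a higher up-front cost for the same projected benefits, so on a static view of the world it loses every time.

```python
# Illustrative only: invented figures over a five-year horizon.
def npv(cash_flows, rate):
    """Discount a list of yearly cash flows (year 0 first) back to today."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

DISCOUNT_RATE = 0.10

# Both solutions deliver the same benefits; the SOA build just costs more up front.
monolith = [-1_000_000, 400_000, 400_000, 400_000, 400_000, 400_000]
flexible_soa = [-1_500_000, 400_000, 400_000, 400_000, 400_000, 400_000]

print(f"Monolith NPV:     {npv(monolith, DISCOUNT_RATE):>10,.0f}")      # ~516,000
print(f"Flexible SOA NPV: {npv(flexible_soa, DISCOUNT_RATE):>10,.0f}")  # ~16,000
# The monolith wins by exactly its cost advantage, because a static
# business case gives the flexibility no future in which to pay off.
```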

Real options address this by pushing you towards a scenario-planning approach. You estimate the future events that you want to guard against, and their probabilities, creating a set of possible futures. Each event presents you with options to take action. The action, for example, might be to change, update or replace components in the solution to bring them in line with evolving business realities. The options are – in effect – flex points that we might design into our solution’s SOA. The real options methodology enables us to ascribe costs to these future events and create a decision tree that captures the benefits of investing in specific flex points, all in a clear and easily understandable chain of reasoning.

The decision tree and options provide us with a way to map out where to place flex points in the SOA solution. They also provide us with strong guidance on how much flexibility to introduce, which is the part I found really interesting about the approach. And it gives us a nice framework to govern the evolution of the SOA solution, as changes are (generally) only made when an option is taken: when its business case is triggered.
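Here’s what that chain of reasoning might look like in miniature: a toy, two-branch decision tree with entirely hypothetical probabilities and costs (not the thesis’s worked method), showing how an option’s value attaches to a specific flex point.

```python
# Illustrative only: one future event, two choices, invented numbers.
# Event to guard against: a regulator forces a change to the billing interface.
P_CHANGE = 0.40               # assumed probability the event occurs

COST_OF_FLEX_POINT = 150_000  # build the service boundary up front
RETROFIT_COST = 700_000       # rework the monolith if the event hits
EXERCISE_COST = 100_000       # swap the service behind the flex point instead

# Without the flex point: pay nothing now, pay dearly if the event occurs.
expected_without = P_CHANGE * RETROFIT_COST                    # 280,000

# With the flex point: pay up front, exercise the option cheaply if needed.
expected_with = COST_OF_FLEX_POINT + P_CHANGE * EXERCISE_COST  # 190,000

print(f"Expected cost, no flex point:   {expected_without:>9,.0f}")
print(f"Expected cost, with flex point: {expected_with:>9,.0f}")
# This flex point earns its keep. Run the same sum for a low-probability
# event and the gold plating loses, which is the point: the tree tells
# you where flexibility pays and where it doesn't.
```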

It’s a bit like those Formula One cars. A friend of mine used to work for an F1 manufacturer designing and testing camshafts. These camshafts had to fall within a 100,000-revolution lifetime window. An over-designed camshaft was unnecessary weight, while an under-designed one meant that you might not win (or possibly even finish) the race. Work it out: 100,000 revolutions is a tiny window for an F1 engine, given the length of a race.
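If you do work it out (with assumed figures, since I don’t have the real ones), the window really is tiny:

```python
# Back-of-the-envelope, with assumed figures: an F1 engine averaging
# around 15,000 rpm over a roughly 90-minute race.
AVG_RPM = 15_000
RACE_MINUTES = 90

revs_per_race = AVG_RPM * RACE_MINUTES   # 1,350,000 revolutions
window = 100_000

print(f"Revolutions per race: {revs_per_race:,}")
print(f"Window as a share of one race: {window / revs_per_race:.1%}")  # ~7.4%
# The camshaft is engineered to last the race distance and barely more:
# a tolerance of well under a tenth of a single race.
```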

An approach like real options helps us ensure that we only have the flexibility required in the solution, and that it is exactly where it is required. Not too much, and not too little. Just enough to help us win the race.

With cloud computing, the world is not flat

Does location matter? Or, put another way, is the world no longer flat? Many cloud and SaaS providers work under the assumption that data should be stored wherever is most efficient from an application performance point of view, ignoring political considerations. This runs counter to the views of many companies and governments, who care greatly about where their data is stored. Have we entered a time where location does matter, not for technical reasons, but for political ones? Is globalisation (as a political thing) finally starting to impact IT architecture and strategy?

Just who is taking your order?

Thomas Friedman’s book, The World is Flat, contained a number of stories that were real eye-openers. The one I remember the most was the McDonald’s drive-through. The idea was simple: once you’ve removed direct physical contact from the ordering process, it’s more efficient to accept orders from a contact centre than from within the restaurant itself. We could even locate that contact centre in a cheaper geography, such as another state, or even another country.

Telecommunications made the world flat, as cheap telecommunications allowed us to locate work wherever it was cheapest. The opportunity for labour arbitrage this created drove offshoring through the late nineties and into the new millennium. Everything from call centres to tax returns and medical image diagnosis started to migrate to cheaper geographies. Competition to be the cheapest and most efficient service provider, rather than location, would determine who did the work. The entire world would compete on a level playing field.

In the background, whilst this was happening, enterprise applications went from common to ubiquitous. Adoption was driven by the productivity benefits the applications brought, which started off as a source of differentiation but have now become one of the many requirements of being in business. SaaS and cloud are the most recent step in this evolution, leveraging the global market to create solutions operating at such a massive scale that they can provide price points and service levels which are hard, if not impossible, for most companies to achieve internally.

The growth of the U.S. enterprise application market (via INPUT)

Despite the world being laser-levelled within an inch of its life, many companies are finding it difficult to move their operations to the cost-effective nirvana that is cloud and SaaS services. Location matters, it seems. Not for technical reasons, but for political ones.

Where we store our assets is important. Organisations want to put their assets somewhere safe, because without assets the organisations don’t amount to much. Companies want to keep their information — their confidential trade secrets — hidden from prying eyes. Governments need to ensure they have the trust of their citizens by respecting their privacy. (Not to mention the skullduggery that is international relations.) While communications technology has made it incredibly easy to move this information around and keep it secure, it has yet to solve the political problem of ensuring that we can trust the people responsible for safeguarding our assets. And all the applications we have created — whether traditional on-premises, hosted, SaaS or cloud — are really just asset management tools.

We’ve reached a point where one of the larger hidden assumptions of enterprise applications has been exposed. Each application was designed to live and operate within a single organisation. This organisation might be a company, or it might be a country, or it might be some combination of the two. The application you select to manage your data determines the political boundary it lives within. If you use any U.S. SaaS or cloud solution provider to manage your data, then your data falls under U.S. judicial discovery laws, regardless of where you yourself are located. If your data transits through the U.S., then assume that the U.S. government has a copy. The world might be flat, but where you store your assets and where you send them still matters.

Global data protection heat map (via Forrester): country-specific regulations governing privacy and data protection vary greatly.

We can already see some moves by the vendors to address this problem. Microsoft, for example, has developed a dedicated cloud for the U.S. government, known as BPOS Federal, which is designed to meet the government’s stringent security and privacy standards. Amazon has likewise dedicated a portion of its cloud to the EU, and located it there, for similar reasons.

If we consider enterprise applications to be asset management tools rather than productivity tools, then ideas like private clouds start to make a lot of sense. Cloud technology reifies in software a lot of the knowledge required to configure and manage a virtualised environment, eliminating the data centre voodoo and empowering the development teams to manage the solutions themselves. This makes cloud technology simply a better asset management tool, but we need the freedom to locate the data (and therefore the application) where it makes the most sense from an asset management point of view. Sometimes this might imply a large, location-agnostic, public cloud. Other times it might require a much smaller private cloud located within a specific political boundary. (And the need to prevent some data even transiting through a few specific geographies – requiring us to move the code to the data, rather than the data to the code – might be the killer application that mobile agents have been waiting for.)

What we really need are meta-clouds: clouds created by aggregating a number of different clouds, just as the Internet is a network of separate networks. While the clouds would all be technically similar, each would be located in a different political geography. This might be inside vs. outside the organisation, or in different states, or even different countries. The data would be stored and maintained where it made the most sense from an asset management point of view, with few technical considerations, the meta-cloud providing a consistent approach to locating and moving our assets within and across individual clouds as we see fit.
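To make the idea a touch more concrete, here’s a hypothetical sketch of the kind of placement decision a meta-cloud would make on our behalf. The member clouds, jurisdictions and costs are all invented for illustration; the point is simply that political constraints come first, with efficiency as the tie-breaker.

```python
# Hypothetical sketch: jurisdiction-first placement across member clouds.
from dataclasses import dataclass

@dataclass
class MemberCloud:
    name: str
    jurisdiction: str   # the political boundary this cloud sits inside
    unit_cost: float    # relative running cost; used only as a tie-breaker

CLOUDS = [
    MemberCloud("public-us", "US", unit_cost=1.0),
    MemberCloud("public-eu", "EU", unit_cost=1.2),
    MemberCloud("private-onshore", "AU", unit_cost=2.5),
]

def place(allowed_jurisdictions):
    """Choose the cheapest member cloud whose jurisdiction the data may
    lawfully live in. Political constraints trump cost."""
    candidates = [c for c in CLOUDS if c.jurisdiction in allowed_jurisdictions]
    if not candidates:
        raise ValueError("No member cloud satisfies the data's constraints")
    return min(candidates, key=lambda c: c.unit_cost)

print(place({"AU"}).name)              # customer records that must stay onshore
print(place({"US", "EU", "AU"}).name)  # marketing content that can live anywhere
```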

Product Meta-Models

Imagine the future. Not the distant future: we’re talking about next week or maybe the week after, rather than an eventual future where we all have flying cars. A new business competitor has emerged on the market, coming out of nowhere with a business model that makes it impossible for your company to compete. They have half the cost to serve of their competitors and half the time to revenue; they seem to be able to introduce a new product in a matter of days rather than weeks; and their products are incredibly customisable. They seem to have halved the business metrics that you want to go down and doubled the ones you want to go up, while at the same time supporting a product portfolio of impressive depth and complexity. And they claim to be able to do this with conventional technology. How did they do it? And how are you going to respond?

A version was published in Align Journal as Product Meta-Models: Delivering business agility through a new perspective on technology.

Link to complete article.