Tag Archives: SOA

Some new rules for IT

The other week I had a go at capturing the rules of enterprise IT{{1}}. The starting point was a few of those beery discussions we all have after work, where we came to wonder how the game of enterprise IT was changing. It’s the common refrain of big-to-small, the Siebel to Salesforce.com transition, which sees the need for IT services (internal or external) change dramatically. The rules of IT are definitely changing. Now that I’ve had a go at the old rules, I thought I’d have a go at seeing what the new rules might be.

As I mentioned before, enterprise IT has historically been seen as an asset management function: a production line for delivering large IT assets into the IT estate and then maintaining them. The rules are therefore the rules of business operations. My attempt at capturing 4 ± 2 rules (with friends) produced the following (in no particular order):

[[1]]The rules of Enterprise IT @ PEG[[1]]

  • Keep the lights on. Much like being a trucker, the trick is to keep the truck rolling (and avoid spending money on tyres). Otherwise known as: smooth-running applications are the ticket to the strategy table.
  • Save money. Business IT was born as a cost saving exercise (out with the rooms full of people, in with the punch card machines), and most IT business cases are little different.
  • Build what you need. I wouldn’t be surprised if the team building LEO{{2}} blew their own valves. You couldn’t buy parts off the shelf, so you had to make everything. This attitude is still with us in some organisations’ strong desire to build – or at least heavily customise – solutions.
  • Keep the outside outside. We trust whatever’s inside our four walls, while deploying security measures to keep the evil outside. This creates an us (employees) and them (customers, partners, and everyone else) mentality.

[[2]]LEO: Lyons Electronic Office. The first business computer. @ Wikipedia[[2]]

Things have changed since these rules were first laid down. From another post of mine on a similar topic{{3}} (somewhat trimmed and edited):

[[3]]The IT department we have today is not the IT department we’ll need tomorrow @ PEG[[3]]

The recent global financial crisis has fundamentally changed the business landscape, with many even talking about the emergence of a new normal{{4}}. We’ve also seen the emergence of outsourcing, offshoring, cloud computing, SaaS, Enterprise 2.0 and so much more.

Companies are becoming more focused, while leaning more heavily on partners and services companies (BPO, out-sourcers, consultants, and so on) to cover those areas of the business they don’t want to focus on. We can see this from the global companies who have effectively moved to a franchise model, through to the small end of town where startups are using on-line services such as Amazon S3, rather than building their own internal capabilities.

We’re also seeing more rapid business change: what used to take years now takes months, or even weeks. The constant value-chain optimisation we’ve been working on since the 70s has finally culminated in product and regulatory life-cycles that change faster than we can keep up with.

Money is also becoming (or has become) more expensive, causing companies and deals to operate with less leverage. This means that there is less capital available for major projects, pushing companies to favour renting over buying, as well as creating a preference for smaller, incremental change over the major business transformation of the past.

And finally, companies are starting to take a truly global outlook and operate as one cohesive business across the globe, rather than as a family of cloned businesses that operate more-or-less independently in each region.

[[4]]The new normal @ McKinsey Quarterly[[4]]

So what are the new 4 ± 2 rules? They’re not the old rules of asset management. We could argue that they’re the rules of modern manoeuvre warfare{{5}} (which would allow me to sneak in one of my regular John Boyd references{{6}}), but that would have the tail wagging the dog, as it’s business, and not IT, that has that responsibility.

[[5]]Maneuver warfare @ Wikipedia[[5]]
[[6]]John Boyd @ Wikipedia[[6]]

I think the new rules cast IT in a role something like that of a pit crew. IT doesn’t make the parts (though we might lash something together in a pinch), nor do we steer the car. Our job is to swap the tyres, pump the fuel, and straighten the fender, all in the sliver of time available to us, so that the driver can focus on their race strategy and get back out on track as quickly as possible.

With that in mind, the following seems to be a fair (4 ± 2) minimum set to start with.

  • Timeliness. A late solution is often worse than no solution at all, as you’ve spent the money without realising any benefit. Or, as a wise sage once told me, management is the art of making a timely decision, and then making it work. Where before we could take the time to get it right (after all, the solution will be in the field for a long time and needs to support a lot of people, so better to discover problems early rather than later), now we just need to make sure the solution is good enough in the time available, and has the potential to grow to meet future demand. The large “productionisation” efforts of the past need to be broken into a series of incremental improvements (à la Gmail and the land of perpetual beta), aligning investment with both opportunity and realised value.
  • Availability. Not just up time, but ensuring that all stakeholders (both inside and outside the company, including partners and clients) can get access to the solutions and data they need. There’s little value in a sophisticated knowledge base if the sales team can’t use it in the field to answer customer questions in real time. Once they’ve had to fire up the laptop, and the 3G card, and the VPN, the moment has passed and the sale is lost. Or worse, they’re forced to head back to the bricks-and-mortar office. As I pointed out the other week, decisions are more important than data{{7}}, and success in this environment means empowering stakeholders to make the best possible decisions by ensuring that they have the data and functions they need, where and when they need them, and in a format that makes them easy to consume.
  • Agility. Agility means creating an IT estate that meets the challenges we can see coming down the road. It doesn’t mean creating an infinitely flexible IT estate. Every bit of flexibility we create, every flex point we add, comes at a cost. Too much flexibility is a bad thing{{8}}, as it weighs us down. Think of Formula One cars: they’re fast and they’re agile (which is why driving them tends to be a young man’s game), and they’re very stiff. Agility comes from keeping the weight down and being prepared to act quickly. This means keeping things simple, ensuring that we have the minimum set of moving parts required. The F1 crowd might have an eye for detail, such as putting nitrogen{{9}} in the tyres, but unnecessary moving parts that might reduce reliability or performance are eliminated. Agility is a function of weight, speed, reliability and flexibility, and we need to work to get them all into balance.
  • Sustainability. Business is not a sprint (ideally), which means that cost and reliability remain important factors, but not the only factors. While timeliness, availability and agility might be what drive us forward, we still need to ensure that IT is a smooth-running operation. The old rules saw cost and reliability as absolutes, and we strived to keep costs as low, and reliability as high, as possible. The new rules see us balancing sustainability with need, accepting (slightly) higher costs or lower reliability to provide a more timely, available or agile solution while still meeting business requirements. (I wonder if I should have called this one “balance”.)

[[7]]Decisions are more important than data @ PEG[[7]]
[[8]]Having too much SOA is a bad thing (and what we might do about it) @ PEG[[8]]
[[9]]Understanding the sport: Tyres @ formula1.com[[9]]

While by no means complete or definitive, I think that’s a fair set of rules to start the discussion.

Having too much SOA is a bad thing (and what we might do about it)

SOA enablement projects (like a lot of IT projects) have a bad name. An initiative that starts as a good idea to create a bit more flexibility in the IT estate often seems to end up mired in its own complexity. The problem is usually too much flexibility: flexibility creates complexity, and complexity exponentially increases the effort required to manage and deliver the software. Without any solid guidance on how much flexibility to create (and where to create it), most SOA initiatives simply keep creating flexibility until either the project collapses under its own weight, or the projected development work to create all the services exceeds the available CAPEX budget. A little flexibility is good, but too much is bad. How can we scope the flexibility, pointing it where it’s most needed while preventing it from becoming a burden?

The challenge with SOA enablement is in determining how much flexibility to build into the IT estate. Some flexibility is good – especially if it’s focused on where the business needs it the most – but too much flexibility is simply another unnecessary cost. The last decade or so is littered with stories of companies whose SOA initiatives were either brought to an early close or canned because they had consumed all the cash the business was prepared to invest in a major infrastructure project. Finance and telecoms seem particularly prone to creating these gold-plated SOA initiatives. (How many shelf-ware SDFs – service delivery frameworks – do you know of?)

The problem seems to be a lack of guidance on how much flexibility to build, or where to put it. We sold the business on the idea that a flexible, service-oriented IT estate would be better than the evil monolithic applications of old, but the details of just how flexible the new estate would be were a little fuzzy. Surely these details can be sorted out in service discovery? And governance should keep service discovery on track! We set ourselves up by over-promising and under-delivering.


This much was clear: the business wanted agility, and agility requires flexibility. As flexibility comes from having more moving parts (services), we figured that creating more moving parts would create more agility. Service discovery rapidly became a process of identifying every bit of (reusable) functionality that we could pack into a service. More is better, or, as the man with the loud shoes says:

Too much is never enough!
Mario Batali

The problem with this approach is that it confuses flexibility with agility. It’s possible to be very flexible without being agile, and vice versa. Think of a Formula One car: they’re fast and they’re agile (which is why driving them tends to be a young man’s game), and they’re very stiff. Agility comes from keeping the weight down and being prepared to act quickly. This means keeping things simple, ensuring that we have the minimum set of moving parts required. The teams might have an eye for detail, such as nitrogen in the tyres, but unnecessary moving parts that might reduce reliability or performance are eliminated.

This gold-plated approach to SOA creates a lot of unrequired flexibility; the additional flexibility increases complexity, and the complexity becomes the boat anchor that slows you down and stops you from being agile. Turning the car is no longer a simple matter of tugging on the steering wheel, as we need governance to stop us from pulling the wrong lever in the bank of 500 identical levers in front of us.


We’ve made everything too complicated. Mario was wrong: too much is too much.

What we need is some guidance – a way of scoping and directing the flexibility we’re going to create. Governance isn’t enough, as governance is focused on stopping bad things from happening. We have a scoping problem: our challenge is to understand what flexibility will be required in the future, and to agree on the best way to support it.

To date I’ve been using a very fuzzy “business interest” metric for this, where services are decomposed until the business is no longer interested. The rationale is that we put the flexibility only where the business thinks it needs to focus. This approach works fairly well, but it relies too much on the tacit judgement of a few skilled business analysts and architects, making it too opaque and hard to understand for the people not involved in the decision-making process. It’s also hard to scale. We need something more deterministic and repeatable.

Which brings me to a friend’s MBA thesis, passed to me the other week. It takes an interesting approach to building business cases for IT solutions, one based on real options.

The problem with the usual approaches to building a business case, using tools like net present value (NPV) and discounted cash flow, is that they assume the world doesn’t change after the decision to build the solution (or not). They don’t factor in the need to change a solution once it’s in the field, or even during development.

The world doesn’t work this way: the solution you approved in yesterday’s business environment will be deployed into a radically different business environment tomorrow. This makes it hard to justify the additional investment required for a more flexible SOA-based solution, when compared to a conventional monolithic solution. The business case doesn’t include flexibility as a factor, so more flexible (and therefore complex and expensive) solutions lose to the cheaper, monolithic approach.
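To make that concrete, here’s a minimal sketch in Python (all figures invented for illustration) of the static-NPV blind spot: because the model assumes requirements never move, the flexible solution loses by exactly its extra up-front cost.

```python
# A minimal sketch (hypothetical figures) of why a static NPV
# comparison penalises flexibility: the flexible solution costs more
# up front, and the business case gives it no credit for the
# ability to change later.

def npv(cash_flows, rate):
    """Net present value of yearly cash flows, where cash_flows[0]
    falls at the end of year 1."""
    return sum(cf / (1 + rate) ** (t + 1) for t, cf in enumerate(cash_flows))

RATE = 0.10  # hypothetical discount rate

# Both solutions deliver identical benefits if the world never changes.
benefits = [400_000] * 5

monolith_cost = 1_000_000   # cheaper, rigid build
flexible_cost = 1_300_000   # extra spend on SOA flex points

print("Monolith NPV:", npv(benefits, RATE) - monolith_cost)
print("Flexible NPV:", npv(benefits, RATE) - flexible_cost)
# The flexible solution always loses by exactly its extra cost,
# because the model assumes the requirements never move.
```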

Real options address this by pushing you towards a scenario-planning approach. You estimate the future events that you want to guard against, and their probabilities, creating a set of possible futures. Each event presents you with options to take action. The action, for example, might be to change, update or replace components in the solution to bring them in line with evolving business realities. The options are – in effect – flex points that we might design into our solution’s SOA. The real options methodology enables us to ascribe costs to these future events and to create a decision tree that captures the benefits of investing in specific flex points, all in a clear and easily understandable chain of reasoning.

The decision tree and options provide us with a way to map out where to place flex points in the SOA solution. They also provide us with strong guidance on how much flexibility to introduce, which is the part I found really interesting about the approach. It also provides us with a nice framework for governing the evolution of the SOA solution, as changes are (generally) only made when an option is taken: when its business case is triggered.
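As a toy illustration (my own numbers, not the thesis’s actual model), pricing a single flex point against one uncertain event might look like this:

```python
# A sketch (hypothetical numbers) of the real-options view: fold a
# possible future event - say, a regulatory change with probability
# p - into the business case, and price the flex point as the
# option it creates.

P_CHANGE = 0.6             # probability the event occurs in the window
FLEX_POINT_COST = 300_000  # extra up-front spend on the flex point
EXERCISE_COST = 100_000    # cost to exercise the option (reconfigure)
REWORK_COST = 900_000      # cost to rework the monolith if the event hits

# Expected cost of responding to the event under each design:
monolith_exposure = P_CHANGE * REWORK_COST
flexible_exposure = FLEX_POINT_COST + P_CHANGE * EXERCISE_COST

print("Monolith expected exposure:", monolith_exposure)   # 540,000
print("Flexible expected exposure:", flexible_exposure)   # 360,000

# The flex point is only worth building when its cost is less than
# the expected rework it avoids:
#   FLEX_POINT_COST < P_CHANGE * (REWORK_COST - EXERCISE_COST)
build_it = FLEX_POINT_COST < P_CHANGE * (REWORK_COST - EXERCISE_COST)
print("Build the flex point?", build_it)  # True
```

Summed over the whole event tree, this gives the scoping guidance we were missing: any candidate flex point whose cost exceeds the expected rework it avoids simply doesn’t get built.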

It’s a bit like those Formula One cars. A friend of mine used to work for an F1 manufacturer, designing and testing camshafts. These camshafts had to fall within a 100,000-revolution lifetime window. An over-designed camshaft was unnecessary weight, while an under-designed one meant that you wouldn’t win (or possibly even finish) the race. Work it out: 100,000 revolutions is a tiny window for an F1 car, given the length of a race.
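For the curious, some back-of-the-envelope arithmetic shows just how tight that window is. All figures here are rough assumptions on my part: a late-2000s V8 revving to around 18,000 rpm, a four-stroke camshaft turning at half crank speed, and a roughly 90-minute race.

```python
# Rough arithmetic (approximate assumptions throughout) on how
# tight a 100,000-revolution window really is.

ENGINE_RPM = 18_000        # roughly the rev limit of a late-2000s F1 V8
CAM_RPM = ENGINE_RPM / 2   # a four-stroke camshaft turns at half crank speed
RACE_MINUTES = 90          # a typical race distance, ignoring slow laps

window_minutes = 100_000 / CAM_RPM   # ~11 minutes of flat-out running
race_revs = CAM_RPM * RACE_MINUTES   # ~810,000 cam revolutions per race

print(f"Window: ~{window_minutes:.0f} minutes at full revs")
print(f"One race: ~{race_revs:,.0f} cam revolutions")
# The tolerance window is roughly an eighth of a single race distance.
```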

An approach like real options helps us ensure that we have only the flexibility required in the solution, and that it is exactly where it is required. Not too much, and not too little. Just enough to help us win the race.

Balancing our two masters

We seem to be torn between two masters. On one hand we’re driven to renew our IT estate, consolidating solutions to deliver long-term efficiency and cost savings. On the other hand, the business wants us to deliver new end-user functionality (new consumer kiosks, workforce automation and operational excellence solutions …) to support tactical needs. How do we balance these conflicting demands, when our vertically integrated solutions tightly bind user interaction to the back-end business systems and their multi-year life-cycles? We need to decouple the two, breaking the strong connection between business system and user interface. This will enable us to evolve them separately, delivering long-term savings while meeting short-term needs.

Business software’s proud history is the story of managing the things we know. From the first tabulation systems through enterprise applications to modern SaaS solutions, the majority of our efforts have been focused on data: capturing or manufacturing facts, and pumping them around the enterprise.

We’ve become so adept at delivering these IT assets into the business that most companies’ IT estates are populated with an overabundance of solutions. Many good solutions, some not so good, and many redundant or overlapping. Gardening our IT estate has become a major preoccupation, as we work to simplify and streamline our collection of applications to deliver cost savings and operational improvements. These efforts are often significant undertakings, with numbers like “5 years” and “$50 million” not uncommon.

While we’ve become quite sophisticated at delivering modular business functionality (via methods such as SOA), our approach to supporting users is still dominated by a focus on isolated solutions. Most user interfaces are slapped on almost as an afterthought, providing stakeholders with a means to interact with the vast data-processing monsters we create. Tightly coupled to the business system (or systems) they are deployed with, these user interfaces are restricted to evolving at a similar pace.

Business has changed while we’ve been honing our application development skills. What used to take years now takes months, if not weeks. What used to make sense now seems confusing. Business is often left waiting while we catch up, working to improve our IT estate to the point where we can support their demands for new consumer kiosks, solutions to support operational excellence, and so on.

What was one problem has now become two. We solved the first-order challenge of managing the vast volumes of data an enterprise contains, only to unearth a second challenge: delivering the right information, at the right time, to users so that they can make the best possible decision. Tying user interaction to the back-end business systems forces our solutions to these two problems to evolve at a similar pace. If we break this connection, we can evolve user interfaces at a more rapid pace. A pace more in line with business demand.

We’ve been chipping away at this second problem for quite a while. Our first green-screen and client-server solutions were overtaken by portals, which promised to solve the problem of swivel-chair integration. However, portals seem to have been defeated by browser tabs. While portals allowed us to bring together the screens from a collection of applications, providing a productivity boost by reducing the number of interfaces a user interacted with, they didn’t break the user interface’s explicit dependency on the back-end business systems.

We need to create a modular approach to composing new, task-focused user interfaces, doing for user interfaces what SOA has done for back-end business functionality. The view users see should be focused on supporting the decision they are making. Data and functionality are sourced from multiple back-end systems, broken into reusable modules, and mashed together to create an enterprise mash-up: a mash-up spanning multiple screens to fuse both data and process.
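As a sketch of what this modularity might look like (the services, fields and names below are invented for illustration, not a real framework), each module owns its own slice of data, and the screen is simply a composition of modules:

```python
# A minimal sketch of a task-focused view composed from reusable
# modules, each backed by a different (hypothetical) service.
# Service names and fields are illustrative, not a real API.

from dataclasses import dataclass
from typing import Callable

@dataclass
class Module:
    """A reusable UI module: one named slice of data or function."""
    title: str
    fetch: Callable[[], object]  # pulls data from its backing service

# Stubs standing in for calls to separate back-end systems.
def crm_customer(customer_id):
    return {"name": "Jane Citizen", "tier": "gold"}

def billing_balance(customer_id):
    return {"outstanding": 125.40}

def logistics_orders(customer_id):
    return [{"order": "A-1021", "status": "delayed"}]

def call_centre_view(customer_id):
    """Mash up one screen for a call-centre operator, sourcing data
    from three systems without the operator knowing or caring which."""
    modules = [
        Module("Customer", lambda: crm_customer(customer_id)),
        Module("Balance", lambda: billing_balance(customer_id)),
        Module("Open orders", lambda: logistics_orders(customer_id)),
    ]
    return {m.title: m.fetch() for m in modules}

print(call_centre_view("C-42"))
```

Each module can then be reused in other task-focused views, and swapped out independently when its backing system changes.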

Some users will find little need for an enterprise mash-up—typically users who spend the vast majority of their time working within a single application. Others, who work between applications, will see a dramatic benefit. These users typically include the knowledge-rich workers who drive the majority of value in a modern enterprise. They are the logistics exception managers, who can make the difference between a “best of breed” supply chain and a category-leading one. They are the call centre operators, whose focus should be on solving the caller’s problem, not on worrying about which back-end system might have the data they need. Or they could be field personnel (sales, repairs …), working between a range of systems as they engage with your customers or repair your infrastructure.

By reducing the number of ancillary decisions required, and thereby reducing the number of mistakes made, enterprise mash-ups make knowledge workers more effective. By reducing the need to manually synchronise applications by copying data between them, we make those workers more efficient.

But more importantly, enterprise mash-ups enable us to decouple the development of user interfaces from the evolution of the back-end systems. This enables us to evolve the two at different rates, delivering long-term savings while meeting short-term needs, and mitigating one of the biggest risks confronting IT departments today: the risk of becoming irrelevant to the business.

We’re making our lives too complicated

Has SOA (Service Oriented Architecture) finally jumped the shark? After years of hype and failed promises, SOA seems to be in trouble. In a few short months it’s gone from IT’s great saviour to something some people think is better forgotten.

The great promise of SOA was to deliver an IT estate that is more agile and cost-effective than was possible with other, more conventional approaches. By breaking our large problems into a set of much smaller ones, we would see more agility and a lower total cost of ownership. The agility would come from the more flexible architecture provided by SOA’s many moving parts. The lower cost of ownership would come from reuse of many of these moving parts. Many companies bought into this promise, and started major SOA transformation programs to “SOA enable their business”. Once the program of work was delivered, they would have a shiny new, flexible, cost-effective IT estate. The business would be thrilled, and the old tensions between business and IT would just melt away. More often than not, the business wasn’t thrilled, as the program failed to deliver the promised benefits.

The problem, it seems, is that we’re focused on creating cathedrals of technology. Cathedrals were the result of large bespoke development efforts. The plans often consisted of only a rough sketch on a scrap of paper, before a large number of skilled craftsmen were engaged. The craftsmen broke the problem into many small parts that were then laboriously assembled into the final structure, often adjusting the parts to fit in the process. While this process created a number of spectacular buildings, the journey from initial conception to completed build was long and challenging.

The lack of engineering pragmatism frequently resulted in cathedrals collapsing before they were finished, often multiple times. The reason we knew a flying buttress worked was that it hadn’t failed yet. People died when a structure collapsed, and there was no way of telling whether the latest version of the structure was about to collapse. The lengthy development process often lasted generations, passing through the stewardship of multiple architects with no clear end in sight. Many cathedrals, such as the one in New York, are still considered unfinished.

A lot of SOA projects give off a strong smell of cathedral. They are constantly re-architected—while still in development—to cope with the latest business exception or demand. When they’re introduced to the hard reality of supporting solutions, bits of them collapse and need to be rebuilt to support our new (and improved) understanding of what will be demanded of them. And, finally, many of them are declared “finished” even though they are never fully baked, just so we can close that chapter in our company’s history and move on to the next problem.

Modern approaches to building construction take a different tack. A high-level plan is created to capture the overall design for the building. The design is then broken into a small number of components, with the intention of using bespoke craftsmen for the fine details that will make the building different, while leveraging large, commoditized, pre-fabricated components for the supporting structures that form the majority of the building. Construction follows a clear timetable, with each component—from the largest pre-fabricated panel through to the smallest detail—integrated into the end-to-end solution as it is delivered. Complexity and detail are added only where needed, with cost-effective commoditized approaches minimizing complexity elsewhere. A clear focus on the end goal is maintained throughout the effort, while clear work practices focused on delivering to the deadline ensure that the process is carried out with a minimum of fuss (and no loss of life).

The problem, it seems, is that we’re confusing agility with flexibility. The business is asking for agility; the world is changing faster than ever and the business needs to be in a position to react to these changes. Agility, or so our thinking goes, requires flexibility, so to provide a lot of agility we need to provide a lot of flexibility. Very soon we find ourselves breaking the IT estate (via our favorite domain model) into a large number of small services. These small parts will provide a huge amount of flexibility; therefore, problem solved!

This misses the point. Atomizing the business in this way creates overhead, and the overhead soon swamps any benefit. The effort to define all these services is huge. Add a governance process—since we need governance to manage all the complexity we’ve created—and we just amplify the effect of this overhead. Our technically pure approach to flexibility is creating too much complexity, and our approach to managing this complexity is just making the problem worse.

We need to think more like the architect of the modern prefabricated building. Have a clear understanding of how the business will use our building. Leverage prefabricated components (applications or SaaS) where appropriate; applications are still the most efficient means of delivering large, undifferentiated slabs of functionality. And add complexity only in those differentiating areas where it is justified, providing flexibility only where the business needs it. In the end, creating good software is about keeping it simple. If it’s simple, it gets done quickly and can be maintained more readily.

Above all, favor architectural pragmatism over architectural purity. The point of the architecture is to support the business, not to be an object of beauty.

The problems we’re facing

Companies are engaged in an arms race. For years they have been rushing to beat competitors to market with applications designed to automate previously manual areas of the business, making the business more efficient and thereby creating a competitive advantage.

Today, enterprise applications are so successful that it is impossible to do business without them. The efficiencies they deliver have irrevocably changed the business environment, with an industry developing around them: a range of vendors providing products to meet most needs. It is even possible to argue that many applications have become a commodity (as Nicholas Carr did in his HBR article “IT Doesn’t Matter”), and in the last couple of years we have seen consolidation in the market, as larger vendors snap up smaller niche players to round out their product portfolios.

This has levelled the playing field, and it’s no longer possible to use an application in the same way to create competitive advantage. Now that applications are ubiquitous, they’re simply part of the fabric of business.

Today, how we manage the operation of a business process is becoming more important than the business process itself. Marco Iansiti brought this into sharp relief through his work published in the Harvard Business Review, where he measured the efficiency of IT deployment, rather than its cost, and correlated upper-quartile efficiency with upper-quartile sales revenue growth. Efficiently dealing with business exceptions, optimizing key decisions and ensuring end-to-end consistency and efficiency will have a greater impact than replacing an existing application.

We have finished the big effort: applications are available from multiple vendors to support the majority of a business’s supporting functionality. The law of diminishing returns has taken effect, and owning or creating a new IT asset today will not, by itself, confer a competitive advantage. Competitive advantage now lives in the gaps between our applications. Exception handling is becoming increasingly important, as good exception handling can have a dramatic impact on both the bottom and top line. If we can deal with stock-outs more efficiently, then we can keep less stock on hand and operate a leaner supply chain. Improving how we determine financial adequacy allows us to hold lower capital reserves, freeing up cash that we can put to other, more productive uses. Extending our value chain beyond the confines of our organisation, to include partners, suppliers and channels, allows us to optimize end-to-end processes. Providing joined-up support for our mortgage product model allows us to put the model directly in the hands of our clients, letting them configure their own, personal home loan.
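To put a toy number on the stock-out example (using the textbook safety-stock formula with invented figures; the service level and demand numbers are assumptions, not drawn from the article), faster exception handling effectively shortens lead time, which directly reduces the stock you need to hold:

```python
# A toy illustration of the stock-out claim: resolving supply
# exceptions faster shortens the effective lead time, which cuts
# the safety stock required. Formula and figures are illustrative.

import math

Z = 1.65            # service-level factor (~95%), an assumption
DEMAND_SIGMA = 40   # std dev of daily demand in units, an assumption

def safety_stock(lead_time_days):
    """Textbook safety stock: z * sigma_demand * sqrt(lead time)."""
    return Z * DEMAND_SIGMA * math.sqrt(lead_time_days)

slow = safety_stock(10)  # exceptions take days to resolve
fast = safety_stock(6)   # exceptions resolved promptly

print(f"Safety stock, slow exception handling: {slow:.0f} units")
print(f"Safety stock, fast exception handling: {fast:.0f} units")
print(f"Reduction: {100 * (1 - fast / slow):.0f}%")  # ~23%
```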

Link to the complete article.

Product Meta-Models

Imagine the future. Not the distant future; we’re talking about next week or maybe the week after, rather than an eventual future where we all have flying cars. A new competitor has emerged on the market, coming out of nowhere with a business model that makes it impossible for your company to compete. They have half the cost-to-serve of their competitors and half the time-to-revenue, they seem to be able to introduce a new product in a matter of days rather than weeks, and their products are incredibly customisable. They seem to have halved the business metrics you want to go down and doubled the ones you want to go up, while at the same time supporting a product portfolio of impressive depth and complexity. And they claim to do all this with conventional technology. How did they do it? And how are you going to respond?

A version was published in Align Journal as Product Meta-Models: Delivering business agility through a new perspective on technology.

Link to complete article.