Category Archives: Technology and its malcontents

Open Data might have failed, but Open Government is still going strong.

It would seem that the shine is starting to wear off the Open Government movement, with a recent report to the U.S. Congress (The Obama Administration’s Open Government Initiative: Issues for Congress [PDF]) challenging some of the assumptions behind the directive from the U.S. Open Government Office which forced U.S. departments to publish their data sets. The report found that simply pushing out data has negative outcomes as well as positive ones (which should be no surprise), and that the benefits of publishing (and maintaining) a data set often didn’t outweigh the costs. Most importantly, it raised the question of whether or not publishing these data sets was a good use of the public’s money.

So, has the business case behind Open Government been found lacking in the harsh light of day? Or is this one of those cases where some faith is required – similar to the investment in the U.S. highway network – because the benefits of stepping into the unknown can’t be captured by the crude mechanism of ROI? The truth seems to lie somewhere between the two.

I wouldn’t confuse the investment in the U.S. road network post WWII (or Australia’s current investment in an NBN) with Open Government. The former was an investment in an asset which the U.S. government of the time made largely on faith, an investment which is currently seen to be returning $14 billion to the U.S. economy annually. (Australia’s NBN might be heading on a similar journey; see The NBN wants to be free @ PEG.) The latter is actually a philosophical point of view about an approach to government.

The problem is that we confuse “Open Data” with “Open Government”. They’re related, but not the same. Open Government is a move to streamline service acquisition and delivery by exposing the bureaucracy of government and integrating it more tightly with other service providers, and has been progressing nicely for a decade or more now. Open Data is a desire to change the relationship between government and the population, reducing the government to a simple data conduit between the public (or corporations) providing services and the public consuming them.

Open Government has made government easier to deal with by making it easier to find and consume the services you need, and by fostering community. Everything from applying for the dole or getting a grant, through to organising a council-supported street party, is orders of magnitude easier than it was a few decades ago, mainly due to increased transparency. This has been delivered via a range of means, from publishing information online, through providing better explanations of the services offered, to promoting multi-channel access and self-service delivery. The latest wave of Open Government is seeing departments integrating external services with their own, putting even more data out in public in the process, as they move from being a service provider to a service enabler. Ultimately though, if government (as separate from politics) is focused on keeping folk fed and feeling safe then it’s doing its job. It’s basic Maslow (see Maslow’s hierarchy of needs @ Changing Minds).

Open Data, though, is based on the view that government should do as little as possible, hand over the data, and let individuals in the public get on with doing what they want. It’s claimed that this will provide transparency (the public has all the data, after all) as well as fostering entrepreneurs to provide innovative solutions to the many problems that confront us today.

It’s quite possible to have transparency and Open Government without publishing, and then maintaining, every data set, as the Open Data proponents claim is necessary. People need to understand how the wheels of government turn if they want to trust it, and the best way of doing this is usually through key figures and analysis which build a story and name the important players. Drowning people in data has the opposite effect, hiding government operation behind a wall of impenetrable detail. Wikileaks was a great study in this effect, as it was only when the traditional journalists became involved, with their traditional analysis and publication weaving together a narrative the broader public could consume, that the leaks started to have a real impact. (It’s also interesting that the combination of the anonymous drop boxes being created by conventional media, and Open Leaks’ anonymous mass distribution to conventional media, looks to be a more potent tool than the ideologically pure Wikileaks.)

Nor is treating government as an integration medium the only way to solve the world’s problems. While entrepreneurs and VCs might be the darlings of the moment, there are many other organisations and governments which are also successfully chipping away at these problems. For every VC-backed Bloom Box that has mastered marketing hype, there’s a more boring organisation that might have already overtaken it (see New Solid Oxide Fuel Cell System Provides Cheap Grid Energy From CNG and Biogas @ IB Times UK). The entrepreneur model will be part of the solution, but it’s not the silver bullet many claim it to be.

The problem is that Open Data is the result of a libertarian political mindset, rather than a solution to a pressing need. Forcing government to publish all its data sets does not provide or guarantee transparency, nor does it have a direct impact on the services offered by the government. It can also consume significant government resources that might be better spent providing services that the community needs. Publish a data set of no obvious value, or build a homeless shelter? Invest in Semantic Web enabling another data set few use, or pay for disaster relief? These are the tradeoffs that people responsible for the day-to-day operation of government are forced to make. Claims by folk like Tim Berners-Lee that magic will happen once data is out there and ontology enabled have proven to be largely wrong.

However, Open Data does align with a particular political point of view. Open Data assumes that we, as a population, want such a small-government model, an assumption which is completely unjustified. Some people trust, and want, the government to take responsibility for a lot of these services. Some want to meet the government somewhere in the middle. Open Data tries to force a world that works in shades of grey into a black-or-white choice driven by a particular world view.

Deciding what and how much the government should be responsible for is a political decision, and it’s one that we revisit every time we visit the ballot box. Each time we vote we evolve, by a small amount, the role government plays in our lives (see What is the role of Government in a Web 2.0 world? @ PEG). (Occasionally we avoid the ballot box and revolt instead.) Should government own the roads? The answer appears to still be yes. Should government own power stations? Generally, no. Should they own the dams? We’re still deciding that one.

It’s in the context of the incremental and ongoing evolution of government’s role in our lives that we can best understand Open Data. Forcing Open Data onto government through mandate (as Obama did) was a political act driven by a desire to force one group’s preferred mode of operation on everyone else. You might want Open Data, but other people have differing priorities. Just because they disagree doesn’t make them wrong. The U.S. congressional report is the mechanism of government responding by documenting the benefits Open Data brought, the problems it caused, and the cost. The benefits (or not) will now be debated, and its future decided at the ballot box.

Open Government is alive and well, and is driving the evolution of government as we know it. Services are being improved, governments are increasingly integrating their services with those of the private sector, and more data will be released to support this. The assumption that all government data should remain secret unless proven otherwise has been flipped, and many public servants now assume that data should be made public unless there’s a good reason not to publish. Government is investing in moving specific information assets online, where it makes sense, and departments are opening up to social media and much closer involvement with (and scrutiny by) the public. The mechanism of government is evolving, and this is a good thing.

Open Data, though, as an expression of a political point of view, looks like it’s in trouble.


Social media: bubble, definitely not; revolution, probably not; evolution, absolutely

Is Social Media in general (and mobility in particular) a bubble or a revolution? Is it a powerful and disruptive force that will transform governments and social organisations? Or is it not? There seem to be a few{{1}} people{{2}} pondering this question.

[[1]]The video above is less than a minute long. Please … @ bryan.vc[[1]]
[[2]]Is The Mobile Phone Our Social Net? @ AVC[[2]]

Mobile phones are interesting as they are addressable. Two-way radios made communication mobile a long time ago, but it wasn’t until mobile phones (and cheap mobile phones, specifically) that we could address someone on the move, or someone on the move could address a stationary person or service.

The second and third world showed us the potential of this technology over ten years ago, from the fishermen using their phones to market and sell their catch while still on the boat, through to distributed banking based on pre-paid mobile phone cards. Image/video sharing is just the latest evolution in this.

The idea that this might be a revolution seems to be predicated on the technology’s ability to topple centrally planned and controlled organisations. Oddly enough, central planning is a bad enough idea to fall over on its own in many cases, and the only effect of mobile technology is to speed up a process which is already in motion. The Soviet Union might well be the poster child for this: collapsing under the weight of its own bureaucracy with no help from social media (or mobile phones, for that matter). Even modern democracies are not immune, and the US energy regulation policies leading up to deregulation in the late 70s are a great example of the failures of central planning{{3}}. The (pending) failure of some of today’s more centralised and authoritarian regimes would be more accurately ascribed to the inability of slow moving, centrally managed bureaucracies to adapt to a rapidly changing environment. Distributed planning always trumps central planning in a rapidly changing environment.

[[3]]The Role of Petroleum Price and Allocation Regulations in Managing Energy Shortages @ Annual Review of Energy[[3]]

If we pause for a moment, we can see that governments do a few distinct things for us.

  • They provide us with what are seen as essential services.
  • They create a platform to enforce social norms (policies and laws).
  • They engage with the rest of the world on our behalf.

The reality is that many of the essential services that government provides are its responsibility simply because it’s too difficult or expensive for citizens (and to some extent, corporations) to access the information they need to run these services themselves. Mobile phones (and social media) are just the latest in a series of technologies that have changed these costs, enabling companies and citizens to take responsibility for providing services which, previously, were the sole domain of government. From energy, water and telecoms, through FixMyStreet and the evolving use of social media in New Orleans, Haiti and then Queensland during their respective natural disasters, we can see that this is a long running and continuing trend. Government is migrating from a role of providing all services, to one where government helps facilitate our access to the services we need. Expect this to continue, and keep building those apps.

As a platform for agreeing and enforcing social norms, it’s hard to see anything replacing government in the short to mid term. (As always, the long term is completely up for grabs.) These social norms are geographical – based on the people you interact with directly on a day-to-day basis – and not virtual. Social media provides a mechanism for government to broaden the conversation. Some governments are embracing this; others, not so much. However, while people like to be consulted, they care a lot more about results. (Think Maslow’s Hierarchy of Needs{{4}}.) Singapore has a fairly restrictive and controlling government, which has (on the whole) a very happy population. China is playing a careful game of balancing consultation, control and outcomes, and seems to be doing this successfully.

[[4]]Maslow’s Hierarchy of Needs @ Abraham-Maslow[[4]]

Finally we come to the most interesting question: government as a means for us to engage with the rest of the world. In this area, government’s role has shrunk in scope but grown in importance. Globalisation and the Internet (as a communication tool) have transformed societies, making it cheaper to call friends across the globe than it is to call them around the corner. We all have friends in other countries, cross-border relationships are common, and many of us see ourselves as global citizens. At the same time, the solutions to many of today’s most pressing issues, such as global warming, have important aspects which can only be addressed by our representatives on the global stage.

So we come back to the question at hand: is social media a bubble, a revolution, or an evolution of what has come before?

It’s hard to see it as a bubble: the changes driven by social media are obviously providing real value so we can expect them to persist and expand. I was particularly impressed by how the Queensland government had internalised a lot of the good ideas from the use of social media{{5}} in the Victorian fires, Haiti et al.

[[5]]Emergency services embrace Social Media @ Social Media Daily[[5]]

We can probably discount revolution too, as social media is (at most) a better communication tool and not a new theory of government. (What would Karl Marx think?) However, by dramatically changing the cost of communication it is having a material impact on the role of government in our lives{{6}}. Government, and the society it represents, is evolving in response.

[[6]]The changing role of government @ PEG[[6]]

The challenge is to keep political preference separate from societal need. While you might yearn for the type of society that Ayn Rand only ever dreamed about, other people find your utopia more akin to one of Dante’s nine circles of hell. Many of the visions for Gov 2.0 are political visions – individuals’ ideas for how they would organise an ideal society – rather than views of how technology can best be used to support society as a whole.

China is the elephant in this room. If social media is a disruptive, revolutionary force, then we can expect China’s government to topple. What appears more likely is that China will integrate social media into its toolbox while it focuses on keeping its population happy, evolving in the process. As long as they deliver the lower half of Maslow’s Hierarchy, they’ll be fairly safe. After all, the expulsion of governments and organisations – the revolution that social media is involved in – is due to these organisations’ inability to provide for the needs of their population, rather than any revolutionary compulsion inherent in the technology itself.

A prediction: many companies will start shedding IT architects in the next six to eighteen months

Business is intensely competitive these days. Under such intense pressure strategy usually breaks down into two things: do more of whatever is creating value, and do less of anything that doesn’t add value. This has put IT architecture in the firing line, as there seems to be a strong trend for architects to focus on technology and transformation, rather than business outcomes. If architects are not seen as taking responsibility for delivering a business outcome, then why does the business need them? I predict that business will start shedding the majority of their architects, just as they did in the eighties. Let’s say in six to eighteen months.

I heard a fascinating distinction the other day at breakfast. It’s the difference between “Architects” and “architects”. (That’s one with a little “a”, and the other with a large one.) It seems that some organisations have two flavours of architect. Those with the big “A” do the big thinking and the long meetings, they worry about the Enterprise, Application and Technology Architectures, and are skilled in the use of whiteboards. And those with the little “a” do the documenting and some implementation work, with Microsoft Visio and Word their tool of choice.

When did we start trying to define an “Architect” as someone who doesn’t have some responsibility for execution? That’s a new idea for me. I thought that this Architect-architect split was a nice nutshell definition of what seems to be wrong with IT architecture at the moment.

We know that the best architects engage directly with the business and take accountability for providing solutions and outcomes the business cares about. However, splitting accountability between “Architects” and “architects” creates a structure and operation we know is potentially inefficient and disconnected from what’s really important. If the business sees architects (with either a big or little “a”) as not being responsible for delivering an outcome, then why does the business need them?

There’s a lot of hand-wringing in the IT architecture community as proponents try to explain the benefits of architecture, and then communicate these benefits to the business. More often than not these efforts fall flat, with abstract arguments about governance, efficiency and business-technology alignment failing to resonate with the business.

“Better communication” might be pragmatic advice, but it ignores the fact that you need to be communicating something the audience cares about. And the business doesn’t care about governance, efficiency of the IT estate or business-technology alignment. You might: they don’t.

In my experience there are only three things that business does care about (and I generally work for the business these days).

  • Create a new product, service or market
  • Change the cost of operations or production
  • Create new interactions between customers and the company

And this seems to be the root of the problem. Neither IT efficiency, nor governance, nor business-technology alignment is on that list. Gartner even highlighted this in a recent survey when they queried more than 1,500 business and technology executives to find out their priorities going forward.

Top 10 Business and Technology Priorities in 2010

Businesses need their applications — and are willing to admit this — but do they need better technical infrastructure or SOA (whatever that is)? How does that relate to workforce effectiveness? Will it help sell more product? Eventually the business will reach a point where doing nothing with IT seems like the most pragmatic option.

There are a few classic examples of companies that get by while completely ignoring the IT estate. They happily continue using decades-old applications, tweaking operational costs or worrying about M&A, and making healthy profits all the while. Their IT systems were good enough and fully depreciated, so why bother doing anything?

So what is the cost of doing nothing? Will the business suffer if the EA team just up and left? Or if the business let the entire architecture team go? The business will only invest in an architecture function if having one provides a better outcome than doing nothing. The challenge is that architecture has become largely detached from the business it is supposed to support. Architects have forgotten that they work for a logistics company, a bank or a government department, and not for “IT”. The tail is trying to wag the dog.

Defining Architecture (that’s the one with a big “A”) as a group who think the big technological thoughts, and who attend the long and very senior IT vendor meetings, just compounds the problem. It sends a strong message to the business that architecture is not interested in helping the business with the problems it is facing. Technology and transformation are seen as more important.

It also seems that the business is starting to hear this message, which means that action can’t be far behind. Unless the architecture community wakes up and reorganises around what’s really important — the things that the business cares about — then we shouldn’t be surprised if business starts shedding the IT architecture functions that it sees as adding no value. I give it six to eighteen months.

Michelangelo’s approach to workflow discovery

Take any existing workflow — any people driven business process — and I expect that most of the tasks within it could best be described as cruft.

cruft: /kruhft/
[very common; back-formation from crufty]

  1. n. An unpleasant substance. The dust that gathers under your bed is cruft; the TMRC Dictionary correctly noted that attacking it with a broom only produces more.
  2. n. The results of shoddy construction.
  3. vt. [from hand cruft, pun on ‘hand craft’] To write assembler code for something normally (and better) done by a compiler (see hand-hacking).
  4. n. Excess; superfluous junk; used esp. of redundant or superseded code.
  5. [University of Wisconsin] n. Cruft is to hackers as gaggle is to geese; that is, at UW one properly says “a cruft of hackers”.

The Jargon File, v4.4.7

Capturing and improving a workflow (optimising it even) is a process of removing cruft to identify what really needs to be there. This is remarkably like Michelangelo{{1}}’s approach to carving David{{2}}. When asked how he created such a beautiful sculpture, everything just as it should be, Michelangelo responded (and I’m paraphrasing):

[[1]]Michelangelo Buonarroti[[1]]
[[2]]Michelangelo’s David[[2]]

Michelangelo’s David

David was always there in the marble; I just carved away the bits that weren’t David.

Cruft is the result of the people — the knowledge workers engaged in the process — dealing with the limitations of last decade’s technology. Cruft is the work-arounds and compensating actions for a fragmented and conflicting IT environment, an environment which gets in the road more often than it supports the knowledge workers. Or cruft might be the detritus of quality control and risk management measures put in place some time ago (decades in many instances) to prevent an expensive mistake that is no longer possible.

Most approaches to workflow automation are based on some sort of process improvement methodology, such as LEAN or Six Sigma. These methods work: I’ve often heard it stated that pointing Six Sigma at a process results in a 30% saving, each and every time. They do this by aggressively removing variation in the process — slicing away unnecessary decisions, as each decision is an opportunity for a mistake. These decisions might represent duplicated decisions, redundant process steps, or unnecessarily complicated handoffs.

There are a couple of problems with this though, when dealing with workflow. Looking for what’s redundant doesn’t create an explicit link between business objectives and the steps in the workflow – a link which justifies each step’s existence – making it hard to ensure that we’ve caught all the cruft. And the aggressive removal of variation can strip a process’s value along with its cost.

Much of the cruft in a workflow process is there for historical reasons. These reasons can range from “something bad happened a long time ago” through to “we don’t know why, but if we don’t do that then the whole thing falls over”. A good facilitator will challenge seemingly obsolete steps, identifying those that have served their purpose and should be removed. However, it’s not possible to justify every step without quickly wearing down subject matter experts. Some obsolete steps will always leak through, no matter how many top-down and bottom-up iterations we do.

We can also find that we reach the end of the process improvement journey only to find that much of the process’s value — the exceptions and variation that make the process valuable — has been cut out to make the process more efficient or easier to implement. In the quest for more science in our processes, we’ve eliminated the art that we relied on.

If business process management isn’t a programming challenge{{3}}, then this holds even truer for human driven workflow.

[[3]]A business process is not a programming challenge @ PEG[[3]]

What we need is a way to chip away the cruft and establish a clear line of traceability between the goals of each stakeholder involved in the process, and each step and decision in the workflow. And we need to do this in a way that allows us to balance art and science.

I’m pretty sure that Michelangelo had a good idea of what he wanted to create when he started belting on the chisel. He was looking for something in the rock, the natural seams and faults, that would let him find David. He kept the things that supported his grand plan, while chipping away those that didn’t.

For a workflow process, these are the rules, tasks and points of variation that knowledge workers use to navigate their way through the day. Business rules and tasks are the basic stuff of workflow: decisions, data transformation and hand-offs between stakeholders. Points of variation let us identify those places in a workflow where we want to allow variation — alternate ways of achieving the one goal — as a way of balancing art and science.

Rather than focus on programming the steps of the process, worrying if we should send an email or a fax, we need to make this (often) tacit knowledge explicit. Working top-down, from the goals of the business owners, and bottom-up, from the hand-offs and touch-points with other stakeholders, we can chip away at the rock. Each rule, task or point of variation we find is measured against our goals to see if we should chip it away, or leave it to become part of the sculpture.
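To make that concrete, here’s a minimal sketch in Python of the goal-traceability test described above. The step names, goals and mappings are hypothetical illustrations, not a prescribed method: each step either traces to a stakeholder goal, or becomes a candidate for the chisel.

```python
# A goal-traceability pass over a workflow: keep the steps that trace
# to a stakeholder goal, flag the rest as candidate cruft. The step
# names, goals and mappings below are hypothetical illustrations.

from dataclasses import dataclass, field


@dataclass
class Step:
    name: str
    supports: set = field(default_factory=set)  # goals this step serves


# Goals gathered top-down from the process owners.
goals = {"confirm identity", "assess credit risk", "record decision"}

# The workflow as captured, warts and all.
workflow = [
    Step("verify applicant ID", {"confirm identity"}),
    Step("re-key data into legacy system"),             # work-around: no goal
    Step("score application", {"assess credit risk"}),
    Step("fax approval to archive"),                    # historical cruft
    Step("log decision", {"record decision"}),
]

# Steps that trace to a current goal stay; the rest are candidates
# for the chisel (after a facilitator has challenged them).
keep = [s.name for s in workflow if s.supports & goals]
cruft = [s.name for s in workflow if not (s.supports & goals)]

print("keep: ", keep)
print("cruft:", cruft)
```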

That which we need stays, that which is unnecessary is chipped away.

Taxonomies 1, Semantic Web (and Linked Data) 0

I’m not a big fan of Semantic Web{{1}}. For something that has been around for just over ten years — and which has been aggressively promoted by the likes of Tim Berners-Lee{{2}} — very little of substance has come of it.

Taxonomies, on the other hand, are going gangbusters, with solutions like GovDirect{{3}} showing that there is a real need for this sort of data-relationship driven approach{{4}}. Given this need, if the flexibility provided by Semantic Web (and more recently, Linked Data{{5}}) was really needed, then we would have expected someone to have invested in building significant solutions which use the technology.

While the technology behind Semantic Web and Linked Data is interesting, it seems that most people don’t think it’s worth the effort.

All this makes me think: the future of data management and standardisation is ad hoc, with communities or vendors scratching specific itches, rather than formal, top-down, theory driven approaches such as Semantic Web and Linked Data, or even other formal standardisation efforts of old.

[[1]]SemanticWeb.org[[1]]
[[2]]Tim Berners-Lee on Twitter[[2]]
[[3]]GovDirect[[3]]
[[4]]Peter Williams on the The Power of Taxonomies @ the Australian Government’s Standard Business Reporting Initiative[[4]]
[[5]]LinkedData.org[[5]]

The technologies behind the likes of Semantic Web and Linked Data have a long heritage. You can trace them back to at least the seventies when ontology and logic driven approaches to data management faced off against relational methodologies. Relational methods won that round — just ask Oracle or the nearest DBA.

That said, there has been a small number of interesting solutions built in the intervening years. I was involved in a few in one of my past lives{{6}}, and I’ve heard of more than a few built by colleagues and friends. The majority of these solutions used ontology management as a way to streamline service configuration, and therefore ease the pain of business change. Rather than being forced to rebuild a bunch of services, you could change some definitions, and off you go.

[[6]]AAII[[6]]

What we haven’t seen is a well-placed Semantic Web SPARQL{{7}} query which makes all the difference. I’m still waiting for that travel website where I can ask for a holiday, somewhere warm, within my budget, and without too many tourists who use beach towels to reserve lounge chairs at six in the morning; and get a sensible result.

[[7]]SPARQL @ w3.org[[7]]
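For illustration, here’s a sketch (using Python and rdflib) of the sort of query that travel site would need. The travel: vocabulary and the data are invented for the example, which is rather the point: a decade in, no one has published them.

```python
# A sketch of the query that travel site would need, using rdflib to
# run SPARQL over some holiday data. The travel: vocabulary and the
# data below are hypothetical; no such shared ontology exists.

from rdflib import Graph

turtle = """
@prefix travel: <http://example.org/travel#> .

<http://example.org/holiday/bali>
    travel:destination "Bali" ;
    travel:averageTempC 31 ;
    travel:price 1800 ;
    travel:touristDensity 0.6 .

<http://example.org/holiday/broome>
    travel:destination "Broome" ;
    travel:averageTempC 29 ;
    travel:price 1500 ;
    travel:touristDensity 0.2 .
"""

g = Graph()
g.parse(data=turtle, format="turtle")

# Somewhere warm, within budget, and not overrun with towel-wielding
# tourists; ordered by price.
query = """
PREFIX travel: <http://example.org/travel#>
SELECT ?destination ?price
WHERE {
    ?holiday travel:destination ?destination ;
             travel:averageTempC ?temp ;
             travel:price ?price ;
             travel:touristDensity ?density .
    FILTER (?temp > 25 && ?price < 2000 && ?density < 0.3)
}
ORDER BY ?price
"""

for row in g.query(query):
    print(row.destination, row.price)  # -> Broome 1500
```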

The flexibility which we could justify in the service delivery solutions just doesn’t appear to be justifiable in the data-driven solutions. A colleague showed me a Semantic Web solution that consumed a million or so pounds’ worth of taxpayer money to build a semantic-driven database for a small art collection. All this sophisticated technology would allow the user to ask all sorts of sophisticated questions, if they could navigate the (necessarily) complicated user interface, or if they could construct an even more daunting SPARQL query. A more pragmatic approach would have built a conventional web application — one which would easily satisfy 95% of users — for a fraction of the cost.

When you come down to it, the sort of power and flexibility provided by Semantic Web and Linked Data could only be used by a tiny fraction of the user population. For most people, something which gets them most of the way (with a little bit of trial and error) is good enough. Fire and forget. While the snazzy solution with the sophisticated technology might demo well (making it good TED{{8}} fodder), it’s not going to improve the day-to-day travail for most of the population.

[[8]]TED[[8]]

Then we get solutions like GovDirect. As the website puts it:

GovDirect® facilitates reporting to government agencies such as the Australian Tax Office via a single, secure online channel enabling you to reduce the complexity and cost of meeting your reporting obligations to government.

which makes it, essentially, a Semantic Web solution. Except it’s not, as GovDirect is built on XBRL{{9}} with a cobbled-together taxonomy.

[[9]]eXtensible Business Reporting Language[[9]]
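For a sense of what taxonomy-driven means in practice, here is a simplified, hypothetical sketch of an XBRL-style tagged fact, parsed with Python’s standard library. The au: namespace and element name are invented for the example; a real SBR/XBRL instance document is considerably richer.

```python
# A simplified illustration of a taxonomy-driven, XBRL-style fact.
# The au: namespace and element names are made up for the example;
# a real SBR/XBRL instance document is considerably more involved.

import xml.etree.ElementTree as ET

instance = """
<xbrl xmlns="http://www.xbrl.org/2003/instance"
      xmlns:au="http://example.org/au-taxonomy">
  <context id="FY2010">
    <period><instant>2010-06-30</instant></period>
  </context>
  <au:PAYGWithheld contextRef="FY2010" decimals="0">42000</au:PAYGWithheld>
</xbrl>
"""

root = ET.fromstring(instance)
fact = root.find("au:PAYGWithheld", {"au": "http://example.org/au-taxonomy"})
print(fact.text, fact.get("contextRef"))  # -> 42000 FY2010
```

The meaning lives in the shared taxonomy (which element names are legal, and what they denote), not in an ontology or inference engine.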

Taxonomy-driven solutions, such as GovDirect, might not offer the power and sophistication of a Semantic Web driven solution, but they do get the job done. These taxonomies are also more likely to be ad hoc — codifying a vendor’s solution, or accreted whilst on the job — than the result of some formal, top-down ontology{{10}} development methodology (such as those buried in the Semantic Web and Linked Data).

[[10]]Ontology defined in Wikipedia[[10]]

Take Salesforce.com{{11}} as an example. If we were to develop a taxonomy to exchange CRM data, then the most likely source will be other vendors reverse engineering{{12}} whatever Salesforce.com is doing. The driver, after all, is to enable clients to get their data out of Salesforce.com. Or the source might be whatever a government working group publishes, given a government’s dominant role in its geography. By extension we can also see the end of the formal standardisation efforts of old, as they devolve into the sort of information frameworks represented by XBRL, which accrete attributes as needed.

[[11]]SalesForce.com[[11]]
[[12]]Reverse engineering defined in Wikipedia[[12]]

The general trend we’re seeing is a move away from top-down, tightly defined and structured definitions of data interchange formats, as they’re replaced by bottom-up, looser definitions.

Vacuum flasks: fulfilling a need

As seen on a plaque at Scienceworks in the House Secrets exhibit.

James Dewar invented the vacuum flask in 1892 to keep laboratory gases cold. Twelve years later, Reinhold Burger manufactured the Thermos to keep our picnic drinks hot.

A nice demonstration of the third of Peter Drucker’s seven sources of innovation.

Innovation based on process need.

Or, put another way, James Dewar scratched an itch; though he did play Edison to Reinhold Burger’s Samuel Insull.


What I like about jet engines

Rolls-Royce{{1}} (the engineering company, not the car manufacturer) is an interesting firm. From near disaster in the 70s, when the company was on the brink of failure, Rolls-Royce has spent the last 40 years reinventing itself. Where it used to sell jet engines, now the company sells hot air out the back of the engines, with clients paying only for the hours an engine is in service. Rolls-Royce is probably one of the cleanest examples of business-technology{{2}} that I’ve come across, with the company picking out the synergies between business and technology to solve customer problems, rather than focusing on trying to align technology delivery with a previously imagined production process to push products at unsuspecting consumers. I like this for a few reasons. Firstly, because it wasn’t a green fields development (like Craig’s List{{3}} et al), and so provides hope for all companies with more than a few years under their belt. And secondly, because the transformation seems to have been the result of many incremental steps as the company felt its way into the future, rather than the result of some grand, strategic plan.

[[1]]Rolls Royce[[1]]
[[2]]Business-Technology defined @ Forrester[[2]]
[[3]]Craig’s list[[3]]

A Rolls-Royce jet engine

I’ve been digging around for a while (years, not months), looking for good business-technology case studies. Examples of organisations which leverage the synergies between business and technology to create new business models which weren’t possible before, rather than simply deploying applications to accelerate some pre-imagined human process. What I’m after is a story that I can use in presentations and the like, and which shows not just what business-technology is, but also contrasts business-technology with the old business and technology alignment game while providing some practical insight into how the new model was created.

For a while I’ve been mulling over the obvious companies in this space, such as Craig’s List or Zappos{{4}}. While interesting, their stories don’t have the impact that they could as they were green fields developments. What I wanted was a company with some heritage, a history, to provide the longitudinal view this needs.

[[4]]Zappos[[4]]

The company I keep coming back to is Rolls-Royce. (The engineering firm, not the car manufacturer). I bumped into a story in The Economist{{5}}, Britain’s lone high-flier{{6}}, which talks about the challenge of manufacturing in Britain. (Which is, unfortunately, behind the pay wall now.) As The Economist pointed out:

A resurgent Rolls-Royce has become the most powerful symbol of British manufacturing. Its success may be hard to replicate, especially in difficult times.

[[5]]The Economist[[5]]
[[6]]Britain’s lone high-flier @ The Economist[[6]]

With its high costs and (relatively) inflexible workforce, running a manufacturing business out of Britain can be something of a challenge, especially with China breathing down your neck. Rolls-Royce’s solution was not to sell engines, but to sell engine hours.

This simple thought (which is strikingly similar to the tail of the story in Mesh Collaboration{{7}}) has huge ramifications, pushing the company into new areas of the aviation business. It also created a company heavily dependent on technology, from running realtime telemetry around the globe through to knowledge management. The business model — selling hot air out the back of an engine — doesn’t just use technology to achieve scale, but has technology woven into its very fabric. And, most interestingly, it is the result of tinkering, small incremental changes rather than being driven by some brilliant transformative idea.

[[7]]Mash-Up Corporations[[7]]

As with all these long term case studies, the Rolls-Royce story does suffer from applying new ideas to something that occurred yesterday. I’m sure that no one in Rolls-Royce was thinking “business-technology” when the company started the journey. Nor would they have even thought of the term until recently. However, the story still works for me as, for all its faults, I think there’s still a lot we can learn from it.

The burning platform was in the late 60s, early 70s. Rolls-Royce was in trouble. The company had 10% market share, rising labour costs, and was facing fierce competition from companies in the U.S. Even worse, these competitors did not have to worry about patents (a hangover from the second world war), and they also had a large domestic market and a pipeline of military contracts which put them in a much stronger financial position. Rolls-Royce had to do something radical, or face being worn down by aggressive competitors who had more resources behind them.

Interestingly, Rolls-Royce chose to try and be smarter than the competition. Rather than focus on incremental development, the company decided to design a completely new engine. Using carbon composite blades and a radical new engine architecture (three shafts rather than two, for those aeronautical engineers out there), their engine was going to be a lot more complex to design, build and maintain. It would also be a lot more fuel efficient and suffer less wear and tear. And it would be more scalable to different aircraft sizes. This approach allowed Rolls-Royce to step out of the race for incremental improvements in existing designs (designing a slightly better fan blade) and create a significant advantage, one which would take the company’s competitors more than the usual development cycle or two to erase.

Most of the margin for jet engines, however, is in maintenance. Some pundits even estimate that engines are sold at a loss (though the manufacturers claim to make modest margins on all the engines they sell), while maintenance can enjoy a healthy 35%. It’s another case of give them the razor but sell them the razor blades. But if you give away the razors, there’s always the danger that someone else may make blades to fit your razor. Fat margins and commoditised technology resulted in a thriving service market, with the major engine makers chasing each other’s business, along with a horde of independent servicing firms.

Rolls-Royce’s interesting solution was to integrate the expertise from the two businesses: engine development and servicing. Rather than run them as separate businesses, the company convinced customers to pay a fee for every hour an engine was operational. Rather than selling engines, the company sells hot air out the back of an engine. This provides a better deal for the customers (pay for what you use, rather than face a major capital expense), while providing Rolls-Royce with a stronger hold on its customer base.

Integrating the two businesses also enabled Rolls-Royce to become better at both. Maintenance data helps the company identify and fix design flaws, driving incremental improvements in fuel efficiency while extending the operating life (and time between major services) tenfold over the last thirty years. It also helps the company predict engine failures, allowing maintenance to be scheduled at the most opportune time for Rolls-Royce, and their customers.

Rolls-Royce leveraged this advantage to become the only one of the three main engine-makers with designs to fit the three newest airliners in the market: the Boeing 787 Dreamliner, the Airbus A380 and the new wide-bodied version of the Airbus A350. Of the world’s 50 leading airlines, 45 use its engines.

Today, an operations centre in Derby assesses, in real time, the performance of 3,500 jet engines, enabling Rolls-Royce to spot issues before they become problems and schedule just-in-time maintenance. This means less maintenance and more operating hours, fewer breakdowns (and, I expect, happier customers), and the operational data generated is fed back into the design process to help optimise the next generation of engines.

This photograph is reproduced with the permission of Rolls-Royce plc, copyright © Rolls-Royce plc 2010
Rolls-Royce civil aviation operations in Derby

This service-based model creates a significant barrier to competitors for anyone who wants to steal Rolls-Royce’s business. Even if you could clone Rolls-Royce’s technology infrastructure (hard, but not impossible), you would still need to recreate all the tacit operational knowledge the company has captured over the years. The only real option is to recreate the knowledge yourself, which will take you a similar amount of time as it did Rolls-Royce, while Rolls-Royce continues to forge ahead. Even poaching key personnel from Rolls-Royce would only provide a modest boost to your efforts. As I’ve mentioned before{{8}}, this approach has the potential to create a sustainable competitive advantage.

[[8]]One of the only two sources of sustainable competitive advantage available to us today @ PEG[[8]]

While other companies have adopted some aspects of Rolls-Royce’s model (including the Joint Strike Fighter{{9}}, which is being procured under a similar model), Rolls-Royce continues to lead the pack. More than half of its existing engines in service are covered by such contracts, as are roughly 80% of those it is now selling.

[[9]]The Joint Strike Fighter[[9]]

I think that this makes Rolls-Royce a brilliant example of business-technology in action. Rolls-Royce found, by trial and error, a new model that wove technology and business together in a way that created an “outside in” business model, focused on what customers want to buy, rather than on a more traditional “inside out” model based on pushing products out into the market that the company wants to sell. You could even say that it’s an “in the market” model rather than a “go to market” model. And they did this with a significant legacy, rather than as a green fields effort.

In some industries and companies this type of “outside in” approach was possible before the advent of the latest generation of web technology, particularly if it was high value and the company already had a network in place (as with Rolls-Royce). For most companies it is only now becoming possible with business-technology, along with some of the current trends, such as cloud computing, which erase many of the technology barriers.

The challenge is to figure out the “in the market” model you need, and then shift management attitude. Given constant change in the market, this means an evolutionary approach, rather than a revolutionary (transformative) one.

BPM is not a programming challenge

Get a few beers into a group of developers these days and it’s not uncommon for the complaints to start flowing about BPM (Business Process Management). BPM, they usually conclude, is more pain than it’s worth. I don’t think that BPM is a bad technology, per se, but it does appear to be the wrong tool for the job. The root of the problem is that BPM is a handy tool for programming distributed systems, but the challenge of creating distributed systems is orthogonal to business process execution and management. We’re using a screwdriver to belt in a nail. It’s more productive to think of business process execution and management as a (realtime) planning problem.

Programming is the automation of the known. Take a stable, repeatable process and automate it; bake the process into silicon to make it go fast. This is the same tactic that I was using back in my image processing days (and that was a long time ago). We’d develop the algorithms in C, experiment and tweak until they were right, and once they were stable we’d burn them into an ASIC (Application-Specific Integrated Circuit) to provide a speed boost. The ASICs were a lot faster than the C version: more than an order of magnitude faster.

Programmers, and IT folk in general, have a habit of treating the problems we confront as programming challenges. This has been outstandingly successful to date; just try and find a home appliance or service that doesn’t have a program buried in it somewhere. (It’s not an unmitigated success though: our tumble dryer is driving us nuts with its overly frequent software errors.) It’s not surprising that we chose to treat business process automation and management as a programming problem once it appeared on our radar.

Don’t get me wrong: BPM is a solid technology. A friend of mine once showed me how he’d used his BPM stack to test its BPEL engine. Aside from being a nice example of eating your own dog food, it was a great example of using BPEL as a distributed programming tool to solve a small but complex problem.

So why do we see so many developers complaining about BPM? It’s not the technology itself: the technology works. The issue is that we’re using it to solve problems that it’s not suited for. The most obvious evidence of this is the current poor state of BPM support for business exception management. We’ve deployed a lot of technology to support exception management in business processes without really solving the problem.

Managing business exceptions is driving the developers nuts. I know of one example where managing a couple of not infrequent business exceptions was the major technical problem in a very significant project (well into eight figures). The problem is that business exceptions are not from the same family of beasts as programming exceptions. Programming exceptions are exceptional. Business exceptions are just a (slightly) different way to achieve the same goal. All our compensating actions and exception stacks just get in the way of solving the problem.

On PowerPoint, anything can look achievable. The BPMN diagram we shared with the business was extremely elegant: nice sharp angles and coloured bubbles. Everyone agreed that it was a good representation of what the business does. The devil is in the details though. The development team quickly becomes frustrated as they have to deal with the realities of implementing a dynamic and exception rich business processes. Exceptions pile up on top of exceptions, and soon that BPMN diagram covers a wall, littered as it is with branch and join operations. It’s not a complex process, but we’ve made it incredibly complicated.

Edward Tufte's take on explaining complex concepts with PowerPoint
A military parade explained, a la PowerPoint

We can’t program our way out of this box, trying to pile on more features and patches. We can rip the complications out – simplifying the process to the point that it becomes tractable with our programming tools (which is what happened in my example above). But this removes all the variation which makes the process so valuable. (This, of course, is the dirty secret of LEAN et al: you’re trading flexibility for cost saving, making your processes very efficient but also very fragile.)

Or we can try solving the problem a different way.

Don’t treat the automation of a business process as a programming task (and by this I mean the capture of imperative instructions for a computer to execute, no matter how unstructured or parallel). Programming is the automation of the known. Business processes, however, are the management and anticipation of the unknown. Modelling business processes should be seen as a (realtime) planning problem.

Which comes back to one of my common themes: push vs pull models, or the importance of what over how. Or, as a friend of mine with a better turn of phrase puts it, we need to stop trying to invent new technologies and work out how to use what we already have more effectively. Rather than trying to invent new technologies to solve problems that are already well understood elsewhere, pushing the technology into the problem, a more pragmatic approach is to leverage that existing understanding and then pull in existing technologies as appropriate.

Planning and executing in a rapidly changing environment is a well understood problem. Just ask anyone who’s been involved with the military. If we view the management of a business process as a realtime planning problem, then what were business exceptions are reduced to simply alternate routes to the same goal, rather than problems which require a compensating action.

Battle of Gaugamela (Arbela) (331BC)
Take that hill!

One key principle is to establish a clear goal – Take that hill!, or Find that lost shipment! – articulate the tactics, the courses of action we might use to achieve that goal, and then defer decisions on which course of action to take until the decision needs to be made. If we commit to a course of action too early, locking in a decision during design time, then it’s likely that we’ll be forced to manage the exception when we realise that we picked the wrong course of action. It’s better to wait until the moment when all relevant information and options are available to us, and then take decisive action.

From a modelling point of view, we need to establish the key events at which we need to make decisions in line with a larger strategy. The decision at each of these events needs to weigh the available courses of action and select the most appropriate, much like using a set of business rules to identify applicable options. The course of action selected, a scenario or business process fragment, will be semi-independent from the others in the applicable set, as it addresses a different business context. Nor can the scenario we pick be predetermined, as it depends on the business context. Short and sharp, each scenario will be simple, general and flexible, enabling us to configure it for the specific circumstances at hand, as we can’t anticipate all possible scenarios. And finally, we need to ensure that the scenarios we provide cover the situations we can anticipate, including the provision of a manual escape hatch.
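A minimal sketch of such a decision point, in Python, might look like the following. The goal, context and courses of action are hypothetical illustrations rather than any product’s API; the point is the shape: a goal, a set of guarded scenarios, and a decision deferred until the event fires.

```python
# Goals, rules, processes: a minimal sketch of runtime course-of-action
# selection. The goal, contexts and scenarios below are hypothetical
# illustrations, not a product API.

goal = "find the lost shipment"

# Each course of action is a short, sharp process fragment guarded by
# an applicability rule evaluated against the business context.
courses_of_action = [
    ("query the carrier's tracking feed",
     lambda ctx: ctx.get("carrier_has_tracking", False)),
    ("phone the regional depot",
     lambda ctx: ctx.get("last_seen_at_depot", False)),
    ("escalate to a case worker",   # the manual escape hatch
     lambda ctx: True),
]

def decide(ctx):
    """Defer the decision until the event fires, then pick the first
    applicable course of action given everything we know right now."""
    for action, applicable in courses_of_action:
        if applicable(ctx):
            return action

# A 'business exception' is just an alternate route to the same goal.
context = {"carrier_has_tracking": False, "last_seen_at_depot": True}
print(goal, "->", decide(context))  # -> phone the regional depot
```

Run it with a different context and a different course of action fires; what would have been an exception is just another route to the goal.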

Goals, rules and process: in that order. Integrated, rather than as standalone engines. Pull these established technologies into a single platform and we might just be closer to a BPM solution in line with what we really need. (And we know there is nothing new under the sun, as this is essentially a build on Jim Sinur’s rules-and-process argument, and borrows a lot from STRIPS, PRS, dMARS and even the work I did at Agentis.)

As I mentioned at the start of this missive, BPM as a product category makes sense and the current implementations are capable distributed programming tools. The problem is that business process management is not a distributed programming challenge. Business exceptions are not exceptional. I say steal a page from the military strategy book – they, after all, have been successfully working on this problem for some time – and build our solutions around the ideas the military use to succeed in a rapidly changing environment. Goals, rules and processes. The trick is to be pragmatic, rather than dogmatic, in our implementation, and focus on solving the problem rather than trying to create a new technology.

Tea bags: the unexpected

As seen on a plaque at Scienceworks in the House Secrets exhibit.

A thrifty tea merchant from New York named Thomas Sullivan is credited with inventing the first tea bag in 1908. Looking to save money, Sullivan reportedly distributed small samples of tea in silk bags instead of little metal tins. It wasn’t until after he saw restaurant and coffee shop owners brewing the entire bag of tea leaves that he realized the potential of his actions.

A nice demonstration of the first, and most valuable, of Peter Drucker’s seven sources of innovation.

The unexpected. The unexpected success, failure or outside event.


Penicillin: the unexpected

As seen on a plaque at Scienceworks.

The penicillin mold was a pest, not a resource. Bacteriologists went to great lengths to protect their bacterial cultures against contamination by it. Then in the 1920s, a London doctor, Alexander Fleming, realized that this “pest” was exactly the bacterial killer bacteriologists had been looking for – and the penicillin mold became a valuable resource.

A nice demonstration of the first, and most valuable, of Peter Drucker’s seven sources of innovation.

The unexpected. The unexpected success, failure or outside event.
