Monthly Archives: July 2010

Michelangelo’s approach to workflow discovery

Take any existing workflow — any people-driven business process — and I expect that most of the tasks within it could best be described as cruft.

cruft: /kruhft/
[very common; back-formation from crufty]

  1. n. An unpleasant substance. The dust that gathers under your bed is cruft; the TMRC Dictionary correctly noted that attacking it with a broom only produces more.
  2. n. The results of shoddy construction.
  3. vt. [from hand cruft, pun on ‘hand craft’] To write assembler code for something normally (and better) done by a compiler (see hand-hacking).
  4. n. Excess; superfluous junk; used esp. of redundant or superseded code.
  5. [University of Wisconsin] n. Cruft is to hackers as gaggle is to geese; that is, at UW one properly says “a cruft of hackers”.

The Jargon File, v4.4.7

Capturing and improving a workflow (optimising it, even) is a process of removing cruft to identify what really needs to be there. This is remarkably like Michelangelo{{1}}’s approach to carving David{{2}}. When asked how he created such a beautiful sculpture, everything just as it should be, Michelangelo responded (and I’m paraphrasing):

[[1]]Michelangelo Buonarroti[[1]]
[[2]]Michelangelo’s David[[2]]

Michelangelo’s David

David was always there in the marble; I just carved away the bits that weren’t David.

Cruft is the result of the people — the knowledge workers engaged in the process — dealing with the limitations of last decade’s technology. Cruft is the work-arounds and compensating actions for a fragmented and conflicting IT environment, an environment which gets in the road more often than it supports the knowledge workers. Or cruft might be the detritus of quality control and risk management measures put in place some time ago (decades in many instances) to prevent an expensive mistake that is no longer possible.

Most approaches to workflow automation are based on some sort of process improvement methodology, such as Lean or Six Sigma. These methods work: I’ve often heard it stated that pointing Six Sigma at a process results in a 30% saving, each and every time. They do this by aggressively removing variation in the process — slicing away unnecessary decisions, as each decision is an opportunity for a mistake. These might be duplicated decisions, redundant process steps, or unnecessarily complicated hand-offs.

There are a couple of problems with this, though, when dealing with workflow. Looking for what’s redundant doesn’t create an explicit link between business objectives and the steps in the workflow, one that justifies each step’s existence, which makes it hard to ensure that we’ve caught all the cruft. And the aggressive removal of variation can strip a process’s value along with its cost.

Much of the cruft in a workflow process is there for historical reasons. These reasons can range from “something bad happened a long time ago” through to “we don’t know why, but if we don’t do that then the whole thing falls over”. A good facilitator will challenge seemingly obsolete steps, identifying those that have served their purpose and should be removed. However, it’s not possible to justify every step without quickly wearing down the subject matter experts. Some obsolete steps will always leak through, no matter how many top-down and bottom-up iterations we do.

We can also find that we reach the end of the process improvement journey only to discover that much of the process’s value — the exceptions and variation that make the process valuable — has been cut out to make the process more efficient or easier to implement. In the quest for more science in our processes, we’ve eliminated the art that we relied on.

If business process management isn’t a programming challenge{{3}}, then this is even more true of human-driven workflow.

[[3]]A business process is not a programming challenge @ PEG[[3]]

What we need is a way to chip away the cruft and establish a clear line of traceability between the goals of each stakeholder involved in the process, and each step and decision in the workflow. And we need to do this in a way that allows us to balance art and science.

I’m pretty sure that Michelangelo had a good idea of what he wanted to create when he started belting on the chisel. He was looking for something in the rock, the natural seams and faults, that would let him find David. He kept the things that supported his grand plan, while chipping away those that didn’t.

For a workflow process, these are the rules, tasks and points of variation that knowledge workers use to navigate their way through the day. Business rules and tasks are the basic stuff of workflow: decisions, data transformation and hand-offs between stakeholders. Points of variation let us identify those places in a workflow where we want to allow variation — alternate ways of achieving the one goal — as a way of balancing art and science.

Rather than focus on programming the steps of the process, worrying about whether we should send an email or a fax, we need to make this (often) tacit knowledge explicit. Working top-down, from the goals of the business owners, and bottom-up, from the hand-offs and touch-points with other stakeholders, we can chip away at the rock. Each rule, task or point of variation we find is measured against our goals to see if we should chip it away, or leave it to become part of the sculpture.
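
To make that measuring step a little more concrete, here’s a rough Python sketch of the traceability idea: every rule, task and point of variation carries the stakeholder goals it supports, and anything supporting none is flagged as a candidate for the chisel. The step and goal names are invented for illustration only.

    from dataclasses import dataclass, field

    @dataclass
    class Step:
        name: str
        kind: str                                 # "rule", "task" or "point of variation"
        goals: set = field(default_factory=set)   # stakeholder goals this step supports

    # A toy workflow; the steps and goals are hypothetical.
    workflow = [
        Step("check credit history", "rule", {"manage risk"}),
        Step("capture customer details", "task", {"fulfil the order"}),
        Step("fax confirmation to the branch", "task", set()),  # supports no goal: cruft?
        Step("choose delivery channel", "point of variation",
             {"fulfil the order", "delight the customer"}),
    ]

    keep = [step for step in workflow if step.goals]
    cruft = [step for step in workflow if not step.goals]

    print(f"keeping {len(keep)} steps, questioning {len(cruft)}")
    for step in cruft:
        print(f"candidate for the chisel: {step.name} ({step.kind})")

Nothing more than a list comprehension, but it captures the intent: each step either traces to a stakeholder goal, or it goes.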

That which we need stays, that which is unnecessary is chipped away.

The sun-shaped individual

(Yep, this is a cross post from Stuff I find interesting, but the missive grew to the point that I thought it worthwhile putting it on this blog as well.)

I stumbled across a rather interesting, and rather old (in internet terms), blog post today: T-Shaped + Sun-Shaped People by David Armano. I suppose you could say that it’s a build on the old idea of t-shaped people, folk with deep experience in one domain (their core discipline). As the post quotes, from Tim Brown at IDEO:

We look for people who are so inquisitive about the world that they’re willing to try to do what you do. We call them “T-shaped people.” They have a principal skill that describes the vertical leg of the T — they’re mechanical engineers or industrial designers. But they are so empathetic that they can branch out into other skills, such as anthropology, and do them as well. They are able to explore insights from many different perspectives and recognize patterns of behavior that point to a universal human need. That’s what you’re after at this point — patterns that yield ideas.

I’ve always found the concept of t-shaped people interesting and troubling at the same time. On the one hand, their broader view provides them with some sensitivity for the problems and experience to be found in other domains. On the other, it reeks of dilettantism, as there is no rationale behind their interest other than curiosity (what’s it like on the other side of the fence?). This leaves you a victim of the dogma of your core discipline, with the cross-discipline stuff just window dressing.

For a while I’ve thought (and spoken) of the need to have some sort of coherent focus to our interests, something beyond the doctrine we learnt in our early twenties and which largely defines us. I think we need this focus for a few different reasons.

Firstly, it helps us identify the sort of problems we want to solve beyond the constraints of a well-defined discipline. I’m interested in how people solve problems, which leads me to working in everything from (business) strategy down to workflow design.

Secondly, it provides you with a framework to identify and integrate new ideas and domains into your toolkit. It’s a bit like Bruce Lee’s idea of “absorb what is useful” from Jeet Kune Do. For years I’ve been finding, collecting, evaluating and then either integrating or discarding ideas from areas as diverse as logic and science, (bio-medical) engineering, history, philosophy (including the likes of Cicero), human factors, business theory (Michael Porter and the like) and even computer science (particularly AI). You don’t collect random ideas (a la TED); you find useful tools which integrate with and add value to your toolkit.

Thirdly, it provides you with a mechanism to cope with the deluge of information we live in today. There’s a lot of talk of the need for smart filters, which I’ve always had a problem with. Perhaps it’s my little internal John Boyd, but we shouldn’t be just throwing away valuable information. A more intelligent approach is to have a framework — a focus — which makes it easier to integrate the information into our world view. (There’s probably a whole post in this point alone.)

David’s post posited the concept of a sun-shaped person, which sounds a lot like this idea of having a consistent focus.

Does this make us "sun-shaped people"?

Quoting David:

Most of us have some kind of passion in a specific area. For some—it’s a hobby or interest. For others, it’s directly related to our work. I fall into the latter category. If you were to ask me what my “passion is”—I would probably say that at the core, it’s creative problem solving. This is pretty broad and incorporates a lot of disciplines that can relate to it. But that’s the point. What if we start with our passions regardless of discipline, and look at the skills which radiate out from it the same way we think about how rays from the sun radiate warmth?

I think this makes a lot of sense, and fits in a lot more neatly with the direction the world is headed, than the concept of a t-shaped individual. Who doesn’t wear multiple hats these days? How much of your job is actually related to your job title? And don’t we all steal ideas from other disciplines?

Tying yourself to a single domain — I’m a supply chain person, I’m a techo, I do human factors — is committing yourself to doing the same thing that you did yesterday. You’re marking yourself as a domain specialist. The challenge is that we seem to be entering an age where we need more generalists. Last year you worked in finance, this year you’re building robots, next year you might be in durable goods. Your focus, your passions, won’t have changed, but what you do day-to-day will have. That sounds a lot like the sun-shaped individual to me.

What is innovation?

What is innovation? I don’t know, but then I’m not even sure that it’s an interesting question. The yearning so many companies have to be innovative often seems to prevent them from actually doing anything innovative: they get so caught up in trying to come up with the next innovation — the next big product — that they fail to do anything innovative at all. It’s more productive to define innovation by understanding what it’s not: doing the same thing as the rest of the crowd, while accepting that there are no silver bullets and that you don’t control all the variables.

So, what is innovation? This seems to be a common question that comes up whenever a company wants to innovate. After all, the first step in solving a problem is usually to define our terms.

Innovation is a bit like quantum theory’s spooky action at a distance{{1}}, where stuff we know and understand behaves in a way we don’t expect. It can be easy to spot an innovative outcome (hindsight is a wonderful thing), but it’s hard to predict what will be innovative in the future. Just spend some time browsing Paleo-Future{{2}} (one of my favourite blogs) to see just how far off the mark we’ve been in the past.

[[1]]Spooky action at a distance? @ Fact and Fiction[[1]]
[[2]]Paleo-Future[[2]]

The problem is that as it’s all relative; what’s innovative in one context may (or may not) be innovative in another. You need an environment that brings together a confluence of factors — ideas, skills, the right business and market drivers, the time and space to try something new — before there’s a chance that something innovative might happen.

Unfortunately innovation has been claimed as the engine behind the success of more than a few leading companies, so we all want to know what it is (and how to get some). Many books have been written promising to tell you exactly what to do to create innovation, providing you with a twelve step program{{3}} to a happier and more innovative future. If you just do this, then you too can invent the next iPhone{{4}}.

[[3]]Twelve step programs @ Wikipedia[[3]]
[[4]]iPhone — the Apple innovation everyone expected @ Fast Company[[4]]

Initially we were told that we just needed to find the big idea, a concept which would form the basis of our industry-shattering innovation. We hired consultants to run ideation{{5}} workshops for us, or even outsourced ideation to an innovation consultancy, asking them to hunt down the big idea for us. A whole industry has sprung up around the quest for the big idea, with TED{{6}} (which I have mixed feelings about) being the most obvious example.

[[5]]Ideation defined at Wikipedia[[5]]
[[6]]TED[[6]]

As I’ve said before, the quest for the new-new thing is pointless{{7}}.

[[7]]Innovation should not be the quest for the new-new thing @ PEG[[7]]

The challenge in managing innovation is not capturing ideas before they develop into market-shaping innovations. If we see an innovative idea outside our organisation, then we must assume that we’re not the first to see it, and ideas are easily copied. If innovation were a transferable good, then we’d all have the latest version.

Ideas are a dime a dozen, so the real challenge is to execute on an idea (i.e. pick one and do something meaningful with it). If you get involved in that ideas arms race, then you will come last, as someone will always have the idea before you. As Scott McNealy at Sun likes to say:

Statistically, most of the smart people work for somebody else.

More recently our focus has shifted from ideas to method. Realising that a good idea is not enough, we’ve tried to find a repeatable method with which we can manufacture innovation. This is what business does, after all: formalise and systematise a skill, and then deploy it at huge scale to generate a profit. Think Henry Ford and the creation of that first production line.

Design Thinking{{8}} is the most popular candidate for a method of innovation, due largely to the role of Jonathan Ive{{9}} and design in Apple’s rise from also-ran to market leader. There’s a lot of good stuff in Design Thinking — concepts and practices anyone with an engineering background{{10}} would recognise. Understand the context that your product or solution must work in. Build up the ideas used in your solution in an incremental and iterative fashion, testing and prototyping as you go. Teamwork and collaboration. And so on…

[[8]]Design Thinking … what is that? @ Fast Company[[8]]
[[9]]Jonathan Ive @ Design Museum[[9]]
[[10]]Sorry, software engineering doesn’t count.[[10]]

The fairly obvious problem with this is that Design Thinking does not guarantee an innovative outcome. For every Apple with their iPhone there’s an Apple with a Newton{{11}}. Or Microsoft with a Kin{{12}}. Or a host of other carefully designed and crafted products which failed to have any impact in the market. I’ll let the blogosphere debate the precise reason for each failure, but we can’t escape the fact that the best people with a perfect method cannot guarantee us success.

[[11]]The story behind the Apple Newton @ Gizmodo[[11]]
[[12]]Microsoft Said to Blame Low Sales, High Price for Kin’s Failure @ Business Week[[12]]

People make bad decisions. You might have followed the method correctly, but perhaps you didn’t quite identify the right target audience. Or the technology might not quite be where you need it to be. Or something a competitor did might render all your blood, sweat and tears irrelevant.

Design Thinking (and innovation) is not chess: a game where all variables are known and we have complete information, allowing us to make perfect decisions. We can’t expect a method like Design Thinking to provide an innovative outcome.

Why then do we try and define innovation in terms of the big idea or perfect methodology? I put this down to the quest for a silver bullet: most people hope that there’s a magic cure for their problems which requires little effort to implement, and they dislike the notion that hard work is key.

This is true in many of life’s facets. We prefer diet pills and magic foods over exercise and eating less. If I pay for this, then it will all come good. If we can just find that innovative idea in our next facilitated ideation workshop. Or hire more designers and implement Design Thinking across our organisation.

Success with innovation, as with so many things, is more a question of hard work than anything else. We forget that the person behind P&G’s Design Thinking efforts{{13}}, Cindy Tripp, came out of marketing and finance, not design. She chose Design Thinking as the right tool for the problems she needed to solve — Design Thinking didn’t choose her. And she worked hard, pulling in ideas from left, right and centre, to find, test and implement the tools she needed.

[[13]]P&G changes its game @ Business Week[[13]]

So innovation is not the big idea. Nor is it a process like Design Thinking.

For me, innovation is simply:

  • working toward a meaningful goal, and
  • being empowered to use whichever tools will be most beneficial.

If I were to try and define innovation more formally, then I would say that innovation is a combination of two key concepts: obliquity{{14}} and Jeet Kune Do’s{{15}} concept of absorbing what is useful.

[[14]]Obliquity defined at SearchCRM[[14]]
[[15]]Jeet Kune Do, a martial art discipline developed by Bruce Lee @ Wikipedia[[15]]

Obliquity is the simple idea that the best way to achieve a goal in a complex environment is to take an indirect approach. The fastest and most productive path to the top of the mountain might be to take the path that winds its way around the mountain, rather than to try and walk directly up the steepest face.

Apple is a good example of obliquity in action. Both Steve Jobs and Jonathan Ive are on record as wanting to make “great products that we want to own ourselves,” rather than plotting to build the biggest and most innovative company on the planet. Rather than try and game the financial metrics, they focus on making great products.

Bruce Lee{{16}} came up with the idea of “absorbing what is useful”{{17}} when he created Jeet Kune Do. He promoted the idea that students should learn a range of methods and doctrines, experiment to learn what works (and what doesn’t work) for them, and “absorb what is useful” while discarding the remainder. The critical point of this principle is that the choice of what to keep is based on personal experimentation. It is not based on how a technique may look or feel, or how precisely the artist can mimic tradition. In the final analysis, if the technique is not beneficial, it is discarded. Lee believed that only the individual could come to understand what worked, based on critical self-analysis and by “honestly expressing oneself, without lying to oneself.”

[[16]]Bruce Lee: the divine wind[[16]]
[[17]]Absorbing what is useful @ Wikipedia[[17]]

Cindy Tripp at P&G is a good example of someone absorbing what is useful. Her career has seen her investigating different topics and domains, more a sun-shaped individual than a t-shaped one{{18}}. Starting from a core passion, she has accreted a collection of disciplines, tools and techniques which are beneficial. Design Thinking is one of these techniques (which she uses as a reframing tool).

[[18]]T-Shaped + Sun-Shaped People @ Logic + Emotion[[18]]

I suppose you could say that I’ve defined innovation by identifying what it’s not: innovation is the courage to find a different way up the hill, while accepting that there are no silver bullets and that you don’t control all the variables.

Updated: Tweaked the wording in the (lucky) 13th paragraph in line with Bill Buxton’s comment.

Who gets the credit: the innovator or the implementer?

Who should get the credit? The person who came up with the idea? Or the person who did something with it?

I’m with the implementers. Thomas Edison might be remembered for the lightbulb, but Samuel Insull’s hard work enabled everyone to have one in their homes. We also forget that it wasn’t even Edison who invented electric light (though you could argue that he perfected it). Edison was one of many who happened to have the same good idea. Insull changed society forever.

Taxonomies 1, Semantic Web (and Linked Data) 0

I’m not a big fan of Semantic Web{{1}}. For something that has been around for just over ten years — and which has been aggressively promoted by the likes of Tim Berners-Lee{{2}} — very little of real substance has come of it.

Taxonomies, on the other hand, are going gangbusters, with solutions like GovDirect{{3}} showing that there is a real need for this sort of data-relationship driven approach{{4}}. Given this, if the flexibility provided by Semantic Web (and, more recently, Linked Data{{5}}) were really needed, then we would have expected someone to have invested in building significant solutions which use the technology.

While the technology behind Semantic Web and Linked Data is interesting, it seems that most people don’t think it’s worth the effort.

All this makes me think: the future of data management and standardisation is ad hoc, with communities or vendors scratching specific itches, rather than formal, top-down, theory-driven approaches such as Semantic Web and Linked Data, or even other formal standardisation efforts of old.

[[1]]SemanticWeb.org[[1]]
[[2]]Tim Berners-Lee on Twitter[[2]]
[[3]]GovDirect[[3]]
[[4]]Peter Williams on the The Power of Taxonomies @ the Australian Government’s Standard Business Reporting Initiative[[4]]
[[5]]LinkedData.org[[5]]

The technologies behind the likes of Semantic Web and Linked Data have a long heritage. You can trace them back to at least the seventies, when ontology and logic-driven approaches to data management faced off against relational methodologies. Relational methods won that round — just ask Oracle or the nearest DBA.

That said, there have been a small number of interesting solutions built in the intervening years. I was involved in a few in one of my past lives{{6}}, and I’ve heard of more than a few built by colleagues and friends. The majority of these solutions used ontology management as a way to streamline service configuration, and therefore ease the pain of business change. Rather than being forced to rebuild a bunch of services, you could change some definitions, and off you go.

[[6]]AAII[[6]]

What we haven’t seen is a well-placed Semantic Web SPARQL{{7}} query which makes all the difference. I’m still waiting for that travel website where I can ask for a holiday, somewhere warm, within my budget, and without too many tourists who use beach towels to reserve lounge chairs at six in the morning; and get a sensible result.

[[7]]SPARQL @ w3.org[[7]]
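
For what it’s worth, the sort of query I have in mind would look something like this rough sketch in Python and rdflib. The travel vocabulary and the dataset are entirely made up (which is rather the point); only the query mechanics are real.

    from rdflib import Graph

    g = Graph()
    # g.parse("http://example.org/holidays.ttl", format="turtle")  # hypothetical dataset

    query = """
        PREFIX ex: <http://example.org/travel#>
        SELECT ?destination ?price
        WHERE {
            ?holiday ex:destination            ?destination ;
                     ex:averageJulyTemperature ?temp ;
                     ex:price                  ?price ;
                     ex:touristDensity         ?density .
            FILTER (?temp > 25 && ?price < 2000 && ?density < 0.2)
        }
        ORDER BY ?price
    """

    # With no such dataset to load, this loop prints nothing. That, in a
    # nutshell, is the complaint.
    for row in g.query(query):
        print(row.destination, row.price)

Writing the query is the easy bit; it’s the curated, interlinked data behind it that never seems to materialise.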

The flexibility which we could justify in the service delivery solutions just doesn’t appear to be justifiable in the data-driven solutions. A colleague showed me a Semantic Web solution that consumed a million or so pounds’ worth of taxpayer money to build a semantic-driven database for a small art collection. All this sophisticated technology would allow the user to ask all sorts of sophisticated questions, if they could navigate the (necessarily) complicated user interface, or if they could construct an even more daunting SPARQL query. A more pragmatic approach would have built a conventional web application — one which would easily satisfy 95% of users — for a fraction of the cost.

When you come down to it, the sort of power and flexibility provided by Semantic Web and Linked Data could only be used by a tiny fraction of the user population. For most people, something which gets them most of the way (with a little bit of trial and error) is good enough. Fire and forget. While the snazzy solution with the sophisticated technology might demo well (making it good TED{{8}} fodder), it’s not going to improve the day-to-day travail for most of the population.

[[8]]TED[[8]]

Then we get solutions like GovDirect. As the website puts it:

GovDirect® facilitates reporting to government agencies such as the Australian Tax Office via a single, secure online channel enabling you to reduce the complexity and cost of meeting your reporting obligations to government.

which makes it, essentially, a Semantic Web solution. Except it’s not, as GovDirect is built on XBRL{{9}} with a cobbled-together taxonomy.

[[9]]eXtensible Business Reporting Language[[9]]

Taxonomy-driven solutions such as GovDirect might not offer the power and sophistication of a Semantic Web driven solution, but they do get the job done. These taxonomies are also more likely to be ad hoc — codifying a vendor’s solution, or accreted whilst on the job — than the result of some formal, top down ontology{{10}} development methodology (such as those buried in the Semantic Web and Linked Data).

[[10]]Ontology defined in Wikipedia[[10]]
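
As a toy illustration (the field names below are invented, not drawn from XBRL or GovDirect), an ad hoc taxonomy is often little more than a shared dictionary of agreed labels that accretes entries as new reporting obligations turn up, with no reasoner or formal ontology in sight:

    # A toy, ad hoc reporting taxonomy: just agreed labels and their attributes.
    payroll_taxonomy = {
        "gross_wages":      {"type": "decimal", "period": "quarterly"},
        "payg_withholding": {"type": "decimal", "period": "quarterly"},
    }

    # A new obligation appears, so we simply bolt another entry on.
    payroll_taxonomy["super_guarantee"] = {"type": "decimal", "period": "quarterly"}

    def unknown_fields(report: dict, taxonomy: dict) -> list:
        """Return the report fields the taxonomy doesn't know about."""
        return [name for name in report if name not in taxonomy]

    print(unknown_fields({"gross_wages": 125000.0, "fringe_benefits": 900.0},
                         payroll_taxonomy))
    # -> ['fringe_benefits']

Crude, certainly, but it’s enough to validate a report against the labels everyone has agreed on, which is most of what these solutions actually need to do.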

Take Salesforce.com{{11}} as an example. If we were to develop a taxonomy to exchange CRM data, then the most likely source would be other vendors reverse engineering{{12}} whatever Salesforce.com is doing. The driver, after all, is to enable clients to get their data out of Salesforce.com. Or the source might be whatever a government working group publishes, given a government’s dominant role in its geography. By extension we can also see the end of the formal standardisation efforts of old, as they devolve into the sort of information frameworks represented by XBRL, which accrete attributes as needed.

[[11]]SalesForce.com[[11]]
[[12]]Reverse engineering defined in Wikipedia[[12]]

The general trend we’re seeing is a move away from top-down, tightly defined and structured definitions of data interchange formats, as they’re replaced by bottom-up, looser definitions.

Innovation [2010-07-05]

Another week and another collection of interesting ideas from around the internet.

As always, thoughts and/or comments are greatly appreciated.