Category Archives: Mailing List

The IT department we have today is not the IT department we’ll need tomorrow

The IT departments many of us work in today (either as an employee or consultant) are often the result of thirty or more years of diligent labour. These departments are designed, optimised even, to create IT estates populated with large, expensive applications. Unfortunately these departments are also looking a lot like dinosaurs: large, slow and altogether unsuited to the new normal. The challenge is to reconfigure our departments, transforming them from asset management functions into business (or business-technology) optimisation engines. This transformation should be of keen interest to all of us, as it’s going to drive a dramatic change in staffing profiles which will, in turn, affect our own jobs in the not so distant future.

Delivering large IT solutions is a tricky business. They’re big. They’re expensive. And the projects to create them go off the rails more often than we’d like to admit. IT departments have been built to minimise the risks associated with delivering and operating these applications. This means governance, and usually quite a lot of it. Departments which started off as small scale engineering functions soon picked up an administrative layer responsible for the mechanics of governance.

More recently we’ve been confronted with the challenge of managing the dependencies and interactions between IT applications. Initiatives like straight-through processing require us to take a holistic, rather than a pieces-parts, approach, and we’re all dealing with the problem of having one of each application or middleware product, as well as a few we brewed in the back room ourselves. Planning the operation and evolution of the IT estate became more important, and we picked up an enterprise architecture capability to manage it.

It’s common to visualise these various departmental functions and roles as a triangle (or a pyramid, if you prefer). At the bottom we have engineering: the developers and other technical personnel who do the actual work to build and maintain our applications. The next layer up is governance: the project and operational administrators who schedule the work and check that it’s done to spec. Second from the top are the planners, the architects responsible for shaping the work to be done as well as acting as design authority. Capping off the triangle (or pyramid) is the IT leadership team, who decide what should be done.

The departmental skills triangle

While specific techniques and technologies might come and go, the overall composition of the triangle has remained the same. From the sixties and seventies through to even quite recently, we’ve staffed our IT departments with many technical doers, somewhat fewer administrators, a smaller planning team, and a small IT leadership group. The career path for most of us has been a progression from the bottom layers – when we were fresh out of school – to the highest point in the triangle that we can manage.

The emergence of off-shore and outsourcing put a spanner in the works. We all understand the rationale: migrate the more junior positions – the positions with the least direct (if any) contact with the business proper – to a cheaper country. Many companies under intense cost pressure broke the triangle in two, keeping the upper planning and decision roles, while pushing the majority of the manage roles and all of the do roles out of the country, or even out of the company.

Our first attempt at out-sourcing

Ignoring whether or not this drive to externalise the lower roles provided the expected savings, what it did do is break the career ladder for IT staff. Where does your next generation of senior IT personnel come from if you’ve pushed the lower ranks out of the business? Many companies found themselves with an awkward skills shortage a few years into an outsourcing / off-shore arrangement, as they were no longer able to train or promote senior personnel to replace those who were leaving through natural attrition.

The solution to this was to change how we break up the skills triangle; rather than a simple horizontal cut, we took a slice down the side. Retaining a portion of all skills in-house allows companies to provide a career path and on-the-job training for their staff.

A second, improved, go at out-sourcing

Many companies have tweaked this model, adding a bulge in the middle to provide a large enough resource pool to manage both internal projects and those run by out-sourced and off-shore resources.

Factoring in the effort required to manage out-sourced projects

This model is now common in a lot of large companies, and it has served us well. However, the world has a funny habit of changing just when you’ve got everything working smoothly.

The recent global financial crisis has fundamentally changed the business landscape. We are experiencing not merely another turn of the business cycle, but a restructuring of the economic order. Many are even talking about the emergence of a new normal. The impact this will have on how we run our businesses (and our IT departments) is still being discussed, but we can see the outline of this impact already.

Companies are becoming more focused, while leaning more heavily on partners and services companies (BPO, out-sourcers, consultants, and so on) to cover those areas of the business they don’t want to focus on. We can see this from the global companies who have effectively moved to a franchise model, through to the small end of town where startups are using on-line services such as Amazon S3 rather than building internal capabilities. While this trend might have initially started as a cost saving, most of the benefit is in management time saved, which can then be used to focus on more important issues. We’re all finding that the limiting factor in our business is management time, so being able to hand off the management of less important tasks can help provide that edge you need.

We’re also seeing faster business change: what used to take years now takes months, or even weeks. The constant value-chain optimisation we’ve been working on since the 70s has finally culminated in product and regulatory life-cycles that change faster than we can keep up with. Nowhere is this more evident than in the regulated industries (finance, utilities …), where updates to government regulation have changed from a generational to a quarterly occurrence as governments attempt to use regulatory change to steer the economic boat.

Money is also becoming (or has become) more expensive, causing companies and deals to operate with less leverage. This means that there is less capital available for major projects, pushing companies to favour renting over buying, as well as creating a preference for smaller, incremental change over the major business transformation of the past.

And finally, companies are starting to take a truly global outlook and operate as one cohesive business across the globe, rather than as a family of cloned businesses which operate more-or-less independently in each region.

We can draw a few general conclusions about the potential impact of these trends on IT departments.

  • The increased reliance on partners, the broader partner ecosystem this implies, and an increasingly global approach to business will create more complex operational environments, increasing the importance of planning the IT estate and steering a company’s IT in the right direction.
  • The need to reduce leverage, and free up working capital, is pushing companies toward BPO and SaaS solutions, rather than traditional on-premises solutions, where the solution provider is paid per-seat, or might even only be paid a success fee.
  • The need for rapid project turn-around is pushing us toward running large portfolios of small projects, rather than a small number of large projects.
  • A lot of the admin work we used to do is now baked into web delivered solutions (BaseCamp et al).

This will prompt us to break up the skills triangle in a different way.

A skills/roles triangle for the new normal

While we’ll still take a slice down the side of the triangle, the bulge will move to the ends of the slice, giving it a skinny waist. The more complex operational environment means that we need to beef up planning (though we don’t want to get all dogmatic about our approach, as existing asset-centric IT planning methodologies won’t work in the new normal). A shift to large numbers of small projects (where the projects are potentially more technically complex) means that we’ll beef up our internal delivery capability, providing team leads with more autonomy. The move to smaller projects also means that we can reduce our administration and governance overhead.

We’ll replace some skills with automated (SaaS) solutions. Tools like BaseCamp will enable us to devolve responsibility for reporting and management to the team at the coalface. It will also reduce the need to develop and maintain infrastructure. Cloud technology is a good example of this, as it takes a lot of the tacit knowledge required to manage a fleet of servers and bakes it into software, placing it in the hands of the developers. Rumour has it that a cloud admin can support 10,000 servers to a more traditional admin’s 500.
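To make the “baked into software” point concrete, here is a minimal sketch of what standing up a batch of servers as code can look like, using Python and Amazon’s boto3 library (a present-day example; the machine image ID, instance type and counts are placeholder assumptions, not a recommendation).

import boto3

# Illustrative sketch only: provisioning servers through an API call rather
# than by hand. The AMI ID, instance type and counts are placeholder values.
ec2 = boto3.resource("ec2", region_name="us-east-1")

instances = ec2.create_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder machine image
    InstanceType="t3.micro",          # small instance type for the example
    MinCount=10,
    MaxCount=10,
)

# The same few lines scale to hundreds of servers; the racking, imaging and
# cabling "voodoo" is handled by the provider's software.
for instance in instances:
    print(instance.id, instance.state["Name"])

The point is that knowledge an operations team once applied by hand is now expressed, and repeated, in a handful of lines a developer can run.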

And finally, our suppliers act as a layer through the middle, a flex resource for us to call on. They can also provide us with a broader, cross-industry view of how best to leverage technology.

This thinning out of the middle ranks is part of a trend we’re seeing elsewhere. Web2.0/E2.0/et al are causing organisations to remove knowledge workers – the traditional white collar middle layers of the organisation – leaving companies with a strategy/leadership group and task workers.

Update: Andy Mulholland has an interesting build on this post over at the Capgemini CTO blog. I particularly like the Holm service launched by Ford and Microsoft, a service that it’s hard to imagine a traditional IT department fielding.

With cloud computing, the world is not flat

Does location matter? Or, put another way, is the world no longer flat? Many cloud and SaaS providers work under the assumption that we store data wherever is most efficient from an application performance point of view, ignoring political considerations. This runs counter to the many companies and governments who care greatly where their data is stored. Have we entered a time where location does matter, not for technical reasons, but for political reasons? Is globalisation (as a political thing) finally starting to impact IT architecture and strategy?

Just who is taking your order?

Thomas Friedman’s book, The World is Flat, contained a number of stories which were real eye openers. The one I remember the most was the McDonald’s drive through. The idea was simple: once you’ve removed direct physical contact from the ordering process, then it’s more efficient to accept orders from a contact centre than from within the restaurant itself. We could even locate that contact centre in a cheaper geography such as another state, or even another country.

Telecommunications made the world flat, as cheap telecommunications allowed us to locate work wherever it was cheapest. The opportunity for labour arbitrage this created drove offshoring through the late nineties and into the new millennium. Everything from call centres to tax returns and medical image diagnosis started to migrate to cheaper geographies. Competition to be the cheapest and most efficient service provider, rather than location, would determine who did the work. The entire world would compete on a level playing field.

In the background, whilst this was happening, enterprise applications went from common to ubiquitous. Adoption was driven by the productivity benefits the applications brought, which started off as a source of differentiation, but has now become one of the many requirements of being in business. SaaS and cloud are the most recent step in this evolution, leveraging the global market to create solutions operating at such a massive scale that they can provide price points and service levels which are hard, if not impossible, for most companies to achieve internally.

The growth of the U.S. enterprise application market (via INPUT)

Despite the world being laser levelled within an inch of its life, many companies are finding it difficult to move their operations to the cost-effective nirvana that is cloud and SaaS services. Location matters, it seems. Not for technical reasons, but for political ones.

Where we store our assets is important. Organisations want to put their assets somewhere safe, because without assets the organisations don’t amount to much. Companies want to keep their information – their confidential trade secrets – hidden from prying eyes. Governments need to ensure they have the trust of their citizens by respecting their privacy. (Not to mention the skullduggery that is international relations.) While communications technology has made it incredibly easy to move this information around and keep it secure, it has yet to solve the political problem of ensuring that we can trust the people responsible for safeguarding our assets. And all these applications we have created – whether traditional on-premises, hosted, SaaS or cloud – are really just asset management tools.

We’ve reached a point where one of the larger hidden assumptions of enterprise applications has been exposed. Each application was designed to live and operate within a single organisation. This organisation might be a company, or it might be a country, or it might be some combination of the two. The application you select to manage your data determines the political boundary it lives within. If you use any U.S. SaaS or cloud solution provider to manage your data, then your data falls under U.S. judicial discovery laws, regardless of where you yourself are located. If your data transits through the U.S., then assume that the U.S. government has a copy. The world might be flat, but where you store your assets and where you send them still matters.

Country-specific regulations governing privacy and data protection vary greatly.
Global data protection heat map (via Forrester)

We can already see some moves by the vendors to address this problem. Microsoft, for example, has developed a dedicated cloud for the U.S. government, known as BPOS Federal, which is designed to meet the government’s stringent security and privacy standards. Amazon has also taken a portion of the cloud it runs and dedicated it to, and located it in, the EU, for similar reasons.

If we consider enterprise applications to be asset management tools rather than productivity tools, then ideas like private clouds start to make a lot of sense. Cloud technology reifies a lot of the knowledge required to configure and manage a virtualised environment in software, eliminating the data centre voodoo and empowering the development teams to manage the solutions themselves. This makes cloud technology simply a better asset management tool, but we need the freedom to locate the data (and therefore the application) where it makes the most sense from an asset management point of view. Sometimes this might imply a large, location agnostic, public cloud. Other times it might require a much smaller private cloud located within a specific political boundary. (And the need to prevent some data even transiting through a few specific geographies – requiring us to move the code to the data, rather than the data to the code – might be the killer application that mobile agents have been waiting for.)

What we really need are meta-clouds: clouds created by aggregating a number of different clouds, just as the Internet is a network of separate networks. While the clouds would all be technically similar, each would be located in a different political geography. This might be inside vs. outside the organisation, or in different states, or even different countries. The data would be stored and maintained where it made the most sense from an asset management point of view, with few technical considerations, the meta-cloud providing a consistent approach to locating and moving our assets within and across individual clouds as we see fit.
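To sketch how a meta-cloud might decide where an asset lives, the following Python fragment applies a residency policy first and only then optimises on cost. Every classification, jurisdiction, cloud name and price in it is hypothetical; it illustrates the shape of the decision, not a real API.

# Hypothetical sketch: place data by political boundary first, cost second.
RESIDENCY_POLICY = {
    "customer_pii_eu": ["EU"],              # must stay inside the EU
    "gov_records_au": ["AU"],               # must stay inside Australia
    "public_marketing": ["EU", "US", "AU"], # free to go anywhere
}

CLOUDS = [
    {"name": "public-cloud-us", "jurisdiction": "US", "cost_per_gb": 0.8},
    {"name": "public-cloud-eu", "jurisdiction": "EU", "cost_per_gb": 1.0},
    {"name": "private-cloud-au", "jurisdiction": "AU", "cost_per_gb": 3.5},
]

def place(classification: str) -> str:
    """Return the cheapest cloud whose jurisdiction satisfies the policy."""
    allowed = RESIDENCY_POLICY[classification]
    candidates = [c for c in CLOUDS if c["jurisdiction"] in allowed]
    if not candidates:
        raise ValueError(f"no compliant cloud for {classification}")
    return min(candidates, key=lambda c: c["cost_per_gb"])["name"]

print(place("customer_pii_eu"))  # -> public-cloud-eu
print(place("gov_records_au"))   # -> private-cloud-au

The detail doesn’t matter; what matters is that the political boundary is the first-class constraint, and efficiency is decided within it.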

Innovation [2010-03-01]

Another week and another collection of interesting ideas from around the internet.

As always, thoughts and/or comments are greatly appreciated.

Innovation [2010-02-01]

Another week and another collection of interesting ideas from around the internet.

As always, thoughts and/or comments are greatly appreciated.

Is “agile enterprise IT” an oxymoron?

Have we managed to design agility out of enterprise IT? Are the two now incompatible? Our decision to measure IT purely in terms of cost (ROI) or stability (SLAs) means that we have put aside other desirable characteristics like responsiveness, making our IT estates more like the lumbering airships of the 1920s. While efficient and reliable (once we got the hydrogen out of them), they are neither exciting nor responsive to the business. The business ends up going elsewhere for their thrills. What to do?

LZ-127 Graf Zeppelin

An interesting post on jugaad over at the Capgemini CTO blog got me thinking. The tension between the managed chaos that jugaad seems to represent and the stability we strive for in IT seems to nicely capture the current tensions between business and IT. Business finds that opportunities are blinking in and out of existence faster than ever before, dramatically reducing windows of opportunity and leaving IT departments unable to respond in time, prompting the business to look outside the organisation for solutions.

The first rule of CIOs is "you only have a seat at the strategy table if you’re keeping the lights on". The pressure is on to keep the transactions flowing, and we spend a lot of time and money (usually the vast majority of our budget) ensuring that transactions do indeed flow. We often complain that our entire focus seems to be on cost and operations, when there is so much more we can bring to the leadership team. We forget that all departments labour under a similar rule, and all these rules are really just localised versions of a single overarching rule: the first rule of business, which is to be in business (i.e. remain solvent). Sales needs to sell, manufacturing needs to manufacture, … By devoting so much of our energy to cost and stability, we seem to have dug ourselves into a bit of a hole.

There’s another rule that I like to quote from time to time: management is not the art of making the perfect decision, but of making a timely decision and then making it work. This seems to be something we’ve forgotten in the West, and particularly in IT. Perfection is an unattainable ideal in the real world, and agility requires a little chaos/instability. What’s interesting about jugaad is the concept’s ability to embrace the chaos required to succeed when resource constraints prevent you from using the perfect (or even simply the best) solution.

Vickers F.B.5 Gunbus

Consider a fighter plane. The other day I was watching a documentary on the history of aircraft which showed how the evolution of fighters is a progression from stability to instability. The first fighters (and we’re talking the start of WWI here – all fabric and glue) were designed to float above the battlefield where the pilots could shoot down at soldiers, or even lob bombs at them. They were designed to be very stable, so stable that the pilot could ignore the controls for a while and the plane would fly itself. Or you could shoot out most of the control surfaces and still land safely. (Sounds a bit like a modern, bullet-proof, IT application, eh?)

The Red Baron: Manfred von Richthofen

The problem with these planes is that they are very stable. It’s hard to make them turn and dance about, and this makes them easy to shoot down. They needed to be more agile, harder to shoot down, and the solution was to make them less stable. The result, by the end of WWI, was the fairly unstable tri-planes we associate with the Red Baron. Yes, this made them harder to fly, and even harder to land, but it also made them harder to hit.

Whizz forward to the modern day, and we find that all modern fighters are unstable by design. They’re so unstable that they’re unflyable without modern fly-by-wire systems. Forget about landing: you couldn’t even get them off the ground without their fancy control systems. The governance of the fly-by-wire systems lets the pilot control the uncontrollable.

The problem with modern IT is that it is too stable. Not the parts, the individual applications, but the IT estate as a whole. We’ve designed agility out of it, focusing on creating a stable and efficient platform for lobbing bombs onto the enemy below. This is great if the landscape below us doesn’t change, and the enemy promises not to move or shoot back, but not so good in today’s rapidly changing business environment. We need to be able to rapidly turn and dance about, both to dodge bullets and to pounce on opportunities. We need some instability, as instability means that we’re poised for change.

Jugaad points out that we need to allow in a bit of chaos if we want to bring the agility back in. The chaos jugaad provides is the instability we need. This will require us to update our governance processes, evolving them beyond simply being a tool to stop the bad happening, transforming governance into a tool for harvesting the jugaad where it occurs. After all, the role of enterprise IT is to capture good ideas and automate them, allowing them to be leveraged across the entire enterprise.

Managing chaos has become something of a science in the aircraft world. Tools like Energy-Maneuverability theory are used during aircraft design to make informed tradeoffs between weight, weapons load, amount of wing (i.e. ability to turn), and so on. This goes well beyond most efforts to map and score business processes, which are inherently a static, pieces-and-parts, cost-driven approach. Our focus should be on using different technologies and delivery approaches to modify how our IT estate responds to business change; optimising our IT estate’s dynamic, change-driven characteristics as well as its cost-driven static characteristics.
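For reference, the core quantity in Energy-Maneuverability theory is specific excess power – the rate at which an aircraft can gain or shed energy at a given flight state – defined in terms of thrust T, drag D, velocity V and weight W:

    P_s = \frac{(T - D)\, V}{W}

A sketch of the analogy (my reading, not a formal method): the IT equivalent would ask how quickly an estate can convert its spare capacity – budget, people, architectural headroom – into a change of direction, which is exactly the dynamic, change-driven characteristic described above.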

This might be the root of some of the problems we’re seeing between business and IT. IT’s tendency to measure value in terms of cost and/or stability leads us to create IT estates optimised for a static environment, which are at odds with the dynamic nature of the modern business environment. We should be focusing on the overall dynamic business performance of the IT estate, its energy-maneuverability profile.

Innovation [2010-01-18]

Another week and another collection of interesting ideas from around the internet.

As always, thoughts and/or comments are greatly appreciated.

Reducing costs is not the only benefit of cloud computing & SaaS

The wisdom of the crowd seems to have decided that both cloud computing and its sibling SaaS are cost plays. You engage a cloud or SaaS vendor to reduce costs, as their software utility has the scale to deliver the same functionality at a lower price point than you could do yourself.

I think this misses some of the potential benefits that these new delivery models can provide, from reducing your management overhead, allowing you to focus on more important or pressing problems, through to acting as a large flex resource or providing you with a testbed for innovation. In an environment where we’re all racing to keep up, the time and space we can create through intelligently leveraging cloud and SaaS solutions could provide us with the competitive advantage we need.

Samuel Insull

Cloud and SaaS are going to take over the world, or so I hear. And it increasingly looks that way, from Nicholas Carr’s entertaining stories about Samuel Insull through to Salesforce.com, Google and Amazon’s attempts to box up SaaS and cloud for easy consumption. These companies’ massive economies of scale enable them to deliver commoditized functionality at a dramatically lower price point than most companies could achieve with even the best on-premises applications.

This simple fact causes many analysts to point out the folly of creating a private cloud. While a private cloud enables a company to avoid the security and ownership issues associated with a public service, it will never be able to realise the same economies of scale as its public brethren. It’s these economies of scale that enable companies like Google to devote significant time and effort to finding new and ever more creative techniques to extract every last drop of efficiency from their data centres, techniques which give them a competitive advantage.

I’ve always had problems with this point of view, as it ignores one important fact: a modern IT estate must deliver more than efficiency. Constant and dramatic business change means that our IT estate must be able to be rapidly reconfigured to support an ever evolving business environment. This might be as simple as scaling up and down in line with changing transaction volumes, but it might also involve rewriting business rules and processes as the organisation enters and leaves countries with differing regulation regimes, as well as adapting to mergers, acquisitions and divestments.

Once we look beyond cost, a few interesting potential uses for cloud and SaaS emerge.

First, we can use cloud as a tool to increase the flexibility of our IT estate. Using a standard cloud platform, such as an Amazon Machine Image, provides us with more deployment options than more traditional approaches. Development and testing can be streamlined, compressing development and testing time, while deployed applications can be migrated to the cloud instance which makes the most sense. We might choose to use public cloud for development and testing, while deploying to a private cloud under our own control to address privacy or political concerns. We might develop, test and deploy all into the public cloud. Or we might even use a hybrid strategy, retaining some business functionality in a private cloud, while using one or more public clouds as a flex resource to cope with peak loads.
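As a purely illustrative sketch of the hybrid "flex resource" option above, the fragment below keeps steady-state load on a private cloud and bursts the overflow to a public cloud during peaks; the capacity figure and workload numbers are invented for the example.

# Hypothetical cloud-bursting sketch: private cloud carries the baseline,
# public cloud absorbs the peaks. The capacity figure is illustrative only.
PRIVATE_CAPACITY = 400  # concurrent workloads the private cloud can hold

def placement_plan(demand: int) -> dict:
    """Split current demand between the private cloud and a public-cloud burst."""
    private = min(demand, PRIVATE_CAPACITY)
    public = max(demand - PRIVATE_CAPACITY, 0)
    return {"private_cloud": private, "public_cloud_burst": public}

print(placement_plan(250))  # {'private_cloud': 250, 'public_cloud_burst': 0}
print(placement_plan(900))  # {'private_cloud': 400, 'public_cloud_burst': 500}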

Second, we can use cloud and SaaS as tools to increase the agility of our IT estate. By externalising the management of our infrastructure (via cloud), or even the management of entire applications (via SaaS), we can create time and space to worry about more important problems. This enables us to focus on what needs to happen, rather than how to make it happen, and rely on the greater scale of our SaaS or cloud provider to respond more rapidly than we could if we were maintaining a traditional on-premises solution.

And finally, we can use cloud as the basis of an incubator strategy where an organisation may test a new idea using externalised resources, proving the business case before (potentially) moving to a more traditional internal deployment model.

One problem I’ve been thinking about recently is how to make our incredibly stable and reliable IT estates respond better to business change. Cloud and SaaS, with the ability to shape the flexibility and agility of our IT estate to meet what the business needs, might just be the tools we need to do this.

Innovation and the art of random

A little while ago I was invited to speak at an event, InnoFuture, which, for a mixture of reasons, didn’t end up happening. The theme for the event was Ahead of the trends — the random effect. My take on it was that innovation is not random, it’s just happening faster than you can process, and that ideas are commoditized, making synthesis – the creation of new solutions to old problems – what drives innovation. I was happy enough with the outline I put together for my talk that I ended up reusing the content, breaking it into three blog posts rather than letting it go to waste.

Innovation seems to be the topic of the day. Everyone seems to want some, thinking that it’s the secret sauce which will help them (or their company) bubble to the top of the heap. The self-help and consulting communities have responded in force, trying to bottle lightning or package the silver bullet (whichever metaphor you prefer).

It was in this environment that I was quite taken by the topic of a recent InnoFuture event when I was asked to speak.

Ahead of trends — the random effect.
When a concept becomes a trend, you are not the leader. How to tap into valuable ideas for products, services and communication before they are seen as trends, when they are just … random? Albert Einstein said that imagination is more important than knowledge. Let’s open the doors and let the imagination in, for it seems that in the current crisis the right brain is winning and we may be rationalized to death before things get better.

I’ve never seen the random effect, though I have been delightfully surprised when something unexpected pops up. Having been involved in a bunch of companies and projects that, I’m told, were innovative, I’ve always thought innovation was not so much random as the result of obliquity. What makes it seem random is the simple fact that you are not aware of the intervening steps from interesting problem through to novel solution.

I figured I’d mash together a few ideas that capture this thought, and provide some (hopefully) sage advice based on what I do to deal with random. I ended up selecting:

  • John Boyd on why rapidly changing environments are confusing,
  • Peter Drucker’s insight that insight (the tacit application of knowledge) is not a transferable good,
  • the struggle for fluency that we all go through as we learn to read,
  • John Boyd (again, but then he had a lot of good ideas) on the need for synthesis,
  • KK Pang (an old lecturer of mine) on the need to view problems from multiple contexts,
  • the need to follow a consistent theme of interest as the only tractable way of finding interesting problems to solve, and
  • my own experiences in leveraging a network of like and dissimilar minds as a way of effectively out-sourcing analysis.

The result was called Of snow mobiles and childhood readers: why random isn’t, and how to make it work for you. I ended up having far too much content to fill my twenty minute slot, so it’s probably for the better that the event didn’t go ahead, as it would have taken a lot of time to cut it down.

Given that I had a fairly well developed outline, I decided to make it into a series of blog posts (plus my slides these days don’t have a lot of text on them, so if I just dropped the slides online they wouldn’t make any sense). The blog posts ended up breaking down this way:

  1. Innovation should not be the race for the new-new thing.
    Points out that innovation only seems random, unexpected, as you don’t see the intervening steps between a problem and a new solution, and that innovation is the result of many small commoditized steps. This ties into one of my earlier posts on dealing with the speed of change.
  2. The role of snowmobiles in innovation.
    Argues that ideas are a common commodity, and that the real challenge with innovation is synthesis rather than ideation.
  3. Childhood readers and the art of random.
    Argues that the key to innovation is to find interesting problems to solve, and suggests that the best approach is to be fluent in a range of domains (sectors, geographies, activities, …) to provide a broader perspective, focus on a line of inquiry to provide some structure, and build a network of people with complementary interests, providing you with the time, space and opportunity to focus on synthesis.

I expect that these are more productive if taken as a whole, rather than individual posts.

If you look at the path I’ve charted over my career then this is the approach I’ve taken, and my topic of choice is how people communicate and decide as a group, leading me to John Boyd, Cicero, human-computer interaction, agent technology, biology (my thesis was mathematically modelling nerves in a cat), and so on.

I still have the slides, so feel free to contact me if you’re interested in my presenting all or part of this topic.

Innovation [2009-12-14]

Another week and another collection of interesting ideas from around the internet.

As always, thoughts and/or comments are greatly appreciated.

Childhood readers and the art of random

Note: This post is part of larger series on innovation, going under the collective name of Innovation and Art of Random.

Innovation can seem random. We’re dealing with so much change in our daily lives that we miss the long and tortuous journey an innovation takes from its first conception through to the delivered solution, causing the innovation to seemingly appear from nowhere. We’re distracted as we’re trying to cope with the huge volume of work our changing environment creates, adjusting to the new normal, while trying to find time to sift through the idea fire hose for that one good idea. However, ideas are common, commoditized even, and our real challenge is to make connections.

As Peter Drucker pointed out, insight, the tacit application of knowledge, is not a transferable good. The value we derive from innovation comes from synthesis, the tacit application of knowledge to create a new solution. The challenge is to find time to pull apart the tools available to us, recombining them to synthesise new (and hopefully innovative) solutions to the problems we’re confronting today.

While ideas may be cheap, the time and space needed to create insight are not. We need to understand our problem from multiple contexts, teasing out the important elements, bringing together ideas to address each element in the synthesis of an original solution. This process takes time, often more time than we can spare, and so we need to invest our time wisely. Which steps in this process are the most valuable (or the least transferable), the steps we need to own? Which can we outsource, passing responsibility to partners, or even our social network? And is it possible to create time, using technology to take some of the load and create the breathing room we need?

Dr. Khee Pang

One of the best pieces of advice I picked up at university was from Dr. K. K. Pang, who unfortunately passed away in March 2009. Dr Pang taught circuit theory, which can be quite a frustrating subject. It’s common to encounter a problem in circuit theory which you just can’t find a way into, making it seemingly impossible to solve. Dr. Pang’s brilliant, yet simple, advice was: “If you don’t like the problem, then change it to one you do like.” Just start messing with the problem, transforming bits of the circuit at random until you find a problem that you can solve.

Fast forward to my current work, far removed from circuit theory, and I still find myself using this piece of advice at least once a week. It’s not uncommon to come across a problem, one with little direct connection to technology, that needs to be approached from a very different angle. When stuck, take a different angle, make it a different problem, and you might find this new problem more to your liking.

You often bump into the same problem in different contexts as you work across industries and geographies. Different contexts can necessitate a different point of view, making the problem look slightly different. This highlights aspects of the problem that you might not have been aware of before, exposing previously hidden assumptions or connections to other problems. However, while this cross-industry and cross-geography insight is a valuable tool, the time required to go spelunking for insight is prohibitive. We find ourselves spending too much time decoding the new context, and too little teasing out the important elements.

Learning to read, something I expect we all did in our childhood, is a struggle for fluency. We work from the identification of letters and words, through struggling to decode the text, to a level of fluency that enables us to focus on the meaning behind the text. Being fluent means being good enough at identification and decoding that we have the time and space for comprehension.

The ability to change the problem in front of you is really a question of being fluent in a range of environments; understanding a number of doctrines. These might be different industries (finance, public sector, utilities …), domains (logistics, risk management, military tactics, rhetoric …) or even geographies (APAC, EU, US …), as each has its own approach. We need enough experience in an environment to be able to decode it easily. Generally this means in-the-trenches experience, focused on applying knowledge, allowing us to weed out the commonplace and find the interesting and new. Building fluency takes time, though; we can’t afford to immerse ourselves in every possible environment that might be of interest.

For quite a few years (from back in the day when my email address had a .oz at the end) I’ve been collecting a network of colleagues. Each of us is inquisitive in our own way, each with our own area of interest or theme, covering a huge, overlapping range of doctrines, while always looking for another idea to add to our toolbox. With the world being small, or even flat, this network of like minds has often been the source of a different point of view, one which solves the problem I’m working on. More recently this network has been migrating to Twitter, making the shared conversation more dynamic and immediate. It’s small networks of like minds such as this which can provide us with the ability to effectively outsource the majority of our analysis, spreading the effort amongst our peers and creating the time and space to focus on synthesis.

Which brings us to the crux of the problem: innovation relies on synthesis, and the key to synthesis is finding interesting problems to solve. An idea, no matter how brilliant, will not go far unless it results in a product or service that people want. Innovation exists out at the surface of our organisations, or at the production coal face. Just as with the breath strips example, interesting problems pop up in the most unexpected places. Our challenge is to prepare ourselves so that we can capitalise on the opportunity a problem represents. As a famous golfer once said:

Gary Player

The more I practice, the luckier I get.
Gary Player

The world around us changes so rapidly that innovation can seem random. The snowmobile was obvious to the people who invented it, as they worked via trial-and-error from the original problem they wanted to solve through to the completed solution; it didn’t leap from their brow as a fully formed concept. Develop your interests, become fluent in a wide range of relevant topics and environments, use your network to extend your reach even further, and look for interesting problems to solve. In a world awash with good ideas, where innovation relies on your ability to synthesise new solutions by finding a new angle from which to approach old problems (possibly problems so old that people forgot they had them), the key to success is to find your own focus, use your own interests to drive yourself forward, and leverage your network and the resources around you to take as much of the load as possible. Innovation is rarely the result of a brilliant idea, but a patient process of finding problems to solve and then solving them, and sometimes we’re surprised by how innovative our solutions can be.