Tag Archives: Web 2.0

The IT department we have today is not the IT department we’ll need tomorrow

The IT departments many of us work in today (either as employees or consultants) are often the result of thirty or more years of diligent labour. These departments are designed, optimised even, to create IT estates populated with large, expensive applications. Unfortunately these departments are also looking a lot like dinosaurs: large, slow and altogether unsuited to the new normal. The challenge is to reconfigure our departments, transforming them from asset management functions into business (or business-technology) optimisation engines. This transformation should be of keen interest to all of us, as it’s going to drive a dramatic change in staffing profiles which will, in turn, affect our own jobs in the not so distant future.

Delivering large IT solutions is a tricky business. They’re big. They’re expensive. And the projects to create them go off the rails more often than we’d like to admit. IT departments have been built to minimise the risks associated with delivering and operating these applications. This means governance, and usually quite a lot of it. Departments which started off as small-scale engineering functions soon picked up an administrative layer responsible for the mechanics of governance.

More recently we’ve been confronted with the challenge of managing the dependencies and interactions between IT applications. Initiatives like straight-through processing require us to take a holistic, rather than a pieces-parts, approach, and we’re all dealing with the problem of having one of each application or middleware product, as well as a few we brewed in the back room ourselves. Planning the operation and evolution of the IT estate became more important, and we picked up an enterprise architecture capability to manage that evolution.

It’s common to visualise these various departmental functions and roles as a triangle (or a pyramid, if you prefer). At the bottom we have engineering: the developers and other technical personnel who do the actual work to build and maintain our applications. The next layer up is governance, the project and operational administrators who schedule the work and check that it’s done to spec. Second from the top are the planners, the architects responsible for shaping the work to be done as well as acting as design authority. Capping off the triangle (or pyramid) is the IT leadership team, who decide what should be done.

The departmental skills triangle

While specific techniques and technologies might come and go, the overall composition of the triangle has remained the same. From the sixties and seventies through to quite recently, we’ve staffed our IT departments with many technical doers, fewer administrators, a smaller planning team, and a small IT leadership group. The career path for most of us has been a progression from the bottom layers – when we were fresh out of school – to the highest point in the triangle that we can manage.

The emergence of off-shoring and outsourcing put a spanner in the works. We all understand the rationale: migrate the more junior positions – the positions with the least direct (if any) contact with the business proper – to a cheaper country. Many companies under intense cost pressure broke the triangle in two, keeping the upper planning and decision roles, while pushing the majority of the “manage” roles and all of the “do” roles out of the country, or even out of the company.

Our first attempt at out-sourcing

Setting aside whether this drive to externalise the lower roles delivered the expected savings, what it certainly did do is break the career ladder for IT staff. Where does your next generation of senior IT personnel come from if you’ve pushed the lower ranks out of the business? Many companies found themselves with an awkward skills shortage a few years into an outsourcing / off-shore arrangement, as they were no longer able to train or promote senior personnel to replace those leaving through natural attrition.

The solution to this was to change how we break up the skills triangle; rather than a simple horizontal cut, we took a slice down the side. Retaining a portion of all skills in-house allows companies to provide a career path and on-the-job training for their staff.

A second, improved, go at out-sourcing

Many companies have tweaked this model, adding a bulge in the middle to provide a large enough resource pool to manage both internal projects, as well as those run by out-sourced and off-shore resources.

Factoring in the effort required to manage out-sourced projects

This model is now common in a lot of large companies, and it has served us well. However, the world has a funny habit of changing just when you’ve got everything working smoothly.

The recent global financial crisis has fundamentally changed the business landscape. We are experiencing not merely another turn of the business cycle, but a restructuring of the economic order. Many are even talking about the emergence of a new normal. The impact this will have on how we run our businesses (and our IT departments) is still being discussed, but we can see the outline of this impact already.

Companies are becoming more focused, while leaning more heavily on partners and services companies (BPO, out-sourcers, consultants, and so on) to cover those areas of the business they don’t want to focus on. We can see this from the global companies who have effectively moved to a franchise model, through to the small end of town where startups are using on-line services such as Amazon S3, rather than building internal capabilities. While this trend might have initially started as a cost saving, most of the benefit is in management time saved, which can then be used to focus on more important issues. We’re all finding that the limiting factor in our business is management time, so being able to hand off the management of less important tasks can help provide the edge you need.

We’re also seeing faster business change: what used to take years now takes months, or even weeks. The constant value-chain optimisation we’ve been working on since the 70s has finally culminated in product and regulatory life-cycles that change faster than we can keep up with. Nowhere is this more evident than in the regulated industries (finance, utilities …), where updates to government regulation have changed from a generational to a quarterly occurrence as governments attempt to use regulatory change to steer the economic boat.

Money is also becoming (or has become) more expensive, causing companies and deals to operate with less leverage. This means that there is less capital available for major projects, pushing companies to favour renting over buying, as well as creating a preference for smaller, incremental change over the major business transformation of the past.

And finally, companies are starting to take a truly global outlook and operate as one cohesive business across the globe, rather than as a family of cloned business who operate more-or-less independently in each region.

We can draw a few general conclusions on the potential impact on IT departments of these trends.

  • The increased reliance on partners, the broader partner ecosystem this implies, and an increasingly global approach to business will create more complex operational environments, increasing the importance of planning the IT estate and steering a company’s IT in the right direction.
  • The need to reduce leverage, and free up working capital, is pushing companies toward BPO and SaaS solutions, rather than the traditional on-premises solutions, where the solution provider is paid per-seat, or might even be paid only a success fee.
  • The need for rapid project turn-around is pushing us toward running large portfolios of small projects, rather than a small number of large projects.
  • A lot of the admin work we used to do is now baked into web delivered solutions (BaseCamp et al).

This will trigger us to break up the skills triangle in a different way.

A skills/roles triangle for the new normal

While we’ll still take a slice down the side of the triangle, the bulge will move to the ends of the slice, giving it a skinny waist. The more complex operational environment means that we need to beef up planning (though we don’t want to get all dogmatic about our approach, as existing asset-centric IT planning methodologies won’t work in the new normal). A shift to large numbers of small projects (where the projects are potentially more technically complex) means that we’ll beef up our internal delivery capability, providing team leads with more autonomy. The move to smaller projects also means that we can reduce our administration and governance overhead.

We’ll replace some skills with automated (SaaS) solutions. Tools like BaseCamp will enable us to devolve responsibility for reporting and management to the team at the coalface. They will also reduce the need to develop and maintain infrastructure. Cloud technology is a good example of this, as it takes a lot of the tacit knowledge required to manage a fleet of servers and bakes it into software, placing it in the hands of the developers. Rumour has it that a cloud admin can support 10,000 servers to a more traditional admin’s 500.

And finally, our suppliers act as a layer through the middle, a flex resource for us to call on. They can also provide us with a broader, cross-industry view, of how to best leverage technology.

This thinning out of the middle ranks is part of a trend we’re seeing elsewhere. Web 2.0, Enterprise 2.0 et al are causing organisations to remove knowledge workers – the traditional white-collar middle layers of the organisation – leaving companies with a strategy/leadership group and task workers.

Update: Andy Mulholland has an interesting build on this post over at the Capgemini CTO blog. I particularly like the Holm service launched by Ford and Microsoft, a service that it’s hard to imagine a traditional IT department fielding.

What is the role of government in a Web 2.0 world?

What will be the role of government in a post Web 2.0 world? I doubt it’s what a lot of us predict, given society’s poor track record in predicting its own future.

One thing I am reasonably sure of though, is that this future won’t represent the open source nirvana that some pundits hope for. When I’ve ruminated in the past about the changing role of government, I’ve pointed out that attempting to create the future by dictate is definitely not the right approach. As I said then:

You don’t create peace by starting a war, and nor do you create open and collaborative government through top down directives. We can do better.

There was an excellent article by Nat Torkington, Rethinking open data, posted over at O’Reilly Radar which shows this in action. As it points out, the U.S. Open Government Directive has prompted datasets of questionable value to be added to data.gov, while many of the applications built are those that were easy to build, rather than those that provide any tangible benefit. Many of the large infrastructure projects commissioned in the name of open data suffered the same fate as large, unjustified infrastructure projects in private enterprise (i.e. they’re hard for the layman to understand, they have scant impact on solving the problems society seems plagued with, and they’re overly complex to deliver and use due to technological and political purism).

A more productive approach is to focus on solving problems that we, the populace, actually care about. In Australia this might involve responding to the bush fire season. California has a similar problem. The recent disaster in Haiti was another significant call to action. It was great to see the success of Web 2.0 in Haiti (New Scientist had an excellent article).

As Nat Torkington says:

the best way to convince them to open data is to show an open data project that’s useful to real people.

Which makes me think: government is a tool for us to work together, not an enemy to subdue. Why don’t we move government on from service provider of last resort, which is the role it seems to play today?

Haiti showed us that some degree of centralisation is required to make these efforts work efficiently. A logical role for government going forward would be something like a market maker: connecting people who need services with the organisations providing them, and working to ensure that the market remains liquid. Government becomes the trusted party that ensures that there are enough service providers to meet demand, possibly even bundling service to provide solutions to life’s more complex problems.

We’ve had the public debate on whether or not government should own assets (bridges, power utilities etc.), and the answer was generally not. Government provision of services is well down a similar road. This frees up dedicated and hard working public servants (case workers, forestry staff, policy wonks …) to focus on the harder problem of determining what services should be provided.

Which brings me back to my original point. Why are we trying to drive government, and society in general, toward a particular imagined future of our choosing (one involving Open Government Directives, and complicated and expensive RDF infrastructure projects). We can use events like the bush fires and Haiti to form a new working relationship. Let’s pick hard but tractable problems and work together to find solutions. As Nat (again) points out, there’s a lot of data in government that public servants are eager to share, if we just give them a reason. And if our efforts deliver tangible benefits, then everyone will want to come along for the ride.

Updated: The reports are in: data.gov has quality issues. I’ve updated the text with the following references.

Updated: More news on data.gov’s limitations highlighting the problems with a “push” model to open government.

Innovation [2010-02-01]

Another week and another collection of interesting ideas from around the internet.

As always, thoughts and/or comments are greatly appreciated.

The changing role of Government

Is Government 2.0 (whichever definition you choose) the ultimate aim of government? Government for the people and by the people. Or are we missing the point? We’re not a collection of individuals but a society where the whole is greater than the parts. Should government’s ultimate aim be to act as the trusted arbiter, bringing society together so that we can govern together? Rather than our being disengaged and governed over, as seems to be the current fashion. In an age when everything is fragmented and we’re all responsible for our own destiny, government is in a unique position to be the body that binds together the life events that bring our society together.

Government 2.0 started with lofty goals: make government more collaborative. As with all definitions though, it seems that the custodians of the definition are swapping goals for means. Pundits are pushing for technology-driven definitions, as Government 2.0 would not be possible without technology (but then, neither would my morning cup of coffee).

Unfortunately Government 2.0 seems to be in danger of becoming “government as a platform”: GaaP or even GaaS (as it were). Entrepreneurs are calling on the government to open up government data, allowing start-ups to remix data to create new services. FixMyStreet might be interesting, and might even tick many of the right technology boxes, but it’s only a small fragment of what is possible.

GovHack

This approach has resulted in some interesting and worthwhile experiments like GovHack, but it seems to position much of government as a boat anchor to be yanked up with top-down directives rather than as valued members of society who are trying to do what they think is the right thing. You don’t create peace by starting a war, and nor do you create open and collaborative government through top down directives. We can do better.

The history of government has been a progression from government by and for the big man, through to today’s push for government for and by the people. Kings and queens practised stand-over tactics, going bust every four to seven years from running wars they could not afford, and then leaning on the population to refill their coffers. The various socialist revolutions pushed the big man (or woman) out and replaced them with a bureaucracy intended to provide the population with the services they need: each of us contributing in line with ability, and taking in line with need. The challenge (and possibly the unsolvable problem) was finding a way to do this in an economically sustainable fashion.

The start of the modern era saw government as border security and global conglomerate. The government was responsible for negotiating your relationship with the rest of the world, and service provision was out-sourced (selling power stations and rail lines). Passports went from a convenient way of identifying yourself when overseas, to become the tool of choice for governments to control border movements.

Government 2.0 is just the most recent iteration in this ongoing evolution of government. The initial promise: government for the little man, enabled by Web 2.0.

As with Enterprise 2.0, what we’re getting from the application of Web 2.0 to an organisation is not what we expected. For example, Enterprise 2.0 was seen as a way to empower knowledge workers but instead seems to be resulting in a generation of hollowed-out companies where the C-level and the task workers at the coal face remain, but the knowledge workers have been eliminated. Government 2.0 seems to have devolved into “government as a platform” for similar reasons, driven by a general distrust of government (or, at least, the current government which the other people elected) and a desire to have more influence on how government operates.

Government, The State, has come to be defined as the enemy of the little man. The giant organisation which we are largely powerless against (even though we elected them). Government 2.0 is seen as the can opener which can be used to cut the lid off government. Open up government data for consumption and remixing by entrepreneurs. Provide APIs to make this easy. Let us solve your citizen’s problems.

We’re already seeing problems with trust in on-line commerce due to this sort of fine-grained approach. The rise of online credit card purchases has pulled the credit card fraud rate up with it, resulting in a raft of counter-measures, from fraud detection through to providing consumers with access to their credit reports. Credit reports which, in the U.S., some providers are using as the basis for questionable tactics to scam and extort money from the public.

Has the pendulum swung too far? Or is it The Quiet American all over again?

Gone are the days where we can claim that “The State” is something that doesn’t involve the citizens. Someone to blame when things go wrong. We need to accept that now, more than ever, we always elect the government we deserve.

Technology has created a level of transparency and accountability – exemplified by Obama’s campaign – that is breeding a new generation of public servants. Rather than government for, by or of the people, we’re getting government with the people.

This is driving the next generation of government: government as the arbitrator of life events. Helping citizens collaborate. Making us take responsibility for our own futures. Supporting us when we face challenges.

Business-technology, a term coined by Forrester, is a trend for companies to exploit the synergies between business and technology to create new solutions to old problems. Technology is also enabling a new approach to government. Rather than delivering IT–government alignment to support an old model of government, the current generation of technologies makes available a new model which harks back to Platonic ideals.

We’ve come a long way from the medieval days when government was (generally) something to be ignored:

  • Government for the man (the kings and queens)
  • Government by the man (we’ll tell you what you need) (each according to their need, each …)
  • Government as a conglomerate (everything you need)
  • Government as a corporation (everything you can afford)

The big idea behind Government 2.0 is, at its nub, government together. Erasing the barriers between citizens, between citizens and the government, helping us to take responsibility for our future, and work together to make our world a better place.

Government 2.0 should not be a platform for entrepreneurs to exploit, but a shared framework to help us live together. Transparent development of policy. Provision (though not necessarily ownership) of shared infrastructure. Support when you need it (helping you find the services you need). Involvement in line with the Greek/Roman ideal (though more inclusive, without the exclusion of women or slaves).

Enterprise Mashups Defined

Luis Derechin of JackBe has just published a nice definition of enterprise mashups over at the Enterprise Web 2.0 Blog:

Enterprise Mashups are secure, visually rich web applications that expose actionable information from diverse internal and external information sources.

This seems to cover all the bases and should keep most people happy. I’d like it to include something about how the data is integrated to provide a single consolidated and consistent view of the information, as a traditional (but AJAX-heavy) portal would probably fall under the same definition. On the whole it still works for me though.

The most interesting thing, however, is the approach they used. Rather than gather yet another small group of smart people to write yet another manifesto, they took a more democratic route.

The team used a series of games and contests to engage the broader community, largely relegating themselves to a role of guiding and coordinating the action. The end result was answers to three key questions:

  • What is an Enterprise Mashup?, (the definition from above)
  • How do you create an Enterprise Mashup?, and
  • Why should an organization care about mashups?

The answers don’t have the obvious “designed by committee” smell that these things usually acquire. I particularly like the statement on why to use enterprise mash-ups,

Poor decisions are often made because decision-makers do not have the right information at the right time. Enterprise Mashups deliver new insights and enable better decisions through personalized access to the right, real-time information for the specific problem at hand.

as it nicely captures the shift to a more user-centred approach – something badly needed in enterprise IT.

Often it seems to be the enterprise IT community that is the most resistant to change in an organisation. Sure, we might like to use the shiny new toys, but don’t you dare change how we go about our business. We’ll tell you what you need and the best way of doing it.

It’s nice to see that an old dog can learn new tricks.

Posted via email from PEG

Knowledge Worker of the Future & Google Wave

I’ve written before about the need for an integrated approach to applying Web 2.0 ideas and tools to the enterprise. While navigating the plethora of point solutions and complex interfaces might be fine for the early adopters, most folk just want something that makes their work easier and can’t be bothered with navigating a convoluted technology and solution landscape.

I’ve been playing with Google Wave for a little while now, and initially thought that it fell into the same bucket: it’s an impressive piece of technology, but it’s also too complicated for most people to be bothered with. That was before Daniel Tenner put together a thoughtful post on the pros and cons of Wave, pointing out that Wave is a communication platform rather than a communication channel. It’s a tool for people to work together, rather than a tool to communicate.

Putting one and one together, what if we used Wave as a solution platform? Plug transactional data and workflow processes into Wave, rework the UI to be more task- or problem-centric and less messaging-centric, and it would make a nice platform for building the sort of collaboration- and knowledge-rich solutions we need.

Bruce, a colleague of mine, has taken this a step further and built a little PoC, creating a Wave enabled leave application process. You can find the blog post, Using Google Wave for Workflow Tasks, over at his blog, and he’s put together a nice screen cast of the leave application solution, included below.
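At its core, a leave-application process like the one in Bruce’s PoC reduces to a small state machine that a platform such as Wave could host. A minimal sketch – the state names and transitions below are my own assumptions for illustration, not taken from his solution:

```python
# A leave request modelled as a simple state machine: a table of
# (current state, action) -> next state transitions, with anything
# not in the table rejected as an invalid move.

class LeaveRequest:
    TRANSITIONS = {
        ("draft", "submit"): "pending_approval",
        ("pending_approval", "approve"): "approved",
        ("pending_approval", "reject"): "rejected",
        ("rejected", "revise"): "draft",
    }

    def __init__(self, employee, days):
        self.employee = employee
        self.days = days
        self.state = "draft"
        self.history = ["draft"]   # audit trail of states visited

    def apply(self, action):
        key = (self.state, action)
        if key not in self.TRANSITIONS:
            raise ValueError(f"cannot {action!r} while {self.state!r}")
        self.state = self.TRANSITIONS[key]
        self.history.append(self.state)
        return self.state

request = LeaveRequest("alice", days=3)
request.apply("submit")
request.apply("approve")
print(request.state)    # approved
print(request.history)  # ['draft', 'pending_approval', 'approved']
```

In a Wave-style platform, each transition would be driven by a participant acting in the shared conversation, with the history doubling as the visible audit trail.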


Extreme Competition

I’ve uploaded another presentation to SlideShare. (Still trying to work through the backlog.) This is something that I had been doing for banks and insurance companies as part of their “thought leadership” sessions.

A new company, LGM Wealth Management, enters the market in late 2008, having found a new way of combining existing solutions and technologies to give it capabilities an order of magnitude better than anyone else’s.

  • Time to Revenue < 5 days
  • Cost to Serve < ½ industry average
  • New Product Introduction < 5 days
  • Infinite customization

How do you react?

Why we can’t keep up

We’re struggling to keep up. The pace of business seems to be constantly accelerating. Requirements don’t just slip anymore: they can change completely during the delivery of a solution. And the application we spent the last year nudging over the line into production became instant legacy before we’d even finished. We know intuitively that only a fraction of the benefits written into the business case will be realized. What do we need to do to get back on top of this situation?

We used to operate in a world where applications were delivered on time and on budget. One where the final solution provided a demonstrable competitive advantage to the business. Like SABER, an airline reservation system developed for American Airlines by IBM, which was so successful that the rest of the industry was forced to deploy similar solutions (which IBM kindly offered to develop) in response. Or Walmart, who used a data warehouse to drive category-leading supply chain excellence, which they leveraged to become the largest retailer in the world. Both of these solutions were billion-dollar investments in today’s money.

The applications we’ve delivered have revolutionized information distribution both within and between organizations. The wave of data warehouse deployments triggered by Walmart’s success formed the backbone for category management. By providing suppliers with a direct feed from the data warehouse – a view of supply chain state all the way from the factory through to the tills – retailers were able to hand responsibility for transport, shelf-stacking, pricing and even store layout for a product category to their suppliers, resulting in double-digit rises in sales figures.

This ability to rapidly see and act on information has accelerated the pulse of business. What used to take years now takes months. New tools such as Web 2.0 and pervasive mobile communications are starting to convert these months into weeks.

Take the movie industry, for example. Before the rise of the Internet even bad films could expect a fair run at the box-office, given star billing and a strong PR campaign to attract the punters. However, post Internet, SMS and Twitter, bad reviews start flying into punters’ hands moments after the first screening of a film has begun, transmitted directly from the first audience. Where the studios could once rely on a month or so of strong returns, now that run might only last hours.

To compensate, the studios are changing how they take films to market: running more intensive PR campaigns for their lesser offerings, clamping down on leaks, and hoping to make enough money to turn a small profit before word of mouth kicks in. Films are launched, distributed and released to DVD (or even iTunes) in weeks rather than months or years, and studios’ funding, operations and distribution models are being reconfigured to support the accelerated pace of business.

While the pulse of business has accelerated, enterprise technology’s pulse rate seems to have barely moved. The significant gains we’ve made in technology and methodologies have been traded for the ability to build increasingly complex solutions, the latest being ERP (enterprise resource planning), whose installation in a business is often compared to open heart surgery.

The Diverging Pulse Rates of Business and Technology

This disconnect between the pulse rates of business and enterprise technology is the source of our struggle. John Boyd found his way to the crux of the problem with his work on fighter tactics.

John Boyd – also known as “40 second Boyd” – was a rather interesting bloke. He had a standing bet of 40 dollars that he could beat any opponent within 40 seconds in a dog fight. Boyd never lost his bet.

The key to Boyd’s unblemished record was a single insight: that success in a rapidly changing environment depends on your ability to orient yourself, decide on, and execute a course of action faster than the environment (or your competition) is changing. He used his understanding of the current environment – the relative positions, speeds and performance envelopes of both planes – to quickly orient himself, then select and act on a tactic. By repeatedly taking decisive action faster than his opponent could react, Boyd left his opponents confused and unable to predict his moves.
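Boyd’s tempo insight can be illustrated with a toy model: an agent observes at the start of each decision loop and acts at the end, so any decision whose loop straddles an environment change is executed on stale information. The numbers here are illustrative assumptions only, not anything from Boyd’s work:

```python
# Toy model of decision tempo: how often does an agent act on
# information the environment has already overtaken?

def fraction_stale(env_change_every, loop_time, horizon=1000):
    """Fraction of decisions that are stale: the environment shifted
    between the observation (start of loop) and the action (end)."""
    decisions = stale = 0
    t = 0
    while t + loop_time <= horizon:
        observed_at, acted_at = t, t + loop_time
        decisions += 1
        # stale if an environment change falls inside the loop
        if observed_at // env_change_every != acted_at // env_change_every:
            stale += 1
        t = acted_at
    return stale / decisions

# Environment shifts every 10 ticks. A fast loop (5 ticks) acts on
# stale information half the time; a slow loop (20 ticks) always does.
print(fraction_stale(10, 5))   # 0.5
print(fraction_stale(10, 20))  # 1.0
```

The point isn’t the particular numbers but the asymmetry: once your loop is slower than the environment’s rate of change, every action lands in a world that has already moved on.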

We often find ourselves on the back foot, reacting to a seemingly chaotic business environment. To overcome this we need to increase the pulse of IT so that we’re operating at a higher pace than the business we support. Tools like lean software development have provided us with a partial solution, accelerating the pulse of writing software, but if we want to overcome this challenge we need to find a new approach to managing IT.

Business, however, doesn’t have a single pulse. Pulse rate varies by industry. It also varies within a business. Back office compliance runs at a slow rate, changing over years as reporting and regulation requirements slowly evolve. Process improvement and operational excellence programs evolve business processes over months or quarters to drive cost out of the business. While customer or knowledge worker facing functionality changes rapidly, possibly even weekly, in response to consumer, marketing or workforce demands.

Aligning technology with business

We can manage each of these pulses separately. Rather than using a single approach to managing technology and treating all business drivers as equals, we can segment the business and select management strategies to match the pulse rate and amplitude of each.

Sales, for example, is often the victim of an over-zealous CRM (customer relationship management) deployment. In an effort to improve sales performance we’ll decide to roll out the latest-greatest CRM solution. The one with the Web 2.0 features and the funky cross-sell, up-sell module.

Only a fraction of the functionality in the new CRM solution is actually new though, the remainder being no different to the existing solution. The need to justify 100% of the investment with the benefits provided by a small fraction of the solution’s features dilutes the business case. Soon we find ourselves on the same old roller-coaster ride, with delivery running late, scope creeping up, the promised benefits becoming more intangible every minute, and we’re struggling to keep up.

There might be an easier way. Take the drugs industry for example. Sales are based on relationships and made via personal calls on doctors. Sales performance is driven by the number of sales calls a representative can manage in a week, and the ability to answer all of a doctor’s questions during a visit (and avoid the need for a follow-up visit to close the sale). It’s not uncommon for tasks unrelated to CRM—simple tasks such as returning to the office to process expenses or find an answer to a question—to consume a disproportionate amount of time. Time that would be better spent closing sales.

One company came up with an interesting approach. To support sales reps in the field, it gave them the ability to query the team back in the office, answering a client’s question without the need to return to head office and then try to get back into the client’s calendar. The solution was to deploy a corporate version of Twitter, connecting the sales rep, via a simple text message, with the call centre and all staff using the company portal.
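The flow described above (a rep texts a question, office staff watching the portal see it, and the first answer is relayed back to the rep) can be sketched as a minimal routing service. All names here are hypothetical, and the SMS transport is stubbed out as plain return values:

```python
from dataclasses import dataclass
from typing import List, Optional


@dataclass
class Question:
    """A question texted in by a sales rep in the field."""
    rep: str
    text: str
    answer: Optional[str] = None


class FieldQueryService:
    """Routes a rep's text-message question to office staff and relays the reply."""

    def __init__(self) -> None:
        self.open_questions: List[Question] = []

    def ask(self, rep: str, text: str) -> Question:
        # An inbound SMS from a rep becomes an open question,
        # visible to everyone watching the company portal.
        question = Question(rep=rep, text=text)
        self.open_questions.append(question)
        return question

    def answer(self, question: Question, reply: str) -> str:
        # The first staff member with the answer replies; the service
        # closes the question and relays the reply back to the rep.
        question.answer = reply
        self.open_questions.remove(question)
        return f"SMS to {question.rep}: {reply}"


# Usage: a rep asks from the field, a call-centre agent answers.
service = FieldQueryService()
q = service.ask("rep-01", "Is product X on the hospital formulary?")
outbound = service.answer(q, "Yes, listed since 2007.")
```

The point of the design is that the rep never leaves the doctor’s office: the question and answer travel over a channel (SMS) the rep already has, while the office side plugs into a tool (the portal) staff already use.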

By separating concerns in this way, and managing each pulse appropriately, we can ensure that we are working at a faster pace than the business driver we are supporting. By allocating our resources wisely we can set the amplitude of each pulse. Careful management of the cycles will enable us to bring business and technology into alignment.

The Value of Enterprise Architecture

Note: Updated with the slides and script from 2011’s lecture.

Is Enterprise Architecture in danger of becoming irrelevant? And if so, what can we do about it?

Presented as part of RMIT’s Master of Technology (Enterprise Architecture) course.


The revolution will not be televised

Or the importance of being both good and original.

While I’m not a big fan of musicians reworking past hits, I’m beginning to wonder if we should ask Gil Scott-Heron to run up a new version of The Revolution Will Not Be Televised. He made a good point then: that real change comes from the people dealing with the day-to-day challenges, not the people talking about them. His point still holds today. Web 2.0 might be where the buzz is, but the real revolution will emerge from the child care workers, farmers, folk working in Starbucks, and all the other people who live outside the limelight.

There appears to be a disconnect between the technology community—the world of a-list bloggers, venture capital, analysts, (non-)conferences, etc.—and the people who are doing real things. The world we technologists live in is not the real world. The real world is people going out and solving problems and trying to keep their heads above water, rather than worrying about their blog, Twitter, venture funding, or the new-new thing. This is the world that invented micro-credit, where fishermen off the African coast use mobile phones to find the market price of their catch, and where farmers in Australia are using Web 2.0 (not that they care what it is) to improve their farm management. These people don’t spend their time blogging because they’re too busy trying to improve the world around them. Technology will not change the world on its own; real people solving real problems will.

We’re all too caught up in the new-new thing. A wise friend of mine often makes the point that we have more technology than we can productively use; perhaps it’s time to take a breather from trying to create the new-new-new thing, look around the world, and see what problems we can solve with the current clutch of technologies we have. The most impressive folk I’ve met in recent years don’t blog, vlog, twitter or spend their time changing their Facebook profile. They’re focused on solving their problems using whatever tools are available.

Mesh Collaboration

Which I suppose brings me to my point. In a world where we’re all communicating with each other all of the time—the world of mesh collaboration—it’s all too easy to mistake the medium for the message. We get caught up in the sea of snippets floating around us, looking for that idea that will solve our problem and give us a leg up on the competition. What we forget is that our peers and competitors are all swimming in the same sea of information, so the ideas we’re seeing represent best practice at best. The mesh is a great leveler, spreading information evenly like peanut butter over the globe, but don’t expect it to provide you with the insight that will help you stand out from the crowd.

Another wise friend makes the equally good point that in the mesh it’s not enough to be good: you need to be both good and original. The mesh doesn’t help you with original. Original is something that bubbles up when our people in the field struggle with real problems and we give them the time, space, and tools to explore new ways of working.

A great example is the rise of sharity blogs. The technical solution to sharing music files is peer-to-peer (P2P) applications, which a minority of internet users use to consume the majority of the available bandwidth. However, P2P is too complicated for many people (involving downloading and installing software, finding torrent seeds, and learning a new vocabulary in the process) and disconnected from the music communities. Most of the music-sharing action has moved onto sharity blogs. Using free blogging and file-sharing services (such as Blogger and RapidShare, respectively) communities are building archives of music that you can easily download, archives which you can find via search engines and which are integrated (via links) into the communities and discussions around them. The ease of plugging together a few links lets collectors focus on being original: putting their own spin on the collection they are building, be it out-of-print albums, obscure artists or genres, or simply whatever they can get their hands on.

What can we learn from this? When we develop our new technology and/or Web 2.0 strategy, we need to remember that what we’re trying to do is provide our team with a new tool to help them do a better job. Deploying Web 2.0 as a new suite of information silos, disconnected from the current work environment, will create yet another touch point for our team members to navigate as they work toward their goals. This detracts from their work, which is what they’re really interested in, and results in them ignoring the new application as more trouble than it is worth. The mesh is a tool to be used and not an end in itself, and it needs to be integrated into and support the existing work environment in a way that makes work easier. This creates the time and space for our employees to explore new ideas and new ways of working, helping them to become both good and original.

Update: Swapped the image of Gil Scott-Heron’s Pieces of Man for an embedded video of The revolution will not be televised, at the excellent suggestion of Judith Ellis.