Tag Archives: Computing

The changing role of Government

Is Government 2.0 (whichever definition you choose) the ultimate aim of government? Government for the people and by the people. Or are we missing the point? We’re not a collection of individuals but a society where the whole is greater than the parts. Should government’s ultimate aim be to act as the trusted arbiter, bringing society together so that we can govern together? Rather than leaving us disengaged and simply governed, as seems to be the current fashion. In an age when everything is fragmented and we’re all responsible for our own destiny, government is in a unique position to be the body that binds together the life events that bring our society together.

Government 2.0 started with lofty goals: make government more collaborative. As with all definitions though, it seems that the custodians of definitions are swapping goals for means. Pundits are pushing for technology-driven definitions, as Government 2.0 would not be possible without technology (but then, neither would my morning cup of coffee).

Unfortunately Government 2.0 seems to be in danger of becoming “government as a platform”: GaaP or even GaaS (as it were). Entrepreneurs are calling on the government to open up government data, allowing start-ups to remix data to create new services. FixMyStreet might be interesting, and might even tick many of the right technology boxes, but it’s only a small fragment of what is possible.

GovHack

This approach has resulted in some interesting and worthwhile experiments like GovHack, but it seems to position much of government as a boat anchor to be yanked up with top-down directives rather than as valued members of society who are trying to do what they think is the right thing. You don’t create peace by starting a war, and nor do you create open and collaborative government through top down directives. We can do better.

The history of government has been a progression from government by and for the big man, through to today’s push for government for and by the people. Kings and Queens practiced stand-over tactics, going bust every four to seven years from running too many wars that they could not afford, and then leaning on the population to refill their coffers. The various socialist revolutions pushed the big man (or woman) out and replaced them with a bureaucracy intended to provide the population with the services they need. Each of us contributing in line with ability, and taking in line with need. The challenge (and possibly the unsolvable problem) was finding a way to do this in an economically sustainable fashion.

The start of the modern era saw government as border security and global conglomerate. The government was responsible for negotiating your relationship with the rest of the world, and service provision was out-sourced (selling power stations and rail lines). Passports went from a convenient way of identifying yourself when overseas to becoming the tool of choice for governments to control border movements.

Government 2.0 is just the most recent iteration in this ongoing evolution of government. The initial promise: government for the little man, enabled by Web 2.0.

As with Enterprise 2.0, what we’re getting from the application of Web 2.0 to an organisation is not what we expected. For example, Enterprise 2.0 was seen as a way to empower knowledge workers but instead seems to be resulting in a generation of hollowed-out companies where the C-level and task workers at the coal face remain, but knowledge workers have been eliminated. Government 2.0 seems to have devolved into “government as a platform” for similar reasons, driven by a general distrust of government (or, at least, the current government which the other people elected) and a desire to have more influence on how government operates.

Government, The State, has come to be defined as the enemy of the little man. The giant organisation which we are largely powerless against (even though we elected them). Government 2.0 is seen as the can opener which can be used to cut the lid off government. Open up government data for consumption and remixing by entrepreneurs. Provide APIs to make this easy. Let us solve your citizens’ problems.

We’re already seeing problems with trust in on-line commerce due to this sort of fine-grained approach. The rise of online credit card purchases has pulled the credit card fraud rate up with it, resulting in a raft of counter-measures, from fraud detection through to providing consumers with access to their credit reports. Credit reports which, in the U.S., some providers are using as the basis for questionable tactics that scam and extort money from the public.

Has the pendulum swung too far? Or is it The Quiet American all over again?

Gone are the days when we could claim that “The State” is something that doesn’t involve the citizens. Someone to blame when things go wrong. We need to accept that now, more than ever, we always elect the government we deserve.

Technology has created a level of transparency and accountability—exemplified by Obama’s campaign—that is breeding a new generation of public servants. Rather than government for, by or of the people, we’re getting government with the people.

This is driving the next generation of government: government as the arbitrator of life events. Helping citizens collaborate with each other. Making us take responsibility for our own futures. Supporting us when we face challenges.

Business-technology, a term coined by Forrester, is a trend for companies to exploit the synergies between business and technology to create new solutions to old problems. Technology is also enabling a new approach to government. Rather than deliver IT-government alignment to support an old model of government, the current generation of technologies makes available a new model which harks back to Platonic ideals.

We’ve come a long way from the medieval days when government was (generally) something to be ignored:

  • Government for the man (the kings and queens)
  • Government by the man (we’ll tell you what you need) (each according to their need, each …)
  • Government as a conglomerate (everything you need)
  • Government as a corporation (everything you can afford)

The big idea behind Government 2.0 is, at its nub, government together. Erasing the barriers between citizens, between citizens and the government, helping us to take responsibility for our future, and work together to make our world a better place.

Government 2.0 should not be a platform for entrepreneurs to exploit, but a shared framework to help us live together. Transparent development of policy. Provision (though not necessarily ownership) of shared infrastructure. Support when you need it (helping you find the services you need). Involvement in line with the Greek/Roman ideal (though more inclusive, without the exclusion of women or slaves).

Balancing our two masters

We seem to be torn between two masters. On one hand we’re driven to renew our IT estate, consolidating solutions to deliver long term efficiency and cost savings. On the other hand, the business wants us to deliver new end-user functionality (new consumer kiosks, workforce automation and operational excellence solutions …) to support tactical needs. But how do we balance these conflicting demands, when our vertically integrated solutions tightly bind user interaction to the backend business systems and their multi-year life-cycle? We need to decouple the two, breaking the strong connection between business system and user interface. This will enable us to evolve them separately, delivering long term savings while meeting short term needs.

Business software’s proud history is the story of managing the things we know. From the first tabulation systems through enterprise applications to modern SaaS solutions, the majority of our efforts have been focused on data: capturing or manufacturing facts, and pumping them around the enterprise.

We’ve become so adept at delivering these IT assets into the business that most companies’ IT estates are populated with an overabundance of solutions. Many good solutions, some not so good, and many redundant or overlapping. Gardening our IT estate has become a major preoccupation, as we work to simplify and streamline our collection of applications to deliver cost savings and operational improvements. These efforts are often significant undertakings, with numbers like “5 years” and “$50 million” not uncommon.

While we’ve become quite sophisticated at delivering modular business functionality (via methods such as SOA), our approach to supporting users is still dominated by a focus on isolated solutions. Most user interfaces are slapped on almost as an afterthought, providing stakeholders with a means to interact with the vast, data-processing monsters we create. Tightly coupled to the business system (or systems) they are deployed with, these user interfaces are restricted to evolving at a similar pace.

Business has changed while we’ve been honing our application development skills. What used to take years, now takes months, if not weeks. What used to make sense now seems confusing. Business is often left waiting while we catch up, working to improve our IT estate to the point that we can support their demands for new consumer kiosks, solutions to support operational excellence, and so on.

What was one problem has now become two. We solved the first-order challenge of managing the vast volumes of data an enterprise contains, only to unearth a second challenge: delivering the right information, at the right time, to users so that they can make the best possible decision. Tying user interaction to the back-end business systems forces our solutions for these two problems to evolve at a similar pace. If we break this connection, we can evolve user interfaces at a more rapid pace. A pace more in line with business demand.

We’ve been chipping away at this second problem for quite a while. Our first green-screen and client-server solutions were overtaken by portals, which promised to solve the problem of swivel-chair integration. However, portals seem to have been defeated by browser tabs. While portals allowed us to bring together the screens from a collection of applications, providing a productivity boost by reducing the number of interfaces a user interacted with, they didn’t break the user interface’s explicit dependency on the back-end business systems.

We need to create a modular approach to composing new, task-focused user interfaces, doing for user interfaces what SOA has done for back-end business functionality. The view users see should be focused on supporting the decision they are making. Data and function sourced from multiple back-end systems, broken into reusable modules and mashed together, creating an enterprise mash-up. A mash-up spanning multiple screens to fuse both data and process.
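To make the idea concrete, here is a minimal sketch of what such a composition layer might look like, assuming hypothetical back-end adapters (CRM, order management, billing); it is not a reference to any particular product, just an illustration of a task-focused view being assembled from reusable modules rather than bound to a single business system.

```python
# A minimal sketch of composing a task-focused view from several back-end
# systems. The adapter functions below are hypothetical placeholders for
# services a CRM, order-management and billing system might expose; in a
# real estate they would call the corresponding SOA services.

from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class CaseView:
    """Everything a call-centre operator needs on one screen."""
    customer_name: str
    open_orders: List[Dict] = field(default_factory=list)
    outstanding_balance: float = 0.0


# Hypothetical adapters, one per back-end system.
def fetch_customer(customer_id: str) -> Dict:
    return {"name": "Jane Citizen"}                          # stand-in for a CRM lookup


def fetch_orders(customer_id: str) -> List[Dict]:
    return [{"order": "A-1001", "status": "in transit"}]     # order-management stand-in


def fetch_balance(customer_id: str) -> float:
    return 42.50                                              # billing-system stand-in


def build_case_view(customer_id: str) -> CaseView:
    """Mash the reusable modules together into one decision-focused view."""
    customer = fetch_customer(customer_id)
    return CaseView(
        customer_name=customer["name"],
        open_orders=fetch_orders(customer_id),
        outstanding_balance=fetch_balance(customer_id),
    )


if __name__ == "__main__":
    print(build_case_view("C-123"))
```

Because the view depends only on the adapters, the user interface can be reworked as often as the business needs, without waiting on the multi-year release cycles of the systems behind it.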

Some users will find little need for an enterprise mash-up—typically users who spend the vast majority of their time working within a single application. Others, who work between applications, will see a dramatic benefit. These users typically include the knowledge-rich workers who drive the majority of value in a modern enterprise. They are the logistics exception managers, who can make the difference between a “best of breed” supply chain and a category-leading one. They are the call centre operators, whose focus should be on solving the caller’s problem, not worrying about which backend system might have the data they need. Or they could be field personnel (sales, repairs …), working between a range of systems as they engage with your customers or repair your infrastructure.

By reducing the number of ancillary decisions required, and thereby reducing the number of mistakes made, enterprise mash-ups make knowledge workers more effective. By reducing the need to manually synchronise applications, copying data between them, we make them more efficient.

But more importantly, enterprise mash-ups enable us to decouple development of user interfaces from the evolution of the backend systems. This enables us to evolve the two at different rates, delivering long term savings while meeting short term need, and mitigating one of the biggest risks confronting IT departments today: the risk of becoming irrelevant to the business.

Working from the outside in

We’re drowning in a sea of data and ideas, with huge volumes of untapped information available both inside and outside our organization. There is so much information at our disposal that it’s hard to discern Arthur from Martha, let alone optimize the data set we’re using. How can we make sense of the chaos around us? How can we find the useful signals which will drive us to the next level of business performance, from amongst all this noise?

I’ve spent some time recently thinking about how the decisions our knowledge workers make in planning and managing business exceptions can have a greater impact on our business performance than the logic reified in the applications themselves. And how the quality of information we feed into their decision-making processes can have an even bigger impact, as the data’s effect is amplified by the decision-making process. Not all data is of equal value and, as is often said, if you put rubbish in then you get rubbish out.

Traditional Business Intelligence (BI) tackles this problem by enabling us to mine for correlations in the data tucked away in our data warehouse. These correlations provide us with signals to help drive better decisions. Managing stock levels based on historical trends (Christmas rush, BBQs in summer …) is good, but connecting these trends to local demographic shifts is better.

Unfortunately this approach is inherently limited. No matter how powerful your analytical tools, you can only find correlations within and between the data sets you have in the data warehouse, and this is only a small subset of the total data available to us. We can load additional data sets into the warehouse (such as demographic data bought from a research firm), but in a world awash with (potentially useful) data, the real challenge is deciding which data sets to load, not finding the correlations once they are loaded.

What we really need is a tool to help scan across all available data sets and find the data which will provide the best signals to drive the outcome we’re looking for. An outside-in approach, working from the outcome we want to the data we need, rather than an inside-out approach, working from the data we have to the outcomes it might support. This will provide us with a repeatable method, a system, for finding the signals needed to drive us to the next level of performance, rather than the creative, hit-and-miss approach we currently use. Or, in geekier terms, a methodology which enables us to proactively manage our information portfolio and derive the greatest value from it.

I was doodling on the tram the other day, playing with the figure I created for the Inside vs. Outside post, when I had a thought. The figure was created as a heat map showing how the value of information is modulated by time (new vs. old) and distance (inside vs. outside). What if we used it the other way around? (Kind of obvious in hindsight, I know, but these things usually are.) We might use the figure to map from the type of outcome we’re trying to achieve back to the signals required to drive us to that outcome.

Time and distance drive the value of information

This addresses an interesting comment (in email) from a U.K. colleague of mine. (Jon, stand up and be counted.) As Andy Mulholland pointed out, the upper right represents weak, confusing signals, while the lower left represents strong, coherent signals. Being a delivery guy, Jon’s first thought was how to manage the dangers of focusing excessively on the upper-right corner of the figure. Sweeping a plane’s wings forward increases its maneuverability, but at the cost of decreasing its stability. Relying too heavily on external, early signals could, in a similar fashion, push an organization into a danger zone. If we want to use these types of signals to drive crucial business decisions, then we need to understand the tipping point and balance the risks.

My tram-doodle was a simple thing, converting a heat map to a mud map. For a given business decision, such as planning tomorrow’s stock levels for an FMCG category, we can outline the required performance envelope on the figure. This outline shows us the sort of signals we should be looking for (inside good, outside bad), while the shape of the outline provides us with an understanding of (and a way of balancing) the overall maneuverability and stability of the outcome the signals will support. More external predictive scope in the outline (i.e. more area inside the outline in the upper-right quadrant) will provide a more responsive outcome, but at the cost of less stability. Increasing internal scope will provide a more stable outcome, but at the cost of responsiveness. Less stability might translate to more (potentially unnecessary) logistics movements, while more stability might mean missed sales opportunities. (This all creates a little déjà vu, with a strong feeling of computing Q values for non-linear control theory back at university, so I’ve started formalizing how to create and measure these outlines, as well as how to determine the relative weights of signals in each area of the map, but that’s another blog post.)

An information performance mud map

Given a performance outline we can go spelunking for signals which fit inside the outline.

Luckily the mud map provides us with guidance on where to look. An internal-historical signal is, by definition, driven by historical data generated inside the organization. Past till data? An external-reactive signal is, by definition, external and reactive. A short-term (i.e. tomorrow’s) weather forecast, perhaps? Casting our net as widely as possible, we can gather all the signals which have the potential to drive us toward the desired outcome.

Next, we balance the information portfolio for this decision, identifying the minimum set of signals required to drive the decision. We can do this by grouping the signals by type (internal-historical, …) and then charting them against cost and value. Cost is the acquisition cost, and might represent a commercial transaction (buying access to another organization’s near-term weather forecast), the development and consulting effort required to create the data set (forming your own weather forecasting function), or a combination of the two, heavily influenced by an architectural view of the solution (as Rod outlined). Value is a measure of the potency and quality of the signal, which will be determined by existing BI analytics methodologies.

Plotting value against cost on a new chart creates a handy tool for finding the data sets to use. We want to pick from the lower right – high value but low cost.

An information mud map
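As a worked illustration of this balancing step, here is a small sketch that groups candidate signals by type and shortlists the cheap-but-potent ones. Every signal name, cost and value score below is an invented placeholder; in practice the value scores would come from your BI analytics and the costs from the acquisition options discussed above.

```python
# A sketch of the portfolio-balancing step: list candidate signals with their
# type, cost and value, then shortlist those sitting in the high-value,
# low-cost corner of the chart. All figures are illustrative only.

candidate_signals = [
    # (name,                     type,                  cost, value) on a 0-10 scale
    ("past till data",           "internal-historical",  2.0, 7.5),
    ("stock-on-hand snapshots",  "internal-reactive",    1.5, 6.0),
    ("bought demographic data",  "external-historical",  6.0, 5.5),
    ("short-term weather feed",  "external-reactive",    3.0, 8.0),
    ("in-house weather bureau",  "external-predictive",  9.0, 8.5),
]


def shortlist(signals, max_cost=5.0, min_value=6.0):
    """Keep the high-value, low-cost signals, ranked by crude value for money."""
    keep = [s for s in signals if s[2] <= max_cost and s[3] >= min_value]
    return sorted(keep, key=lambda s: s[3] - s[2], reverse=True)


for name, kind, cost, value in shortlist(candidate_signals):
    print(f"{name:25s} {kind:20s} cost={cost:4.1f} value={value:4.1f}")
```

The thresholds are where the judgement sits: they encode how much acquisition cost you are willing to carry for a given lift in signal quality.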

It’s interesting to tie this back to the Tesco example. Global warming is making the weather more variable, resulting in unseasonable hot and cold spells. This was, in turn, driving short-term consumer demand in directions not predicted by existing planning models. These changes in demand represented cost, in the form of stock left on the shelves past its use-by date, or missed opportunities from not being able to service demand when and where it arose.

The solution was to expand the information footprint, pulling in more predictive signals from outside the business: changing the outline on the mud map to improve closed-loop performance. The decision to create an in-house weather bureau represents a straightforward cost-value trade-off in delivering an operational solution.

These two tools provide us with an interesting approach to tackling a number of challenges I’m seeing inside companies today. We’re a lot more externally driven now than we were even just a few years ago. The challenge is to identify customer problems we can solve and tie them back to what our organization does, rather than trying to conceive offerings in isolation and push them out into the market. These tools enable us to sketch the customer challenges (the decisions our customers need to make) and map them back to the portfolio of signals that we can (or might like to) provide to them. It’s outcome-centric, rather than asset-centric, which provides us with more freedom to be creative in how we approach the market, and has the potential to foster a more intimate approach to serving customer demand.

Dealing with hysterical raisins

Had an interesting chat with a CIO the other day. He’s been pushed to provide documentation in the field to meet regulatory requirements. This documentation needs to exist separately from the primary systems that the folk in the field use to do their jobs. I expect the regulation is intended to provide some sort of disaster recovery – the world is going to hell in a hand-basket, but at least you have the redundant documentation to work from.

As with a lot of regulation, it’s there for hysterical raisins rather than good reason, having outlived its usefulness, and is now zealously enforced. Since the existing, primary documentation is delivered as part of the desktop/laptop SOE, this would mean providing the team with a second laptop to support the redundant documentation. I suppose there’s some logic in that.

The solution his team came up with is just brilliant. They found a cheap/free/cost-effective VM player and created a VM with the documentation and appropriate reading software in it. The VM image was then loaded onto a USB stick (even 16G sticks are pretty affordable now). Plug the USB stick into a PC (not sure if they got it working on a Mac) and you’re soon up and running, reading important documentation while the world burns around you. For bonus points, the team created the image with the VM hibernated, so it’s up and running more or less in an instant, as you only need to wait for the hibernated image to resume.
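For what it’s worth, the staging step could be scripted along these lines. This is only an assumed sketch; the paths, image name and manifest convention are invented for illustration, not a description of what the team actually built.

```python
# An assumed sketch of staging a new documentation VM onto a USB stick and
# writing a checksum manifest so the recipient can confirm the image arrived
# intact. Paths, image and file names are invented for illustration.

import hashlib
import shutil
from pathlib import Path

VM_IMAGE = Path("build/field-docs-vm")   # hypothetical hibernated VM folder
USB_MOUNT = Path("/media/usb")           # hypothetical mount point of the stick


def checksum(path: Path) -> str:
    """SHA-256 over every file in the image, walked in a stable order."""
    digest = hashlib.sha256()
    for item in sorted(p for p in path.rglob("*") if p.is_file()):
        digest.update(item.read_bytes())
    return digest.hexdigest()


def stage_image() -> None:
    """Copy the prepared VM onto the stick and record its checksum."""
    target = USB_MOUNT / VM_IMAGE.name
    shutil.copytree(VM_IMAGE, target, dirs_exist_ok=True)
    (USB_MOUNT / "MANIFEST.txt").write_text(checksum(target))
    print(f"Staged {target}, checksum written to MANIFEST.txt")


if __name__ == "__main__":
    stage_image()
```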

Meeting the hysterical documentation requirements is now a breeze. Simply mail out new VM images on a USB stick. Put them on a nice lanyard and you might even get marketing to pay for it. Staff sign for the new stick, and drop the old one in an envelope to mail back for recycling.

Lovely!

The Scoop: Oracle swallows Sun

Gavin Clarke (Editor @ The Register), Rob Janson (President @ Enterprise Java Australia) and I are on Mark Jones’ The Scoop this week.

For loyal Sun customers and industry watchers, it was almost unthinkable – Oracle buying Sun. Sun Microsystems is one of Silicon Valley’s iconic technology companies, and Oracle doesn’t do hardware. And Sun was proud to wear the underdog badge. But the proposed acquisition raises fresh questions about the long-term health of the industry’s dominant suppliers. What’s the future hold for Oracle & Sun customers?

  • Oracle license inspections – costs to rise? What about Oracle’s famed licensing complexity? Will this get any better?
  • Consolidation problems: Will customer service deteriorate and product innovation wane?
  • What of Java – what’s Oracle likely to do with this prized jewel?
  • Did Oracle buy more problems than opportunities? (Sun’s debt, poor revenues…)
  • Enterprise app consolidation leaves CIOs with fewer choices: how will they bargain with suppliers now?
  • Larry Ellison said he wouldn’t buy Sun, or a hardware company, back in 2003. What changed? Does this mean that Oracle is likely to divest itself of Sun’s hardware business once the acquisition is completed?
  • What are the growth engines for Oracle now? Hardware and servers appear to have little headroom for serious growth.

About The Scoop

The Scoop is an open, free-flowing conversation between industry peers. It’s about unpacking issues that affect CIOs, senior IT executives and the Australian technology industry. The conversation is moderated by Mark Jones, The Scoop’s host and producer. More information about The Scoop, including a list of previous guests, can be found here:

http://filteredmedia.com.au/about-the-scoop/

Innovation [2009-05-04]

Another week and another collection of interesting ideas from around the internet.

As always, thoughts and/or comments are greatly appreciated.

This issue:

Innovation [2009-01-27]

Another week and another collection of interesting ideas from around the Internet.

As always, thoughts and/or comments are greatly appreciated.

This issue:

Innovation [2008-12-01]

Another week and another collection of interesting ideas from around the Internet.

As always, thoughts and/or comments are greatly appreciated.

This issue:

  • Engineers rule [Forbes]
    At American auto companies, finance guys and marketers rise to the top. Not at Honda.
  • China’s long road to innovation [strategy+business]
    Beijing is mandating an increase in home-grown R&D, but Chinese companies face long odds in meeting international standards of innovation.
  • Cisco CEO John Chambers on speeding up innovation [BusinessWeek]
    In Chambers’ view, business is on the verge—not in the midst—of a dramatic transformation, a huge leap forward in productivity built on collaboration made possible by Web 2.0-style tools similar to YouTube, FaceBook, and Wikipedia but adapted to the corporate environment. “Our children, with their social network[ing], have presented us with the future of productivity,” he emphatically told the crowd of about 4,500 executives.
  • The kids are alright [Economist]
    Worries about the damage the internet may be doing to young people have produced a mountain of books—a suitably old technology in which to express concerns about the new. Robert Bly claims that, thanks to the internet, the “neo-cortex is finally eating itself”. Today’s youth may be web-savvy, but they also stand accused of being unread, bad at communicating, socially inept, shameless, dishonest, work-shy, narcissistic and indifferent to the needs of others.

The revolution will not be televised

Or the importance of being both good and original.

While I’m not a big fan of musicians reworking past hits, I’m beginning to wonder if we should ask Gil Scott-Heron to run up a new version of The Revolution Will Not Be Televised. He made a good point then: that real change comes from the people dealing with the day-to-day challenges, not the people talking about them. His point still holds today. Web 2.0 might be where the buzz is, but the real revolution will emerge from the child care workers, farmers, folk working in Starbucks, and all the other people who live outside the limelight.

There appears to be a disconnect between the technology community—the world of a-list bloggers, venture capital, analysts, (non-)conferences, etc.—and the people who are doing real things. The world we technologists live in is not the real world. The real world is people going out and solving problems and trying to keep their heads above water, rather than worrying about their blog, Twitter, venture funding, or the new-new thing. This is the world that invented micro-credit, where fishermen off the African coast use mobile phones to find the market price of their catch, and where farmers in Australia are using Web 2.0 (not that they care what it is) to improve their farm management. These people don’t spend their time blogging since they’re too busy trying to improve the world around them. Technology will not change the world on its own; however, real people solving real problems will.

We’re all too caught up in the new-new thing. A wise friend of mine often makes the point that we have more technology than we can productively use; perhaps it’s time to take a breather from trying to create the new-new-new thing, look around the world, and see what problems we can solve with the current clutch of technologies we have. The most impressive folk I’ve met in recent years don’t blog, vlog, twitter or spend their time changing their Facebook profile. They’re focused on solving their problems using whatever tools are available.

Mesh Collaboration

Which I suppose brings me to my point. In a world where we’re all communicating with each other all of the time—the world of mesh collaboration—it’s all too easy to mistake the medium for the message. We get caught up in the sea of snippets floating around us, looking for that idea that will solve our problem and give us a leg up on the competition. What we forget is that our peers and competitors are all swimming in the same sea of information, so the ideas we’re seeing represent best practice at best. The mesh is a great leveler, spreading information evenly like peanut butter over the globe, but don’t expect it to provide you with the insight that will help you stand out from the crowd.

Another wise friend makes the equally good point that in the mesh it’s not enough to be good: you need to be both good and original. The mesh doesn’t help you with original. Original is something that bubbles up when our people in the field struggle with real problems and we give them the time, space, and tools to explore new ways of working.

A great example is the rise of sharity blogs. The technical solution to sharing music files is to create peer-to-peer (P2P) applications—applications which a minority of internet users use to consume the majority of the available bandwidth. However, P2P is too complicated for many people (involving downloading and installing software, finding torrent seeds, and learning a new language including terms like torrent seed) and is disconnected from the music communities. Most of the music sharing action has moved onto sharity blogs. Using free blogging and file sharing services (such as Blogger and RapidShare, respectively), communities are building archives of music that you can easily download, archives which you can find via search engines and which are integrated (via links) into the communities and discussions around them. The ease of plugging together a few links lets collectors focus on being original: putting their own spin on the collection they are building, be it out-of-print albums, obscure artists or genres, or simply whatever they can get their hands on.

What can we learn from this? When we develop our new technology and/or Web 2.0 strategy, we need to remember that what we’re trying to do is provide our team with a new tool to help them do a better job. Deploying Web 2.0 as a new suite of information silos, disconnected from the current work environment, will create yet another touch point for our team members to navigate as they work toward their goals. This detracts from their work, which is what they’re really interested in, and results in them ignoring the new application as more trouble than it is worth. The mesh is a tool to be used and not an end in itself, and it needs to be integrated into and support the existing work environment in a way that makes work easier. This creates the time and space for our employees to explore new ideas and new ways of working, helping them to become both good and original.

Update: Swapped the image of Gil Scott-Heron’s Pieces of Man for an embedded video of The revolution will not be televised, at the excellent suggestion of Judith Ellis.

The Scoop: Are we facing tech wreck take 2?

Stephen Tame (CIO @ JetStar), Mike Zimmerman (Principal @ Technology Venture Partners) and I are on Mark Jones’ The Scoop this week. The show looks at the impact of the global economic crisis on the local and international technology industries.

The worldwide economic crisis has caused business to completely reevaluate spending priorities, and IT is no exception. What strategies must CIOs consider during the downturn? What impact will a difficult economic climate in 2009 have on enterprise technology spending?

Topics covered include:

  • What IT projects are likely to be cut immediately?
  • What IT projects will likely survive — what can’t we live without?
  • What advice do we have for CIOs evaluating IT projects? Is this the catalyst for more spending on outsourcing, SaaS, etc.?
  • Other observations about the impact of the worldwide downturn? e.g. will this redefine how we think about sourcing IT products? What solutions should CIOs bring to the boardroom?

About The Scoop

The Scoop is an open, free-flowing conversation between industry peers. It’s about unpacking issues that affect CIOs, senior IT executives and the Australian technology industry. The conversation is moderated by Mark Jones, The Scoop’s host and producer. More information about The Scoop, including a list of previous guests, can be found here:

http://filteredmedia.com.au/about-the-scoop/