Tag Archives: Computing

Taxonomies 1, Semantic Web (and Linked Data) 0

I’m not a big fan of Semantic Web{{1}}. For something that has been around for just over ten years, and which has been aggressively promoted by the likes of Tim Berners-Lee{{2}}, very little of real substance has come of it.

Taxonomies, on the other hand, are going gangbusters, with solutions like GovDirect{{3}} showing that there is a real need for this sort of data-relationship-driven approach{{4}}. Given this appetite, if the flexibility provided by Semantic Web (and, more recently, Linked Data{{5}}) were really needed, we would expect someone to have invested by now in building significant solutions which use the technology.

While the technology behind Semantic Web and Linked Data is interesting, it seems that most people don’t think it’s worth the effort.

All this makes me think: the future of data management and standardisation is ad hoc, with communities or vendors scratching specific itches, rather than formal, top-down, theory-driven approaches such as Semantic Web and Linked Data, or even the other formal standardisation efforts of old.

[[1]]SemanticWeb.org[[1]]
[[2]]Tim Berners-Lee on Twitter[[2]]
[[3]]GovDirect[[3]]
[[4]]Peter Williams on the The Power of Taxonomies @ the Australian Government’s Standard Business Reporting Initiative[[4]]
[[5]]LinkedData.org[[5]]

The technologies behind the likes of Semantic Web and Linked Data have a long heritage. You can trace them back to at least the seventies, when ontology- and logic-driven approaches to data management faced off against relational methodologies. Relational methods won that round: just ask Oracle, or the nearest DBA.

That said, a small number of interesting solutions have been built in the intervening years. I was involved in a few of them in one of my past lives{{6}}, and I’ve heard of more than a few built by colleagues and friends. The majority of these solutions used ontology management as a way to streamline service configuration, and therefore ease the pain of business change. Rather than being forced to rebuild a bunch of services, you could change some definitions, and off you go.

[[6]]AAII[[6]]

What we haven’t seen is a well-placed Semantic Web SPARQL{{7}} query which makes all the difference. I’m still waiting for that travel website where I can ask for a holiday, somewhere warm, within my budget, and without too many tourists who use beach towels to reserve lounge chairs at six in the morning; and get a sensible result.

[[7]]SPARQL @ w3.org[[7]]
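To make this concrete, here is a rough sketch (in Python, using the rdflib library) of what the warm-holiday query might look like in SPARQL. The vocabulary (ex:Destination, ex:averageTemperature, ex:weeklyCost) is invented for illustration; no travel site actually publishes such an endpoint, which is rather the point.

```python
from rdflib import Graph, Literal, Namespace, RDF
from rdflib.namespace import XSD

EX = Namespace("http://example.org/travel#")

# A toy graph with two destinations; a real travel site would hold millions.
g = Graph()
g.bind("ex", EX)
g.add((EX.bali, RDF.type, EX.Destination))
g.add((EX.bali, EX.averageTemperature, Literal(30.0, datatype=XSD.decimal)))
g.add((EX.bali, EX.weeklyCost, Literal(900, datatype=XSD.integer)))
g.add((EX.reykjavik, RDF.type, EX.Destination))
g.add((EX.reykjavik, EX.averageTemperature, Literal(5.0, datatype=XSD.decimal)))
g.add((EX.reykjavik, EX.weeklyCost, Literal(1500, datatype=XSD.integer)))

# "Somewhere warm, within my budget" is easy enough to express.
query = """
PREFIX ex: <http://example.org/travel#>
SELECT ?destination ?cost
WHERE {
    ?destination a ex:Destination ;
                 ex:averageTemperature ?temp ;
                 ex:weeklyCost ?cost .
    FILTER (?temp > 25 && ?cost < 1000)
}
ORDER BY ?cost
"""

for row in g.query(query):
    print(row.destination, row.cost)
```

Notice what the query can’t express: the beach-towel clause. The syntax was never the hard part; capturing what people actually want is.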

The flexibility which we could justify in the service delivery solutions just doesn’t appear to be justifiable in the data-driven solutions. A colleague showed me a Semantic Web solution that consumed a million or so pounds of taxpayer money to build a semantic-driven database for a small art collection. All this sophisticated technology would allow users to ask all sorts of subtle questions, if they could navigate the (necessarily) complicated user interface, or if they could construct an even more daunting SPARQL query. A more pragmatic approach would have built a conventional web application, one which would easily satisfy 95% of users, for a fraction of the cost.

When you come down to it, the sort of power and flexibility provided by Semantic Web and Linked Data could only be used by a tiny fraction of the user population. For most people, something which gets them most of the way (with a little bit of trial and error) is good enough. Fire and forget. While the snazzy solution with the sophisticated technology might demo well (making it good TED{{8}} fodder), it’s not going to improve the day-to-day travail for most of the population.

[[8]]TED[[8]]

Then we get solutions like GovDirect. As the website puts it:

GovDirect® facilitates reporting to government agencies such as the Australian Tax Office via a single, secure online channel enabling you to reduce the complexity and cost of meeting your reporting obligations to government.

which makes it, essentially, a Semantic Web solution. Except it’s not, as GovDirect is built on XBRL{{9}} with a cobbled-together taxonomy.

[[9]]eXtensible Business Reporting Language[[9]]

Taxonomy-driven solutions such as GovDirect might not offer the power and sophistication of a Semantic Web-driven solution, but they do get the job done. These taxonomies are also more likely to be ad hoc, codifying a vendor’s solution or accreted whilst on the job, than the result of some formal, top-down ontology{{10}} development methodology (such as those buried in Semantic Web and Linked Data).

[[10]]Ontology defined in Wikipedia[[10]]

Take Salesforce.com{{11}} as an example. If we were to develop a taxonomy to exchange CRM data, then the most likely source would be other vendors reverse engineering{{12}} whatever Salesforce.com is doing; the driver, after all, is to enable clients to get their data out of Salesforce.com. Or the source might be whatever a government working group publishes, given a government’s dominant role in its geography. By extension we can also see the end of the formal standardisation efforts of old, as they devolve into the sort of information frameworks represented by XBRL, which accrete attributes as needed.

[[11]]SalesForce.com[[11]]
[[12]]Reverse engineering defined in Wikipedia[[12]]
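For illustration, here’s a minimal sketch of what such a reverse-engineered taxonomy tends to amount to in practice: a flat mapping from one vendor’s field names to a shared vocabulary, extended entry by entry as concrete migration problems turn up. The field names below are invented for the sketch, not Salesforce.com’s actual schema.

```python
# An ad hoc interchange taxonomy: a flat, accreted mapping from
# vendor-specific field names to a common vocabulary. All names invented.
VENDOR_TO_COMMON = {
    "AccountName": "organisation.name",
    "BillingCity": "organisation.address.city",
    "Phone": "organisation.phone",
    # Added later, when a client needed to migrate opportunity data:
    "StageName": "opportunity.stage",
    "CloseDate": "opportunity.expected_close",
}

def to_common(vendor_record: dict) -> dict:
    """Translate a vendor record into the common vocabulary, silently
    dropping fields nobody has needed to map yet."""
    return {
        VENDOR_TO_COMMON[field]: value
        for field, value in vendor_record.items()
        if field in VENDOR_TO_COMMON
    }

print(to_common({
    "AccountName": "Acme Pty Ltd",
    "BillingCity": "Melbourne",
    "Fax": "unmapped until someone actually asks for it",
}))
```

No inference engine and no formal semantics: just enough structure to get data out of one system and into another, extended whenever a real need appears.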

The general trend we’re seeing is a move away from top-down, tightly defined and structured definitions of data interchange formats, as they’re replaced by bottom-up, looser definitions.

Innovation [2010-07-05]

Another week and another collection of interesting ideas from around the internet.

As always, thoughts and/or comments are greatly appreciated.

Business is like a train…

The following analogy popped up the other day in an email discussion with a friend.

Running a business is a bit like being the Fat Controller, running his vast train network. We spend our time trying to get the trains to run on time, with the all-too-frequent distraction of digging the Troublesome Trucks out of trouble.

Improvement often means upgrading the tracks to create smoother, straighter lines. After years of doing this, any improvement to the tracks can only provide a minor, incremental benefit.

What we really need is a new signalling system. We need to better utilise the tracks we already have, and this means making better decisions about which trains to run where, and better coordination between the trains. Our tracks are fine (as long as we keep up the scheduled maintenance), but we do need to better manage transit across and between them.

Swap processes for tracks, and I think this paints quite a nice visual picture.

Years of process improvement (via Lean, Six Sigma and, more recently, BPM) have straightened and smoothed our processes to the point that any additional investment has hit the law of diminishing returns. Rather than continue to try and improve the processes on my own, I’d outsource process maintenance to a collection of SaaS and BPO providers.

The greater scale of these providers allows them to invest in improvements which I don’t have the time or money for. Handing over responsibility also creates the time and space for me to focus on improving the decisions on which process to run where, and when: my signalling system.

This is especially important in a world where it is becoming rare to even own your processes.

We forget just how important a good signalling system is. Get it right and you get the German or Japanese train networks. Get it wrong and you rapidly descend into the second or third world, regardless of the quality of your tracks.

Is the market for IT services and solutions shrinking or growing?

Here’s an interesting and topical question: is the market for enterprise IT services (SI, BPO, advisory et al) growing or shrinking? I’m doing the rounds at the moment to see where the market is going (a side effect of moving on), and different folk seem to have quite different views.

  • It’s shrinking as the new normal is squeezing budgets and OPEX is the new CAPEX.
  • It’s growing as companies are externalising more functions than ever before as they attempt to create a laser-like focus on their core business.
  • It’s shrinking as the transition from on-premises applications to SaaS implies a dramatic reduction (some folk are saying around 80-90%) in the effort required to deploy and maintain a solution.
  • It’s growing as the mid market is becoming a lot more sophisticated and starting to spend a lot more on enterprise software (witness Microsoft Dynamics’ huge market share).
  • It’s shrinking as SaaS is replacing BPO, in effect replacing people with cheaper software solutions. (Remember when TrueAdvantage, an Indian BPO, laid off all 150 of its workers after being purchased by InsideView?)
  • It’s growing as the need for more mobility solutions, and the massive growth in the mobile web, is driving us to create a new generation of enterprise solutions.
  • It’s shrinking as cloud computing and netbooks remove what little margin was left in infrastructure services.
  • It’s growing as investment in IT is a bit like gas, and tends to expand until it consumes all available funds. (Remember integration? As the cost of integration went down, we just found more integration projects to fill the gap.)

Like a lot of these questions, it depends.

Update: Gartner finds that the worldwide IT services market declined 5.3% last year, while Computer World UK tells us to expect another year of decline. How much of this is cyclic, and how much is due to a definition of “services” which could be more inclusive?

Updated: It appears that some organisations are not happy with the size and dominance of the IT services industry.

With cloud computing, the world is not flat

Does location matter? Or, put another way, is the world no longer flat? Many cloud and SaaS providers work under the assumption that we store data wherever it is most efficient from an application-performance point of view, ignoring political considerations. This runs counter to the many companies and governments who care greatly about where their data is stored. Have we entered a time where location does matter, not for technical reasons, but for political ones? Is globalisation (as a political thing) finally starting to impact IT architecture and strategy?

Just who is taking your order?

Thomas Friedman‘s book, The World is Flat, contained a number of stories which were real eye-openers. The one I remember the most was the McDonald’s drive-through. The idea was simple: once you’ve removed direct physical contact from the ordering process, it’s more efficient to accept orders from a contact centre than from within the restaurant itself. We could even locate that contact centre in a cheaper geography, such as another state or even another country.

Telecommunications made the world flat: cheap communications allows us to locate work wherever it is cheapest. The opportunity for labour arbitrage this created drove offshoring through the late nineties and into the new millennium. Everything from call centres to tax returns and medical image diagnosis started to migrate to cheaper geographies. Competition to be the cheapest and most efficient service provider, rather than location, came to determine who did the work. The entire world would compete on a level playing field.

In the background, whilst this was happening, enterprise applications went from common to ubiquitous. Adoption was driven by the productivity benefits the applications brought, which started out as a source of differentiation but have now become one of the many requirements of being in business. SaaS and cloud are the most recent step in this evolution, leveraging the global market to create solutions operating at such massive scale that they can provide price points and service levels which are hard, if not impossible, for most companies to achieve internally.

The growth of the U.S. enterprise application market (via INPUT)

Despite the world having been laser-levelled within an inch of its life, many companies are finding it difficult to move their operations to the cost-effective nirvana that is cloud and SaaS. Location matters, it seems; not for technical reasons, but for political ones.

Where we store our assets is important. Organisations want to put their assets somewhere safe, because without their assets these organisations don’t amount to much. Companies want to keep their information, their confidential trade secrets, hidden from prying eyes. Governments need to ensure they have the trust of their citizens by respecting their privacy. (Not to mention the skullduggery that is international relations.) While communications technology has made it incredibly easy to move this information around and keep it secure, it has yet to solve the political problem of ensuring that we can trust the people responsible for safeguarding our assets. And all the applications we have created, whether traditional on-premises, hosted, SaaS or cloud, are really just asset management tools.

We’ve reached a point where one of the larger hidden assumptions of enterprise applications has been exposed. Each application was designed to live and operate within a single organisation. This organisation might be a company, or it might be a country, or it might be some combination of the two. The application you select to manage your data determines the political boundary it lives within. If you use any U.S. SaaS or cloud provider to manage your data, then your data falls under U.S. judicial discovery laws, regardless of where you yourself are located. If your data transits through the U.S., then assume that the U.S. government has a copy. The world might be flat, but where you store your assets, and where you send them, still matters.

Global data protection heat map (via Forrester): country-specific regulations governing privacy and data protection vary greatly.

We can already see some moves by the vendors to address this problem. Microsoft, for example, has developed a dedicated cloud for the U.S. government, known as BPOS Federal, which is designed to meet the government’s stringent security and privacy standards. Amazon, for similar reasons, has dedicated a portion of its cloud to the EU and located it there.

If we consider enterprise applications to be asset management tools rather than productivity tools, then ideas like private clouds start to make a lot of sense. Cloud technology reifies in software a lot of the knowledge required to configure and manage a virtualised environment, eliminating the data centre voodoo and empowering development teams to manage the solutions themselves. This makes cloud technology simply a better asset management tool, but we need the freedom to locate the data (and therefore the application) where it makes the most sense from an asset management point of view. Sometimes this might imply a large, location-agnostic, public cloud. Other times it might require a much smaller private cloud located within a specific political boundary. (And the need to prevent some data even transiting through a few specific geographies, requiring us to move the code to the data rather than the data to the code, might be the killer application that mobile agents have been waiting for.)

What we really need are meta-clouds: clouds created by aggregating a number of different clouds, just as the Internet is a network of separate networks. While the clouds would all be technically similar, each would be located in a different political geography. This might be inside vs. outside the organisation, or in different states, or even different countries. The data would be stored and maintained where it made the most sense from an asset management point of view, with few technical considerations, the meta-cloud providing a consistent approach to locating and moving our assets within and across individual clouds as we see fit.
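Here’s a minimal sketch of the meta-cloud idea, with placement decided by asset-management policy rather than by performance. The cloud names, jurisdictions and policy rules are all invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class Cloud:
    name: str
    jurisdiction: str  # the political boundary the cloud lives within
    internal: bool     # inside or outside the organisation

# A meta-cloud: technically similar clouds in different political geographies.
META_CLOUD = [
    Cloud("public-us", jurisdiction="US", internal=False),
    Cloud("public-eu", jurisdiction="EU", internal=False),
    Cloud("private-au", jurisdiction="AU", internal=True),
]

def place(allowed_jurisdictions: set, must_stay_internal: bool) -> Cloud:
    """Pick the first member cloud satisfying the placement policy for a
    dataset; fail loudly if no member of the meta-cloud qualifies."""
    for cloud in META_CLOUD:
        if must_stay_internal and not cloud.internal:
            continue
        if cloud.jurisdiction in allowed_jurisdictions:
            return cloud
    raise LookupError("no cloud satisfies the placement policy")

# EU customer data that may not leave the EU lands on the public EU cloud;
# commercially sensitive Australian data stays on the private cloud.
print(place({"EU"}, must_stay_internal=False).name)  # public-eu
print(place({"AU"}, must_stay_internal=True).name)   # private-au
```

The point of the sketch is that placement becomes a policy lookup rather than an architecture change: the same workload can land on any member cloud the policy allows.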

Innovation [2010-03-01]

Another week and another collection of interesting ideas from around the internet.

As always, thoughts and/or comments are greatly appreciated.

Renovation

I’ve done a bit of spring cleaning of the blog on a quiet Sunday afternoon (plus the kids are monopolising the Wii, so I can’t play New Super Mario Bros).

There’s more to do, but the big change is to gather some of the article threads into categories. A couple of posts seem to have taken on a life of their own, and the resultant ping-pong between this blog and others has generated some interesting narratives on a couple of topics. Rather than leave them hidden in the threads, I’ve created a Focus category, and started to collect each thread in a sub-category.

So far:

  • The Value of Information. Starting with the simple observation that when we get information has as much impact as what we get, this thread generated some nice thoughts on how we might use information to create a more dynamic enterprise.
  • The Art of Random. Triggered by an invitation to present at InnoFuture (which unfortunately didn’t eventuate), I used the content to create a series of posts (the outline for the preso was around six pages, so it would have been too much for one post). It covers the idea that innovation seems random due to the simple fact that you are not aware of the intervening steps from interesting problem through to novel solution.

There’s also a placeholder for Knowledge Worker of the Future, but more on that later.

Oh, and my favourite flying car is now in the header. Next I need to sort out the CSS colours to match.

What is the role of government in a Web 2.0 world?

What will be the role of government in a post-Web 2.0 world? I doubt it’s what a lot of us predict, given society’s poor track record in predicting its own future.

One thing I am reasonably sure of though, is that this future won’t represent the open source nirvana that some pundits hope for. When I’ve ruminated in the past about the changing role of government, I’ve pointed out that attempting to create the future by dictate is definitely not the right approach. As I said then:

You don’t create peace by starting a war, and nor do you create open and collaborative government through top down directives. We can do better.

There was an excellent article by Nat Torkington, Rethinking open data, posted over at O’Reilly Radar, which shows this in action. As it points out, the U.S. Open Government Directive has prompted datasets of questionable value to be added to data.gov, while many of the applications are developed because they are easy to build, rather than because they provide any tangible benefit. Many of the large infrastructure projects commissioned in the name of open data suffered the same fate as large, unjustified infrastructure projects in private enterprise (i.e. they’re hard for the layman to understand, they have scant impact on solving the problems society seems plagued with, and they’re overly complex to deliver and use due to technological and political purism).

A more productive approach is to focus on solving problems that we, the populace, actually care about. In Australia this might involve responding to the bush fire season. California has a similar problem. The recent disaster in Haiti was another significant call to action, and it was great to see the success that was Web 2.0 in Haiti (New Scientist had an excellent article).

As Nat Torkington says:

the best way to convince them to open data is to show an open data project that’s useful to real people.

Which makes me think: government is a tool for us to work together, not the enemy to subdue. Why don’t we move government on from service provider of last resort, the role it seems to play today?

Haiti showed us that some degree of centralisation is required to make these efforts work efficiently. A logical role for government going forward would be something like a market maker: connecting people who need services with the organisations providing them, and working to ensure that the market remains liquid. Government becomes the trusted party that ensures there are enough service providers to meet demand, possibly even bundling services to provide solutions to life’s more complex problems.

We’ve had the public debate on whether or not government should own assets (bridges, power utilities etc.), and the answer was generally not. Government provision of services is well down a similar road. This frees up dedicated and hard working public servants (case workers, forestry staff, policy wonks …) to focus on the harder problem of determining what services should be provided.

Which brings me back to my original point. Why are we trying to drive government, and society in general, toward a particular imagined future of our choosing (one involving Open Government Directives, and complicated and expensive RDF infrastructure projects)? We can use events like the bush fires and Haiti to form a new working relationship. Let’s pick hard but tractable problems and work together to find solutions. As Nat (again) points out, there’s a lot of data in government that public servants are eager to share, if we just give them a reason. And if our efforts deliver tangible benefits, then everyone will want to come along for the ride.

Updated: The reports are in: data.gov has quality issues. I’ve updated the text with the following references.

Updated: More news on data.gov’s limitations highlighting the problems with a “push” model to open government.

Innovation [2010-02-01]

Another week and another collection of interesting ideas from around the internet.

As always, thoughts and/or comments are greatly appreciated.

Reducing costs is not the only benefit of cloud computing & SaaS

The wisdom of the crowd seems to have decided that both cloud computing and its sibling SaaS are cost plays. You engage a cloud or SaaS vendor to reduce costs, as their software utility has the scale to deliver the same functionality at a lower price point than you could achieve yourself.

I think this misses some of the potential benefits that these new delivery models can provide, from reducing your management overhead, allowing you to focus on more important or pressing problems, through to acting as a large flex resource or providing you with a testbed for innovation. In an environment where we’re all racing to keep up, the time and space we can create through intelligently leveraging cloud and SaaS solutions could provide us with the competitive advantage we need.

Samuel Insull

Cloud and SaaS are going to take over the world, or so I hear. And it increasingly looks that way, from Nicholas Carr‘s entertaining stories about Samuel Insull through to Salesforce.com, Google and Amazon‘s attempts to box up SaaS and cloud for easy consumption. These companies’ massive economies of scale enable them to deliver commoditised functionality at a dramatically lower price point than most companies could achieve with even the best on-premises applications.

This simple fact causes many analysts to point out the folly of creating a private cloud. While a private cloud enables a company to avoid the security and ownership issues associated with a public service, it will never be able to realise the same economies of scale as its public brethren. It’s these economies of scale that enable companies like Google to devote significant time and effort to finding new and ever more creative techniques to extract every last drop of efficiency from their data centres, techniques which give them a competitive advantage.

I’ve always had problems with this point of view, as it ignores one important fact: a modern IT estate must deliver more than efficiency. Constant and dramatic business change means that our IT estate must be able to be rapidly reconfigured to support an ever-evolving business environment. This might be as simple as scaling up and down, in line with changing transaction volumes, but it might also involve rewriting business rules and processes as the organisation enters and leaves countries with differing regulatory regimes, as well as adapting to mergers, acquisitions and divestments.

Once we look beyond cost, a few interesting potential uses for cloud and SaaS emerge.

First, we can use cloud as a tool to increase the flexibility of our IT estate. Using a standard cloud platform, such as an Amazon Machine Image, provides us with more deployment options than more traditional approaches. Development and testing can be streamlined, compressing development and testing time, while deployed applications can be migrated to the cloud instance which makes the most sense. We might choose to use public cloud for development and testing, while deploying to a private cloud under our own control to address privacy or political concerns. We might develop, test and deploy all into the public cloud. Or we might even use a hybrid strategy, retaining some business functionality in a private cloud, while using one or more public clouds as a flex resource to cope with peak loads.
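As a toy illustration of that last hybrid strategy, here’s a sketch of routing load between a private cloud and a public overflow. The capacity figure is invented; a real implementation would sit behind a load balancer and provision public instances on demand.

```python
PRIVATE_CAPACITY = 100  # transactions/second the private cloud can absorb

def route(load: int) -> dict:
    """Split incoming load between the private cloud and the public
    flex resource, bursting only the overflow out."""
    private = min(load, PRIVATE_CAPACITY)
    public = max(0, load - PRIVATE_CAPACITY)
    return {"private": private, "public": public}

print(route(60))   # a quiet day: {'private': 60, 'public': 0}
print(route(250))  # month-end peak: {'private': 100, 'public': 150}
```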

Second, we can use cloud and SaaS as tools to increase the agility of our IT estate. By externalising the management of our infrastructure (via cloud), or even the management of entire applications (via SaaS), we can create time and space to worry about more important problems. This enables us to focus on what needs to happen, rather than how to make it happen, and rely on the greater scale of our SaaS or cloud provider to respond more rapidly than we could if we were maintaining a traditional on-premises solution.

And finally, we can use cloud as the basis of an incubator strategy where an organisation may test a new idea using externalised resources, proving the business case before (potentially) moving to a more traditional internal deployment model.

One problem I’ve been thinking about recently is how to make our incredibly stable and reliable IT estates respond better to business change. Cloud and SaaS, with the ability to shape the flexibility and agility of our IT estate to meet what the business needs, might just be the tools we need to do this.