
We need a better definition for “mash-up”

Mash-up no longer seems to mean what we thought it meant. The term has been claimed by analysts and platform vendors as shorthand for the current collection of hot product features, and no longer represents the goals and benefits of the original mash-ups that drew our interest. If we want to avoid the hype, firmly tying mash-up to the benefits we saw in those first solutions, then we need to reclaim the term, basing its definition on the outcomes those first mash-up solutions delivered rather than the (fairly) conventional means used to deliver them.

Definitions are a good thing, as they help keep us all on the same page and make conversations easier. However, what often starts out as a powerful concept—with a clear value proposition—is rapidly diluted as the original definition gets pulled in different directions.

Over time, the foundation of a term’s definition shifts from the outcome it represents (and the benefits that outcome provides) to the means by which the original outcome was delivered, driven by everyone’s desire to define what they are doing in relation to the current hot topic. Next, those who consider it to be just a means often redefine the term to make it more inclusive, while continuing to claim the original benefits. We end up selling the new hype as means, as goals, or as any half-hearted solution in between, missing the original outcome almost completely.

The original mash-ups were simple things: pulling together data from two or more sources to create a new, consolidated view. Think push-pins on a map. Previously I would have had to access these data sources separately—find, select, remember, find, select correlation, click. With the mash-up, this multi-step, multi-decision workflow is reduced to a single look, select, click. Many decisions became one, and I was no longer forced to remember intermediate steps or data.
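The push-pins-on-a-map idea can be sketched in a few lines. This is a minimal, hypothetical example—the store list, order counts, and field names are all invented—showing two sources fused into a single list of pins, so the user correlates nothing by hand:

```python
# Two separate, hypothetical data sources: locations from one system,
# today's order counts from another.
stores = [
    {"name": "Downtown", "lat": 40.71, "lon": -74.00},
    {"name": "Uptown", "lat": 40.78, "lon": -73.96},
]
todays_orders = {"Downtown": 12, "Uptown": 3}

# One pass produces a single consolidated view: a list of labelled pins.
# The multi-step find/select/remember workflow collapses into look, select, click.
pins = [
    {"label": f"{s['name']} ({todays_orders.get(s['name'], 0)} orders)",
     "position": (s["lat"], s["lon"])}
    for s in stores
]

for pin in pins:
    print(pin["label"], pin["position"])
```

The point is not the map rendering but the fusion step: the correlation between the two sources happens once, in code, rather than repeatedly in the user’s head.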

It was this elimination of unnecessary decisions that first attracted many of us to the idea of a mash-up. As TQM, Lean, et al. tell us, unnecessary decisions are a source of errors. If we want to deliver high quality at low cost (i.e. efficient and effective knowledge workers) then we need to eliminate these decisions. This helps us become more productive by spending a greater proportion of our time on the decisions that really matter, rather than on messy busywork. Fewer decisions also mean fewer chances for mistakes.

Since those original mash-up solutions, our definition of mash-up has evolved. Today’s definitions are founded on the tools and techniques used to deliver a modern web-based GUI. They focus on technology standards and APIs, where the data is processed (client vs. server), and even the application architectures used. Rarely do they talk about the outcome delivered, or the benefits it brings.

There’s little difference, for example, between some mash-ups and a modern portal. We can debate the differences between aggregating data on the client vs. the server, but does it really matter if it doesn’t change the outcome, and the difference is invisible to the user? The same can be said for the use of standards, APIs, user configuration options, differing solution architectures and so on.

The shift to a feature-and-function-based definition has allowed product vendors and analysts to seize control of our definition, and apply it to the next generation of products they would like us to buy. This has diluted the term to the point where it covers much of what we’ve been doing for the last decade, and many of the benefits ascribed to the original mash-ups don’t apply to solutions that fit under this new, broader church.

Modern consumer home pages, such as iGoogle and NetVibes, do allow us to use desk and screen real estate more effectively—providing a small productivity boost—but they don’t address the root of the problem. Putting two gadgets on a page does little to fuse the data. The user is still required to scan the CRM and order management gadgets separately, fusing the data in their head: find, select, remember, find, select correlation, click rather than a single look, select, click.

The gadgets might be visually proximate, but we could achieve that with two browser windows, or two green screens side by side. The user is still required to look at both, and establish the correlation themselves. The chair might not swivel as much as with old-school portlets, but eyeballs still do, and we are still forcing the user to make unnecessary decisions about data correlation. These pages don’t deliver the elimination of unnecessary decisions that first attracted us to mash-ups.

The gold standard we need to measure potential mash-ups against is the melding of data to eliminate unnecessary decisions. This might be something visual, like push-pins on a map or markup on an x-ray. Or it might cover tabular data, where different cells in the table are sourced from different back-end systems (a single customer view generated at the user interface). If we fuse the data, building new gadgets that pull data attributes and function into one consistent view, then we eliminate these decisions. We can even extend this to function, allowing the user to trigger a workflow or process that makes sense in the view they are presented with, but with no knowledge of what implements the workflow, or where.
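The single customer view can be sketched as follows. This is an illustrative assumption, not a real integration: both “systems” are stand-in dictionaries, and the customer ID and field names are invented. The point is that different cells of the one row come from different back-end systems, and the melding happens before the user sees anything:

```python
# Hypothetical back-end systems, represented here as plain dictionaries.
# In practice these would be calls to the CRM and order management services.
crm = {"C042": {"name": "Acme Pty Ltd", "segment": "Wholesale"}}
order_mgmt = {"C042": {"open_orders": 4, "last_order": "2009-11-02"}}

def single_customer_view(customer_id):
    """Meld CRM and order data into one row, so the user never has to
    correlate two gadgets (or two green screens) in their head."""
    profile = crm.get(customer_id, {})
    orders = order_mgmt.get(customer_id, {})
    # One seamless record: each field still originates in a different system,
    # but that fact is invisible at the user interface.
    return {"customer_id": customer_id, **profile, **orders}

print(single_customer_view("C042"))
```

The same pattern extends to function: a “place repeat order” action on this row could dispatch to whichever system implements it, without the user knowing or caring which one that is.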

We need a definition for mash-ups that captures this outcome. Something like:

A mash-up is a user interface, or user interface element, that melds data and function from multiple sources to create one single, seamless view of a topic, eliminating unnecessary decisions and actions.

This v0.1 definition is nice, terse and strong, and from it we can hang a number of concrete benefits.

  • More productive knowledge workers. Our knowledge workers spend their time only on the decisions that really matter, rather than on messy busywork, making them more productive.
  • More effective knowledge workers. Fewer decisions mean fewer chances for mistakes, reducing the cost of error recovery and rework, and resulting in more effective knowledge workers.

Posted via email from PEG @ Posterous

Balancing our two masters

We seem to be torn between two masters. On one hand we’re driven to renew our IT estate, consolidating solutions to deliver long term efficiency and cost savings. On the other hand, the business wants us to deliver new, end user functionality (new consumer kiosks, workforce automation and operational excellence solutions …) to support tactical needs. But how do we balance these conflicting demands, when our vertically integrated solutions tightly bind user interaction to the backend business systems and their multi-year life-cycle? We need to decouple the two, breaking the strong connection between business system and user interface. This will enable us to evolve them separately, delivering long term savings while meeting short term needs.

Business software’s proud history is the story of managing the things we know. From the first tabulation systems through enterprise applications to modern SaaS solutions, the majority of our effort has been focused on data: capturing or manufacturing facts, and pumping them around the enterprise.

We’ve become so adept at delivering these IT assets into the business that most companies’ IT estates are populated with an overabundance of solutions: many good, some not so good, and many redundant or overlapping. Gardening our IT estate has become a major preoccupation, as we work to simplify and streamline our collection of applications to deliver cost savings and operational improvements. These efforts are often significant undertakings, with numbers like “5 years” and “$50 million” not uncommon.

While we’ve become quite sophisticated at delivering modular business functionality (via methods such as SOA), our approach to supporting users is still dominated by a focus on isolated solutions. Most user interfaces are slapped on almost as an afterthought, providing stakeholders with a means to interact with the vast data-processing monsters we create. Tightly coupled to the business system (or systems) they are deployed with, these user interfaces are restricted to evolving at a similar pace.

Business has changed while we’ve been honing our application development skills. What used to take years now takes months, if not weeks. What used to make sense now seems confusing. Business is often left waiting while we catch up, working to improve our IT estate to the point that we can support its demands for new consumer kiosks, solutions to support operational excellence, and so on.

What was one problem has now become two. We solved the first-order challenge of managing the vast volumes of data an enterprise contains, only to unearth a second challenge: delivering the right information, at the right time, so that users can make the best possible decision. Tying user interaction to the back-end business systems forces our solutions to these two problems to evolve at a similar pace. If we break this connection, we can evolve user interfaces at a more rapid pace—a pace more in line with business demand.

We’ve been chipping away at this second problem for quite a while. Our first green-screen and client-server solutions were overtaken by portals, which promised to solve the problem of swivel-chair integration. However, portals seem to have been defeated by browser tabs. While portals allowed us to bring together the screens from a collection of applications, providing a productivity boost by reducing the number of interfaces a user interacted with, they didn’t break the user interface’s explicit dependency on the back-end business systems.

We need a modular approach to composing new, task-focused user interfaces, doing for user interfaces what SOA has done for back-end business functionality. The view users see should be focused on supporting the decision they are making: data and function sourced from multiple back-end systems, broken into reusable modules and mashed together, creating an enterprise mash-up—a mash-up spanning multiple screens to fuse both data and process.
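One way to picture this modular composition is a registry of reusable view modules, each backed by a different system, composed into a task-focused view. This is a rough sketch under invented names—the registry, the modules, and the context fields are all hypothetical, and a real enterprise mash-up would bind UI components to services rather than functions to dictionaries:

```python
# A registry of reusable view modules. Each module is a small, named unit
# that knows how to render one slice of data; which back-end system feeds
# it is an implementation detail hidden from the composed view.
modules = {}

def module(name):
    """Register a view module under a name (hypothetical helper)."""
    def register(fn):
        modules[name] = fn
        return fn
    return register

@module("customer_summary")   # imagined as backed by the CRM
def customer_summary(ctx):
    return f"Customer: {ctx['customer']}"

@module("open_orders")        # imagined as backed by order management
def open_orders(ctx):
    return f"Open orders: {ctx['orders']}"

def compose_view(module_names, ctx):
    """Mash the selected modules into one task-focused view; the view
    knows nothing about which back-end system implements each module."""
    return "\n".join(modules[m](ctx) for m in module_names)

print(compose_view(["customer_summary", "open_orders"],
                   {"customer": "Acme", "orders": 4}))
```

Because the modules are decoupled from the systems behind them, the composed view can be rearranged, or a back end swapped out, without the other side noticing—which is exactly the separation of life-cycles the argument above calls for.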

Some users will find little need for an enterprise mash-up—typically those who spend the vast majority of their time working within a single application. Others, who work between applications, will see a dramatic benefit. These are typically the knowledge-rich workers who drive the majority of value in a modern enterprise: the logistics exception managers, who can make the difference between a “best of breed” supply chain and a category-leading one; the call centre operators, whose focus should be on solving the caller’s problem, not on which back-end system might have the data they need; or the field personnel (sales, repairs …) working across a range of systems as they engage with your customers or repair your infrastructure.

By reducing the number of ancillary decisions required, and thereby reducing the number of mistakes made, enterprise mash-ups make knowledge workers more effective. By reducing the need to manually synchronise applications, copying data between them, we make them more efficient.

But more importantly, enterprise mash-ups enable us to decouple development of user interfaces from the evolution of the backend systems. This enables us to evolve the two at different rates, delivering long term savings while meeting short term need, and mitigating one of the biggest risks confronting IT departments today: the risk of becoming irrelevant to the business.