
Decisions are more important than data

Names and categories are important. Just look at the challenges faced by the archeology community as DNA evidence forces history to be rewritten when it breaks old understandings, changing how we think and feel in the process. Just who invaded whom? Or was related to whom?

We have the same problem with (enterprise) technology: how we think about the building blocks of the IT estate has a strong influence on how we approach the problems we need to solve. Unfortunately our current taxonomy has a very functional basis, rooted as it is in the original challenge of creating the major IT assets we have today. This is a problem, as it’s preventing us from taking full advantage of the technologies available to us. If we want to move forward, creating solutions that will thrive in a post-GFC world, then we need to think about enterprise IT in a different way.

Enterprise applications – the applications we often know and love (or hate) – fall into a few distinct types. A taxonomy, if you will. This taxonomy has a very functional basis, founded as it is on the challenge of delivering high performance and stable solutions into difficult operational environments. Categories tend to be focused on the technical role a group of assets have in the overall IT estate. We might quibble over the precise number of categories and their makeup, but for the purposes of this argument I’m going to go with three distinct categories (plus another one).

SABER @ American Airlines

First, there are the applications responsible for data storage and coherence: the electronic filing cabinets that replaced rooms full of clerks and accountants back in the day. From the first computerised general ledger through to CRM, their business case is a simple one of automating paper shuffling: put the data in one place and make access quick and easy, like SABER did, which I’ve mentioned before.

Next are the data transformation tools: applications which take a bunch of inputs and generate an answer. This might be a plan (production plan, staffing roster, transport planning or supply chain movements …) or a figure (price, tax, overnight interest calculation). State might be stored somewhere else, but these solutions still need some serious computing power to cope with huge bursts in demand.

Third is data presentation: taking corporate information and presenting it in some form that humans can consume (though looking at my latest phone bill, there’s no attempt to make the data easy to consume). This might be billing or invoicing engines, application-specific GUIs, or even portals.

We can also typically add one more category – data integration – though this is mainly the domain of data warehouses: solutions that pull together data from multiple sources to create a summary view. This category of solutions wouldn’t exist were it not for the fact that our operational data management solutions can’t cope with an additional reporting load. This is also the category for all those XLS spreadsheets that spread through business like a virus, as high integration costs or more important projects prevent us from supporting user requests.

A long time ago we’d bake all these layers into the one solution. SABER, I’m sure, did a bit of everything, though its main focus was data management. Client-server changed things a bit by breaking the user interface away from back-end data management, and then portals took this a step further. Planning tools (and other data transformation tools) started as modules in larger applications, eventually popping out as stand-alone solutions when they grew large enough (and complex enough) to justify their own delivery effort. Now we have separate solutions in each of these categories, and a major integration problem.

This categorisation creates a number of problems for me. First and foremost is the disconnection between what business has become, and what technology is trying to be. Back in the day when “computer” referred to someone sitting at a desk computing ballistics tables, we organised data processing in much the same way that Henry Ford organised his production line. Our current approach to technology is simply the latest step in the automation of this production line.

Computers in the past

Quite a bit has changed since then. We’ve reconfigured our businesses, we’re reconfiguring our IT departments, and we need to reconfigure our approach to IT. Business today is really a network of actors who collaborate to make decisions, with most (if not all) of the heavy data lifting done by technology. Retail chains are trying to reduce the transaction load on their team working the tills so that they can focus on customer relationships. The focus in supply chains is on ensuring that your network of exception managers can work together to effectively manage disruptions. Even head office is focused on understanding and responding to market changes, rather than trying to optimise the business for an unchanging market.

The moving parts of business have changed. Henry Ford focused on mass: the challenge of scaling manufacturing processes to get cost down. We’ve moved well beyond mass, through velocity, to focus on agility. A modern business is a collection of actors collaborating and making decisions, not a set of statically defined processes backed by technology assets. Trying to force modern business practices into yesterday’s IT taxonomy is the source of one of the disconnects between business and IT that we complain so much about.

There’s no finer example of this than Sales and Operations Planning (S&OP). What should be a collaborative and fluid process – forward planning among a network of stakeholders – has been shoehorned into a traditional n-tier, database-driven, enterprise solution. While an S&OP solution can provide significant cost savings, many companies find it too hard to fit themselves into the solution. It’s not surprising that S&OP has a reputation for being difficult to deploy and use, with many planners preferring to work around the system rather than with it.

I’ve been toying with a new taxonomy for a little while now, one that tries to reflect the decision-, actor- and collaboration-centric nature of modern business. Rather than fit the people to the factory, which was the approach during the industrial revolution, the idea is to fit the factory to the people, which is the approach we use today post-LEAN and flexible manufacturing. While it’s a work in progress, it still provides a good starting point for discussions on how we might use technology to support business in the new normal.

In no particular order…

Fusion solutions blend data and process to create a clear and coherent environment to support specific roles and decisions. The idea is to provide the right data and process, at the right time, in a format that is easy to consume and use, to drive the best possible decisions. This might involve blending internal data with externally sourced data (potentially scraped from a competitor’s web site); whatever data is required. Providing a clear and consistent knowledge-work environment, rather than the siloed and portaled environment we have today, will improve productivity (more time on work that matters, and less time on busy work) and efficiency (fewer mistakes).

Next, decisioning solutions automate key decisions in the enterprise. These decisions might range from mortgage approvals, through office work such as logistics exception management, to supporting knowledge workers in the field. We also need to acknowledge that decisions are often decision-making processes which require logic (rules) applied over a number of discrete steps (processes). This should not be seen as replacing knowledge workers; a more productive approach is to view decision automation as a way of amplifying our users’ talents.
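To make the rules-over-steps idea concrete, here’s a minimal sketch of a decisioning pipeline. The example, the field names and the 30% serviceability threshold are all illustrative assumptions, not any real lending policy; the point is the shape: discrete steps (the process), each applying simple logic (the rules), with anything unclear escalated to a human so the automation amplifies the knowledge worker rather than replacing them.

```python
# Sketch of a decisioning pipeline: discrete steps, each applying a rule,
# with an escalation path to a human decision-maker.
# All field names and thresholds are illustrative assumptions.

def check_identity(application):
    # Rule: the applicant's identity must already be verified.
    return application.get("identity_verified", False)

def check_serviceability(application):
    # Rule: repayments must stay under 30% of income (assumed threshold).
    return application["monthly_repayment"] <= 0.30 * application["monthly_income"]

def assess(application):
    # The decision is a process: rules applied over discrete steps.
    steps = [check_identity, check_serviceability]
    for step in steps:
        if not step(application):
            return "escalate"  # route the exception to a knowledge worker
    return "approve"

print(assess({"identity_verified": True,
              "monthly_income": 8000,
              "monthly_repayment": 2000}))  # approve
```

Anything the rules can’t confidently approve falls out of the automated path, which is where the human talent is spent.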

While we have a lot of information, some of it will need to be manufactured ourselves. These manufacturing solutions might range from simple charts generated from tabular data, through to logistics plans or maintenance schedules, or even payroll.

Information and process access solutions provide stakeholders (both people and organisations) with access to our corporate services. This is not your traditional portal or web-based GUI, as the focus will be on providing stakeholders with access wherever and whenever they need it, on whatever device they happen to be using. This might mean embedding your content into a Facebook app, rather than investing in a strategic portal infrastructure project. Or it might involve developing a payment gateway.

Finally we have asset management, responsible for managing your data as a corporate asset. This looks beyond the traditional storage and consistency requirements of existing enterprise applications to include the political dimension, accessibility (I can get at my data whenever and wherever I want to) and stability (earthquakes, disaster recovery and the like).

It’s interesting to consider the sort of strategy a company might use around each of these categories. Manufacturing solutions – such as crew scheduling – are very transactional: old data in, new data out. This makes them easily outsourced, or run as a bureau service. Asset management solutions map very well to SaaS: commoditised, simple and cost effective. Access solutions are similar to asset management.

Fusion and decisioning solutions are interesting. The complete solution is difficult to outsource. For many fusion solutions, the data and process set presented to knowledge workers will be unique and will change frequently, while decisioning solutions contain decisions which can represent our competitive advantage. On the other hand, it’s the intellectual content in these solutions, and not the platform, which makes them special. We could sell our platform to our competitors, or even use a commonly available SaaS platform, and still retain our competitive advantage, as the advantage is in the content, while our barrier to competition is the effort required to recreate the content.

This set of categories seems to map better to where we’re going with enterprise IT at the moment. Consider the S&OP solution I mentioned before. Rather than construct a large, traditional, data-centric enterprise application and change our work practices to suit, we break the problem into a number of mid-sized components and focus on driving the right decisions: fusion, decisioning, manufacturing, access, and asset management. Our solution strategy becomes more nuanced, as our goal is to blend components from each category to provide planners with the right information at the right time, enabling them to make the best possible decision.

After all, when the focus is on business agility, and when we’re drowning in a sea of information, decisions are more important than data.

We need a better definition for “mash-up”

Mash-up no longer seems to mean what we thought it meant. The term has been claimed by the analysts and platform vendors as shorthand for the current collection of hot product features, and no longer represents the goals and benefits of those original mash-ups that drew our interest. If we want to avoid the hype, firmly tying mash-up to the benefits we saw in those first solutions, then we need to reclaim the term, basing its definition on the outcomes those first mash-up solutions delivered, rather than the (fairly) conventional means used to deliver them.

Definitions are a good thing, as they help keep us all on the same page and make conversations easier. However, what often starts out as a powerful concept—with a clear value proposition—is rapidly diluted as the original definition gets pulled in different directions.

Over time, the foundation of a term’s definition moves from the outcome it represents (and the benefits this outcome provides), coming to rest on the means by which the original outcome was delivered, driven by everyone’s desire to define what they are doing in relation to the current hot topic. Next, the people who consider it to be just a means often start redefining the meaning to make it more inclusive, while continuing to claim the original benefits. We end up selling the new hype as either means or goals or any half-hearted solution in between – and missing the original outcome nearly completely.

The original mash-ups were simple things: pulling together data from two or more sources to create a new consolidated view. Think push-pins on a map. Previously I would have had to access these data sources separately—find, select, remember, find, select correlation, click. With the mash-up this multi-step, multi-decision workflow is reduced to a single look, select, click. Many decisions became one, and I was no longer forced to remember intermediate steps or data.

It was this elimination of unnecessary decisions that first attracted many of us to the idea of a mash-up. As TQM, LEAN, et al. tell us, unnecessary decisions are a source of errors. If we want to deliver high quality at a low cost (i.e. efficient and effective knowledge workers) then we need to eliminate these decisions. This helps us become more productive by spending a greater proportion of our time on the decisions that really matter, rather than on messy busy work. Fewer decisions also mean fewer chances for mistakes.

Since those original mash-up solutions, our definition of mash-up has evolved. Today’s definitions are founded on the tools and techniques used to deliver a modern web-based GUI. They focus on the technology: where the data is processed (client vs. server), the standards and APIs used, and even the application architectures involved. Rarely do they talk about the outcome delivered, or the benefits this brings.

There’s little difference, for example, between some mashups and a modern portal. We can debate the differences between aggregating data on the client vs. the server, but does it really matter if it doesn’t change the outcome, and the difference is invisible to the user? The same can be said for the use of standards, APIs used, user configuration options, differing solution architectures and so on.

The shift to a feature-function based definition has allowed the product vendors and analysts to seize control of our definition, and apply it to the next generation of products they would like us to buy. This has diluted the term to the point that it seems to cover much of what we’ve been doing for the last decade, and many of the benefits ascribed to the original mash-ups don’t apply to solutions which fit under this new, broader church.

Modern consumer home pages, such as iGoogle and NetVibes, do allow us to use desk and screen real estate more effectively–providing a small productivity boost–but they don’t address the root of the problem. Putting two gadgets on a page does little to fuse the data. The user is still required to scan the CRM and order management gadgets separately, fusing the data in their head. Find, select, remember, find, select correlation, click rather than a single look, select, click.

The gadgets might be visually proximate, but we could do that with two browser windows. Or two green screens side by side. The user is still required to look at both, and establish the correlation themselves. The chair might not swivel as much as with old-school portlets, but eyeballs still do, and we are still forcing the user to make unnecessary decisions about data correlation. These pages don’t deliver the elimination of unnecessary decisions that first attracted us to mash-ups.

The gold standard we need to measure potential mash-ups against is the melding of data used to eliminate unnecessary decisions. This might be something visual, like push-pins on a map or markup on an x-ray. Or it might cover tabular data, where different cells in the table are sourced from different back-end systems (a single customer view generated at the user interface). If we fuse the data, building new gadgets which pull data attributes and function into one consistent view, then we eliminate these decisions. We can even extend this to function, allowing the user to trigger a workflow or process that makes sense in the view they are presented with, but with no knowledge of what implements the workflow, or where.
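The single-customer-view case above can be sketched in a few lines. This is only an illustration under assumed data: the two back-end systems (a CRM and an order management system), the record layouts and the customer id are all hypothetical. The point is that the fusion happens before the user sees anything, so they read one row instead of correlating two gadgets by eye.

```python
# Illustrative sketch: fusing one row of a single customer view from two
# hypothetical back-end systems, so the correlation is done for the user.
# Both data sets and all field names are assumptions for the example.

crm = {"C042": {"name": "Jan Smith", "segment": "Gold"}}          # CRM system
orders = {"C042": {"open_orders": 3, "last_order": "2010-06-01"}}  # order management

def fused_customer_view(customer_id):
    # Build one consolidated record: different "cells" of the row come
    # from different back-end systems, fused at the point of presentation.
    view = {"customer_id": customer_id}
    view.update(crm.get(customer_id, {}))
    view.update(orders.get(customer_id, {}))
    return view

row = fused_customer_view("C042")
print(row["name"], row["segment"], row["open_orders"])  # Jan Smith Gold 3
```

The user gets a single look, select, click; which system supplied which attribute is invisible, exactly as it should be.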

We need a definition for mash-ups that captures this outcome. Something like:

A mash-up is a user interface, or user interface element, that melds data and function from multiple sources to create one single, seamless view of a topic, eliminating unnecessary decisions and actions.

This v0.1 definition is nice, terse and strong, and we can hang a number of concrete benefits from it.

  • More productive knowledge workers. Knowledge workers spend their time on the decisions that really matter, rather than on messy busy work.
  • More effective knowledge workers. Fewer decisions mean fewer chances for mistakes, reducing the cost of error recovery and rework.

Posted via email from PEG @ Posterous