Names and categories are important. Just look at the challenges faced by the archaeology community as DNA evidence forces history to be rewritten when it breaks old understandings, changing how we think and feel in the process. Just who invaded whom? Or was related to whom?
We have the same problem with (enterprise) technology; how we think about the building blocks of the IT estate has a strong influence on how we approach the problems we need to solve. Unfortunately our current taxonomy has a very functional basis, rooted as it is in the original challenge of creating the major IT assets we have today. This is a problem, as it’s preventing us from taking full advantage of the technologies available to us. If we want to move forward, creating solutions that will thrive in a post-GFC world, then we need to think about enterprise IT in a different way.
Enterprise applications – the applications we often know and love (or hate) – fall into a few distinct types. A taxonomy, if you will. This taxonomy has a very functional basis, founded as it is on the challenge of delivering high performance and stable solutions into difficult operational environments. Categories tend to be focused on the technical role a group of assets have in the overall IT estate. We might quibble over the precise number of categories and their makeup, but for the purposes of this argument I’m going to go with three distinct categories (plus another one).
First, there are the applications responsible for data storage and coherence: the electronic filing cabinets that replaced rooms full of clerks and accountants back in the day. From the first computerised general ledger through to CRM, their business case is a simple one of automating paper shuffling. Put the data in one place and make access quick and easy; like SABER did, which I’ve mentioned before.
Next are the data transformation tools: applications which take a bunch of inputs and generate an answer. This might be a plan (production plan, staffing roster, transport planning or supply chain movements …) or a figure (price, tax, overnight interest calculation). State might be stored somewhere else, but these solutions still need some serious computing power to cope with huge bursts in demand.
Third is data presentation: taking corporate information and presenting it in some form that humans can consume (though looking at my latest phone bill, there’s no attempt to make the data easy to consume). This might be billing or invoicing engines, application-specific GUIs, or even portals.
We can also typically add one more category – data integration – though this is mainly the domain of data warehouses: solutions that pull together data from multiple sources to create a summary view. This category of solutions wouldn’t exist were it not for the fact that our operational data management solutions can’t cope with an additional reporting load. This is also the category for all those XLS spreadsheets that spread through businesses like a virus, as high integration costs or more important projects prevent us from supporting user requests.
A long time ago we’d bake all these layers into the one solution. SABER, I’m sure, did a bit of everything, though its main focus was data management. Client-server changed things a bit by breaking the user interface away from back-end data management, and then portals took this a step further. Planning tools (and other data transformation tools) started as modules in larger applications, eventually popping out as stand-alone solutions when they grew large enough (and complex enough) to justify their own delivery effort. Now we have separate solutions in each of these categories, and a major integration problem.
This categorisation creates a number of problems for me. First and foremost is the disconnect between what business has become and what technology is trying to be. Back in the day, when “computer” referred to someone sitting at a desk computing ballistics tables, we organised data processing in much the same way that Henry Ford organised his production line. Our current approach to technology is simply the latest step in the automation of this production line.
Quite a bit has changed since then. We’ve reconfigured our businesses, we’re reconfiguring our IT departments, and we need to reconfigure our approach to IT. Business today is really a network of actors who collaborate to make decisions, with most (if not all) of the heavy data lifting done by technology. Retail chains are trying to reduce the transaction load on their teams working the tills so that they can focus on customer relationships. The focus in supply chains is on ensuring that your network of exception managers can work together to effectively manage disruptions. Even head office is focused on understanding and responding to market changes, rather than trying to optimise the business in an unchanging market.
The moving parts of business have changed. Henry Ford focused on mass: the challenge of scaling manufacturing processes to get costs down. We’ve moved well beyond mass, through velocity, to focus on agility. A modern business is a collection of actors collaborating and making decisions, not a set of statically defined processes backed by technology assets. Trying to force modern business practices into yesterday’s IT taxonomy is the source of one of the disconnects between business and IT that we complain so much about.
There’s no finer example of this than Sales and Operations Planning (S&OP). What should be a collaborative and fluid process – forward planning among a network of stakeholders – has been shoehorned into a traditional n-tier, database-driven, enterprise solution. While an S&OP solution can provide significant cost savings, many companies find it too hard to fit themselves into the solution. It’s not surprising that S&OP has a reputation for being difficult to deploy and use, with many planners preferring to work around the system rather than with it.
I’ve been toying with a new taxonomy for a little while now, one that tries to reflect the decision-, actor- and collaboration-centric nature of modern business. Rather than fit the people to the factory, which was the approach during the industrial revolution, the idea is to fit the factory to the people, which is the approach we use today post-LEAN and flexible manufacturing. While it’s a work in progress, it still provides a good starting point for discussions on how we might use technology to support business in the new normal.
In no particular order…
Fusion solutions blend data and process to create a clear and coherent environment to support specific roles and decisions. The idea is to provide the right data and process, at the right time, in a format that is easy to consume and use, to drive the best possible decisions. This might involve blending internal data with externally sourced data (potentially scraped from a competitor’s web site); whatever data is required. Providing a clear and consistent knowledge work environment, rather than the siloed and portaled environment we have today, will improve productivity (more time on work that matters, and less time on busywork) and efficiency (fewer mistakes).
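To make the fusion idea concrete, here is a minimal sketch of blending an internal data set with an externally sourced one into a single coherent view. All SKUs, prices and field names are fabricated for illustration; a real fusion solution would obviously pull from live systems rather than literals.

```python
# Toy information fusion: blend internal prices with externally sourced
# (e.g. scraped) competitor prices into one view per SKU, making gaps
# on either side explicit rather than hiding them.

internal_prices = {"SKU-1": 19.99, "SKU-2": 45.00}
competitor_prices = {"SKU-1": 18.50, "SKU-3": 30.00}  # stand-in for scraped data

def fuse(internal, external):
    """Produce one coherent record per SKU across both sources."""
    view = {}
    for sku in sorted(set(internal) | set(external)):
        view[sku] = {
            "ours": internal.get(sku),    # None if we don't stock it
            "theirs": external.get(sku),  # None if no external signal
        }
    return view

for sku, row in fuse(internal_prices, competitor_prices).items():
    print(sku, row)
```

The point of the sketch is that the decision-maker sees a single record per item, with the provenance of each value preserved, rather than hopping between silos.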
Next, decisioning solutions automate key decisions in the enterprise. These decisions might range from mortgage approvals, through office work such as logistics exception management, to supporting knowledge workers in the field. We also need to acknowledge that decisions are often decision-making processes which require logic (rules) applied over a number of discrete steps (processes). This should not be seen as replacing knowledge workers; a more productive approach is to view decision automation as a way of amplifying our users’ talents.
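A decisioning solution in this sense can be sketched as rules applied over discrete steps. The example below uses a mortgage approval, since that is the example above; every rule, field name and threshold is invented purely for illustration, and anything the rules can’t approve is referred to a human rather than rejected outright.

```python
# Minimal decisioning sketch: business rules (logic) applied over
# discrete steps (a process). Thresholds are illustrative only.

def check_credit(applicant):
    """Step 1: a simple credit-score rule."""
    return applicant["credit_score"] >= 650

def check_serviceability(applicant):
    """Step 2: repayments must fit within a share of income."""
    return applicant["monthly_repayment"] <= 0.35 * applicant["monthly_income"]

def decide_mortgage(applicant):
    """Run the rules in order; any failure refers the case to a person."""
    steps = [check_credit, check_serviceability]
    for step in steps:
        if not step(applicant):
            return "refer"  # amplify, don't replace, the knowledge worker
    return "approve"

decision = decide_mortgage({
    "credit_score": 700,
    "monthly_income": 8000,
    "monthly_repayment": 2400,
})
print(decision)  # approve
```

The design choice worth noting is the "refer" outcome: the automation handles the commonplace cases and routes the hard ones to an SME, which is the amplification argument in miniature.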
While we have a lot of information, some information we will need to manufacture ourselves. This might range from simple charts generated from tabular data, through to logistics plans, maintenance schedules, or even payroll.
Information and process access provides stakeholders (both people and organisations) with access to our corporate services. This is not your traditional portal or web-based GUI, as the focus will be on providing stakeholders with access wherever and whenever they need it, on whatever device they happen to be using. This might mean embedding your content into a Facebook app, rather than investing in a strategic portal infrastructure project. Or it might involve developing a payment gateway.
Finally we have asset management, responsible for managing your data as a corporate asset. This looks beyond the traditional storage and consistency requirements of existing enterprise applications to include the political dimension, accessibility (I can get at my data whenever and wherever I want) and stability (earthquakes, disaster recovery and the like).
It’s interesting to consider the sort of strategy a company might use around each of these categories. Manufacturing solutions – such as crew scheduling – are very transactional: old data in, new data out. This makes them easily outsourced, or run as a bureau service. Asset management solutions map very well to SaaS: commoditised, simple and cost effective. Access solutions are similar to asset management.
Fusion and decisioning solutions are interesting. The complete solution is difficult to outsource. For many fusion solutions, the data and process set presented to knowledge workers will be unique and will change frequently, while decisioning solutions contain decisions which can represent our competitive advantage. On the other hand, it’s the intellectual content in these solutions, and not the platform, which makes them special. We could sell our platform to our competitors, or even use a commonly available SaaS platform, and still retain our competitive advantage, as the advantage is in the content, while our barrier to competition is the effort required to recreate the content.
This set of categories seems to map better to where we’re going with enterprise IT at the moment. Consider the S&OP solution I mentioned before. Rather than construct a large, traditional, data-centric enterprise application and change our work practices to suit, we break the problem into a number of mid-sized components and focus on driving the right decisions: fusion, decisioning, manufacturing, access, and asset management. Our solution strategy becomes more nuanced, as our goal is to blend components from each category to provide planners with the right information at the right time, enabling them to make the best possible decision.
After all, when the focus is on business agility, and when we’re drowning in a sea of information, decisions are more important than data.
Peter
The power of decisioning applications to encapsulate competitive differentiation and collective wisdom is one of their most important characteristics, as you note. I would add that decisioning applications need logic (business know-how, regulations, policies, experience) to be encapsulated in them but also require analytic insight. The use of data mining and predictive analytics, in particular, allows you to turn the data you have (your record of your historical, collective wisdom in many ways) into usable insight. Decisioning applications need that as well as logic.
JT
I agree on the importance of providing a decision with the right portfolio of information (there's a whole series of posts on this blog on the very topic of the value of information). However, the capabilities offered by data mining and predictive analytics live in the manufacturing category: they manufacture insight from existing, historical data. As Andy Mulholland has been known to say:
Sometimes we need to look out the front or sides. While data mining and predictive analytics are useful tools, sometimes we need to use a different tool from the toolbox. I love the Tesco story as a brilliant example of balancing the information portfolio to optimise a decision.
The role of information fusion is to blend the data from various sources to provide decisioning with the information portfolio it needs, some of which might be manufactured.
Hmm
When I read your definition of fusion it seemed very focused on delivering integrated/coherent information to people to make decisions, and a decisioning app needs more than that: it needs analytic insight that is executable.
Think we are in violent agreement overall!
I expect we are in agreement 🙂
Peter
Great post. I took a different approach, looking at how decisions get made, i.e. by humans or machines. http://blog.kinaxis.com/2010/05/human-intellige…
Though I do not have your insight into the taxonomy of IT solutions, I agree wholeheartedly regarding the value-add of the decisioning process. The Tesco story is one aspect of predictive analytics, which relies on non-traditional inputs to predict buyer behaviour. What I like about the Tesco story is that the causality is obvious: hot weather = more barbecues. What is not apparent is the correlation. You wrote that “A rise of 10C, for example, led to a 300% uplift in sales of barbecue meat and a 50% increase in sales of lettuce.” What is missing in your write-up about the Tesco story – and might have been missing in the original story too – are the words “on average.” Not every case of a 10C temperature rise results in a 300% increase in barbecue meat sales. There is uncertainty in all decisioning, especially the further out we look, such as for S&OP. Relying too heavily on “numbers” is not the solution.
Another aspect of the decisioning process is being able to predict consequences in the supply chain. This is less important for food retailers, such as Tesco, with very short shelf-lives and order-to-delivery processes. But for consumer electronics manufacturers and retailers this is a core requirement, especially in B2B environments. Many CE OEMs now outsource much if not all of their manufacturing to several contract manufacturers, any one of which could make the finished good. In addition, commodity components are bought from several component suppliers. So a decision to satisfy particular demand requires a huge number of variables to be evaluated, all of which contain some amount of uncertainty, starting from the likelihood of the customer actually placing the order for the quantity, price, and delivery date currently being discussed.
What also captured my attention was your related posting that “We’ve moved well beyond mass, through velocity, to focus on agility.” I absolutely agree. Yet so much of the IT focus is still on “mass”. Our focus is firmly in the supply chain space. I see a continuous discussion going on between mass, velocity, and agility. I presume you are familiar with Hau Lee's work on the “Triple-A supply chain”?
Our focus is very much on agility.
Hi Trevor,
I'd love to get some more information behind the Tesco story, but unfortunately the Internet hasn't offered up anything more concrete than what I already have. As you quite rightly point out, the problem is inherently non-linear: the change in people's burger-buying habits due to a 5° rise in temperature depends on, among other things, what the temperature was in the first place. Numbers are useful, but only in the context of the heuristics, the tacit wisdom, that SMEs bring to the table.
I think we forget just how good a situated SME is as a decision maker, and get carried away with technology. A lot of the effort that goes into BI seems to be the futile quest for more (more data, more algorithms …). A more productive approach is to try and support and amplify your SMEs. Clean up their knowledge work environment (two phones and four screens was never a good idea) and find ways to automate much of the drudge work. Create the time and space for them to make better decisions, and then try and capture the commonplaces in software to create even more time and space. And we need to consider their role in the end-to-end environment. As you point out, modern supply chains are complex beasts, possibly due to the drive to statically optimise them. We need to optimise supply chains and knowledge workers as a single overarching system.
Yep – I'm familiar with Hau Lee's work. He's one of the few who managed to avoid the quest for more (more cost savings, more velocity …) and understand that it's really a question of balance.
IT, I think, is behind in this area. To date, enterprise IT has largely been seen as a tool to automate paper shuffling. Mass, as it were. This agenda has been driven by the legal and regulatory framework companies work under, a framework designed for the pen-and-paper age. What do the CFO and auditors need to see to meet governance requirements? What is the retention and disposition of that order? Can I track my orders? This was fine when enterprise IT was simply automating paper management, but we're emerging into an age where we need a new legal framework. The assumptions the current legal framework was built on (i.e. companies as independent entities which contain all their state and decision making) are rapidly breaking down. Can a tweet or a yam represent a binding contract? Is it even possible for me to get a copy of all the data that my financial and legal state depends on? It's the old “leaky walls” problem that led to ideas like deperimeterisation.
I think the winners in the near future will be the ones who realise that the rules of business are changing, and that we need to change the rules of IT to suit.