
The Boundaryless Value-Chain

I’ve uploaded another presentation to SlideShare. (Still trying to work through the backlog.) This is something that I had been presenting to logistics companies and at a few public forums, such as The Open Group.

How real-time computing will transform supply chain decision-making

This presentation will provide a plain-English account of how real-time computing will transform supply chain decision-making and control. Peter Evans-Greenwood will illustrate the emerging leading practices with lessons learned from case studies, featuring clients across the globe.

The biggest challenge for today’s supply chains is to be adaptive. While tremendous gains have been made over the last thirty years, today’s applications are not as flexible as promised. New tools and techniques are required to capture and automate the non-linear, exception-rich business logic that we currently rely on employees to deliver. Extending the technology stack will allow us to leverage the higher capacity of technology to deliver globally optimal solutions and to introduce innovations such as the moving warehouse into all our supply chains.

The value of information

We all know that data is valuable; without it, it would be somewhat difficult to bill customers and stay in business. Some companies have accumulated masses of data in a data warehouse, which they’ve used to drive organizational efficiencies or performance improvements. But do we ever ask ourselves when the data is most valuable?

Billing is important, but if we get the data earlier then we might be able to deal with a problem—a business exception—more efficiently. Resolving a short pick, for example, before the customer notices. Or perhaps even predicting a stock-out. And in the current hyper-competitive business environment where everyone is good, having data and the insight that comes with it just a little bit sooner might be enough to give us an edge.

A good friend of mine often talks about the value of information in a meter. This makes more sense when you know that he’s a utility/energy guru who’s up to his elbows in the U.S. smart metering roll out. Information is a useful thing when you’re putting together systems to manage distributed networks of assets worth billions of dollars. While the data will still be used to drive billing in the end, the sooner we receive the data the more we can do with it.

One of the factors driving the configuration of smart meter networks is the potential uses for the information the meters will generate. A simple approach is to view smart meters as a way to reduce the cost of meter reading; have meters automatically phone readings home rather than drive past each customer’s premises in a truck and eyeball each meter. We might even use this reduced cost to read the meters more frequently, shrinking our billing cycle, and the revenue outstanding with it. However, the information we’re working from will still be months, or even quarters, old.

If we’re smart (and our meter has the right instrumentation) then we will know exactly which and how many houses have been affected by a fault. Vegetation management (tree trimming) could become proactive by analyzing electrical noise on the power lines that the smart meters can see, and determining where along a power line we need to trim the trees. This lets us go directly to where work needs to be done, rather than driving past every power line on a schedule—a significant cost and time saving, not to mention an opportunity to engage customers more closely and service them better.

If our information is a bit younger (days or weeks rather than months) then we can use it to schedule just-in-time maintenance. The same meters can watch for power fluctuations coming out of transformers, motors and so on, looking for the telltale signs of imminent failure. Teams rush out and replace the asset just before it fails, rather than working to a program of scheduled maintenance (maintenance which might be causing some of the failures).

When the information is only minutes old we can consider demand shaping. By turning off hot water heaters and letting them coast we can avoid spinning up more generators.

If we get to seconds or below, we can start using the information for load balancing across the network, managing faults and responding to disasters.
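To make the idea concrete, here is a minimal sketch (in Python) of how you might map the age of a meter reading to the uses it can still support. The latency bands and their boundaries are my own illustrative assumptions, not utility standards or anything a real smart-meter network exposes.

```python
from datetime import timedelta

# Illustrative latency bands and the uses they unlock, following the tiers
# described above. The boundaries here are assumptions for the sketch only.
USES_BY_MAX_AGE = [
    (timedelta(seconds=5),  "load balancing, fault management, disaster response"),
    (timedelta(minutes=30), "demand shaping (e.g. letting hot water heaters coast)"),
    (timedelta(days=14),    "just-in-time maintenance scheduling"),
    (timedelta(days=90),    "outage mapping and proactive vegetation management"),
    (timedelta(days=365),   "billing and historical reporting"),
]

def potential_uses(reading_age: timedelta) -> list[str]:
    """Return every use a meter reading of this age can still support."""
    return [use for max_age, use in USES_BY_MAX_AGE if reading_age <= max_age]

if __name__ == "__main__":
    for age in (timedelta(seconds=2), timedelta(minutes=10), timedelta(days=45)):
        print(age, "->", potential_uses(age))
```

The point is simply that the fresher the reading, the longer the list of uses it can feed.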

I think we, outside the energy industry, are missing a trick. We tend to use a narrow, operational view of the information we can derive from our IT estate. Data is either considered transactional or historical; we’re either using it in an active transaction or we’re using it to generate reports well after the event. We typically don’t consider what other uses we might put the information to if it were available in shorter time frames.

I like to think of information availability in terms of a time continuum, rather than a simple transactional/historical split. The earlier we use the information, the more potential value we can wring from it.

[Figure: The value of data decreases rapidly with age]

There’s no end of useful purposes we can turn our information to between the billing and transactional timeframes. Operational excellence and business intelligence allow us to tune business processes to follow monthly or seasonal cycles. Sales and logistics are tuned on a weekly basis to adjust for the dynamics of the current holiday. Days-old information would allow us to respond in days, calling a client when we haven’t received their regular order (a non-event). Operations can use hours-old information for capacity planning, watching for something trending in the wrong direction and responding before everything falls over.

If we can use trending data—predicting stock-outs and watching trends in real time—then we can identify opportunities or head off business exceptions before they become exceptional. BAM (business activity monitoring) and real-time data warehouses take on new meaning when viewed in this light.
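As a rough illustration of the sort of rule a BAM tool or real-time data warehouse might run, here is a hedged sketch that projects the recent sales rate forward to estimate when a shelf will run empty. The function name and the naive straight-line projection are my own; a real deployment would sit on a streaming platform and use a proper forecast.

```python
from datetime import datetime, timedelta
from typing import Optional

def predicted_stockout(on_hand: int,
                       recent_sales: list[tuple[datetime, int]]) -> Optional[datetime]:
    """Project the recent sales rate forward and estimate when stock runs out.

    recent_sales is a time-ordered list of (timestamp, units_sold) events.
    Returns None if there isn't enough data to estimate a rate.
    """
    if len(recent_sales) < 2:
        return None
    first_ts, _ = recent_sales[0]
    last_ts, _ = recent_sales[-1]
    elapsed_hours = (last_ts - first_ts).total_seconds() / 3600
    units_sold = sum(qty for _, qty in recent_sales)
    if elapsed_hours <= 0 or units_sold <= 0:
        return None
    rate_per_hour = units_sold / elapsed_hours
    hours_left = on_hand / rate_per_hour
    return last_ts + timedelta(hours=hours_left)
```

If the predicted stock-out lands inside the replenishment lead time, we raise the alert now, while the exception can still be handled routinely.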

In a world where we are all good, being smart about the information we can harvest from our business environment (both inside and outside our organization) has the potential to make us exceptional.

Update: Andy Mulholland has a nice build on this idea over at Capgemini’s CTO blog: Have we really understood what Business Intelligence means?

What we’re doing today is not what we did yesterday

[Image: Telxon hand unit]

The business of IT has changed radically in the last few years. Take Walmart for example. In the 80s Walmart laid the foundations for its future growth by fielding a supply chain data warehouse. The insight the data warehouse provided fueled their amazing growth to become the largest retailer in the world. However, our focus has moved on from developing applications. More recently Walmart fielded the Telxon, a barcode scanner with a wireless link to the corporate back-end. This device is the front end of a distributed solution which has let Walmart devolve buying decisions to the team walking the shop floor.

For a long time IT departments have defined themselves by their ability to deliver major applications into the enterprise. CRM, MRP, even ERP; all the three letter acronyms. For a long time this has been the right thing to do. Walmart’s data warehouse, to return to our example, was a large application which was a significant driver in the company’s outlier performance for the next couple of decades.

The world has changed a lot since that data warehouse went operational. First the market for enterprise applications grew into the mature market we see today. If you have a well defined problem—an unsupported business activity—then a range of vendors will line up to provide you with off-the-shelf solutions. Next we saw a range of non-technology options emerge, from business process outsourcing (BPO) and leveraging partnerships, through to emerging software-as-a-service (SaaS) solutions.

What used to be a big problem—fielding a large bespoke (or even off-the-shelf) application—has become a (relatively) small one. Take CRM (customer relationship management) as one example. What was a multi-year project requiring an investment of tens of millions of dollars to deploy a best-of-breed on-premises solution has become a few million dollars and a matter of months to field a SaaS solution. And the SaaS solutions seem to be pulling ahead in the feature-function war; Salesforce.com (one of the early SaaS CRM solutions) is now seen as the market leader (check with your favorite analyst).

Nor has business been standing still while technology has been marching forward. The productivity improvements provided by the last generation of enterprise applications have created the time and space for business stakeholders to solve more difficult problems. The supply chain solution Walmart deployed was the first of many, automating most (if not all) of the mundane tasks across the supply chain. Business process methodologies such as LEAN (derived from the Toyota Production System) and Six Sigma (from GE) then rolled through the business, ripping all the fat from our supply chains as they went past. The latest focus has been category management: managing groups of products as separate businesses and, in many cases, handing responsibility for managing the category back to the supplier.

Which brings us back to the Telxon. If we’ve all been on the same journey—fielding a complete set of applications, optimizing our business processes, and deploying the latest, best practice, management techniques—then how do we differentiate? Walmart realized that, all things being equal, it was their ability to respond to supply chain exceptions that would provide them with an edge. As a retailer, this means responding to stock-outs on the shop floor. The only way to do this in a timely manner is to empower the people walking the floor to make a procurement decision when they see fit. Walmart’s solution was the Telxon.

The Telxon is an interesting device as it reveals an astonishing amount of information: the quantity that should be on the shelf, the availability from the nearest warehouse, the retail price, and even the markup. It also empowers the employee to place an order for anything from a pallet to a truck-load.
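To give a feel for what such a device puts in an employee’s hands, here is a small, hypothetical sketch of the kind of record it might display and the ordering action it enables. The field and function names are my own inventions for illustration; Walmart’s actual data model is not public.

```python
from dataclasses import dataclass

@dataclass
class ShelfRecord:
    """Roughly the figures the article says the handheld exposes (names assumed)."""
    sku: str
    expected_on_shelf: int
    nearest_warehouse_stock: int
    retail_price: float
    markup_pct: float

def place_order(item: ShelfRecord, pallets: int) -> dict:
    """Illustrative stand-in for the 'anything from a pallet to a truck-load' action."""
    return {
        "sku": item.sku,
        "pallets": pallets,
        "ship_from": "nearest_warehouse",
    }

# e.g. an associate spotting an empty shelf might do something like:
# order = place_order(tent_garage, pallets=24)  # roughly a truck-load
```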

[Image: Writer Charles Platt during his stint as a Wal-Mart employee in Flagstaff, Ariz.]

As one journalist found:

We received an inspirational talk on this subject, from an employee who reacted after the store test-marketed tents that could protect cars for people who didn’t have enough garage space. They sold out quickly, and several customers came in asking for more. Clearly this was a singular, exceptional case of word-of-mouth, so he ordered literally a truckload of tent-garages, “Which I shouldn’t have done really without asking someone,” he said with a shrug, “because I hadn’t been working at the store for long.” But the item was a huge success. His VPI was the biggest in store history—and that kind of thing doesn’t go unnoticed in Arkansas.

Charles Platt, Fly on the Wall (7th Feb 2009), New York Post

Clearly the IT world has moved on since that first data warehouse went live in Arkansas. Enterprise applications have been transformed from generators of competitive advantage into efficient sources of commodity functionality. Technology’s ability to create value should be focused on how we effectively support knowledge workers and the differentiation they create. These solutions only have a passing resemblance to the application monoliths of the past. They’re distributed, rather than centralized, pulling information from a range of sources, including partner and public sources. They’re increasingly real time, in the Twitter sense of the term, pulling current transactional data in as needed rather than working from historical data and relying on overnight ETLs. They’re heterogeneous, integrating a range of technologies as well as changes in business processes and employee workplace agreements, all brought together for delivery of the final solution. And, most importantly, they’re not standalone n-tier applications like we built in the past.

But while the IT world has moved on, it seems that many of our IT departments haven’t. Our heritage as application factories has us focused on managing applications, rather than technology, actively preventing us from creating this new generation of solutions. This behavior is ingrained in our organizations: everyone from architects through project managers to senior management measures their worth by the size of the project they are involved in (in terms of the CAPEX and OPEX required, or the head count), with all the counterproductive behavior that this creates.

In a world where solutions are shrinking and becoming more heterogeneous (even to the extent of becoming increasingly cross-discipline), our inability to change ourselves is the biggest thing holding us back.

Product Meta-Models

Imagine the future. Not the distant future: we’re talking about next week or maybe the week after, rather than an eventual future where we all have flying cars. A new business competitor has emerged on the market, coming out of nowhere with a business model that makes it impossible for your company to compete. They have half the cost to serve of their competitors, half the time to revenue, they seem to be able to introduce a new product in a matter of days rather than weeks, and their products are incredibly customisable. They seem to have halved the business metrics that you want to go down, doubled the ones you want to go up, while at the same time supporting a product portfolio of impressive depth and complexity. And they claim to be able to do this with conventional technology. How did they do it? And how are you going to respond?

A version was published in Align Journal as Product Meta-Models: Delivering business agility through a new perspective on technology.

Link to complete article.