I have a new post up over at the Deloitte Digital blog, ‘WhatsApp wasn’t overvalued’. I’d been watching the debate around Facebook’s purchase of WhatsApp (Facebook Media Release, 19 February 2014, ‘Facebook to Acquire WhatsApp’, http://newsroom.fb.com/News/805/Facebook-to-Acquire-WhatsApp), and I was struck by how many analysts and journalists were stuck in the past, trying to value WhatsApp based on the assets it holds (user base and ad inventory) when clearly the firm’s value lay in the information flow it controlled (~1,000 messages per month, per subscriber). It’s true that if you do an asset-based valuation the deal doesn’t make sense, but if you do an information-flow-based valuation then the deal is a no-brainer.
One of the big ideas behind the Shift Index (Peter Evans-Greenwood & Peter Williams (2014), Setting aside the burdens of the past, Deloitte) is the shift from stocks to flows: it’s not the stocks that you hold (assets, information, etc.) but the flows that you can tap into (partner networks, information, etc.) that drive your competitive advantage. Put another way, in a world where everything you need is available on demand and the world is awash with information, it’s your ability to tap into what’s happening in the environment and react that defines your competitive advantage, not the assets and data you hold.
Facebook’s purchase of WhatsApp is a great example of this difference.
Look at the assets that WhatsApp holds and the deal doesn’t make sense. WhatsApp’s user base of around 450 million monthly active users, many of whom will already be using Facebook, doesn’t seem to be worth the effort, especially since the company is only making US$1 per user per year (with the first year free). Nor is the advertising revenue of interest, since there isn’t any: WhatsApp has a public position of ‘no ads, no games, no gimmicks’. That user base, with WhatsApp as a standalone service, is not worth what Facebook paid.
Since the deal doesn’t stack up under a standard valuation, most of the pundits are calling the acquisition a ‘strategic’ move. That’s usually code for ‘we’re not sure why they did it’. However, what if we value WhatsApp based on the information flow that the firm controls?
The average WhatsApp user sends more than 1,000 messages every month, and receives more than 2,000 messages. That’s over 30 messages a day, few of which are the spam that dominates email. It’s also a user base where over 70% of the population is active on any given day.
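These figures make the mismatch plain in a back-of-envelope way. A minimal sketch, using only the numbers quoted above (this is illustrative arithmetic, not a valuation model):

```python
# Figures quoted in the post (illustrative only, not a valuation model).
monthly_active_users = 450_000_000   # reported monthly active users
revenue_per_user_per_year = 1.0      # US$1/year, first year free

# Asset-based view: the subscription revenue ceiling
# (optimistically ignoring the free first year).
max_annual_revenue = monthly_active_users * revenue_per_user_per_year

# Information-flow view: messages the service carries each month.
sent_per_user_per_month = 1_000
monthly_message_volume = monthly_active_users * sent_per_user_per_month

print(f"Revenue ceiling:  US${max_annual_revenue:,.0f} per year")
print(f"Messages carried: {monthly_message_volume:,} per month")
```

Even on generous assumptions the subscription revenue is modest, while the message stream runs to hundreds of billions of signals a month: the two valuation lenses are looking at entirely different things.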
WhatsApp might just provide Facebook with something like Google’s search box, as WhatsApp gives Facebook a big, fat data stream that tells them what their user base is about to do. WhatsApp might not grow Facebook’s user base, and it won’t be a direct source of ad revenue. It will, however, allow them to watch what you’re saying – privately – to your friends and relatives, and then use that information to tailor the ads presented to you on the firm’s various web properties. Tell your best friend that you’re test driving a car tomorrow, and expect to see ads from car manufacturers when you’re browsing another friend’s timeline later that night.
If we value WhatsApp based on the information flows that it has, then the deal starts to make a lot of sense.
I spent a little time over the break thinking about what’s happening with anti-money laundering (AML) and counter-terrorism financing (CTF) regulation, since it had come time to update the Technological Considerations of AML/CTF Programs published by LexisNexis as part of their Anti-Money Laundering and Financial Crime publication. (There’s a blurb for my part embedded below.)
The interesting shift in this version is the growth of AML/CTF regulation for complementary currencies (i.e. currencies that are not backed by a government). Organised crime groups are finding all sorts of creative ways to use complementary currencies to launder money, including the creation of bitcoin ‘mixers’ that are intended to improve anonymity for bitcoin transactions.
A side effect of this regulation – which is largely targeted at bitcoin but which has been written in a way that brings all complementary currencies under regulation – is that the points-based loyalty programme that you were thinking about introducing might actually bring you under the AML/CTF regulator’s watchful eye. Something as ambitious as Facebook Credits definitely would.
This has all sorts of interesting implications for enterprise-wide governance, but that’s a different discussion since it’s well beyond the scope of the Technological Considerations of AML/CTF Programs piece.
If you’re interested then head over to LexisNexis (or we can catch up for a coffee if you like).
Names and categories are important. Just look at the challenges faced by the archeology community as DNA evidence forces history to be rewritten when it breaks old understandings, changing how we think and feel in the process. Just who invaded whom? Or was related to whom?
We have the same problem with (enterprise) technology: how we think about the building blocks of the IT estate has a strong influence on how we approach the problems we need to solve. Unfortunately our current taxonomy has a very functional basis, rooted as it is in the original challenge of creating the major IT assets we have today. This is a problem, as it’s preventing us from taking full advantage of the technologies available to us. If we want to move forward, creating solutions that will thrive in a post-GFC world, then we need to think about enterprise IT in a different way.
Enterprise applications – the applications we often know and love (or hate) – fall into a few distinct types. A taxonomy, if you will. This taxonomy has a very functional basis, founded as it is on the challenge of delivering high-performance and stable solutions into difficult operational environments. Categories tend to be focused on the technical role a group of assets plays in the overall IT estate. We might quibble over the precise number of categories and their makeup, but for the purposes of this argument I’m going to go with three distinct categories (plus another one).
First, there are the applications responsible for data storage and coherence: the electronic filing cabinets that replaced rooms full of clerks and accountants back in the day. From the first computerised general ledger through to CRM, their business case is a simple one of automating paper shuffling. Put the data in one place and make access quick and easy, like SABER did, which I’ve mentioned before.
Next are the data transformation tools: applications which take a bunch of inputs and generate an answer. This might be a plan (production plan, staffing roster, transport plan or supply chain movements …) or a figure (price, tax, overnight interest calculation). State might be stored somewhere else, but these solutions still need some serious computing power to cope with huge bursts in demand.
Third is data presentation: taking corporate information and presenting it in some form that humans can consume (though looking at my latest phone bill, there’s no attempt to make the data easy to consume). This might be billing or invoicing engines, application-specific GUIs, or even portals.
We can also typically add one more category – data integration – though this is mainly the domain of data warehouses: solutions that pull together data from multiple sources to create a summary view. This category of solutions wouldn’t exist but for the fact that our operational data management solutions can’t cope with an additional reporting load. This is also the category for all those XLS spreadsheets that spread through the business like a virus, as high integration costs or more important projects prevent us from supporting user requests.
A long time ago we’d bake all these layers into the one solution. SABER, I’m sure, did a bit of everything, though its main focus was data management. Client-server changed things a bit by breaking the user interface away from back-end data management, and then portals took this a step further. Planning tools (and other data transformation tools) started as modules in larger applications, eventually popping out as stand-alone solutions when they grew large enough (and complex enough) to justify their own delivery effort. Now we have separate solutions in each of these categories, and a major integration problem.
This categorisation creates a number of problems for me. First and foremost is the disconnection between what business has become, and what technology is trying to be. Back in the day when “computer” referred to someone sitting at a desk computing ballistics tables, we organised data processing in much the same way that Henry Ford organised his production line. Our current approach to technology is simply the latest step in the automation of this production line.
Quite a bit has changed since then. We’ve reconfigured our businesses, we’re reconfiguring our IT departments, and we need to reconfigure our approach to IT. Business today is really a network of actors who collaborate to make decisions, with most (if not all) of the heavy data lifting done by technology. Retail chains are trying to reduce the transaction load on their team working the tills so that they can focus on customer relationships. The focus in supply chains is on ensuring that your network of exception managers can work together to effectively manage disruptions. Even head office is focused on understanding and responding to market changes, rather than trying to optimise the business for an unchanging market.
The moving parts of business have changed. Henry Ford focused on mass: the challenge of scaling manufacturing processes to get cost down. We’ve moved well beyond mass, through velocity, to focus on agility. A modern business is a collection of actors collaborating and making decisions, not a set of statically defined processes backed by technology assets. Trying to force modern business practices into yesterday’s IT taxonomy is the source of one of the disconnects between business and IT that we complain so much about.
There’s no finer example of this than Sales and Operations Planning (S&OP). What should be a collaborative and fluid process – forward planning among a network of stakeholders – has been shoehorned into a traditional n-tier, database-driven, enterprise solution. While an S&OP solution can provide significant cost savings, many companies find it too hard to fit themselves into the solution. It’s not surprising that S&OP has a reputation for being difficult to deploy and use, with many planners preferring to work around the system rather than with it.
I’ve been toying with a new taxonomy for a little while now, one that tries to reflect the decision-, actor- and collaboration-centric nature of modern business. Rather than fit the people to the factory, which was the approach during the industrial revolution, the idea is to fit the factory to the people, which is the approach we use today, post lean and flexible manufacturing. While it’s a work in progress, it still provides a good starting point for discussions on how we might use technology to support business in the new normal.
In no particular order…
Fusion solutions blend data and process to create a clear and coherent environment to support specific roles and decisions. The idea is to provide the right data and process, at the right time, in a format that is easy to consume and use, to drive the best possible decisions. This might involve blending internal data with externally sourced data (potentially scraped from a competitor’s web site); whatever data is required. Providing a clear and consistent knowledge work environment, rather than the siloed and portaled environment we have today, will improve productivity (more time on work that matters, and less time on busy work) and efficiency (fewer mistakes).
Next, decisioning solutions automate key decisions in the enterprise. These decisions might range from mortgage approvals through office work, such as logistics exception management, to supporting knowledge workers in the field. We also need to acknowledge that decisions are often decision-making processes, which require logic (rules) applied over a number of discrete steps (processes). This should not be seen as replacing knowledge workers; a more productive approach is to view decision automation as a way of amplifying our users’ talents.
Third, manufacturing solutions: while we have a lot of information, some information we will need to manufacture ourselves. This might range from simple charts generated from tabular data, through logistics plans or maintenance schedules, to payroll.
Information and process access solutions provide stakeholders (both people and organisations) with access to our corporate services. This is not your traditional portal or web-based GUI, as the focus is on providing stakeholders with access wherever and whenever they need it, on whatever device they happen to be using. This would mean embedding your content into a Facebook app, rather than investing in a strategic portal infrastructure project. Or it might involve developing a payment gateway.
Finally we have asset management, responsible for managing your data as a corporate asset. This looks beyond the traditional storage and consistency requirements of existing enterprise applications to include the political dimension, accessibility (I can get at my data whenever and wherever I want) and stability (earthquakes, disaster recovery and the like).
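For illustration, the five categories can be captured in a short sketch. The category descriptions paraphrase the definitions above; the example solutions and their tags are my own hypothetical assignments, not part of any product architecture:

```python
from enum import Enum

class SolutionCategory(Enum):
    """The five decision-centric categories sketched above (a thinking
    aid for solution strategy, not a reference architecture)."""
    FUSION = "blend data and process to support specific roles and decisions"
    DECISIONING = "automate key decisions: rules applied over discrete steps"
    MANUFACTURING = "generate information we don't already hold (plans, schedules)"
    ACCESS = "give stakeholders access to corporate services on any device"
    ASSET_MANAGEMENT = "manage data as a corporate asset"

# Hypothetical tagging of familiar solutions against the taxonomy.
examples = {
    "crew scheduling": SolutionCategory.MANUFACTURING,
    "mortgage approval": SolutionCategory.DECISIONING,
    "Facebook app front end": SolutionCategory.ACCESS,
    "planner's workbench": SolutionCategory.FUSION,
    "master data store": SolutionCategory.ASSET_MANAGEMENT,
}

for solution, category in examples.items():
    print(f"{solution:24} -> {category.name}")
```

The point of writing it down this way is that a single traditional application (an S&OP suite, say) usually spans several of these categories at once, which is exactly why it resists a one-size-fits-all delivery and sourcing strategy.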
It’s interesting to consider the sort of strategy a company might use around each of these categories. Manufacturing solutions – such as crew scheduling – are very transactional. Old data out, new data in. This makes them easily outsourced, or run as a bureau service. Asset management solutions map very well to SaaS: commoditized, simple and cost effective. Access solutions are similar to asset management.
Fusion and decisioning solutions are interesting. The complete solution is difficult to outsource. For many fusion solutions, the data and process set presented to knowledge workers will be unique and will change frequently, while decisioning solutions contain decisions which can represent our competitive advantage. On the other hand, it’s the intellectual content in these solutions, and not the platform, which makes them special. We could sell our platform to our competitors, or even use a commonly available SaaS platform, and still retain our competitive advantage, as the advantage is in the content, while our barrier to competition is the effort required to recreate the content.
This set of categories seems to map better to where we’re going with enterprise IT at the moment. Consider the S&OP solution I mentioned before. Rather than construct a large, traditional, data-centric enterprise application and change our work practices to suit, we break the problem into a number of mid-sized components and focus on driving the right decisions: fusion, decisioning, manufacturing, access, and asset management. Our solution strategy becomes more nuanced, as our goal is to blend components from each category to provide planners with the right information at the right time to enable them to make the best possible decision.
After all, when the focus is on business agility, and when we’re drowning in a sea of information, decisions are more important than data.
Generational distinctions seem to make less and less sense every year. While my grandmother never learnt to drive a car, my mother happily uses a computer and the Internet. Yes, the pace of change has sped up, but it appears that so have we. Age is a very crude factor, and as we shift to increasing personalisation age looks less and less relevant as a driver for change.
There’s been a lot of talk about how the next generation (whichever that happens to be) is going to change the world. We had it with the Greatest Generation. We had it with the Pre-Boomers and Baby Boomers. We had it with Gen X. Now we have it with Gen Y. This might have made sense some time ago, when changes in social mores and practices took longer than a single generation. Change takes time, and if the pressure is only gentle then we can expect significant time to pass before the change is substantial.
I remember my grandmother, who never learnt to drive. Back in the day, before World War II, women driving was not the done thing. My grandmother never learnt to use a video recorder, computer, or the Internet, either. The pressure to change was gentle, and she was happy with her lot.
Sociologists now tell us that the differences between populations are often less than the differences within populations. Or, put another way, on aggregate we’re all pretty much the same. The same is true for my grandmothers. While one never learnt to drive (among other things), my other grandmother charted a different course. No, she never learnt to use the Internet, but she did take the time, when her husband went off to war, to learn how to drive, and they both had a bit of a crush on Cary Grant.
If we whizz forward to the present day, then we can see the same dynamics at work. My parents have, in the course of only a few years, leapt from a technology-free zone to the proud owners of laptops, a wireless network, and a passion for doing their own video editing. Even my mother-in-law, who has zero experience with technology, bought a Wii recently. She also seems to have more luck with the Wii than with her video recorder, which she’s never been able to work.
The idea that technology adoption is generational seems to have eroded to the point of irrelevance. There was even a report recently (by Cisco I think, though I can’t find the link) where the researchers could find no significant correlation between new technology adoption and generational strata.
Why then do we persist in pigeonholing generations when it is proven to be counterproductive? Not all Gen Xers want to kill themselves. I’m a Gen Xer, I even like Nirvana, and I’ve yet to have that urge. Not all Gen Ys want to publish their lives on Facebook. And not all baby boomers want to be helicopter parents. The only thing this type of media story achieves by promoting these stereotypes is to massage the ego of its target demographic. To divide people into generations and say that this generation likes certain tools and techniques, and this generation doesn’t, and will never adapt, is naive.
If we must categorise people, then it makes more sense to use something like NEOs to divide the population into vertical groups based on how we approach life. Do you like change? Do you not? Do you value your privacy? Are you willing to put everything out in public? And so on…
The pace of change has accelerated to the point that everyone’s challenge, from Pre-Boomers and Baby Boomers through to Generation Z, is how to cope with significant change over the next ten years. If we are, as some predict, moving to an innovation economy, then it is the ability to adapt that is most important. Those betting their organisation on a generational change will be sadly disappointed, as no generation has a monopoly on coping with change.
A more productive approach is to seek out the people from all generations who thrive in change, and aim for a diverse workforce so that you can tap into the broad range of skills this diversity will provide. Ultimately competition in the workplace is the main determinant for change, with individuals adopting the tools and techniques they need to get the job done, whatever generation they are from.
Cisco CEO John Chambers on speeding up innovation [BusinessWeek]
In Chambers’ view, business is on the verge—not in the midst—of a dramatic transformation, a huge leap forward in productivity built on collaboration made possible by Web 2.0-style tools similar to YouTube, FaceBook, and Wikipedia but adapted to the corporate environment. “Our children, with their social network[ing], have presented us with the future of productivity,” he emphatically told the crowd of about 4,500 executives.
The kids are alright [Economist]
Worries about the damage the internet may be doing to young people has produced a mountain of books—a suitably old technology in which to express concerns about the new. Robert Bly claims that, thanks to the internet, the “neo-cortex is finally eating itself”. Today’s youth may be web-savvy, but they also stand accused of being unread, bad at communicating, socially inept, shameless, dishonest, work-shy, narcissistic and indifferent to the needs of others.
Or the importance of being both good and original.
While I’m not a big fan of musicians reworking past hits, I’m beginning to wonder if we should ask Gil Scott-Heron to run up a new version of The Revolution Will Not Be Televised. He made a good point then: that real change comes from the people dealing with the day-to-day challenges, not the people talking about them. His point still holds today. Web 2.0 might be where the buzz is, but the real revolution will emerge from the child care workers, farmers, folk working in Starbucks, and all the other people who live outside the limelight.
There appears to be a disconnect between the technology community—the world of a-list bloggers, venture capital, analysts, (non-)conferences, etc.—and the people who are doing real things. The world we technologists live in is not the real world. The real world is people going out and solving problems and trying to keep their heads above water, rather than worrying about their blog, twitter, venture funding, or the new-new thing. This is the world that invented micro-credit, where fishermen off the African coast use mobile phones to find the market price of their catch, and where farmers in Australia are using Web 2.0 (not that they care what it is) to improve their farm management. These people don’t spend their time blogging since they’re too busy trying to improve the world around them. Technology will not change the world on its own; however, real people solving real problems will.
We’re all too caught up in the new-new thing. A wise friend of mine often makes the point that we have more technology than we can productively use; perhaps it’s time to take a breather from trying to create the new-new-new thing, look around the world, and see what problems we can solve with the current clutch of technologies we have. The most impressive folk I’ve met in recent years don’t blog, vlog, twitter or spend their time changing their Facebook profile. They’re focused on solving their problems using whatever tools are available.
Which I suppose brings me to my point. In a world where we’re all communicating with each other all of the time—the world of mesh collaboration—it’s all too easy to mistake the medium for the message. We get caught up in the sea of snippets floating around us, looking for that idea that will solve our problem and give us a leg up on the competition. What we forget is that our peers and competitors are all swimming in the same sea of information, so the ideas we’re seeing represent best practice at best. The mesh is a great leveler, spreading information evenly like peanut butter over the globe, but don’t expect it to provide you with that insight that will help you stand out from the crowd.
Another wise friend makes the equally good point that in the mesh it’s not enough to be good: you need to be both good and original. The mesh doesn’t help you with original. Original is something that bubbles up when our people in the field struggle with real problems and we give them the time, space, and tools to explore new ways of working.
A great example is the rise in sharity blogs. The technical solution to sharing music files is to create peer-to-peer (P2P) applications, which a minority of internet users use to consume the majority of the available bandwidth. However, P2P is too complicated for many people (it involves downloading and installing software, finding torrent seeds, and learning a new language that includes terms like ‘torrent seed’) and is disconnected from the music communities. Most of the music-sharing action has moved onto sharity blogs. Using free blogging and file-sharing services (such as Blogger and RapidShare, respectively), communities are building archives of music that you can easily download, archives which you can find via search engines and which are integrated (via links) into the communities and discussions around them. The ease of plugging together a few links lets collectors focus on being original, putting their own spin on the collection they are building, be it out-of-print albums, obscure artists or genres, or simply whatever they can get their hands on.
What can we learn from this? When we develop our new technology and/or Web 2.0 strategy, we need to remember that what we’re trying to do is provide our team with a new tool to help them do a better job. Deploying Web 2.0 as a new suite of information silos, disconnected from the current work environment, will create yet another touch point for our team members to navigate as they work toward their goals. This detracts from their work, which is what they’re really interested in, resulting in them ignoring the new application as it seems more trouble than it is worth. The mesh is a tool to be used and not an end in itself, and needs to be integrated into and support the existing work environment in a way that makes work easier. This creates the time and space for our employees to explore new ideas and new ways of working, helping them to become both good and original.
Update: Swapped the image of Gil Scott-Heron’s Pieces of Man for an embedded video of The revolution will not be televised, at the excellent suggestion of Judith Ellis.