I, along with Alan Marshall and Robert Hillard, have a new essay published by Deloitte Insights – The new division of labor: On our evolving relationship with technology. This is the latest in an informal series that looks into how artificial intelligence (AI) is changing work. The other essays (should you be interested) are Cognitive collaboration, Reconstructing work and Reconstructing jobs.
Over the last few essays we’ve argued that humans and AI might both think, but they think differently, though in complementary ways, and if we’re to make the most of these differences we need to approach work differently. This was founded on the realisation that there is no skill – when construed within a task – that is unique to humans. Reconstructing work proposed that rather than thinking about work in terms of products, processes and tasks, it might be more productive to approach human work as a process of discovering what problems need to be solved, with automation doing the problem solving. Reconstructing jobs took this a step further and explored how jobs might change if we’re to make the most of both humans and AI-powered machines using this approach, rather than simply using the machines to replace humans.
This new essay, The new division of labor, looks at what is holding us back. It’s common to focus on what’s known as the “skills gap”, the gap between the knowledge and skills the worker has and those required by the new technology. What’s often forgotten is that there’s also an emotional angle. The introduction of the word processor, for example, streamlined the production of business correspondence, but only after managers became comfortable taking on the responsibility of preparing their own correspondence. (And there are still a few senior managers around who have their emails printed out so that they can draft a reply on the back for their assistant to type.) Social norms and attitudes often need to change before a technology’s full potential can be realised.
We can see something similar with AI. This time, though, the transition is complicated because the new tools and systems are no longer passive. We’re baking decisions into software and then connecting these automated decisions to the levers that control our businesses: granting loans, allocating work and so on. These digital systems have some autonomy and, consequently, some agency. They’re not human, but they’re not “tools” in the traditional sense either.
This has the interesting consequence that we relate to them as sort-of humans, as their autonomy and agency affect our own. They’re consequently taking on roles in the organogram as we find ourselves working for, with and on machines. This also works the other way around, and machines find themselves working for, with and on humans. Consider how a ride-sharing driver has their work assigned to them, and their competence measured, by an algorithm that is effectively their manager. A district nurse negotiates their schedule with a booking and work scheduling system. Or it might be more of a peer relationship, such as when a judge consults a software tool when determining a sentence. We might even find humans and machines teaching each other new tricks.
As with the word processor, we can only make the most of this new technology if we address the social issues. With the word processor it was managers seeing typing as being beneath their station. The challenge with AI is much more difficult, though, as making the most of this new generation of technology requires us to value humans for something other than completing tasks.
The essay uses the example of superannuation. Nobody wants retirement financial products; they want a happy retirement. The problem is that ‘happy retirement’ is no more than a vague idea for most of us. We need to go on a journey: sorting out whether what we think will make us happy will actually make us happy, setting reasonable expectations, and adjusting our attitudes and behaviours to balance our life today with the retirement we want to work toward. This is something like a Socratic dialogue, a conversation with others through which we create the knowledge of what ‘happy retirement’ means for us. Only then can we engage the robo-advisor to crunch the numbers and create an investment plan.
The problem is the disconnect between how the client and the firm derive value from this journey. The client values discovering what a happy retirement means, and adjusting their attitudes and behaviours to suit. The firm values investments made. This disconnect means that firms focus their staff on clients later in life, once the kids have left home and the house is paid off. The client, on the other hand, would realise the most value by engaging early, establishing the attitudes and behaviours that enable the magic of compound interest to work.
As we say in the conclusion to the report:
However, successfully adopting the next generation of digital tools, autonomous tools to which we delegate decisions and that have a limited form of agency, requires us to acknowledge this new relationship. At the individual level, forming a productive relationship with these new digital tools requires us to adopt new habits, attitudes, and behaviors that enable us to make the most of these tools. At the enterprise level, the firm must also acknowledge this shift, and adopt new definitions of value that allow it to reward workers for contributing to the uniquely human ability to create new knowledge. Only if firms recognize this shift in how value is created, if they are willing to value employees for their ability to make sense of the world, will AI adoption deliver the value they promise.
You can find the entire essay over at Deloitte Insights.