Category Archives: Work, worker, workplace

Teaching creativity in the 21st century

In 2017 Deloitte Centre for the Edge hosted a public lecture by James C. Kaufman, PhD, a professor of educational psychology at the University of Connecticut and an expert on creativity and education, in which he discussed the challenges of teaching and assessing creativity. This is a 20-minute, bite-sized version of the 90-minute lecture.

We noticed the similarity between creativity and our recent work on digital competency, which we published in “From coding to competence”. Both depend more on attitudes and behaviours than on knowledge and skills. Both are also tightly tied to context, and don’t transfer easily between domains.

The lecture is derived from Dr Kaufman’s cutting-edge psychological research. It debunks common misconceptions about creativity, describes how learning environments can support creativity, and provides insights into teaching and assessing creativity within the established curriculum.

The lecture covers:

  • What is creativity?
  • Seeing creativity as a developmental trajectory, and advancing along this trajectory
  • Creativity across domains (not just ‘art’), and the ‘cost’ of creativity
  • Measuring creativity
  • How can people become more creative?

The new division of labor: On our evolving relationship with technology

I, along with Alan Marshall and Robert Hillard, have a new essay published by Deloitte Insights: The new division of labor: On our evolving relationship with technology.[1] This is the latest in an informal series that looks into how artificial intelligence (AI) is changing work. The other essays (should you be interested) are Cognitive collaboration,[2] Reconstructing work[3] and Reconstructing jobs.[4]

Over the last few essays we’ve argued that humans and AI might both think, but they think differently, in complementary ways, and if we’re to make the most of these differences we need to approach work differently. This was founded on the realisation that there is no skill – when construed within a task – that is unique to humans. Reconstructing work proposed that rather than thinking about work in terms of products, processes and tasks, it might be more productive to approach human work as a process of discovering what problems need to be solved, with automation doing the problem solving. Reconstructing jobs took this a step further and explored how jobs might change if we’re to make the most of both humans and AI-powered machines using this approach, rather than simply using the machines to replace humans.

This new essay, The new division of labour, looks at what is holding us back. It’s common to focus on what’s known as the “skills gap”, the gap between the knowledge and skills the worker has and those required by the new technology. What’s often forgotten is that there’s also an emotional angle. The introduction of the word processor, for example, streamlined the production of business correspondence, but only after managers became comfortable taking on the responsibility of preparing their own correspondence. (And there are still a few senior managers around who have their emails printed out so that they can draft a reply on the back for their assistant to type.) Social norms and attitudes often need to change before a technology’s full potential can be realised.

We can see something similar with AI. This time, though, the transition is complicated because the new tools and systems are no longer passive. We’re baking decisions into software, then connecting these automated decisions to the levers that control our businesses: granting loans, allocating work and so on. These digital systems are no longer passive tools: they have some autonomy and, consequently, some agency. They’re not human, but they’re not “tools” in the traditional sense.

This has the interesting consequence that we relate to them as sort-of humans, as their autonomy and agency affect our own. They’re consequently taking on roles in the organogram, as we find ourselves working for, with and on machines. This also works the other way around, and machines find themselves working for, with and on humans. Consider how a ride-sharing driver has their work assigned to them, and their competence measured, by an algorithm that is effectively their manager. A district nurse negotiates their schedule with a booking and work scheduling system. Or it might be more of a peer relationship, such as when a judge consults a software tool when determining a sentence. We might even find humans and machines teaching each other new tricks.

As with the word processor, we can only make the most of this new technology if we address the social issues. With the word processor it was managers seeing typing as being below their station. The challenge with AI is much more difficult, though, as making the most of this new generation of technology requires us to value humans for doing something other than completing tasks.

The essay uses the example of superannuation. Nobody wants retirement financial products; they want a happy retirement. The problem is that ‘happy retirement’ is no more than a vague idea for most of us. We need to go on a journey: sorting out whether what we think will make us happy will actually make us happy, setting reasonable expectations, and adjusting our attitudes and behaviours to balance our life today with the retirement we want to work toward. This is something like a Socratic dialogue, a conversation with others in which we create the knowledge of what ‘happy retirement’ means for us. Only then can we engage the robo-advisor to crunch the numbers and create an investment plan.

The problem is the disconnect between how the client and firm derive value from this journey. The client values discovering what happy retirement means, and adjusting their attitudes and behaviours to suit. The firm values investments made. This disconnect means that firms focus their staff on clients later in life, once the kids have left home and the house is paid off. The client, on the other hand, would realise the most value by engaging early to establish the attitudes and behaviours that will enable the magic of compound interest to work.

As we say in the conclusion to the report:

However, successfully adopting the next generation of digital tools, autonomous tools to which we delegate decisions and that have a limited form of agency, requires us to acknowledge this new relationship. At the individual level, forming a productive relationship with these new digital tools requires us to adopt new habits, attitudes, and behaviors that enable us to make the most of these tools. At the enterprise level, the firm must also acknowledge this shift, and adopt new definitions of value that allow it to reward workers for contributing to the uniquely human ability to create new knowledge. Only if firms recognize this shift in how value is created, if they are willing to value employees for their ability to make sense of the world, will AI adoption deliver the value they promise.

You can find the entire essay over at Deloitte Insights.

References

1. Evans-Greenwood, P, Hillard, R, & Marshall, A 2019, ‘The new division of labor: On our evolving relationship with technology’, Deloitte Insights, <>.
2. Guszcza, J, Lewis, H, & Evans-Greenwood, P 2017, ‘Cognitive collaboration: Why humans and computers think better together’, Deloitte Review, no. 20, viewed 14 October 2017, <>.
3. Evans-Greenwood, P, Lewis, H, & Guszcza, J 2017, ‘Reconstructing work: Automation, artificial intelligence, and the essential role of humans’, Deloitte Review, no. 21, <>.
4. Evans-Greenwood, P, Marshall, A, & Ambrose, M 2018, ‘Reconstructing jobs: Creating good jobs in the age of artificial intelligence’, Deloitte Insights, <>.

Reconstructing jobs

Some coauthors and I have a new report out: Reconstructing jobs: Creating good jobs in the age of artificial intelligence. This essay builds on the previous two from our “future of work” series, Cognitive collaboration and Reconstructing work, published on DU Press (now Deloitte Insights) as part of Deloitte Review #20 (DR20) and #21 (DR21) respectively.

Cognitive collaboration‘s main point was that there are synergies between humans and computers, and that a solution crafted by a human and computer in collaboration is superior to, and different from, a solution made by either a human or a computer in isolation. Reconstructing work built on this, pointing out that the difference between human and machine does not lie in particular knowledge or skills exclusive to either; indeed, if we frame work in terms of prosecuting tasks then we must accept that there are no knowledge or skills required that are uniquely human. What separates us from the robots is our ability to work together to make sense of the world and create new knowledge, knowledge that can then be baked into machines to make it more precise and efficient. This insight provided the title of the second essay – Reconstructing work – as it argued that we need to think differently about how we construct work if we want to make the most of the opportunities provided by AI.

This third essay in the series, Reconstructing jobs, takes a step back and looks at what these jobs of the future might look like. The narrative is built around a series of concrete examples – from contact centres through wealth management to bus drivers – to show how we might create this next generation of jobs. These are jobs founded on a new division of labour: humans creating new knowledge, making sense of the world to identify and delineate problems; AI planning solutions to these problems; and good old automation delivering them. To do this we must create good jobs, as it is good jobs that make the most of our human abilities as creative problem identifiers. These jobs are also good for firms as, when suitably combined with AI, they will provide superior productivity. They’re also good jobs for the community, as increased productivity can be used to provide more equitable services and to support *learning by doing* within the community, a rising tide that lifts all boats.

The essay concludes by pointing out that there is no inevitability about the nature of work in the future. As we say in the essay:

Clearly, the work will be different than it is today, though how it is different is an open question. Predictions of a jobless future, or a nirvana where we live a life of leisure, are most likely wrong. It’s true that the development of new technology has a significant effect on the shape society takes, though this is not a one-way street, as society’s preferences shape which technologies are pursued and which of their potential uses are socially acceptable.

The question is then, what do we want these jobs of the future to look like?

Redefining education @ TAFE NSW >Engage 2017

C4tE AU was invited to TAFE NSW’s annual >Engage event to present a 15 minute overview of our Redefining education report, which had caught the attention of the event’s organisers.

The report asks a simple question:

In a world where our relationship with knowledge has changed – why remember what we can google? – should our relationship with education change as well?

and then chases this idea down the rabbit hole to realise that what we mean by “education” and “to be educated” need to change in response.

The presentation is a 15-minute, TED-format talk. You can find it on Vimeo.

The report is on Deloitte’s web site.

“Tiger, one day you will come to a fork in the road,” he said. “And you’re going to have to make a decision about which direction you want to go.” He raised his hand and pointed. “If you go that way you can be somebody. You will have to make compromises and you will have to turn your back on your friends. But you will be a member of the club and you will get promoted and you will get good assignments.”

Then Boyd raised his other hand and pointed another direction. “Or you can go that way and you can do something – something for your country and for your Air Force and for yourself. If you decide you want to do something, you may not get promoted and you may not get the good assignments and you certainly will not be a favorite of your superiors. But you won’t have to compromise yourself. You will be true to your friends and to yourself. And your work might make a difference.”

He paused and stared into the officer’s eyes and heart. “To be somebody or to do something. In life there is often a roll call. That’s when you will have to make a decision. To be or to do. Which way will you go?”

—John Boyd from “Boyd: The fighter pilot who changed the art of war”

Image: Wikicommons

Reconstructing work

Some coauthors and I have a new(ish) report out – Reconstructing work: Automation, artificial intelligence, and the essential role of humans – on DU Press as part of Deloitte Review #21 (DR21). (I should note that I’ve been a bit lax in posting on this blog, so this is quite late.)

The topic of DR21 was ‘the future of work’. Our essay builds on the “Cognitive collaboration” piece published in the previous Deloitte Review (DR20).

The main point in Cognitive collaboration was that there are synergies between humans and computers. A solution crafted by a human and computer in collaboration is superior to, and different from, a solution made by either a human or a computer in isolation. The poster child for this is freestyle chess, where chess is a team sport with teams containing both humans and computers. Recently, during the development of our report on ‘should everyone learn how to code’ (To code or not to code, is that the question?, out the other week, but more on that later), we found emerging evidence that this is a unique and teachable skill that crosses multiple domains.

With this new essay we started by thinking about how one might apply this freestyle chess model to more pedestrian work environments. We found that coming up with a clean division of labour between human and machine – breaking the problem into separate tasks for each – was clumsy at best. However, if you think of AI as realising *behaviours* to solve *problems*, rather than prosecuting *tasks* to create *products*, then integrating human and machine is much easier. This approach aligns better with the nature of artificial intelligence (AI) technologies.

As we say in a forthcoming report:

AI or ‘cognitive computing’ […] are better thought of as automating behaviours rather than tasks. Recognising a kitten in a photo from the internet, or avoiding a pedestrian that has stumbled onto the road, might be construed as a task, though it is more natural to think of it as a behaviour. Task implies a piece of work to be done or undertaken, an action (a technique) we choose to do. Behaviour, on the other hand, implies responding to the changing world around us, a reflex. We don’t choose to recognise a kitten or avoid the pedestrian, though we might choose (or not) to hammer in a nail when one is presented. A behaviour is something we reflexively do in response to an appropriate stimulus (an image of a kitten, or even a kitten itself poised in front of us, or the errant pedestrian).

The radical conclusion from this is that there is no knowledge or skill unique to a human. That’s because knowledge and skill – in this context – are defined relative to a task. We’re at the point where, if we can define a task, then we can automate it (given the cost-benefit), and consequently there are no knowledge or skills unique to humans.

What separates us from the robots is our ability to work together to make sense of the world and create new knowledge, knowledge that can then be baked into machines to make it more precise and efficient. If we want to move forward, and deliver on the promise of AI and cognitive computing, then we need to shift the foundation of work. Hence the title: we need to “reconstruct work”.

The full essay is on the DU Press site, so head over and check it out.

Why remember what you can google?

google | ˈɡuːɡl |

verb [with object]

search for information about (someone or something) on the Internet using the search engine Google: on Sunday she googled an ex-boyfriend | [no object] : I googled for a cheap hotel/flight deal.

googleable (also googlable) adjective

1990s: from Google, the proprietary name of the search engine.

MacOS Dictionary

‘Why remember what you can google?’ has become something of a catchphrase, even more so now that many homes have voice assistants like Google Home and Amazon Alexa. It’s common, however, to feel some form of existential angst: if we need to google something, we wonder if we really understand it. Our natural impostor syndrome kicks in and we question whether our hard-won knowledge and skills are really our own.

The other side of this is learned helplessness, where googling something might be helpful but we don’t know quite what to google for, or we fail to realise that a search engine might be able to help us solve the problem in front of us if we just knew what question to ask. This is a common problem with digital technology, where students learn how to use particular tools to solve particular problems but are unable to generalise these skills. Our schools are quite good at teaching students how, given a question, to construct a query for a search engine. What we’re not helping the students with is understanding when or why they might use a search engine, or digital tools in general.

Both of these problems – the existential angst and learned helplessness – stem from a misunderstanding of our relationship with technology.

Socrates mistrusted writing, as he felt that it would make us forgetful and that learning from a written text would limit our insight into, and wisdom about, a subject, as we couldn’t fully interrogate it. What he didn’t realise was that libraries of written texts provide us with access to more diverse points of view and enable us to explore the breadth of a subject, while treating the library as an extension of our memory means that we are limited by what we can refer to in the library rather than by what we can remember ourselves.

We can see a similar phenomenon with contemporary graduates, who typically have a more sophisticated understanding of the subjects they covered in their formal education than earlier generations did. This is not because they are smarter. Their deeper understanding is a result of investing more of their time exploring a subject, and less of it attempting to find and consume the information they need.

Consider a film school student. Our student might be told that a technique Hitchcock used could be of interest to them.

In the seventies this would necessitate a trip to the library card catalogue, searching for criticism of Hitchcock’s films, flipping through books to determine which might be of interest, reading those that (potentially) are interesting, listing the films that contain good examples of the technique, and then searching the repertory theatres to see which are playing these old films. The entire journey, from first mention to the student experimenting with the technique in their own work, might take over a year and will require significant effort and devotion.

Compare this learning journey to what a student might do today. A mention by a lecturer on a Friday will result in the student spending a slow Saturday afternoon googling. They’ll work their way from general (and somewhat untrustworthy) sources such as Wikipedia and blog posts as they canvass the topic, before consuming relevant criticism, some of which will appear in peer-reviewed journals and books, though other pieces might be in the form of video essays incorporating clips from the movies they mention. Any films of note are added to the queue of the student’s streaming service. Sunday is spent watching the films, and possibly rewatching the scenes where the technique is used. The entire journey – from first suggestion to the student grabbing a camera and editing tool to experiment – might take a weekend.

It’s not surprising that contemporary students emerge from their formal education with a more sophisticated command of their chosen domain: they’ve spent a greater proportion of their time investigating the breadth and depth of the domain, rather than struggling to find the sources and references they need to feed their learning.

The existential angst we all feel stems from the fact that we have a different relationship with the new technology than with the old. The relationship we have with the written word is different to the one we have with the spoken word. Similarly, the relationship we have with googled knowledge is different to the one we have with remembered knowledge. Learned helplessness emerges when we fail to form a productive relationship with the new technology.

To integrate the written word into our work we need to learn how to read and write, a skill. To make our relationship with the written word productive, however, we need to change how we approach work, changing our attitudes and behaviours to make the most of the capabilities provided by this new technology while minimising the problems. Socrates was right: naively swapping the written word for the spoken would result in forgetfulness and a shallower understanding of the topic. If, however, we also adapt our attitudes and behaviours, forming a productive relationship with the new technology (as our film student has), then we will have more information at our fingertips and a deeper command of that information.

The skill associated with ‘Why remember what you can google?’ is the ability to construct a search query from a question. Learned helplessness emerges when we don’t know what question to ask, or don’t realise that we could ask a question. Knowing when and why to use a search engine is at least as important as knowing how to use one.

To overcome this we need to create a library of questions that we might ask: a catalogue of subjects or ideas that we’ve become aware of but don’t yet ‘know’, and strategies for constructing new questions. We might, for example, invest some time (an attitude) in watching TED talks during lunchtime, or reading books and attending conferences looking for new ideas (both behaviours). We might ask colleagues for help, only to discover that we can construct a query by combining the name of an application with a short description of what we are trying to achieve (“Moodle peer marking”). This library is not a collection of things that we know; it’s a curated collection of things that we’re aware of and which we might want to learn in the future.
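That query-building strategy (combining an application’s name with a short description of the goal) is simple enough to sketch in a few lines of Python. The helper below is purely illustrative — the function name and examples are mine, not from any real tool:

```python
def build_query(application: str, goal: str) -> str:
    """Combine an application's name with a short description of what
    we're trying to achieve, forming a search-engine query."""
    # Collapse stray whitespace so the query is a single clean line.
    return " ".join(f"{application} {goal}".split())

# The example from the text: asking how to do peer marking in Moodle.
print(build_query("Moodle", "peer marking"))  # prints "Moodle peer marking"
```

The code itself is trivial; the point is the habit it encodes. Knowing that ‘application + goal’ is a reliable shape for a question is exactly the kind of curated strategy this library of questions is meant to hold.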

The existential angst we feel, along with learned helplessness, are due to our tendency to view technology as something apart from us, an instrumental tool that we use. This is also why we fear the rise of the robots: if we frame our relationship with technology in terms of agent and instrument, then it’s natural to assume ever smarter tools will become the agent in our relationship, relegating us to the instrument.

Reality is much more complex, though, and our relationship with technology is richer than agent and instrument. Our technology is, and has always been, part of us. If we want to avoid both existential angst and learned helplessness then we need to acknowledge that understanding when and why to use these new technologies, and fostering the attitudes and behaviours that enable us to form a productive relationship with them, are at least as important as simply learning how to use them.

To code or not to code: Mapping digital competence

We’re kicking off the next phase of our “Should everyone learn how to code?” project. This time around it’s a series of public workshops over late January and early February in Melbourne, Geelong, Sydney, Western Sydney, Hobart, Brisbane, and Adelaide. The purpose of the workshops is to try and create a mud-map describing what a digitally competent workforce might look like.

As the pitch goes…

Australia’s prosperity depends on equipping the next generation with the skills needed to thrive in a digital environment. But does this mean that everyone needs to learn how to code?

In the national series of round tables Deloitte Centre for the Edge and Geelong Grammar School hosted in 2016, the answer was “Yes, enough that they know what coding is.”

The greater concern, though, was ensuring that everyone is comfortable integrating digital tools into their work whatever that work might be, something that we termed ‘digital competence’. This concept was unpacked in an essay published earlier this year.

Now we’re turning our attention to the question: What does digital competence look like in practice, and how do we integrate it into the curriculum?

We are holding invitation-only workshops for industry and education to explore the following ideas:

  • What are the attributes of a digitally competent professional?
  • How might their digital competence change over their career?
  • What are the common attributes of digital competence in the workplace?
  • How might we teach these attributes?

If you’re interested in attending, or if you know someone who might be, then contact me and we’ll add you to the list. Note that there are only 24–32 places in each workshop and we want to ensure a diverse mix of people in each, so we might not be able to fit everyone who’s interested, but we’ll do our best.

Cognitive collaboration

I have a new report out on DU Press – Cognitive collaboration: Why humans and computers think better together – where a couple of coauthors and I wade into the “will AI destroy the future or create utopia” debate.

Our big point is that AI doesn’t replicate human intelligence; it replicates specific human behaviours, and the mechanisms behind these behaviours are different to those behind their human equivalents. It’s in these differences that opportunity lies, as there’s evidence that machine and human intelligence are complementary, rather than in competition. As we say in the report, “humans and machines are [both] better together”. The poster child for this is freestyle chess.

Eight years later [after Deep Blue defeated Kasparov in 1997], it became clear that the story is considerably more interesting than “machine vanquishes man.” A competition called “freestyle chess” was held, allowing any combination of human and computer chess players to compete. The competition resulted in an upset victory that Kasparov later reflected upon:

The surprise came at the conclusion of the event. The winner was revealed to be not a grandmaster with a state-of-the-art PC but a pair of amateur American chess players using three computers at the same time. Their skill at manipulating and “coaching” their computers to look very deeply into positions effectively counteracted the superior chess understanding of their grandmaster opponents and the greater computational power of other participants. Weak human + machine + better process was superior to a strong computer alone and, more remarkably, superior to a strong human + machine + inferior process. . . . Human strategic guidance combined with the tactical acuity of a computer was overwhelming.[1]

So rather than thinking of AI as our enemy, we should think of it as compensating for our failings.

We’re pretty happy with the report – so happy that we’re already working on a follow-on – so wander over to DU Press and check it out.

References

1. Garry Kasparov, “The chess master and the computer,” New York Review of Books, February 11, 2010.

You can’t democratise trust

I have a new post on the Deloitte Digital blog.

There’s been a lot of talk about using technology to democratise trust, and much of it shows a deep misunderstanding of just what trust is. It’s implicitly assumed that trust is a fungible asset, something that can be quantified, captured and passed around via technology. This isn’t true though.

As I point out in the post:

Trust is different to technology. We can’t democratise trust. Trust is a subjective measure of risk. It’s something we construct internally when we observe a consistent pattern of behaviour. We can’t create new kinds of trust. Trust is not a fungible factor that we can manipulate and transfer.

Misunderstanding trust means that technical solutions are proposed rather than tackling the real problem. As I conclude in the post:

If we want to rebuild trust then we need to solve the hard social problems, and create the stable, consistent and transparent institutions (be they distributed or centralised) that all of us can trust.

Technology can enable us to create more transparent institutions, but if these institutions fail to behave in a trustworthy manner then few will trust them. This is why the recent Ethereum hard fork is interesting. Some people wanted an immutable ledger, and they’re now all on ETC as they no longer trust ETH. Others trust the Ethereum Foundation to “do the right thing by them” and they’re now on ETH, and don’t trust ETC.