Category Archives: Topics

Teaching creativity in the 21st century

In 2017 the Deloitte Centre for the Edge hosted a public lecture by James C. Kaufman, PhD, a professor of educational psychology at the University of Connecticut and an expert in creativity and education, in which he discussed the challenges of teaching and assessing creativity. This is a 20-minute, bite-sized version of the 90-minute lecture.

We noticed the similarity between creativity and our recent work on digital competency, which we published in “From coding to competence”. Both depend more on attitudes and behaviours than on knowledge and skills. Both are also tightly tied to context, and don’t transfer easily between domains.

The lecture is derived from Dr Kaufman’s cutting-edge psychological research. It debunks common misconceptions about creativity, describes how learning environments can support creativity, and provides insights into teaching and assessing creativity within the established curriculum.

The lecture covers:

  • What is creativity?
  • Seeing creativity as a development trajectory, and advancing along this trajectory
  • Creativity across domains (not just ‘art’), and the ‘cost’ of creativity
  • Measuring creativity
  • How can people become more creative?

The new division of labor: On our evolving relationship with technology

I, along with Alan Marshall and Robert Hillard, have a new essay published by Deloitte Insights: The new division of labor: On our evolving relationship with technology.[1] This is the latest in an informal series that looks into how artificial intelligence (AI) is changing work. The other essays (should you be interested) are Cognitive collaboration,[2] Reconstructing work[3] and Reconstructing jobs.[4]

Over the last few essays we’ve argued that humans and AI might both think, but they think differently, in complementary ways, and that if we’re to make the most of these differences we need to approach work differently. This was founded on the realisation that there is no skill – when construed within a task – that is unique to humans. Reconstructing work proposed that rather than thinking about work in terms of products, processes and tasks, it might be more productive to approach human work as a process of discovering what problems need to be solved, with automation doing the problem solving. Reconstructing jobs took this a step further and explored how jobs might change if we’re to make the most of both humans and AI-powered machines using this approach, rather than simply using the machines to replace humans.

This new essay, The new division of labour, looks at what is holding us back. It’s common to focus on what’s known as the “skills gap”, the gap between the knowledge and skills the worker has and those required by the new technology. What’s often forgotten is that there’s also an emotional angle. The introduction of the word processor, for example, streamlined the production of business correspondence, but only after managers became comfortable taking on the responsibility of preparing their own correspondence. (And there are still a few senior managers around who have their emails printed out so that they can draft a reply on the back for their assistant to type.) Social norms and attitudes often need to change before a technology’s full potential can be realised.

We can see something similar with AI. This time, though, the transition is complicated because the new tools and systems are not passive tools anymore. We’re baking decisions into software and then connecting these automated decisions to the levers that control our businesses: granting loans, allocating work and so on. These digital systems are no longer passive tools; they have some autonomy and, consequently, some agency. They’re not human, but they’re not “tools” in the traditional sense.

This has the interesting consequence that we relate to them as sort-of humans, as their autonomy and agency affect our own. They’re consequently taking on roles in the organogram as we find ourselves working for, with and on machines. This also works the other way around, and machines find themselves working for, with and on humans. Consider how a ride-sharing driver has their work assigned to them, and their competence measured, by an algorithm that is effectively their manager. A district nurse negotiates their schedule with a booking and work-scheduling system. Or it might be more of a peer relationship, such as when a judge consults a software tool when determining a sentence. We might even find humans and machines teaching each other new tricks.

As with the word processor, we can only make the most of this new technology if we address the social issues. With the word processor it was managers seeing typing as beneath their station. The challenge with AI is much more difficult, though, as making the most of this new generation of technology requires us to value humans for something other than completing tasks.

The essay uses the example of superannuation. Nobody wants retirement financial products; they want a happy retirement. The problem is that ‘happy retirement’ is no more than a vague idea for most of us. We need to go on a journey: sorting out whether what we think will make us happy will actually make us happy, setting reasonable expectations, and adjusting our attitudes and behaviours to balance our life today with the retirement we want to work toward. This is something like a Socratic dialogue, a conversation with others through which we create the knowledge of what ‘happy retirement’ means for us. Only then can we engage the robo-advisor to crunch the numbers and create an investment plan.

The problem is the disconnect between how the client and firm derive value from this journey. The client values discovering what happy retirement means, and adjusting their attitudes and behaviours to suit. The firm values investments made. This disconnect means that firms focus their staff on clients later in life, once the kids have left home and the house is paid off. The client, on the other hand, would realise the most value by engaging early to establish the attitudes and behaviours that will enable the magic of compound interest to work.
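The point about engaging early can be made concrete with a toy compound-interest calculation. The figures below are purely illustrative (they don’t come from the essay): two hypothetical savers contribute the same amount each year, but one starts two decades earlier, and ends up with far more than double the balance.

```python
def balance(annual_contribution, years, rate=0.07):
    """Future value of a series of yearly contributions,
    each compounding at `rate` until retirement."""
    total = 0.0
    for _ in range(years):
        total = (total + annual_contribution) * (1 + rate)
    return total

# Illustrative only: $5,000/year at an assumed 7% annual return.
early = balance(5_000, 40)  # starts saving at 25, retires at 65
late = balance(5_000, 20)   # starts saving at 45, retires at 65

print(f"Early starter: ${early:,.0f}")
print(f"Late starter:  ${late:,.0f}")
```

The early starter contributes only twice as much in total, yet retires with well over four times the balance, which is why establishing the right attitudes and behaviours early matters so much more than the raw amounts involved.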

As we say in the conclusion to the report:

However, successfully adopting the next generation of digital tools, autonomous tools to which we delegate decisions and that have a limited form of agency, requires us to acknowledge this new relationship. At the individual level, forming a productive relationship with these new digital tools requires us to adopt new habits, attitudes, and behaviors that enable us to make the most of these tools. At the enterprise level, the firm must also acknowledge this shift, and adopt new definitions of value that allow it to reward workers for contributing to the uniquely human ability to create new knowledge. Only if firms recognize this shift in how value is created, if they are willing to value employees for their ability to make sense of the world, will AI adoption deliver the value they promise.

You can find the entire essay over at Deloitte Insights.

References

1. Evans-Greenwood, P, Hillard, R, & Marshall, A 2019, ‘The new division of labor: On our evolving relationship with technology’, Deloitte Insights, <>.
2. Guszcza, J, Lewis, H, & Evans-Greenwood, P 2017, ‘Cognitive collaboration: Why humans and computers think better together’, Deloitte Review, no. 20, viewed 14 October 2017, <>.
3. Evans-Greenwood, P, Lewis, H, & Guszcza, J 2017, ‘Reconstructing work: Automation, artificial intelligence, and the essential role of humans’, Deloitte Review, no. 21, <>.
4. Evans-Greenwood, P, Marshall, A, & Ambrose, M 2018, ‘Reconstructing jobs: Creating good jobs in the age of artificial intelligence’, Deloitte Insights, <>.

Digitalizing the construction industry: A case study in complex disruption

I, along with Robert Hillard and Peter Williams, have a new essay published by Deloitte Insights: Digitalizing the construction industry: A case study in complex disruption.[1] The case study elaborates on one of the examples we used in Your next future.[2]

In that essay we made the distinction between simple disruption – disruption due to a particular disruptive technology, the thing that comes to mind first for most people when they think of disruption – and complex disruption – where the disruption is due to a confluence of (mainly social) factors. Think of the telegraph (simple disruption) vs the global multi-modal container network (complex disruption). Many current disruptions – artificial intelligence, blockchain, etc. – tend to be complex (rather than simple) disruptions. We’re seeing an environmental shift, as individuals and firms realise that the current environment (with many things available cheaply and on demand) presents opportunities to find new ways to use old technologies to create new ‘disruptive’ operating models, rather than a massive wave of new technologies, as many pundits claim.

One of the examples we used to illustrate the shift was the building industry. There’s a lot of noise about technologies such as 3D printing or brick-laying robots disrupting the building industry, but this is unlikely, as the industry’s product is the building process, not the buildings it produces. Builders will simply integrate these new technologies into their process if and when they become commercially viable. The invention of a new building process, however, where a builder uses old technologies in new ways to create a new, and superior, operating model, has the potential to disrupt the industry.

Your next future mentioned a design for manufacture and assembly (DFMA) process – where a building is completely modelled in 3D before the model is split up and fed to numerically controlled machines in a factory, with the components shipped to the construction site for assembly – as potentially disruptive. Versions of the process current at the time of publication were roughly 30% faster than a conventional build (due to moving some work to the controlled environment of a factory, where rain delays aren’t a problem, and enabling the optimisation of vertical transport on site). They were slightly cheaper, and had the potential to be much cheaper. And there’s the possibility of integrating new materials into the process, materials which couldn’t be used in a conventional process due to on-site restrictions.

Since that essay was published what was then a potential disruption looks like it might be about to tip into actual disruption. This is the subject of the case study.

In 2018 a project in the Melbourne CBD hit problems, as the cranes and trucks required to move materials onto the site would block a lane that was the sole access to the homes of many local residents. The solution the builder (Hickory) came up with was to build at night: the machinery would arrive around 9 pm and lift DFMA components (via the Hickory Building System) onto the site, installing an entire floor in four to six hours. Once a floor is complete, the floor below is weatherproof and there are no live edges. The machines are gone before the residents wake. During the day the trades go through the completed floor and finish the interior. There was some skepticism, as building is considered noisy, though a trial one night showed that the residents would hardly notice the nighttime construction.

And here’s where we might be seeing a potential complex disruption crystallise into actual disruption. The build proceeded, and the city council was so happy that it is considering requiring all high-rise building work to be done at night. This would, with the stroke of a pen, bar conventional builders from the market until they undertake the multi-year journey of developing their own operating model based on a DFMA process.

The case study looks at the development of DFMA building processes, the challenges they faced and how they’ve been overcome, and the potential impact on the market. It also looks at how firms might anticipate similar complex disruptions in their own market, pointing out that conventional market-scanning practices looking for disruptive technologies can actually be counterproductive, as they cannot predict complex disruption, and we’re in a market where there appears to be more complex disruption than simple disruption.

It’s an interesting story, and a local story, which is nice, so head over to the Deloitte website to read Digitalizing the construction industry: A case study in complex disruption.

References

1. Evans-Greenwood, P et al. 2019, ‘Digitalizing the construction industry: A case study in complex disruption’, Deloitte Insights,<>.
2. Evans-Greenwood, P & Leibowitz, D 2017, Your next future: Capitalising on disruptive change, Deloitte University Press, <>.

Your next future: Capitalising on disruptive change

I and a coauthor have a new report out on DU Press: Your next future: Capitalising on disruptive change.[1] Disruption is something we’d been puzzling over for some time, as it’s a fuzzy and poorly defined concept despite all the noise it generates. It’s also concerning that few, if any, of the theories have much predictive power.

Our contribution is fairly straightforward.

First we make the point that disruption, as the term is commonly used, covers a broad range of phenomena. This creates tension between our desire for a comprehensive definition, one encompassing this broad scope, and the need for a precise definition, so that we are all clear on what we’re talking about. Many academic theories (such as Clayton Christensen’s) come unstuck when it’s pointed out that while the theory might account for some disruptive phenomena, it doesn’t account for many other phenomena that can also be considered disruptive.

Consequently we must acknowledge that disruption operates at (at least) three different levels of abstraction:

  • At the highest level are long-term whole-of-economy shifts that disrupt all of us. The shift from stocks to flows – which we try to measure in the Shift Index[2] – is one of these.
  • At the mid-level are disruptions focused on a sector or industry. Our colleagues in the US have been cataloguing these in the Patterns of Disruption series.[3]
  • At the lowest level are the things that disrupt us, our firm.

It was the observation that value used to be objective, defined relative to the market in terms of product feature-function, but is now more commonly defined subjectively, relative to the firm and the firm-customer relationship, that prompted us to look at disruption with a wider lens and make this subjective disruption the subject of our essay.

Next we wanted to create a model of disruption that was predictive, one that could be fed into a strategy-formation process to enable a firm to identify concrete actions to prepare for a (potential) disruption and either capitalise on it or defuse it (i.e. neuter the disruption). The resulting model relies on three observations.

  • Disruption is degenerate. A single outcome, a disruption, might be triggered by a large number of different processes. This means that it will be impossible to understand disruption by identifying and analysing individual contributors without considering the complex relationships between them.
  • Disruption is constructive. While technology is important to a disruption, technology alone is not enough; we also need to consider the social and commercial forces that come together to trigger a disruption.
  • Disruption is subjective. A new technology might disrupt our sector or industry, but it may not disrupt us. The reverse is also true. Our concern is disruption to our business, not markets (via patterns of disruption), the economy (via the Big Shift) or disruption in general.

The result is a model that shows us why we cannot predict disruption by identifying ‘disruptive technologies’, but which does enable us to do something about shaping how we approach disruption.

We’re pretty happy with the result, which you can find at DU Press.

References

1. Evans-Greenwood, P & Leibowitz, D 2017, Your next future: Capitalising on disruptive change, Deloitte University Press, <>.
2. Evans-Greenwood, P & Williams, P 2014, Setting aside the burdens of the past: The possibilities of technology-driven change in Australia, Deloitte Australia, viewed 26 October 2017, <>.
3. Hagel, J, Seely Brown, J, Wooll, M, & de Maar, A 2015, Patterns of disruption: Anticipating disruptive strategies in a world of unicorns, black swans, and exponentials, Deloitte University Press, < anticipating-disruptive-strategy-of-market-entrants/>.

Digital is the new ERP

We seem to have forgotten that the development of Enterprise Resource Planning (ERP) was more a response to regulatory pressure than a child of technical innovation. This is why many executives and board members are unsure why their firm needs an ERP (and the massive investment implied), as ERP’s primary purpose was to improve governance (and, consequently, reduce operational risk and cost) rather than to provide the firm with some new value-creating capability.

Just prior to ERP, a confluence of technical and non-technical factors had created a situation where a firm’s executives and board had little idea of the goings on beneath them. Important details were buried in spreadsheets, squirrelled away on desktop PCs, with only summary reports passed to the general ledger and data warehouses.

Without the compliance guardrails provided by Finance and IT it’s easy for lines of business to go astray. Not long after spreadsheet use became widespread, it was clear that the information in the general ledger, which the executive and board were relying on to direct the company, could not be trusted. While the firm appeared to be making money, how this profit was being generated was less certain. Nor was it clear what operational risks a firm might be implicitly accepting, let alone how to manage them.

At this point the regulators stepped in, demanding improvements in governance and operations. Industry’s response was ERP: an integrated set of business processes that synchronise (in real time) departmental solutions with the general ledger, supported (and enforced) by information technology.

We seem to be approaching a similar situation with digital. Firms are finding that important details are buried in SaaS and online solutions, outside the purview of the Finance and IT departments and only loosely integrated with core systems, and their systems of record are, well, no longer ‘systems of record’.

This state of affairs could be accidental. The business wants to do the right thing but finds it difficult to know what the right thing to do is. They’re operating in a complex and rapidly changing business environment with demanding customers, many (previously core) functions are outsourced to specialist partners and suppliers, and they don’t have complete visibility into everything that is done on their behalf. It’s also an environment where regulators are constantly tweaking the rules to try and shape firm behavior, making a firm’s ability to absorb constant regulatory change a skill in and of itself.

Less ethical groups see this disconnect between the general ledger and lines of business as an opportunity to shape the story reaching head office. Cosmetic accounting techniques might be used to temporarily remove liabilities from a balance sheet, or to inflate revenue or market capitalisation by, for example, abusing special-purpose entities via techniques such as round-tripping (where an unused asset is sold with the understanding that the same or similar assets will be bought back at the same or a similar price), all hidden under the veil of a summary report periodically passed between the department and the general ledger. These are the types of behaviours that brought Enron and Lehman Brothers down.

The information silos of departmental computing, the paradigm before today’s ERP-enabled enterprise computing, drove business efficiency by enabling firms to manage larger volumes of data. LEO (the Lyons Electronic Office),[1] an example of an early (and possibly the first) general-purpose business computer, processed orders phoned into head office by Lyons tea shops every afternoon, calculating production requirements, assembly instructions, delivery schedules, invoices, costings, and management reports. These departmental applications, however, didn’t enable managers to find or exploit opportunities between departmental silos.

Spreadsheets and desktop PCs changed this. A desktop PC on a line manager’s desk enabled the manager to download data from multiple departmental applications and smash the data together in a spreadsheet. The resulting insights enabled production to be streamlined, or identified opportunities for new products and services, reducing costs and creating new value for the firm. Success begets success: more data was downloaded and more spreadsheets created. Soon these spreadsheets became integral parts of business processes and morphed into operational tools, outside the purview of the departmental applications that drove the firm’s compliance and reporting processes. Often the only connection between these new business processes and the general ledger was a summary report uploaded periodically.

The solution, then, was to integrate these cross-department spreadsheets, and the new business processes they enabled, into the firm’s departmental applications. The result is what we know today as ERP.

Something similar is happening with ‘digital’.

The low barriers to adoption of cloud and SaaS solutions, and customers empowered to demand what they want at the price they want from a global pool of suppliers, are driving line-of-business managers to go outside the enterprise to meet their needs. It’s not that the required business processes don’t exist; it just takes too long to modify them to support new products, supply chains, suppliers and partners. Managers find it easier to put a credit card into a SaaS solution than to wait for the IT department to respond with a plan, cost and timeline.

Departments are building entire value chains outside the purview of Finance and IT, as they believe that this is the only way that they can effectively respond to market opportunities and threats. Often the only connection to the general ledger is a summary spreadsheet, capturing details from cloud solutions, uploaded every few weeks or so. While the firm might be making money, it’s not clear to the executive or board just how this money is being made. Nor the risks this creates. We’ve been here before.

If the regulators don’t see this as a problem today, they soon will, as there is clearly a risk that good actors will unintentionally do the wrong thing, and that bad actors will intentionally do the wrong thing. There’s also the emerging problem of third parties hiding in the shadows, using your legitimate business to wash funds (just as Amazon and Airbnb have become targets for money launderers).[2] Operational risk is escalating as firms transform themselves from asset managers into integrators of services and information. The networked environment these firms inhabit creates unique challenges, has all the asymmetrical risks of an online environment, and the lack of visibility compounds the associated risks.

The problem digital is creating is clearly similar in effect to the one created by the introduction of spreadsheets and the desktop PC. The cause, however, is different. Rather than creating new business processes that span existing (departmental) ones, digital is resulting in duplicated business processes that run in parallel and support particular products or initiatives within the firm. They are also combining internal and external services, reducing the control a firm has over the end-to-end process.

These processes are intended to be short-lived, thrown together quickly and torn down just as quickly. A process might be required, for example, to support a new supply chain for a burger of the month: stood up at the start of the month to bring in new suppliers and partners, and torn down at the end. The duplicated processes exist to support short-lived business exceptions, not to span business silos.

It’s assumed that more precise and tightly defined processes, backed by teams focused on maintaining and updating these processes to make them ‘agile’, will bring the firm back into compliance. This is not working though.

So while the problem digital is creating is similar to that caused by spreadsheets, the cause is different, and consequently our solution must also be different. Indeed, one might see business processes as part of the problem rather than as part of the solution.

References

1. Land, F n.d., The story of LEO – the World’s First Business Computer, <>.
2. Shah, S 2017, ‘Airbnb is reportedly being used to launder money’, Engadget, <>.

Reconstructing jobs

Some coauthors and I have a new report out: Reconstructing jobs: Creating good jobs in the age of artificial intelligence. This essay builds on the previous two from our “future of work” series, Cognitive collaboration and Reconstructing work, published on DU Press (now Deloitte Insights) as part of Deloitte Review #20 (DR20) and #21 (DR21) respectively.

Cognitive collaboration’s main point was that there are synergies between humans and computers, and that a solution crafted by a human and a computer in collaboration is superior to, and different from, a solution made by either human or computer in isolation. Reconstructing work built on this, pointing out that the difference between human and machine lies not in particular knowledge or skills exclusive to either; indeed, if we frame work in terms of prosecuting tasks then we must accept that there are no knowledge or skills required that are uniquely human. What separates us from the robots is our ability to work together to make sense of the world and create new knowledge, knowledge that can then be baked into machines to make it more precise and efficient. This insight provided the title of the second essay – Reconstructing work – as it argued that we need to think differently about how we construct work if we want to make the most of the opportunities provided by AI.

This third essay in the series, Reconstructing jobs, takes a step back and looks at what these jobs of the future might look like. The narrative is built around a series of concrete examples – from contact centres through wealth management to bus drivers – to show how we might create this next generation of jobs. These are jobs founded on a new division of labour: humans create new knowledge, making sense of the world to identify and delineate problems; AI plans solutions to these problems; and good old automation delivers. To do this we must create good jobs, as it is good jobs that make the most of our human abilities as creative problem identifiers. These jobs are also good for firms as, when combined suitably with AI, they will provide superior productivity. They’re also good jobs for the community, as increased productivity can be used to provide more equitable services and to support *learning by doing* within the community, a rising tide that lifts all boats.

The essay concludes by pointing out that there is no inevitability about the nature of work in the future. As we say in the essay:

Clearly, the work will be different than it is today, though how it is different is an open question. Predictions of a jobless future, or a nirvana where we live a life of leisure, are most likely wrong. It’s true that the development of new technology has a significant effect on the shape society takes, though this is not a one-way street, as society’s preferences shape which technologies are pursued and which of their potential uses are socially acceptable.

The question is then, what do we want these jobs of the future to look like?

Redefining education @ TAFE NSW >Engage 2017

C4tE AU was invited to TAFE NSW’s annual >Engage event to present a 15 minute overview of our Redefining education report, which had caught the attention of the event’s organisers.

The report asks a simple question:

In a world where our relationship with knowledge has changed – why remember what we can google? – should our relationship with education change as well?

and then chases this idea down the rabbit hole to realise that what we mean by “education” and “to be educated” need to change in response.

The presentation is a 15-minute TED-format talk. You can find it on Vimeo.

The report is on Deloitte’s web site.

“Tiger, one day you will come to a fork in the road,” he said. “And you’re going to have to make a decision about which direction you want to go.” He raised his hand and pointed. “If you go that way you can be somebody. You will have to make compromises and you will have to turn your back on your friends. But you will be a member of the club and you will get promoted and you will get good assignments.”

Then Boyd raised his other hand and pointed another direction. “Or you can go that way and you can do something – something for your country and for your Air Force and for yourself. If you decide you want to do something, you may not get promoted and you may not get the good assignments and you certainly will not be a favorite of your superiors. But you won’t have to compromise yourself. You will be true to your friends and to yourself. And your work might make a difference.”

He paused and stared into the officer’s eyes and heart. “To be somebody or to do something. In life there is often a roll call. That’s when you will have to make a decision. To be or to do. Which way will you go?”

—John Boyd from “Boyd: The fighter pilot who changed the art of war”

Image: Wikicommons

Reconstructing work

Some coauthors and I have a new(ish) report out – Reconstructing work: Automation, artificial intelligence, and the essential role of humans – on DU Press as part of Deloitte Review #21 (DR21). (I should note that I’ve been a bit lax in posting on this blog, so this is quite late.)

The topic of DR21 was ‘the future of work’. Our essay builds on the “Cognitive collaboration” piece published in the previous Deloitte Review (DR20).

The main point of Cognitive collaboration was that there are synergies between humans and computers: a solution crafted by a human and a computer in collaboration is superior to, and different from, one produced by either a human or a computer working in isolation. The poster child for this is freestyle chess, where chess becomes a team sport and teams contain both humans and computers. Recently, during the development of our report on ‘should everyone learn how to code’ (To code or not to code, is that the question?, out the other week, but more on that later), we found emerging evidence that this kind of human–machine collaboration is a distinct and teachable skill that crosses multiple domains.

With this new essay we started by thinking about how one might apply this freestyle chess model to more pedestrian work environments. We found that coming up with a clean division of labour – breaking the problem into separate tasks for human and machine – was clumsy at best. However, if you think of AI as realising *behaviours* to solve *problems*, rather than prosecuting *tasks* to create *products*, then integrating human and machine becomes much easier. This framing also aligns better with the nature of artificial intelligence (AI) technologies.

As we say in a forthcoming report:

AI or ‘cognitive computing’ […] are better thought of as automating behaviours rather than tasks. Recognising a kitten in a photo from the internet, or avoiding a pedestrian that has stumbled onto the road, might be construed as a task, though it is more natural to think of it as a behaviour. Task implies a piece of work to be done or undertaken, an action (a technique) we choose to do. Behaviour, on the other hand, implies responding to the changing world around us, a reflex. We don’t choose to recognise a kitten or avoid the pedestrian, though we might choose (or not) to hammer in a nail when one is presented. A behaviour is something we reflexively do in response to appropriate stimulus (an image of a kitten, or even a kitten itself poised in front of us, or the errant pedestrian).

The radical conclusion from this is that there is no knowledge or skill unique to humans. That’s because knowledge and skill – in this context – are defined relative to a task. We’re at the point where, if we can define a task, we can automate it (given a favourable cost-benefit), so consequently there is no knowledge or skill unique to humans.

What separates us from the robots is our ability to work together to make sense of the world and create new knowledge, knowledge that can then be baked into machines to make it more precise and efficient. If we want to move forward, and deliver on the promise of AI and cognitive computing, then we need to shift the foundation of work. Hence the title: we need to “reconstruct work”.

The full essay is on the DU Press site, so head over and check it out.

Why remember what you can google?

google | ˈɡuːɡl |

verb [with object]

search for information about (someone or something) on the Internet using the search engine Google: on Sunday she googled an ex-boyfriend | [no object] : I googled for a cheap hotel/flight deal.

googleable (also googlable) adjective

1990s: from Google, the proprietary name of the search engine.

MacOS Dictionary

‘Why remember what you can google?’ has become something of a catchphrase, even more so now that many homes have voice assistants like Google Home and Amazon Alexa. It’s common, however, to feel a kind of existential angst: if we need to google something, we wonder whether we really understand it. Our natural impostor syndrome kicks in and we question whether our hard-won knowledge and skills are really our own.

The other side of this is learned helplessness: googling something might be helpful, but we don’t know quite what to google for, or we fail to realise that a search engine could help us solve the problem in front of us if only we knew what question to ask. This is a common problem with digital technology, where students learn how to use particular tools to solve particular problems but are unable to generalise these skills. Our schools are quite good at teaching students how, given a question, to construct a query for a search engine. What we’re not helping students with is understanding when or why they might use a search engine, or digital tools in general.

Both of these problems – existential angst and learned helplessness – stem from a misunderstanding of our relationship with technology.

Socrates mistrusted writing, as he felt that it would make us forgetful and that learning from a written text would limit our insight into, and wisdom about, a subject, since we couldn’t fully interrogate it. What he didn’t realise was that libraries of written texts provide us with access to more diverse points of view and enable us to explore the breadth of a subject; treating the library as an extension of our memory means that we are limited by what we can refer to in the library rather than by what we can remember ourselves.

We can see a similar phenomenon with contemporary graduates, who typically have a more sophisticated understanding of the subjects they covered in their formal education than earlier generations did. This is not because they are smarter. Their deeper understanding is a result of investing more of their time in exploring a subject, and less of it in finding and consuming the information they need.

Consider a film school student. Our student might be told that a technique Hitchcock used could be of interest to them.

In the seventies this would necessitate a trip to the library card catalogue: searching for criticism of Hitchcock’s films, flipping through books to determine which might be of interest, reading those that (potentially) are, listing the films that contain good examples of the technique, and then searching the repertory theatres to see which are playing these old films. The entire journey, from first mention to the student experimenting with the technique in their own work, might take over a year and require significant effort and devotion.

Compare this learning journey to what a student might do today. A mention by a lecturer on a Friday will result in the student spending a slow Saturday afternoon googling. They’ll work their way out from general (and somewhat untrustworthy) sources such as Wikipedia and blog posts as they canvass the topic, before consuming relevant criticism, some of it in peer-reviewed journals and books, some in the form of video essays incorporating clips from the films they discuss. Any films of note are added to the queue of the student’s streaming service. Sunday is spent watching the films, and possibly rewatching the scenes where the technique is used. The entire journey – from first suggestion to the student grabbing a camera and an editing tool to experiment – might take a weekend.

It’s not surprising that contemporary students emerge from their formal education with a more sophisticated command of their chosen domain: they’ve spent a greater proportion of their time investigating the breadth and depth of that domain, rather than struggling to find the sources and references they need to feed their learning.

The existential angst we all feel stems from the fact that we have a different relationship with the new technology than with the old. The relationship we have with the written word is different from the one we have with the spoken word. Similarly, the relationship we have with googled knowledge is different from the one we have with remembered knowledge. Learned helplessness emerges when we fail to form a productive relationship with the new technology.

To integrate the written word into our work we need to learn how to read and write: a skill. To make our relationship with the written word productive, however, we need to change how we approach work, adapting our attitudes and behaviours to make the most of the capabilities provided by this new technology while minimising the problems. Socrates was right: naively swapping the written word for the spoken word would result in forgetfulness and a shallower understanding of the topic. If, however, we also adapt our attitudes and behaviours, forming a productive relationship with the new technology (as our film student has), then we will have more information at our fingertips and a deeper command of that information.

The skill associated with ‘Why remember what you can google?’ is the ability to construct a search query from a question. Learned helplessness emerges when we don’t know what question to ask, or don’t realise that we could ask one. Knowing when and why to use a search engine is as important as, if not more important than, knowing how to use one.

To overcome this we need to create a library of questions that we might ask: a catalogue of subjects or ideas that we’ve become aware of but don’t yet ‘know’, along with strategies for constructing new questions. We might, for example, invest some time (an attitude) in watching TED talks during lunch, or in reading books and attending conferences in search of new ideas (both behaviours). We might ask colleagues for help, only to discover that we can construct a query by combining the name of an application with a short description of what we are trying to achieve (“Moodle peer marking”). This library is not a collection of things that we know; it’s a curated collection of things that we’re aware of and might want to learn in the future.

The existential angst we feel, along with learned helplessness, is due to our tendency to view technology as something apart from us, an instrumental tool that we use. This is also why we fear the rise of the robots: if we frame our relationship with technology in terms of agent and instrument, then it’s natural to assume that ever smarter tools will become the agent in the relationship, relegating us to the instrument.

Reality is much more complex, though, and our relationship with technology is richer than agent and instrument: our technology is, and has always been, part of us. If we want to avoid both existential angst and learned helplessness, then we need to acknowledge that understanding when and why to use these new technologies – fostering the attitudes and behaviours that enable us to form a productive relationship with them – is as important as, if not more important than, simply learning how to use them.