All posts by peg

Reconstructing jobs

Some coauthors and I have a new report out: Reconstructing jobs: Creating good jobs in the age of artificial intelligence. This essay builds on the previous two from our “future of work” series, Cognitive collaboration and Reconstructing work, published on DU Press (now Deloitte Insights) as part of Deloitte Review #20 (DR20) and #21 (DR21) respectively.

Cognitive collaboration’s main point was that there are synergies between humans and computers, and that a solution crafted by a human and a computer in collaboration is superior to, and different from, a solution made by either a human or a computer in isolation. Reconstructing work built on this, pointing out that the difference between human and machine is not a matter of particular knowledge or skills exclusive to either; indeed, if we frame work in terms of prosecuting tasks then we must accept that there are no knowledge or skills required that are uniquely human. What separates us from the robots is our ability to work together to make sense of the world and create new knowledge, knowledge that can then be baked into machines to make it more precise and efficient. This insight provided the title of the second essay – Reconstructing work – as it argued that we need to think differently about how we construct work if we want to make the most of the opportunities provided by AI.

This third essay in the series, Reconstructing jobs, takes a step back and looks at what these jobs of the future might look like. The narrative is built around a series of concrete examples – from contact centres through wealth management to bus drivers – to show how we might create this next generation of jobs. These are jobs founded on a new division of labour: humans create new knowledge, making sense of the world to identify and delineate problems; AI plans solutions to these problems; and good old automation delivers them. To do this we must create good jobs, as it is good jobs that make the most of our human abilities as creative problem identifiers. These jobs are also good for firms as, when combined suitably with AI, they will provide superior productivity. They’re also good jobs for the community, as increased productivity can be used to provide more equitable services and to support *learning by doing* within the community, a rising tide that lifts all boats.

The essay concludes by pointing out that there is no inevitability about the nature of work in the future. As we say in the essay:

Clearly, the work will be different than it is today, though how it is different is an open question. Predictions of a jobless future, or a nirvana where we live a life of leisure, are most likely wrong. It’s true that the development of new technology has a significant effect on the shape society takes, though this is not a one-way street, as society’s preferences shape which technologies are pursued and which of their potential uses are socially acceptable.

The question is then, what do we want these jobs of the future to look like?

Redefining education @ TAFE NSW >Engage 2017

C4tE AU was invited to TAFE NSW’s annual >Engage event to present a 15-minute overview of our Redefining education report, which had caught the attention of the event’s organisers.

The report asks a simple question:

In a world where our relationship with knowledge has changed – why remember what we can google? – should our relationship with education change as well?

and then chases this idea down the rabbit hole to realise that what we mean by “education” and “to be educated” need to change in response.

The presentation is a 15-minute, TED-format thing. You can find it on Vimeo.

The report is on Deloitte’s web site.

“Tiger, one day you will come to a fork in the road,” he said. “And you’re going to have to make a decision about which direction you want to go.” He raised his hand and pointed. “If you go that way you can be somebody. You will have to make compromises and you will have to turn your back on your friends. But you will be a member of the club and you will get promoted and you will get good assignments.”

Then Boyd raised his other hand and pointed another direction. “Or you can go that way and you can do something – something for your country and for your Air Force and for yourself. If you decide you want to do something, you may not get promoted and you may not get the good assignments and you certainly will not be a favorite of your superiors. But you won’t have to compromise yourself. You will be true to your friends and to yourself. And your work might make a difference.”

He paused and stared into the officer’s eyes and heart. “To be somebody or to do something. In life there is often a roll call. That’s when you will have to make a decision. To be or to do. Which way will you go?”

—John Boyd from “Boyd: The fighter pilot who changed the art of war”

Image: Wikicommons

Reconstructing work

Some coauthors and I have a new(ish) report out – Reconstructing work: Automation, artificial intelligence, and the essential role of humans – on DU Press as part of Deloitte Review #21 (DR21). (I should note that I’ve been a bit lax in posting on this blog, so this is quite late.)

The topic of DR21 was ‘the future of work’. Our essay builds on the “Cognitive collaboration” piece published in the previous Deloitte Review (DR20).

The main point in Cognitive collaboration was that there are synergies between humans and computers. A solution crafted by a human and a computer in collaboration is superior to, and different from, a solution made by either a human or a computer in isolation. The poster child for this is freestyle chess, where chess is a team sport with teams containing both humans and computers. Recently, during the development of our report on ‘should everyone learn how to code’ (To code or not to code, is that the question?, out the other week, but more on that later), we found emerging evidence that this ability to collaborate with machines is a unique and teachable skill that crosses multiple domains.

With this new essay we started by thinking about how one might apply this freestyle chess model to more pedestrian work environments. We found that coming up with a clean division of labour between human and machine – breaking the problem into separate tasks for each – was clumsy at best. However, if you think of AI as realising *behaviours* to solve *problems*, rather than prosecuting *tasks* to create *products*, then integrating human and machine is much easier. This framing aligns better with the nature of artificial intelligence (AI) technologies.

As we say in a forthcoming report:

AI or ‘cognitive computing’ […] are better thought of as automating behaviours rather than tasks. Recognising a kitten in a photo from the internet, or avoiding a pedestrian that has stumbled onto the road, might be construed as a task, though it is more natural to think of it as a behaviour. Task implies a piece of work to be done or undertaken, an action (a technique) we choose to do. Behaviour, on the other hand, implies responding to the changing world around us, a reflex. We don’t choose to recognise a kitten or avoid the pedestrian, though we might choose (or not) to hammer in a nail when one is presented. A behaviour is something we reflexively do in response to appropriate stimulus (an image of a kitten, or even a kitten itself poised in front of us, or the errant pedestrian).

The radical conclusion from this is that there is no knowledge or skill unique to a human. That’s because knowledge and skill – in this context – are defined relative to a task. We’re at the point where, if we can define a task, we can automate it (cost-benefit permitting), and consequently there are no knowledge or skills unique to humans.

What separates us from the robots is our ability to work together to make sense of the world and create new knowledge, knowledge that can then be baked into machines to make it more precise and efficient. If we want to move forward, and deliver on the promise of AI and cognitive computing, then we need to shift the foundation of work. Hence the title: we need to “reconstruct work”.

The full essay is on the DU Press site, so head over and check it out.

Why remember what you can google?

google | ˈɡuːɡl |

verb [with object]

search for information about (someone or something) on the Internet using the search engine Google: on Sunday she googled an ex-boyfriend | [no object] : I googled for a cheap hotel/flight deal.

DERIVATIVES
googleable (also googlable) adjective

ORIGIN
1990s: from Google, the proprietary name of the search engine.

MacOS Dictionary

‘Why remember what you can google?’ has become something of a catchphrase, even more so now that many homes have voice assistants like Google Home and Amazon Alexa. It’s common, however, to feel some form of existential angst: if we need to google something, we wonder whether we really understand it. Our natural impostor syndrome kicks in and we question whether our hard-won knowledge and skills are really our own.

The other side of this is learned helplessness, where googling something might be helpful but we don’t know quite what to google for, or we fail to realise that a search engine might be able to help us solve the problem in front of us if we just knew what question to ask. This is a common problem with digital technology, where students learn how to use particular tools to solve particular problems but are unable to generalise these skills. Our schools are quite good at teaching students how, given a question, to construct a query for a search engine. What we’re not helping students with is understanding when or why they might use a search engine, or digital tools in general.

Both of these problems – the existential angst and the learned helplessness – stem from a misunderstanding of our relationship with technology.

Socrates mistrusted writing as he felt that it would make us forgetful, and that learning from a written text would limit our insight and wisdom into a subject as we couldn’t fully interrogate it. What he didn’t realise was that libraries of written texts provide us with access to more diverse points of view and enable us to explore the breadth of a subject, while treating the library as an extension of our memory means that we are limited by what we can find in the library rather than by what we can remember ourselves.

We can see a similar phenomenon with contemporary graduates, who typically have a more sophisticated understanding of the subjects they covered in their formal education than did earlier generations. This is not because they are smarter. Their deeper understanding is a result of them investing more of their time in exploring a subject, and less of it in attempting to find and consume the information they need.

Consider a film school student. Our student might be told that a technique Hitchcock used could be of interest to them.

In the seventies this would necessitate a trip to the library card catalogue, searching for criticism of Hitchcock’s films, flipping through books to determine which might be of interest, reading those that (potentially) are interesting, listing the films that contain good examples of the technique, and then searching the repertory theatres to see which are playing these old films. The entire journey from first mention to the student experimenting with the technique in their own work might take over a year and will require significant effort and devotion.

Compare this learning journey to what a student might do today. The mention by a lecturer on a Friday will result in the student spending a slow Saturday afternoon googling. They’ll work their way from general (and somewhat untrustworthy) sources such as Wikipedia and blog posts as they canvass the topic, before consuming relevant criticism, some of which will be peer-reviewed journal articles and books, though other pieces might be in the form of video essays incorporating clips from the movies they mention. Any films of note are added to the queue of the student’s streaming service. Sunday is spent watching the films, and possibly rewatching the scenes where the technique is used. The entire journey – from first suggestion to the student grabbing a camera and editing tool to experiment – might take a weekend.

It’s not surprising that contemporary students emerge from their formal education with a more sophisticated command of their chosen domain: they’ve spent a greater proportion of their time investigating the breadth and depth of the domain, rather than struggling to find the sources and references they need to feed their learning.

The existential angst we all feel stems from the fact that we have a different relationship with the new technology than with the old. The relationship we have with the written word is different to the one we have with the spoken word. Similarly, the relationship we have with googled knowledge is different to the one we have with remembered knowledge. Learned helplessness emerges when we fail to form a productive relationship with the new technology.

To integrate the written word into our work we need to learn how to read and write, a skill. To make our relationship with the written word productive, however, we need to change how we approach work, changing our attitudes and behaviours to make the most of the capabilities provided by this new technology while minimising the problems. Socrates was right: naively swapping the written word for the spoken would result in forgetfulness and a shallower understanding of the topic. If, however, we also adapt our attitudes and behaviours, forming a productive relationship with the new technology (as our film student has), then we will have more information at our fingertips and a deeper command of that information.

The skill associated with ‘Why remember what you can google?’ is the ability to construct a search query from a question. Learned helplessness emerges when we don’t know what question to ask, or don’t realise that we could ask a question. Knowing when and why to use a search engine is as important as, if not more important than, knowing how to use one.

To overcome this we need to create a library of questions that we might ask: a catalogue of subjects or ideas that we’ve become aware of but don’t ‘know’, and strategies for constructing new questions. We might, for example, invest some time (an attitude) in watching TED talks during lunchtime, or reading books and attending conferences looking for new ideas (both behaviours). We might ask colleagues for help only to discover that we can construct a query by combining the name of an application with a short description of what we are trying to achieve (“Moodle peer marking”). This library is not a collection of things that we know; it’s a curated collection of things that we’re aware of and might want to learn in the future.

The existential angst we feel, along with learned helplessness, is due to our tendency to view technology as something apart from us, an instrumental tool that we use. This is also why we fear the rise of the robots: if we frame our relationship with technology in terms of agent and instrument, then it’s natural to assume ever smarter tools will become the agent in our relationship, relegating us to the instrument.

Reality is much more complex though, and our relationship with technology is richer than agent and instrument. Our technology is, and has always been, part of us. If we want to avoid both existential angst and learned helplessness then we need to acknowledge that understanding when and why to use these new technologies, and fostering the attitudes and behaviours that enable us to form a productive relationship with them, is as important as, if not more important than, simply learning how to use them.

To code or not to code: Mapping digital competence

We’re kicking off the next phase of our “Should everyone learn how to code?” project. This time around it’s a series of public workshops over late January and early February in Melbourne, Geelong, Sydney, Western Sydney, Hobart, Brisbane, and Adelaide. The purpose of the workshops is to try to create a mud-map describing what a digitally competent workforce might look like.

As the pitch goes…

Australia’s prosperity depends on equipping the next generation with the skills needed to thrive in a digital environment. But does this mean that everyone needs to learn how to code?

In the national series of round tables Deloitte Centre for the Edge and Geelong Grammar School hosted in 2016, the answer was “Yes, enough that they know what coding is.”

The greater concern, though, was ensuring that everyone is comfortable integrating digital tools into their work whatever that work might be, something that we termed ‘digital competence’. This concept was unpacked in an essay published earlier this year.

Now we’re turning our attention to the question: What does digital competence look like in practice, and how do we integrate it into the curriculum?

We are holding an invitation-only workshop for industry and education to explore the following ideas:

  • What are the attributes of a digitally competent professional?
  • How might their digital competence change over their career?
  • What are the common attributes of digital competence in the workplace?
  • How might we teach these attributes?

If you’re interested in attending, or if you know someone who might be, then contact me and we’ll add you to the list. Note that there are only 24-32 places in each workshop and we want to ensure a diverse mix of people in each, so we might not be able to fit everyone who’s interested, but we’ll do our best.

Welcome to the future, we have robots

I was interviewed by the AlphaGeek podcast. This came about as a result of presenting some of the C4tE’s work around AI, the future of work, and how this might change government service delivery, at the Digital Government Transformation Conference last November in Canberra, though the interview is wider ranging than that.

As the blurb says:

Peter Evans-Greenwood has deep experience as a CTO and tech strategist and is now a Fellow at Deloitte’s Centre for the Edge, helping organisations understand the digital revolution and how they can embrace the future. We get deep into artificial intelligence and the future of work. Will we still have jobs in the future? Peter is confident he has the answer.

The host piped in with:

Peter’s predictions are surprising but make total sense when he explains them.

You can find the podcast on the Alpha Transform web site.

Cognitive collaboration

I have a new report out on DU Press – Cognitive Collaboration: Why humans and computers think better together – where a couple of coauthors and I wade into the “will AI destroy the future or create utopia” debate.

Our big point is that AI doesn’t replicate human intelligence, it replicates specific human behaviours, and the mechanisms behind these behaviours are different to those behind their human equivalents. It’s in these differences that opportunity lies, as there’s evidence that machine and human intelligence are complementary, rather than in competition. As we say in the report, “humans and machines are [both] better together”. The poster child for this is freestyle chess.

Eight years later [after Deep Blue defeated Kasparov in 1997], it became clear that the story is considerably more interesting than “machine vanquishes man.” A competition called “freestyle chess” was held, allowing any combination of human and computer chess players to compete. The competition resulted in an upset victory that Kasparov later reflected upon:

The surprise came at the conclusion of the event. The winner was revealed to be not a grandmaster with a state-of-the-art PC but a pair of amateur American chess players using three computers at the same time. Their skill at manipulating and “coaching” their computers to look very deeply into positions effectively counteracted the superior chess understanding of their grandmaster opponents and the greater computational power of other participants. Weak human + machine + better process was superior to a strong computer alone and, more remarkably, superior to a strong human + machine + inferior process. . . . Human strategic guidance combined with the tactical acuity of a computer was overwhelming.[1]

So rather than thinking of AI as our enemy, we should think of it as supporting us in our failings.

We’re pretty happy with the report – so happy that we’re already working on a follow-on – so wander over to DU Press and check it out.

References

1. Garry Kasparov, “The chess master and the computer,” New York Review of Books, February 11, 2010, www.nybooks.com/articles/2010/02/11/the-chess-master-and-the-computer/.

Cryptocurrencies are problems, not features

CBA announced an Ethereum-based bond market solution.[1] It’s the usual sort of thing: it’s thought that blockchain and smart contracts will make everything so much easier and cheaper by improving transparency and making the exchange of goods (the bond) and value (currency) atomic.

What caught my eye though was the following:

CBA created a digital currency to facilitate the payment for the bond through its blockchain, and Ms Gilder called on the RBA to consider issuing a digital version of the Australian dollar, which she said would provide the market with more confidence.

“For the blockchain to recognise its full potential as an asset register and a payments mechanism, you need a blockchain-friendly form of currency,” she said. “In the future, we would hope the RBA will look at issuing a centrally issued, blockchain-friendly digital currency, which would help because then the currency would be exactly the same as a fiat currency dollar in your account today just in blockchain form.”

James Eyers (24 Jan 2017), Commonwealth Bank puts government bonds on a blockchain, Australian Financial Review

As is all too often the case with this sort of thing, the proponents of the blockchain solution don’t understand how money works, and consequently don’t realise that statements like “a centrally issued, blockchain-friendly digital currency, which would help because then the currency would be exactly the same as a fiat currency dollar in your account today just in blockchain form” are just wrong.

To provide the atomic operation the article talks about (atomic asset and currency exchange), both the asset and the currency need to be blockchain native: the blockchain needs to be the ‘database of record’ for both. Further, this means that the currency must be issued on the same blockchain as the asset.
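To make the atomicity point concrete, here is a minimal sketch – in Python rather than Solidity, with entirely made-up names such as Ledger and settle_dvp, not any real blockchain API – of delivery-versus-payment settlement where both the bond and the cash are recorded on the same ledger. Because both legs are just entries in the one database of record, the swap either completes in full or not at all.

```python
# Toy single ledger that is the database of record for both the
# bond (the asset) and the cash (the currency). Illustrative only.

class Ledger:
    def __init__(self):
        self.bonds = {}  # holder -> units of the bond
        self.cash = {}   # holder -> units of the on-ledger currency

    def settle_dvp(self, seller, buyer, bond_qty, price):
        """Delivery-versus-payment: both legs succeed or neither does."""
        if self.bonds.get(seller, 0) < bond_qty:
            raise ValueError("seller lacks bonds")  # nothing has moved yet
        if self.cash.get(buyer, 0) < price:
            raise ValueError("buyer lacks cash")    # nothing has moved yet

        # Both legs are committed as one operation; there is never a
        # state in which the bond has moved but the payment has not.
        self.bonds[seller] -= bond_qty
        self.bonds[buyer] = self.bonds.get(buyer, 0) + bond_qty
        self.cash[buyer] -= price
        self.cash[seller] = self.cash.get(seller, 0) + price


ledger = Ledger()
ledger.bonds["issuer"] = 100
ledger.cash["investor"] = 1000.0
ledger.settle_dvp("issuer", "investor", bond_qty=1, price=99.5)
```

The atomicity comes entirely from the fact that both balances live in the same database of record; the moment one leg has to settle somewhere else, you are back to two separate operations.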

The most obvious solution is a private currency secured against some AUD held by an issuer / market maker. If we want our currency to be exactly the same as AUD then it must be backed by AUD – i.e. a unit of private currency represents a claim on a unit of AUD – otherwise we’re forced to deal with change rates.

The problem is that no-one will want to obtain the AUD required to issue enough private currency to support transactions in the market, so the solution isn’t economically viable. Imagine deploying a market-based solution that requires the market manager to hold working capital equal to the total market valuation. That’s what they’re talking about.

The proposed “centrally issued, blockchain-friendly digital currency” doesn’t solve the problem, as the currency wouldn’t live on the same blockchain. All payments would be made off-chain via a gateway / oracle, and therefore the security-value exchange would not be atomic, with all value exchanges enforced off-chain in the gateways / oracles. The nature of the currency doesn’t matter (“blockchain-friendly” is meaningless): for the operation to be atomic, the currency and asset must be issued on the same blockchain.
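For contrast, here is the same settlement sketched with the cash leg handled off-chain through a gateway, reusing the toy Ledger from above (the gateway, its transfer call, and PaymentFailed are all hypothetical). The asset transfer and the payment become two separate operations, and the gap between them is exactly where atomicity – and the ledger’s ability to enforce the exchange – is lost.

```python
# Sketch of the "blockchain-friendly digital currency" pattern: the bond
# moves on the ledger, but the money moves in a separate system via a
# gateway / oracle. The gateway and PaymentFailed are hypothetical.

class PaymentFailed(Exception):
    pass

def settle_with_offchain_cash(ledger, gateway, seller, buyer, bond_qty, price):
    # Leg 1: move the bond on the asset ledger.
    ledger.bonds[seller] -= bond_qty
    ledger.bonds[buyer] = ledger.bonds.get(buyer, 0) + bond_qty

    # Leg 2: ask the external payment system to move the currency.
    # If this fails (or is later reversed), the bond has already moved:
    # the exchange was never atomic, and unwinding it is an off-chain,
    # operational and legal problem, not something the ledger enforces.
    try:
        gateway.transfer(from_account=buyer, to_account=seller, amount=price)
    except PaymentFailed:
        # Manual compensation: put the bond back and hope those balances
        # haven't been touched in the meantime.
        ledger.bonds[buyer] -= bond_qty
        ledger.bonds[seller] += bond_qty
        raise
```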

We could support atomic transactions via Ethereum by issuing a currency on-chain (a “cryptocurrency”, as with Bitcoin) and then have an exchange rate between the AUD and on-chain currency. I doubt the bankers would find the currency risk acceptable though. Plus each market participant would need to maintain an account with enough on-chain currency to support their operations, so all we’ve really done is take the “working capital is total market value” requirement and spread it around the market participants, with an additional currency risk. I can’t see the market having a lot of confidence in that solution.

Consequently the blockchain doesn’t buy us much more than a bit of transparency, and there are cheaper and more efficient ways of supporting that without Ethereum. If we dump Ethereum and the cryptocurrency, and build a conventional distributed solution (R3 in default mode – without a blockchain, smart contracts optional – should do), then the solution should be quite practical.

References

1. James Eyers (24 Jan 2017), Commonwealth Bank puts government bonds on a blockchain, Australian Financial Review.

You can’t democratise trust

I have a new post on the Deloitte Digital blog.

There’s been a lot of talk about using technology to democratise trust, and much of it shows a deep misunderstanding of just what trust is. It’s implicitly assumed that trust is a fungible asset, something that can be quantified, captured and passed around via technology. This isn’t true though.

As I point out in the post:

Trust is different to technology. We can’t democratise trust. Trust is a subjective measure of risk. It’s something we construct internally when we observe a consistent pattern of behaviour. We can’t create new kinds of trust. Trust is not a fungible factor that we can manipulate and transfer.

Misunderstanding trust means that technical solutions are proposed rather than tackling the real problem. As I conclude in the post:

If we want to rebuild trust then we need to solve the hard social problems, and create the stable, consistent and transparent institutions (be they distributed or centralised) that all of us can trust.

Technology can enable us to create more transparent institutions, but if these institutions fail to behave in a trustworthy manner then few will trust them. This is why the recent Ethereum hard fork is interesting. Some people wanted an immutable ledger, and they’re now all on ETC as they no longer trust ETH. Others trust the Ethereum Foundation to “do the right thing by them” and they’re now on ETH, and don’t trust ETC.