
Why remember what you can google?

google | ˈɡuːɡl |

verb [with object]

search for information about (someone or something) on the Internet using the search engine Google: on Sunday she googled an ex-boyfriend | [no object]: I googled for a cheap hotel/flight deal.

DERIVATIVES
googleable (also googlable) adjective

ORIGIN
1990s: from Google, the proprietary name of the search engine.

macOS Dictionary

‘Why remember what you can google?’ has become something of a catchphrase, even more so now that many homes have voice assistants like Google Home and Amazon Alexa. It’s common, however, to feel some form of existential angst: if we need to google something, we wonder whether we really understand it. Our natural impostor syndrome kicks in and we question whether our hard-won knowledge and skills are really our own.

The other side of this is learned helplessness, where googling something might be helpful but we don’t quite know what to google for, or we fail to realise that a search engine might be able to help us solve the problem in front of us if we just knew what question to ask. This is a common problem with digital technology: students learn how to use particular tools to solve particular problems but are unable to generalise these skills. Our schools are quite good at teaching students how, given a question, to construct a query for a search engine. What we’re not helping students with is understanding when or why they might use a search engine, or digital tools in general.

Both of these problems – the existential angst and the learned helplessness – stem from a misunderstanding of our relationship with technology.

Socrates mistrusted writing, as he felt that it would make us forgetful and that learning from a written text would limit our insight into and wisdom about a subject, since we couldn’t fully interrogate the text. What he didn’t realise was that libraries of written texts provide us with access to more diverse points of view and enable us to explore the breadth of a subject, and that treating the library as an extension of our memory means we are limited by what we can refer to in the library rather than by what we can remember ourselves.

We can see a similar phenomenon with contemporary graduates, who typically have a more sophisticated understanding of the subjects covered in their formal education than earlier generations did. This is not because they are smarter. Their deeper understanding is a result of investing more of their time in exploring a subject, and less of it in finding and consuming the information they need.

Consider a film school student. Our student might be told that a technique Hitchcock used could be of interest to them.

In the seventies this would necessitate a trip to the library card catalogue, searching for criticism of Hitchcock’s films, flipping through books to determine which might be of interest, reading those that (potentially) are, listing the films that contain good examples of the technique, and then scanning the repertory theatres to see which are playing these old films. The entire journey – from first mention to the student experimenting with the technique in their own work – might take over a year and would require significant effort and devotion.

Compare this learning journey to what a student might do today. A mention by a lecturer on a Friday will result in the student spending a slow Saturday afternoon googling. They’ll work their way through general (and somewhat untrustworthy) sources such as Wikipedia and blog posts as they canvass the topic, before consuming relevant criticism, some of it in peer-reviewed journals and books, some in the form of video essays incorporating clips from the films they discuss. Any films of note are added to the queue of the student’s streaming service. Sunday is spent watching the films, and possibly rewatching the scenes where the technique is used. The entire journey – from first suggestion to the student grabbing a camera and an editing tool to experiment – might take a weekend.

It’s not surprising that contemporary students emerge from their formal education with a more sophisticated command of their chosen domain: they’ve spent a greater proportion of their time investigating the breadth and depth of the domain, rather than struggling to find the sources and references they need to feed their learning.

The existential angst we all feel stems from the fact that we have a different relationship with the new technology than we had with the old. The relationship we have with the written word is different to the one we have with the spoken word. Similarly, the relationship we have with googled knowledge is different to the one we have with remembered knowledge. Learned helplessness emerges when we fail to form a productive relationship with the new technology.

To integrate the written word into our work we need to learn how to read and write: a skill. To make our relationship with the written word productive, however, we need to change how we approach work, adapting our attitudes and behaviours to make the most of the capabilities provided by the new technology while minimising the problems. Socrates was right: naively swapping the written word for the spoken word would result in forgetfulness and a shallower understanding of a topic. If, however, we also adapt our attitudes and behaviours, forming a productive relationship with the new technology (as our film student has), then we will have more information at our fingertips and a deeper command of that information.

The skill associated with ‘Why remember what you can google?’ is the ability to construct a search query from a question. Learned helplessness emerges when we don’t know what question to ask, or don’t realise that we could ask a question. Knowing when and why to use a search engine is as important as – if not more important than – knowing how to use one.

To overcome this we need to create a library of questions that we might ask: a catalogue of subjects or ideas that we’ve become aware of but don’t yet ‘know’, along with strategies for constructing new questions. We might, for example, invest some time (an attitude) in watching TED talks during lunch, or read books and attend conferences looking for new ideas (both behaviours). We might ask colleagues for help, only to discover that we can construct a query by combining the name of an application with a short description of what we’re trying to achieve (“Moodle peer marking”). This library is not a collection of things that we know; it’s a curated collection of things that we’re aware of and might want to learn in the future.
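To make that query-construction strategy concrete, here’s a minimal Python sketch. It’s an illustration only – the catalogue entries and helper names are invented for this example, not taken from the post.

```python
from urllib.parse import quote_plus

# A tiny 'library of questions': things we're aware of but don't yet know.
# Each entry pairs an application or topic with a short description of a goal.
catalogue = [
    ("Moodle", "peer marking"),
    ("Hitchcock", "dolly zoom technique"),
]

def build_query(tool: str, goal: str) -> str:
    """Combine an application/topic name with a goal into a search query."""
    return f"{tool} {goal}"

def search_url(query: str) -> str:
    """Turn a query into a Google search URL."""
    return "https://www.google.com/search?q=" + quote_plus(query)

for tool, goal in catalogue:
    print(search_url(build_query(tool, goal)))
```

The value isn’t in the code, of course, but in maintaining the catalogue itself: the list of things we’re aware of is what lets us ask the question at all.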

The existential angst we feel, along with learned helplessness, is due to our tendency to view technology as something apart from us, an instrumental tool that we use. This is also why we fear the rise of the robots: if we frame our relationship with technology in terms of agent and instrument, then it’s natural to assume that ever-smarter tools will become the agent in the relationship, relegating us to the instrument.

Reality is much more complex, though, and our relationship with technology is richer than agent and instrument. Our technology is, and has always been, part of us. If we want to avoid both existential angst and learned helplessness then we need to acknowledge that understanding when and why to use these new technologies – and fostering the attitudes and behaviours that enable us to form a productive relationship with them – is as important as, if not more important than, simply learning how to use them.

To code or not to code: Mapping digital competence

We’re kicking off the next phase of our “Should everyone learn how to code?” project. This time around it’s a series of public workshops over late January and early February in Melbourne, Geelong, Sydney, Western Sydney, Hobart, Brisbane, and Adelaide. The purpose of the workshops is to try to create a mud-map describing what a digitally competent workforce might look like.

As the pitch goes…

Australia’s prosperity depends on equipping the next generation with the skills needed to thrive in a digital environment. But does this mean that everyone needs to learn how to code?

In the national series of round tables Deloitte Centre for the Edge and Geelong Grammar School hosted in 2016, the answer was “Yes, enough that they know what coding is.”

The greater concern, though, was ensuring that everyone is comfortable integrating digital tools into their work whatever that work might be, something that we termed ‘digital competence’. This concept was unpacked in an essay published earlier this year.

Now we’re turning our attention to the question: What does digital competence look like in practice, and how do we integrate it into the curriculum?

We are holding an invitation-only workshop for industry and education to explore the following ideas:

  • What are the attributes of a digitally competent professional?
  • How might their digital competence change over their career?
  • What are the common attributes of digital competence in the workplace?
  • How might we teach these attributes?

If you’re interested in attending, or know someone who might be, then contact me and we’ll add you to the list. Note that there are only 24-32 places in each workshop and we want to ensure a diverse mix of people, so we might not be able to fit in everyone who’s interested, but we’ll do our best.

Welcome to the future, we have robots

I was interviewed on the AlphaGeek podcast. The interview came about after I presented some of the C4tE’s work around AI, the future of work, and how these might change government service delivery at the Digital Government Transformation Conference last November in Canberra, though the interview ranges more widely than that.

As the blurb says:

Peter Evans-Greenwood has deep experience as a CTO and tech strategist and is now a Fellow at Deloitte’s Centre for the Edge, helping organisations understand the digital revolution and how they can embrace the future. We get deep into artificial intelligence and the future of work. Will we still have jobs in the future? Peter is confident he has the answer.

The host piped in with:

Peter’s predictions are surprising but make total sense when he explains them.

You can find the podcast on the Alpha Transform web site.

To code or not to code, is that the question?

Over 2016-2017, Deloitte Centre for the Edge collaborated with Geelong Grammar School to run a national series of roundtables unpacking the common catchphrase “everyone should learn how to code”, as we had noticed that there was no consensus on what ‘coding’ was, and that it seemed to represent an aspiration more than a skill. We felt that the community had jumped from observation (digital technology is becoming increasingly important) to prescription (everyone should learn how to code) without considering what problem we actually wanted to solve.

What we found from the roundtables was interesting. First, yes, everyone should learn how to code a little, mainly to demystify it. Coding and computers are seen as something of a black art, and that shouldn’t be the case. A short compulsory coding course would also expose students to a skill and career that they might not otherwise have considered. However, the bigger problem lurking behind the catchphrase was the inability of many workers to engage productively with the technology. Many of us suffer from learned helplessness: we’ve learnt that we need to use digital tools in particular ways to solve particular problems, and if we deviate from this then all manner of things go wrong. This needs to change.

The results of the roundtables were written up and published by Deloitte and Geelong Grammar School.

Cognitive collaboration

I have a new report out on DU Press – Cognitive Collaboration: Why humans and computers think better together – where a couple of coauthors and I wade into the “will AI destroy the future or create utopia” debate.

Our big point is that AI doesn’t replicate human intelligence; it replicates specific human behaviours, and the mechanisms behind these behaviours are different to those behind their human equivalents. It’s in these differences that opportunity lies, as there’s evidence that machine and human intelligence are complementary rather than in competition. As we say in the report, “humans and machines are [both] better together”. The poster child for this is freestyle chess.

Eight years later [after Deep Blue defeated Kasparov in 1997], it became clear that the story is considerably more interesting than “machine vanquishes man.” A competition called “freestyle chess” was held, allowing any combination of human and computer chess players to compete. The competition resulted in an upset victory that Kasparov later reflected upon:

The surprise came at the conclusion of the event. The winner was revealed to be not a grandmaster with a state-of-the-art PC but a pair of amateur American chess players using three computers at the same time. Their skill at manipulating and “coaching” their computers to look very deeply into positions effectively counteracted the superior chess understanding of their grandmaster opponents and the greater computational power of other participants. Weak human + machine + better process was superior to a strong computer alone and, more remarkably, superior to a strong human + machine + inferior process. . . . Human strategic guidance combined with the tactical acuity of a computer was overwhelming. [1]

So rather than thinking of AI as our enemy, we should think of it as supporting us where we fall short.

We’re pretty happy with the report – so happy that we’re already working on a follow-on – so wander over to DU Press and check it out.

References

1. Garry Kasparov, “The chess master and the computer,” New York Review of Books, February 11, 2010, www.nybooks.com/articles/2010/02/11/the-chess-master-and-the-computer/.

Cryptocurrencies are problems, not features

CBA announced an Ethereum-based bond market solution. [1] It’s the usual sort of thing: it’s thought that blockchain and smart contracts will make everything so much easier and cheaper by improving transparency and making the exchange of goods (the bond) and value (the currency) atomic.

What caught my eye though was the following:

CBA created a digital currency to facilitate the payment for the bond through its blockchain, and Ms Gilder called on the RBA to consider issuing a digital version of the Australian dollar, which she said would provide the market with more confidence.

“For the blockchain to recognise its full potential as an asset register and a payments mechanism, you need a blockchain-friendly form of currency,” she said. “In the future, we would hope the RBA will look at issuing a centrally issued, blockchain-friendly digital currency, which would help because then the currency would be exactly the same as a fiat currency dollar in your account today just in blockchain form.”

James Eyers (24 Jan 2017), Commonwealth Bank puts government bonds on a blockchain, Australian Financial Review

As is all too often the case with this sort of thing, the proponents of the blockchain solution don’t understand how money works, and consequently don’t realise that statements like “a centrally issued, blockchain-friendly digital currency, which would help because then the currency would be exactly the same as a fiat currency dollar in your account today just in blockchain form” are simply wrong.

To provide the atomic operation the article talks about (an atomic asset-and-currency exchange), both the asset and the currency need to be blockchain native: the blockchain needs to be the ‘database of record’ for both. Further, this means that the currency must be issued on the same blockchain as the asset.
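To illustrate why, here’s a toy Python model of atomic delivery-versus-payment – a hypothetical sketch, not CBA’s actual design. Because the bond and the cash are both recorded on the same ledger, a trade either settles completely or not at all:

```python
class Ledger:
    """A toy single ledger that is the database of record for both legs."""

    def __init__(self):
        self.bonds = {}  # holder -> bond units
        self.cash = {}   # holder -> on-ledger currency units

    def atomic_dvp(self, seller, buyer, bond_qty, price):
        """Settle a bond-for-cash trade as one all-or-nothing operation."""
        # Validate both legs before mutating any state.
        if self.bonds.get(seller, 0) < bond_qty:
            raise ValueError("seller lacks bonds: nothing moves")
        if self.cash.get(buyer, 0) < price:
            raise ValueError("buyer lacks cash: nothing moves")
        # Both legs settle together; there is no intermediate state in
        # which the bond has moved but the payment has not.
        self.bonds[seller] -= bond_qty
        self.bonds[buyer] = self.bonds.get(buyer, 0) + bond_qty
        self.cash[buyer] -= price
        self.cash[seller] = self.cash.get(seller, 0) + price

ledger = Ledger()
ledger.bonds["dealer"] = 10
ledger.cash["fund"] = 1_000_000
ledger.atomic_dvp("dealer", "fund", bond_qty=10, price=1_000_000)
```

If the cash leg instead sat in a bank account reached via a gateway, the bond transfer and the payment would be two separate operations, and one could succeed while the other failed.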

The most obvious solution is a private currency secured against AUD held by an issuer / market maker. If we want our currency to be exactly the same as the AUD then it must be backed by AUD – i.e. a unit of the private currency represents a claim on a unit of AUD – otherwise we’re forced to deal with exchange rates.

The problem is that no-one will want to obtain the AUD required to issue enough private currency to support transactions in the market, so the solution isn’t economically viable. Imagine deploying a market-based solution that requires the market operator to hold working capital equal to the total market valuation. That’s what they’re talking about.
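A quick back-of-the-envelope check makes the point; the figures below are invented purely for illustration:

```python
# Hypothetical figures, purely to illustrate the working-capital problem.
total_market_value_aud = 500_000_000  # value of bonds settled on the platform
backing_ratio = 1.0                   # 1 unit of private currency = 1 AUD claim

# For any bond to be exchangeable for on-chain currency at any time, the
# issuer must hold AUD backing enough currency to cover the whole market.
required_reserve_aud = total_market_value_aud * backing_ratio
print(f"Issuer must hold A${required_reserve_aud:,.0f} in reserve")
```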

The proposed “centrally issued, blockchain-friendly digital currency” doesn’t solve the problem, as the currency wouldn’t live on the same blockchain as the asset. All payments would be off-chain via a gateway / oracle, so the security-for-value exchange would not be atomic, with all value exchanges enforced off-chain in the gateways / oracles. The nature of the currency doesn’t matter (“blockchain-friendly” is meaningless here): for the operation to be atomic, the currency and the asset must be issued on the same blockchain.

We could support atomic transactions on Ethereum by issuing a currency on-chain (a ‘cryptocurrency’, as with Bitcoin) and then having an exchange rate between the AUD and the on-chain currency. I doubt the bankers would find the currency risk acceptable, though. Plus each market participant would need to maintain an account with enough on-chain currency to support their operations, so all we’ve really done is take the “working capital equals total market value” requirement and spread it around the market participants, while adding currency risk. I can’t see the market having a lot of confidence in that solution.

Consequently the blockchain doesn’t buy us much more than a bit of transparency, and there are cheaper and more efficient ways of providing that without Ethereum. If we dump Ethereum and the cryptocurrency, and build a conventional distributed solution (R3 in its default mode, without a blockchain – smart contracts optional – should do), then the solution should be quite practical.

References

1. James Eyers (24 Jan 2017), Commonwealth Bank puts government bonds on a blockchain, Australian Financial Review.

You can’t democratise trust

I have a new post on the Deloitte Digital blog.

There’s been a lot of talk about using technology to democratise trust, and much of it shows a deep misunderstanding of just what trust is. It’s implicitly assumed that trust is a fungible asset, something that can be quantified, captured and passed around via technology. This isn’t true though.

As I point out in the post:

Trust is different to technology. We can’t democratise trust. Trust is a subjective measure of risk. It’s something we construct internally when we observe a consistent pattern of behaviour. We can’t create new kinds of trust. Trust is not a fungible factor that we can manipulate and transfer.

Misunderstanding trust means that technical solutions are proposed rather than tackling the real problem. As I conclude in the post:

If we want to rebuild trust then we need to solve the hard social problems, and create the stable, consistent and transparent institutions (be they distributed or centralised) that all of us can trust.

Technology can enable us to create more transparent institutions, but if these institutions fail to behave in a trustworthy manner then few will trust them. This is why the recent Ethereum hard fork is interesting. Some people wanted an immutable ledger, and they’re now all on ETC as they no longer trust ETH. Others trust the Ethereum Foundation to “do the right thing by them” and they’re now on ETH, and don’t trust ETC.

Why is blockchain so wasteful?

I have a new post up on the Deloitte blog, coauthored with Robert Hillard.

As we point out in the post:

Bitcoin miners are being paid somewhere between US $7-$9 to process each Bitcoin transaction.

To do this they’re consuming roughly 157% of a US household’s daily electricity usage per transaction. Those numbers don’t suggest a sustainable future for Bitcoin. They suggest an environmental disaster. And this is by design. So why is Bitcoin so wasteful?

The root of the problem is that in a permissionless and anonymous environment — where anyone can mine — you need to pay the miners, otherwise few will mine. We also know that miners will invest up to the margin (which looks to be around 20% for Bitcoin) to obtain this reward.

You can structure the mining algorithm to favour CAPEX or OPEX, though favouring OPEX is preferred as it reduces the tendency to centralise. You can also play with where the resources are consumed: either directly in the mining process, as with Proof of Work, or more indirectly, as with Proof of Stake. However, you cannot escape the fact that ultimately Bitcoin works because it consumes real-world resources.

This leaves you trapped between two conflicting goals:

  • make the mining pool as large as possible to increase the security of the ledger
  • make the mining pool as small as possible to make the ledger more efficient

The only lever you have to pull is the size of the reward: either via seigniorage, or transaction fees.

Again, as we conclude in the post:

Bitcoin is wasteful as it must be wasteful to work. It isn’t actually waste, it’s really just the cost of securing Bitcoin’s ledger. It is, however, a rather high cost when compared to a more conventional, centralised solution.
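To make the arithmetic behind those figures concrete, here’s a rough Python sketch. The inputs are assumptions chosen to roughly match the numbers quoted above, not data from the post or its sources:

```python
# Illustrative Bitcoin mining economics; all inputs are assumptions.
block_reward_btc = 12.5   # seigniorage per block around the time of the post
avg_fees_btc = 1.0        # assumed transaction fees per block
btc_price_usd = 1_000     # assumed BTC price
txs_per_block = 2_000     # assumed transactions per block
miner_margin = 0.20       # miners invest up to ~20% margin (per the post)

# Total reward per block (seigniorage plus fees), in USD.
reward_usd = (block_reward_btc + avg_fees_btc) * btc_price_usd

# Miners collectively spend up to the reward less their margin, so the
# real-world resources consumed securing each block approach this figure.
security_spend_usd = reward_usd * (1 - miner_margin)

print(f"Reward per block:        ~${reward_usd:,.0f}")
print(f"Resources consumed:      ~${security_spend_usd:,.0f} per block")
print(f"Payment per transaction: ~${reward_usd / txs_per_block:,.2f}")
```

Shrinking the reward is the only way to cut the resource consumption, but a smaller reward shrinks the mining pool and weakens the ledger’s security – exactly the conflict described above.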


Can blockchain save the music industry?

I have a new post up at the Deloitte Digital blog: Can blockchain save the music industry?

One of the trends we’re seeing across industries is for markets to split in two – low cost and high value – with the mid-market dying. The mass market, where everyone bought the same thing, is disappearing, and we’re transitioning to a market where individuals make their own trade-offs between high value and low cost.

This makes me wonder whether attempts to modernise the old mass-market music model will work. Mycelia and Mediachain are distribution strategies for a world where the mass market is dying.

The future for the music industry might lie elsewhere.


The future of retail: The need for a new trust architecture

Deloitte recently ran a series of breakfasts for the retail community, and kindly asked C4tE to participate. My contribution, which you can find on Scribd or embedded below, sprang out of our recent report The Future of Exchanging Value: Cryptocurrencies and the trust economy (FoEV) when, during a chance conversation, Robbie (the left-brained person who leads the Spatial team) pointed out that we were arguing for a new trust architecture in retail.

The nutshell explanation of the idea is:

  • The current retail model is a constructed environment and shopping a learnt experience. This model is a response to the creation of mass market products and supply chains.
  • The model is built on three pillars: customers identifying a need, searching for a solution to that need, and then transacting with a merchant they may not know or trust. Money – cash – facilitates this, as it enables us to transact with someone we don’t know and may never meet again.
  • However, a number of trends we saw in FoEV suggest that this model might be breaking down. The mid-market is dying, consumers have seized control of the customer-merchant relationship, peers have replaced brands, value is now defined by the consumer rather than the producer, payments are moving away from the till, and shopping is becoming increasingly impulse driven.
  • What will retail look like in a world where need is never fully formed, search is irrelevant, and transactions are seen as distasteful? What is the new trust architecture?

See what you think of the presentation, and feel free to ping us if you have any thoughts.

The two reports mentioned in the presentation are:

Future of Retail – a New Trust Architecture by Peter Evans-Greenwood