
Digital is the new ERP

We seem to have forgotten that the development of Enterprise Resource Planning (ERP) was more a response to regulatory pressure than a child of technical innovation. This is why many executives and board members are unsure why their firm needs an ERP (and the massive investment implied): ERP’s primary purpose was to improve governance (and, consequently, reduce operational risk and cost) rather than to provide the firm with some new value-creating capability.

Just prior to ERP, a confluence of technical and non-technical factors had created a situation where a firm’s executives and board had little idea of the goings-on beneath them. Important details were buried in spreadsheets, squirrelled away on desktop PCs, with only summary reports passed to the general ledger and data warehouses.

Without the compliance guide rails provided by Finance and IT, it’s easy for lines of business to go astray. Not long after spreadsheet use became widespread, it was clear that the information in the general ledger, which the executive and board were relying on to direct the company, could not be trusted. While the firm appeared to be making money, how this profit was being generated was less certain. Nor was it clear what operational risks the firm might be implicitly accepting, let alone how it might manage them.

At which point the regulator stepped in, demanding improvements in governance and operations. Industry’s response was ERP: an integrated set of business processes that synchronise (in real time) departmental solutions with the general ledger, supported (and enforced) by information technology.

We seem to be approaching a similar situation with digital. Firms are finding that important details are buried in SaaS and online solutions that sit outside the purview of the Finance and IT departments and are only loosely integrated with core systems, and their systems of record are, well, no longer ‘systems of record’.

This state of affairs could be accidental. The business wants to do the right thing but finds it difficult to know what the right thing to do is. They’re operating in a complex and rapidly changing business environment with demanding customers, many (previously core) functions are outsourced to specialist partners and suppliers, and they don’t have complete visibility into everything that is done on their behalf. It’s also an environment where regulators are constantly tweaking the rules to try and shape firm behaviour, making a firm’s ability to absorb constant regulatory change a skill in and of itself.

Less ethical groups see this disconnect between the general ledger and the lines of business as an opportunity to shape the story reaching head office. Cosmetic accounting techniques might be used to temporarily remove liabilities from a balance sheet, or to inflate revenue or market capitalisation by, for example, abusing special-purpose entities via techniques such as round-tripping (where an unused asset is sold with the understanding that the same or similar assets will be bought back at the same or a similar price), all hidden under the veil of a summary report periodically passed between the department and the general ledger. These are the types of behaviours that brought Enron and Lehman Brothers down.

The information silos of departmental computing, the paradigm before today’s ERP-enabled enterprise computing, drove business efficiency by enabling firms to manage larger volumes of data. LEO (the Lyons Electronic Office),[1] an example of an early (and possibly the first) general-purpose business computer in the world, processed orders phoned into head office by Lyons’ tea shops every afternoon, calculating production requirements, assembly instructions, delivery schedules, invoices, costings, and management reports. These departmental applications, however, didn’t enable managers to find or exploit opportunities between departmental silos.

Spreadsheets and desktop PCs changed this. A desktop PC on a line manager’s desk enabled the manager to download data from multiple departmental applications and smash the data together in a spreadsheet. The resulting insights enabled production to be streamlined or identified opportunities for new products and services, reducing costs and creating new value for the firm. Success begets success, and more data was downloaded and more spreadsheets created. Soon these spreadsheets became integral parts of business processes and morphed into operational tools, outside the purview of the departmental applications that drove the firm’s compliance and reporting processes. Often the only connection between these new business processes and the general ledger was a summary report uploaded periodically.

The solution, then, was to integrate these cross-department spreadsheets, and the new business processes they enabled, into the firm’s departmental applications. The result is what we know today as ERP.

Something similar is happening with ‘digital’.

Cloud and SaaS solutions’ low barriers to adoption, and customers empowered to demand what they want at the price they want from a global pool of suppliers, are driving line-of-business managers to go outside the enterprise to meet their needs. It’s not that the required business processes don’t exist; it just takes too long to modify them to support new products, supply chains, suppliers and partners. Managers find it easier to put a credit card into a SaaS solution than to wait for the IT department to respond with a plan, cost and timeline.

Departments are building entire value chains outside the purview of Finance and IT, as they believe that this is the only way they can effectively respond to market opportunities and threats. Often the only connection to the general ledger is a summary spreadsheet, capturing details from cloud solutions, uploaded every few weeks or so. While the firm might be making money, it’s not clear to the executive or board just how this money is being made, nor what risks this creates. We’ve been here before.

If the regulators don’t see this as a problem today, they soon will, as there is clearly a risk that good actors will unintentionally do the wrong thing, and that bad actors will intentionally do the wrong thing. There’s also the emerging problem of third parties hiding in the shadows, using your legitimate business to wash funds (just as Amazon and Airbnb have become targets for money launderers).[2] Operational risk is escalating as firms transform themselves from asset managers into integrators of services and information. The networked environment these firms inhabit creates unique challenges and has all the asymmetrical risks of an online environment, and the lack of visibility compounds the associated risks.

The problem digital is creating is clearly similar in effect to the one created by the introduction of spreadsheets and the desktop PC. The cause, however, is different. Rather than creating new business processes that span existing (departmental) ones, digital is resulting in duplicated business processes that run in parallel and support particular products or initiatives within the firm. These processes also combine internal and external services, reducing the control a firm has over the end-to-end process.

These processes are intended to be short lived, thrown together quickly and torn down just as quickly. A process might be required, for example, to support a new supply chain for a burger of the month, thrown up at the start of the month to bring in new suppliers and partners, and torn down at the end. The duplicated processes are to support short-lived business exceptions, not to span business silos.

It’s assumed that more precise and tightly defined processes, backed by teams focused on maintaining and updating these processes to make them ‘agile’, will bring the firm back into compliance. This approach is not working, though.

So while the problem digital is creating is similar to the one caused by spreadsheets, the cause is different, and consequently our solution must also be different. Indeed, one might see business processes as part of the problem rather than as part of the solution.

References

1. Land, F n.d., The story of LEO – the World’s First Business Computer, <https://warwick.ac.uk/services/library/mrc/explorefurther/digital/leo/story/>.
2. Shah, S 2017, ‘Airbnb is reportedly being used to launder money’, Engadget, <https://www.engadget.com/2017/11/27/airbnb-russian-money-laundering-scam/>.

Reconstructing jobs

Some coauthors and I have a new report out: Reconstructing jobs: Creating good jobs in the age of artificial intelligence. This essay builds on the previous two from our “future of work” series, Cognitive collaboration and Reconstructing work, published on DU Press (now Deloitte Insights) as part of Deloitte Review #20 (DR20) and #21 (DR21) respectively.

Cognitive collaboration‘s main point was that there are synergies between humans and computers, and that a solution crafted by a human and computer in collaboration is superior to, and different from, a solution made by either a human or a computer in isolation. Reconstructing work built on this, pointing out that the difference between human and machine is not found in particular knowledge or skills exclusive to either; indeed, if we frame work in terms of prosecuting tasks then we must accept that there are no knowledge or skills required that are uniquely human. What separates us from the robots is our ability to work together to make sense of the world and create new knowledge, knowledge that can then be baked into machines to make it more precise and efficient. This insight provided the title of the second essay – Reconstructing work – as it argued that we need to think differently about how we construct work if we want to make the most of the opportunities provided by AI.

This third essay in the series, Reconstructing jobs, takes a step back and looks at what these jobs of the future might look like. The narrative is built around a series of concrete examples – from contact centres through wealth management to bus drivers – to show how we might create this next generation of jobs. These are jobs founded on a new division of labour: humans create new knowledge, making sense of the world to identify and delineate problems; AI plans solutions to these problems; and good old automation delivers them. To do this we must create good jobs, as it is good jobs that make the most of our human abilities as creative problem identifiers. These jobs are also good for firms as, when suitably combined with AI, they will provide superior productivity. They’re also good jobs for the community, as increased productivity can be used to provide more equitable services and to support *learning by doing* within the community, a rising tide that lifts all boats.

The essay concludes by pointing out that there is no inevitability about the nature of work in the future. As we say in the essay:

Clearly, the work will be different than it is today, though how it is different is an open question. Predictions of a jobless future, or a nirvana where we live a life of leisure, are most likely wrong. It’s true that the development of new technology has a significant effect on the shape society takes, though this is not a one-way street, as society’s preferences shape which technologies are pursued and which of their potential uses are socially acceptable.

The question is then, what do we want these jobs of the future to look like?

Redefining education @ TAFE NSW >Engage 2017

C4tE AU was invited to TAFE NSW’s annual >Engage event to present a 15-minute overview of our Redefining education report, which had caught the attention of the event’s organisers.

The report asks a simple question:

In a world where our relationship with knowledge has changed – why remember what we can google? – should our relationship with education change as well?

and then chases this idea down the rabbit hole to realise that what we mean by “education” and “to be educated” need to change in response.

The presentation is a 15-minute TED-format thing. You can find it on Vimeo.

The report is on Deloitte’s web site.

“Tiger, one day you will come to a fork in the road,” he said. “And you’re going to have to make a decision about which direction you want to go.” He raised his hand and pointed. “If you go that way you can be somebody. You will have to make compromises and you will have to turn your back on your friends. But you will be a member of the club and you will get promoted and you will get good assignments.”

Then Boyd raised his other hand and pointed another direction. “Or you can go that way and you can do something – something for your country and for your Air Force and for yourself. If you decide you want to do something, you may not get promoted and you may not get the good assignments and you certainly will not be a favorite of your superiors. But you won’t have to compromise yourself. You will be true to your friends and to yourself. And your work might make a difference.”

He paused and stared into the officer’s eyes and heart. “To be somebody or to do something. In life there is often a roll call. That’s when you will have to make a decision. To be or to do. Which way will you go?”

—John Boyd from “Boyd: The fighter pilot who changed the art of war”

Image: Wikicommons

Reconstructing work

Some coauthors and I have a new(ish) report out – Reconstructing work: Automation, artificial intelligence, and the essential role of humans – on DU Press as part of Deloitte Review #21 (DR21). (I should note that I’ve been a bit lax in posting on this blog, so this is quite late.)

The topic of DR21 was ‘the future of work’. Our essay builds on the “Cognitive collaboration” piece published in the previous Deloitte Review (DR20).

The main point in Cognitive collaboration was that there are synergies between humans and computers. A solution crafted by a human and computer in collaboration is superior to, and different from, a solution made by either a human or a computer in isolation. The poster child for this is freestyle chess, where chess becomes a team sport with teams containing both humans and computers. Recently, during the development of our report on ‘should everyone learn how to code’ (To code or not to code, is that the question?, out the other week, but more on that later), we found emerging evidence that this kind of collaboration is a unique and teachable skill that crosses multiple domains.

With this new essay we started by thinking about how one might apply this freestyle chess model to more pedestrian work environments. We found that coming up with a clean division of labour – breaking the problem into separate tasks for human and machine – was clumsy at best. However, if you think of AI as realising *behaviours* to solve *problems*, rather than prosecuting *tasks* to create *products*, then integrating human and machine is much easier. This framing aligns better with the nature of artificial intelligence (AI) technologies.

As we say in a forthcoming report:

AI or ‘cognitive computing’ […] are better thought of as automating behaviours rather than tasks. Recognising a kitten in a photo from the internet, or avoiding a pedestrian that has stumbled onto the road, might be construed as a task, though it is more natural to think of it as a behaviour. Task implies a piece of work to be done or undertaken, an action (a technique) we choose to do. Behaviour, on the other hand, implies responding to the changing world around us, a reflex. We don’t choose to recognise a kitten or avoid the pedestrian, though we might choose (or not) to hammer in a nail when one is presented. A behaviour is something we reflexively do in response to an appropriate stimulus (an image of a kitten, or even a kitten itself poised in front of us, or the errant pedestrian).

The radical conclusion from this is that there is no knowledge or skill unique to a human. That’s because knowledge and skill – in this context – are defined relative to a task. We’re at the point where, if we can define a task, we can automate it (given a favourable cost-benefit), and so there are no knowledge or skills unique to humans.

What separates us from the robots is our ability to work together to make sense of the world and create new knowledge, knowledge that can then be baked into machines to make it more precise and efficient. If we want to move forward, and deliver on the promise of AI and cognitive computing, then we need to shift the foundation of work. Hence the title: we need to “reconstruct work”.

The full essay is on the DU Press site, so head over and check it out.

Why remember what you can google?

google | ˈɡuːɡl |

verb [with object]

search for information about (someone or something) on the Internet using the search engine Google: on Sunday she googled an ex-boyfriend | [no object] : I googled for a cheap hotel/flight deal.

DERIVATIVES
googleable (also googlable) adjective

ORIGIN
1990s: from Google, the proprietary name of the search engine.

MacOS Dictionary

‘Why remember what you can google?’ has become something of a catchphrase, even more so now that many homes have voice assistants like Google Home and Amazon Alexa. It’s common, however, to feel some form of existential angst: if we need to google something, we wonder whether we really understand it. Our natural impostor syndrome kicks in and we question whether our hard-won knowledge and skills are really our own.

The other side of this is learned helplessness, where googling something might be helpful but we don’t know quite what to google for, or we fail to realise that a search engine might be able to help us solve the problem in front of us if we just knew what question to ask. This is a common problem with digital technology, where students learn how to use particular tools to solve particular problems but are unable to generalise these skills. Our schools are quite good at teaching students how, given a question, to construct a query for a search engine. What we’re not helping students with is understanding when or why they might use a search engine, or digital tools in general.

Both of these problems – the existential angst and the learned helplessness – stem from a misunderstanding of our relationship with technology.

Socrates mistrusted writing as he felt that it would make us forgetful, and that learning from a written text would limit our insight and wisdom into a subject as we couldn’t fully interrogate it. What he didn’t realise was that libraries of written texts provide us with access to more diverse points of view and enable us to explore the breadth of a subject, while treating the library as an extension of our memory means that we are limited to what we can refer to in the library rather than what we can remember ourselves.

We can see a similar phenomenon with contemporary graduates, who typically have a more sophisticated understanding of the subjects they covered in their formal education than did earlier generations. This is not because they are smarter. Their deeper understanding is a result of them investing more of their time exploring a subject, and less of it in attempting to find and consume the information they need.

Consider a film school student. Our student might be told that a technique Hitchcock used could be of interest to them.

In the seventies this would necessitate a trip to the library card catalogue, searching for criticism of Hitchcock’s films, flipping through books to determine which might be of interest, reading those that (potentially) are interesting, listing the films that contain good examples of the technique, and then searching the repertory theatres to see which are playing these old films. The entire journey from first mention to the student experimenting with the technique in their own work might take over a year and require significant effort and devotion.

Compare this learning journey to what a student might do today. The mention by a lecturer on a Friday will result in the student spending a slow Saturday afternoon googling. They’ll work their way from general (and somewhat untrustworthy) sources such as Wikipedia and blog posts as they canvass the topic, before consuming relevant criticism, some of which will be peer-reviewed journals and books, though other material might be in the form of video essays incorporating clips from the movies they mention. Any films of note are added to the queue of the student’s streaming service. Sunday is spent watching the films, and possibly rewatching the scenes where the technique is used. The entire journey – from first suggestion to the student grabbing a camera and editing tool to experiment – might take a weekend.

It’s not surprising that contemporary students emerge from their formal education with a more sophisticated command of their chosen domain: they’ve spent a greater proportion of their time investigating the breadth and depth of the domain, rather than struggling to find the sources and references they need to feed their learning.

The existential angst we all feel stems from the fact that we have a different relationship with the new technology than with the old. The relationship we have with the written word is different to the one we have with the spoken word. Similarly, the relationship we have with googled knowledge is different to the one we have with remembered knowledge. Learned helplessness emerges when we fail to form a productive relationship with the new technology.

To integrate the written word into our work we need to learn how to read and write, a skill. To make our relationship with the written word productive, however, we need to change how we approach work, changing our attitudes and behaviours to make the most of the capabilities provided by this new technology while minimising the problems. Socrates was right: naively swapping the written word for the spoken would result in forgetfulness and a shallower understanding of the topic. If, however, we also adapt our attitudes and behaviours, forming a productive relationship with the new technology (as our film student has), then we will have more information at our fingertips and a deeper command of that information.

The skill associated with ‘Why remember what you can google?’ is the ability to construct a search query from a question. Learned helplessness emerges when we don’t know what question to ask, or don’t realise that we could ask a question. Knowing when and why to use a search engine is as, if not more, important than knowing how to use a search engine.

To overcome this we need to create a library of questions that we might ask: a catalogue of subjects or ideas that we’ve become aware of but don’t ‘know’, and strategies for constructing new questions. We might, for example, invest some time (an attitude) in watching TED talks during lunchtime, or reading books and attending conferences looking for new ideas (both behaviours). We might ask colleagues for help, only to discover that we can construct a query by combining the name of an application with a short description of what we are trying to achieve (“Moodle peer marking”). This library is not a collection of things that we know; it’s a collection we’ve curated of things that we’re aware of and which we might want to learn in the future.

The existential angst we feel, along with learned helplessness, are due to our tendency to view technology as something apart from us, an instrumental tool that we use. This is also why we fear the rise of the robots: if we frame our relationship with technology in terms of agent and instrument, then it’s natural to assume ever smarter tools will become the agent in our relationship, relegating us to the instrument.

Reality is much more complex though, and our relationship with technology is richer than agent and instrument. Our technology is and has always been part of us. If we want to avoid both existential angst and learned helplessness then we need to acknowledge that understanding when and why to use these new technologies, fostering the attitudes and behaviours that enable us to form a productive relationship with them, are as, if not more, important than simply learning how to use them.

To code or not to code: Mapping digital competence

We’re kicking off the next phase of our “Should everyone learn how to code?” project. This time around it’s a series of public workshops over late January and early February in Melbourne, Geelong, Sydney, Western Sydney, Hobart, Brisbane, and Adelaide. The purpose of the workshops is to try and create a mud-map describing what a digitally competent workforce might look like.

As the pitch goes…

Australia’s prosperity depends on equipping the next generation with the skills needed to thrive in a digital environment. But does this mean that everyone needs to learn how to code?

In the national series of round tables Deloitte Centre for the Edge and Geelong Grammar School hosted in 2016, the answer was “Yes, enough that they know what coding is.”

The greater concern, though, was ensuring that everyone is comfortable integrating digital tools into their work whatever that work might be, something that we termed ‘digital competence’. This concept was unpacked in an essay published earlier this year.

Now we’re turning our attention to the question: What does digital competence look like in practice, and how do we integrate it into the curriculum?

We are holding an invitation-only workshop for industry and education to explore the following ideas:

  • What are the attributes of a digitally competent professional?
  • How might their digital competence change over their career?
  • What are the common attributes of digital competence in the workplace?
  • How might we teach these attributes?

If you’re interested in attending, or if you know someone who might be, then contact me and we’ll add you to the list. Note that there are only 24-32 places in each workshop and we want to ensure a diverse mix of people in each one, so we might not be able to fit everyone who’s interested, but we’ll do our best.

Welcome to the future, we have robots

I was interviewed by the AlphaGeek podcast. This came about as a result of presenting some of the C4tE’s work around AI, the future of work, and how this might change government service delivery at the Digital Government Transformation Conference last November in Canberra, though the interview is wider-ranging than that.

As the blurb says:

Peter Evans-Greenwood has deep experience as a CTO and tech strategist and is now a Fellow at Deloitte’s Centre for the Edge, helping organisations understand the digital revolution and how they can embrace the future. We get deep into artificial intelligence and the future of work. Will we still have jobs in the future? Peter is confident he has the answer.

The host piped in with:

Peter’s predictions are surprising but make total sense when he explains them.

You can find the podcast on the Alpha Transform web site.

To code or not to code, is that the question?

Over 2016-2017 Deloitte Centre for the Edge collaborated with Geelong Grammar School to run a national series of roundtables where we unpacked the common catchphrase “everyone should learn how to code”, as we had noticed that there was no consensus on what ‘coding’ was, and it seemed to represent an aspiration more than a skill. We felt that the community had jumped from observation (digital technology is becoming increasingly important) to prescription (everyone should learn how to code) without considering what problem we actually wanted to solve.

What we found from the roundtables was interesting. First, yes, everyone should learn how to code a little, mainly to demystify it. Coding and computers are seen as something of a black art, and that shouldn’t be the case. A short compulsory coding course would also expose students to a skill and career that they might not have otherwise considered. However, the bigger problem lurking behind the catchphrase was the inability of many workers to engage productively with technology. Many of us suffer from learned helplessness, where we’ve learnt that we need to use digital tools in particular ways to solve particular problems, and if we deviate from this then all manner of things go wrong. This needs to change.

The results of the roundtables were written up and published by Deloitte and Geelong Grammar School.

Cognitive collaboration

I have a new report out on DU Press – Cognitive collaboration: Why humans and computers think better together – where a couple of coauthors and I wade into the “will AI destroy the future or create utopia” debate.

Our big point is that AI doesn’t replicate human intelligence; it replicates specific human behaviours, and the mechanisms behind these behaviours are different to those behind their human equivalents. It’s in these differences that opportunity lies, as there’s evidence that machine and human intelligence are complementary, rather than in competition. As we say in the report, “humans and machines are [both] better together”. The poster child for this is freestyle chess.

Eight years later [after Deep Blue defeated Kasparov in 1997], it became clear that the story is considerably more interesting than “machine vanquishes man.” A competition called “freestyle chess” was held, allowing any combination of human and computer chess players to compete. The competition resulted in an upset victory that Kasparov later reflected upon:

The surprise came at the conclusion of the event. The winner was revealed to be not a grandmaster with a state-of-the-art PC but a pair of amateur American chess players using three computers at the same time. Their skill at manipulating and “coaching” their computers to look very deeply into positions effectively counteracted the superior chess understanding of their grandmaster opponents and the greater computational power of other participants. Weak human + machine + better process was superior to a strong computer alone and, more remarkably, superior to a strong human + machine + inferior process. . . . Human strategic guidance combined with the tactical acuity of a computer was overwhelming.[1]

So rather than thinking of AI as our enemy, we should think of it as compensating for our failings.

We’re pretty happy with the report – so happy that we’re already working on a follow-on – so wander over to DU Press and check it out.

References

1. Garry Kasparov, “The chess master and the computer,” New York Review of Books, February 11, 2010, www.nybooks.com/articles/2010/02/11/the-chess-master-and-the-computer/.