“Tiger, one day you will come to a fork in the road,” he said. “And you’re going to have to make a decision about which direction you want to go.” He raised his hand and pointed. “If you go that way you can be somebody. You will have to make compromises and you will have to turn your back on your friends. But you will be a member of the club and you will get promoted and you will get good assignments.”
Then Boyd raised his other hand and pointed another direction. “Or you can go that way and you can do something – something for your country and for your Air Force and for yourself. If you decide you want to do something, you may not get promoted and you may not get the good assignments and you certainly will not be a favorite of your superiors. But you won’t have to compromise yourself. You will be true to your friends and to yourself. And your work might make a difference.”
He paused and stared into the officer’s eyes and heart. “To be somebody or to do something. In life there is often a roll call. That’s when you will have to make a decision. To be or to do. Which way will you go?”
—John Boyd, quoted in Robert Coram’s “Boyd: The Fighter Pilot Who Changed the Art of War”
The topic of DR21 was ‘the future of work’. Our essay builds on the “Cognitive collaboration” piece published in the previous Deloitte Review (DR20).
The main point of Cognitive collaboration was that there are synergies between humans and computers: a solution crafted by a human and computer in collaboration is superior to, and different from, a solution created by either a human or a computer in isolation. The poster child for this is freestyle chess, which turns chess into a team sport, with teams containing both humans and computers. Recently, during the development of our report on ‘should everyone learn how to code’ (To code or not to code, is that the question?, published the other week, but more on that later), we found emerging evidence that this kind of collaboration is a distinct and teachable skill that crosses multiple domains.
With this new essay we started by thinking about how one might apply this freestyle chess model to more pedestrian work environments. We found that coming up with a clean division of labour between human and machine – breaking the problem into separate tasks for each – was clumsy at best. However, if you think of AI as realising *behaviours* to solve *problems*, rather than prosecuting *tasks* to create *products*, then integrating human and machine is much easier. This framing aligns better with the nature of artificial intelligence (AI) technologies.
As we say in a forthcoming report:
AI or ‘cognitive computing’ […] are better thought of as automating behaviours rather than tasks. Recognising a kitten in a photo from the internet, or avoiding a pedestrian that has stumbled onto the road, might be construed as a task, though it is more natural to think of it as a behaviour. Task implies a piece of work to be done or undertaken, an action (a technique) we choose to do. Behaviour, on the other hand, implies responding to the changing world around us, a reflex. We don’t choose to recognise a kitten or avoid the pedestrian, though we might choose (or not) to hammer in a nail when one is presented. A behaviour is something we reflexively do in response to appropriate stimulus (an image of a kitten, or even a kitten itself poised in front of us, or the errant pedestrian).
The radical conclusion from this is that there is no knowledge or skill unique to humans. That’s because knowledge and skill – in this context – are defined relative to a task. We’re at the point where, if we can define a task, we can automate it (cost-benefit permitting), so consequently there are no knowledge or skills unique to humans.
What separates us from the robots is our ability to work together to make sense of the world and create new knowledge, knowledge that can then be baked into machines to make it more precise and efficient. If we want to move forward, and deliver on the promise of AI and cognitive computing, then we need to shift the foundation of work. Hence the title: we need to “reconstruct work”.
google (verb): search for information about (someone or something) on the Internet using the search engine Google: on Sunday she googled an ex-boyfriend | [no object]: I googled for a cheap hotel/flight deal.
DERIVATIVES googleable (also googlable) adjective
ORIGIN 1990s: from Google, the proprietary name of the search engine.
‘Why remember what you can google?’ has become something of a catchphrase, even more so now that many homes have voice assistants like Google Home and Amazon Alexa. It’s common, however, to feel some form of existential angst: if we need to google something then we wonder whether we really understand it. Our natural impostor syndrome kicks in and we question whether our hard-won knowledge and skills are really our own.
The other side of this is learned helplessness, where googling something might be helpful but we don’t know quite what to google for, or we fail to realise that a search engine might be able to help us solve the problem in front of us if only we knew what question to ask. This is a common problem with digital technology: students learn how to use particular tools to solve particular problems but are unable to generalise these skills. Our schools are quite good at teaching students how, given a question, to construct a query for a search engine. What we’re not helping students with is understanding when or why they might use a search engine, or digital tools in general.
Both of these problems – existential angst and learned helplessness – stem from a misunderstanding of our relationship with technology.
Socrates mistrusted writing as he felt that it would make us forgetful, and that learning from a written text would limit our insight and wisdom into a subject as we couldn’t fully interrogate it. What he didn’t realise was that libraries of written texts provide us with access to more diverse points of view and enable us to explore the breadth of a subject, while treating the library as an extension of our memory means that we are limited by what we can refer to in the library rather than by what we can remember ourselves.
We can see a similar phenomenon with contemporary graduates, who typically have a more sophisticated understanding of the subjects they covered in their formal education than did earlier generations. This is not because they are smarter. Their deeper understanding is a result of them investing more of their time exploring a subject, and less of it attempting to find and consume the information they need.
Consider a film school student. Our student might be told that a technique Hitchcock used could be of interest to them.
In the seventies this would necessitate a trip to the library’s card catalogue, searching for criticism of Hitchcock’s films, flipping through books to determine which might be of interest, reading those that (potentially) are interesting, listing the films that contain good examples of the technique, and then searching the repertory theatres to see which are playing these old films. The entire journey from first mention to the student experimenting with the technique in their own work might take over a year and will require significant effort and devotion.
Compare this learning journey to what a student might do today. The mention by a lecturer on a Friday will result in the student spending a slow Saturday afternoon googling. They’ll work their way from general (and somewhat untrustworthy) sources such as Wikipedia and blog posts as they canvass the topic before consuming relevant criticism, some of which will be peer-reviewed journals and books, though others might be video essays incorporating clips from the films they mention. Any films of note are added to the queue of the student’s streaming service. Sunday is spent watching the films, and possibly rewatching the scenes where the technique is used. The entire journey – from first suggestion to the student grabbing a camera and editing tool to experiment – might take a weekend.
It’s not surprising that contemporary students emerge from their formal education with a more sophisticated command of their chosen domain: they’ve spent a greater proportion of their time investigating the breadth and depth of the domain, rather than struggling to find the sources and references they need to feed their learning.
The existential angst we all feel stems from the fact that we have a different relationship with the new technology than with the old. The relationship we have with the written word is different to the one we have with the spoken word. Similarly, the relationship we have with googled knowledge is different to the one we have with remembered knowledge. Learned helplessness emerges when we fail to form a productive relationship with the new technology.
To integrate the written word into our work we need to learn how to read and write, a skill. To make our relationship with the written word productive, however, we need to change how we approach work, changing our attitudes and behaviours to make the most of the capabilities provided by the new technology while minimising the problems. Socrates was right: naively swapping the written word for the spoken would result in forgetfulness and a shallower understanding of the topic. If, however, we also adapt our attitudes and behaviours, forming a productive relationship with the new technology (as our film student has), then we will have more information at our fingertips and a deeper command of that information.
The skill associated with ‘Why remember what you can google?’ is the ability to construct a search query from a question. Learned helplessness emerges when we don’t know what question to ask, or don’t realise that we could ask a question. Knowing when and why to use a search engine is as, if not more, important than knowing how to use a search engine.
To overcome this we need to create a library of questions that we might ask: a catalogue of subjects or ideas that we’ve become aware of but don’t ‘know’, and strategies for constructing new questions. We might, for example, invest some time (an attitude) in watching TED talks during lunch time, or reading books and attending conferences looking for new ideas (both behaviours). We might ask colleagues for help only to discover that we can construct a query by combining the name of an application with a short description of what we are trying to achieve (“Moodle peer marking”). This library is not a collection of things that we know; it’s a collection we’ve curated of things that we’re aware of and might want to learn in the future.
The existential angst we feel, along with learned helplessness, are due to our tendency to view technology as something apart from us, an instrumental tool that we use. This is also why we fear the rise of the robots: if we frame our relationship with technology in terms of agent and instrument, then it’s natural to assume ever smarter tools will become the agent in our relationship, relegating us to the instrument.
Reality is much more complex though, and our relationship with technology is richer than agent and instrument. Our technology is and has always been part of us. If we want to avoid both existential angst and learned helplessness then we need to acknowledge that understanding when and why to use these new technologies, fostering the attitudes and behaviours that enable us to form a productive relationship with them, are as, if not more, important than simply learning how to use them.
We’re kicking off the next phase of our “Should everyone learn how to code?” project. This time around it’s a series of public workshops over late January and early February in Melbourne, Geelong, Sydney, Western Sydney, Hobart, Brisbane, and Adelaide. The purpose of the workshops is to try and create a mud-map describing what a digitally competent workforce might look like.
As the pitch goes…
Australia’s prosperity depends on equipping the next generation with the skills needed to thrive in a digital environment. But does this mean that everyone needs to learn how to code?
In the national series of round tables Deloitte Centre for the Edge and Geelong Grammar School hosted in 2016, the answer was “Yes, enough that they know what coding is.”
The greater concern, though, was ensuring that everyone is comfortable integrating digital tools into their work whatever that work might be, something that we termed ‘digital competence’. This concept was unpacked in an essay published earlier this year.
Now we’re turning our attention to the question: What does digital competence look like in practice, and how do we integrate it into the curriculum?
What are the attributes of a digitally competent professional?
How might their digital competence change over their career?
What are the common attributes of digital competence in the workplace?
How might we teach these attributes?
If you’re interested in attending, or if you know someone who might be interested in attending, then contact me and we’ll add you to the list. Note that there are only 24–32 places in each workshop and we want to ensure a diverse mix of people, so we might not be able to fit everyone who’s interested, but we’ll do our best.
Over 2016-2017 the Deloitte Centre for the Edge collaborated with Geelong Grammar School to run a national series of roundtables unpacking the common catchphrase “everyone should learn how to code”, as we had noticed that there was no consensus on what ‘coding’ was, and it seemed to represent an aspiration more than a skill. We felt that the community had jumped from observation (digital technology is becoming increasingly important) to prescription (everyone should learn how to code) without considering what problem we actually wanted to solve.
What we found from the roundtables was interesting. First, yes, everyone should learn how to code a little, mainly to demystify it. Coding and computers are seen as something of a black art, and that shouldn’t be the case. A short compulsory coding course would also expose students to a skill and career that they might not have otherwise considered. However, the bigger problem lurking behind the catchphrase was the inability of many workers to productively engage with technology. Many of us suffer from learned helplessness, where we’ve learnt that we need to use digital tools in particular ways to solve particular problems, and if we deviate from this then all manner of things go wrong. This needs to change.
Our big point is that AI doesn’t replicate human intelligence, it replicates specific human behaviours, and the mechanisms behind these behaviours are different to those behind their human equivalents. It’s in these differences that opportunity lies, as there’s evidence that machine and human intelligence are complementary, rather than in competition. As we say in the report, “humans and machines are [both] better together”. The poster child for this is freestyle chess.
Eight years later [after Deep Blue defeated Kasparov in 1997], it became clear that the story is considerably more interesting than “machine vanquishes man.” A competition called “freestyle chess” was held, allowing any combination of human and computer chess players to compete. The competition resulted in an upset victory that Kasparov later reflected upon:
The surprise came at the conclusion of the event. The winner was revealed to be not a grandmaster with a state-of-the-art PC but a pair of amateur American chess players using three computers at the same time. Their skill at manipulating and “coaching” their computers to look very deeply into positions effectively counteracted the superior chess understanding of their grandmaster opponents and the greater computational power of other participants. Weak human + machine + better process was superior to a strong computer alone and, more remarkably, superior to a strong human + machine + inferior process. . . . Human strategic guidance combined with the tactical acuity of a computer was overwhelming.
—Garry Kasparov, “The chess master and the computer,” New York Review of Books, February 11, 2010, www.nybooks.com/articles/2010/02/11/the-chess-master-and-the-computer/
So rather than thinking of AI as our enemy, we should think of it as compensating for our failings.
There’s been a lot of talk about using technology to democratise trust, and much of it shows a deep misunderstanding of just what trust is. It’s implicitly assumed that trust is a fungible asset, something that can be quantified, captured and passed around via technology. This isn’t true though.
As I point out in the post:
Trust is different to technology. We can’t democratise trust. Trust is a subjective measure of risk. It’s something we construct internally when we observe a consistent pattern of behaviour. We can’t create new kinds of trust. Trust is not a fungible factor that we can manipulate and transfer.
Misunderstanding trust means that technical solutions are proposed rather than tackling the real problem. As I conclude in the post:
If we want to rebuild trust then we need to solve the hard social problems, and create the stable, consistent and transparent institutions (be they distributed or centralised) that all of us can trust.
Technology can enable us to create more transparent institutions, but if these institutions fail to behave in a trustworthy manner then few will trust them. This is why the recent Ethereum hard fork is interesting. Some people wanted an immutable ledger, and they’re now all on ETC as they no longer trust ETH. Others trust the Ethereum Foundation to “do the right thing by them” and they’re now on ETH, and don’t trust ETC.
Computers are at the heart of the economy, and coding is at the heart of computers. Australia’s prosperity depends on equipping the next generation with the skills they need to thrive in this environment, but does this mean that we need to teach everyone how to code? Coding has a proud role in digital technology’s past, but is it an essential skill in the future? Our relationship with technology is evolving and coding, while still important, is just one of the many new skills that will be required.
Tyler Cowen has an article over at MIT Technology Review, Measured and Unequal, that discusses how improved measurement of workers might be a fundamental driver of inequality in the workplace of the future.
Consider journalism. In the “good old days,” no one knew how many people were reading an article like this one, or an individual columnist. Today a digital media company knows exactly how many people are reading which articles for how long, and also whether they click through to other links. The exactness and the transparency offered by information technology allow us to measure value fairly precisely.
The result is that many journalists turn out to be not so valuable at all. Their wages fall or they lose their jobs, while the superstar journalists attract more Web traffic and become their own global brands. Some even start their own media companies, as did Nate Silver at FiveThirtyEight and Ezra Klein at Vox. In this case better measurement boosts income inequality more or less permanently.
The assumption behind this sort of piecework measurement is that all the value realised by an article is due to the sweat and toil of a more-talented-than-usual journalist. If your article gets the clicks, then it must be because you are so good at what you do.
Unfortunately the world is not so simple.
We might choose to build our organisations around this sort of idea (and indeed, BuzzFeed et al. work this way) but it tends to foster a short-term and overly transactional view of work that ignores a lot of the value that workers, or a community of workers, might create.
The first problem is the obsessive focus on outputs, on the assumption that the worker is responsible for all the value created. Outputs depend on inputs, and not just the worker’s skills. You can’t make a silk purse out of a sow’s ear, as the saying goes.
While the worker might be skilled, their work is also dependent on the quality of the materials they have to work with. Take the journalism example: a manager somewhere is splitting up the work, either by handing out story ideas or by allocating topics to individuals. Not all ideas or topics are equal. It’s possible for someone to come from outside this system by finding a new approach—as Nate Silver did with a data-driven approach—but that’s the exception rather than the rule. It’s more typical for the value of the outputs to be bounded by the quality of the inputs, not the effort of the individual.
We see something similar in sales. It’s easy to sell in a rising market, and a booming market will see many sales people getting large commissions for no reason other than turning up. In a down market, though, it’s a different story, and we punish some of our best people for working hard just to bring anything into the business.
If we want to reward individuals based on their contribution then we need to quantify the amount of value they added, rather than the amount of value they lucked into. If we don’t then we’ll create a feeding frenzy for the juicy bits of work, while other less attractive (but possibly no less important) work gets ignored.
Unfortunately it’s surprisingly difficult to measure value-add for many workers, as it can be challenging to gauge the quality of the materials they have to work with. A good example is the effort in the US to measure teachers on the value they add in the classroom, an effort that is struggling as it seems nearly impossible to objectively measure the quality of the students they have to work with. There are just too many variables.
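To make the distinction between raw output and value-add concrete, here is a minimal sketch of the idea behind such value-added models. It is a toy illustration only, not any real assessment methodology: the function name, the data, and the single-predictor linear fit are all hypothetical simplifications. A teacher's "value-add" is taken to be the average residual of their students' end-of-year scores after controlling for prior achievement.

```python
import numpy as np

def value_added(prior, actual, teacher):
    """Toy value-added model (illustrative only): fit one linear
    predictor of end-of-year scores from prior scores across all
    students, then credit each teacher with the mean residual of
    their own students."""
    prior = np.asarray(prior, dtype=float)
    actual = np.asarray(actual, dtype=float)
    # Fit actual ≈ slope * prior + intercept via least squares.
    A = np.vstack([prior, np.ones_like(prior)]).T
    coef, *_ = np.linalg.lstsq(A, actual, rcond=None)
    residual = actual - A @ coef
    # Average residual per teacher.
    return {t: residual[[i for i, x in enumerate(teacher) if x == t]].mean()
            for t in sorted(set(teacher))}

# Two hypothetical teachers; teacher "B" starts with stronger students.
prior   = [50, 55, 60, 80, 85, 90]
actual  = [60, 66, 70, 82, 88, 92]
teacher = ["A", "A", "A", "B", "B", "B"]
scores = value_added(prior, actual, teacher)
```

Teacher B’s students finish with the higher raw scores, yet once prior achievement is controlled for, teacher A shows the larger value-add – raw outputs and value-added rank the teachers differently, which is exactly the distinction at stake. The toy also hints at why the real efforts struggle: everything hinges on how well “the quality of the students” is captured by the control variables, and a single prior score is nowhere near enough.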
Second is the problem of cumulative advantage. Success typically brings more success for no other reason than you were successful. Consider the opportunities created when you win an Oscar. The Oscars are an annual competition, so they’re awarded even if the year’s releases aren’t particularly good (such as when a writers’ strike has taken out most of the past year).
It doesn’t matter how you win the Oscar—by creating great art and a big box office success, or simply by being the best of a bad lot—the attention the Oscars garner brings you to the attention of the world and the opportunities start flowing in. This improves the quality of the materials you can choose to work with. You might break the VW emissions story due to dumb luck, but it results in more story ideas flowing your way. You might not be the best journalist, you might not even be the journalist best positioned to make the most of the idea, but the idea is yours nonetheless.
Entire careers are built on the back of a lucky break followed by cumulative advantage. While this is good for the few lucky individuals, it’s not so good for the firm as it means that the firm might not be making the most of the materials at its command (though picking winners does make it easier for management). Nor is it much good for the equally talented individuals who weren’t quite so lucky.
Third is the problem of context. It’s rare, these days, to work in isolation. The context we’re in provides us with resources and connections that we couldn’t get elsewhere, or even just a boss that we can work with. While we might thrive in one environment, we struggle in others. One good example is star analysts, who often struggle when they leave the firm where they built their reputation. Some of the value in the outputs created might be the result of a productive work culture or an effective management structure and team—factors that are the result of everyone’s contributions, and not just those of the individual creating the deliverable.
Mr Cowen’s problem is that he has mistaken ease for cost. It’s cheaper than ever to measure all sorts of factors associated with work. At the same time, work has evolved, making it hard to know what to measure. While it might be cheap to generate all sorts of stats on worker activity, it’s not easy to tie these back to productivity (aside, that is, from work situations which are explicitly configured as piece work, such as Uber drivers).
The root cause of this is a recent shift (possibly sometime around 2005) from value being defined by the producer to value being defined by the consumer. The emergence of the consumer internet put the consumer in control, as it gave the consumer more information on a product than the merchant or producer, and the ability to source the product from any merchant around the globe. This was followed by the more recent emergence of social media, enabling consumers to turn to their peers rather than to brands.
Value used to be defined in terms of product features and functions, and we could measure a worker’s productivity by their contribution to creating these features and functions. Frederick Taylor started the trend by measuring how long it took a man to unload a cart. The modern version is the basis of Mr Cowen’s article: counting the number and reach of articles carrying a byline, or worker surveillance where everything a worker types at a computer—everything they do—is logged, recorded, and measured.
Value today is defined by a customer’s relationship to a product. Value is relative and shifting because it is a function of an expanding choice space for consumers. While all your workers contribute to creating this value, it’s not always obvious how to quantify their contribution. (Their contribution might also be different for each customer, as relative value means that each customer could conceive of value differently.)
Any retailer who heads down the omnichannel path, for example, needs to deal with the challenge of aligning a salesforce measured on their sales with a strategy that has sales skipping across multiple channels and contact points as the customer learns about the firm, develops their own understanding of what value is created, and winds their way to a decision. When you consider this, it’s not surprising that Apple’s stores (some of the most profitable in the world) are not measured on sales, and fall under the marketing budget.
In the meantime we have many firms racing to quantify and optimise the individual tasks their workers undertake. This might drive improvements against a short-term and overly instrumentalist definition of productivity, and result in a few lucky individuals receiving large pay cheques. In the longer term, though, the same strategy destroys the value created for the customer, possibly taking the firm’s future with it.
We used to be defined by what we knew. But today, knowing too much can be a liability.
Google, for example, is putting its trust in (potentially uncredentialled) “capable generalists” rather than “experts”. (Laszlo Bock, Google’s Vice-President of People Operations, at The Economist’s Ideas Economy: Innovation Forum on March 28th 2013 in Berkeley, California.) Expertise still matters for narrowly focused, highly technical roles, but Google has found that in most instances a capable generalist will arrive at the same solution as an expert, while in some cases they will come up with a new solution that is superior to those proposed by the experts.
Expertise, and being an expert, implies having the hard-won knowledge and skills that make you a reliable judge of what is best or wisest to do. It’s an inherently backwards-looking concept, ascribing value to individuals based on their ability to accumulate experience and then generalise from it, taking generic solutions that have worked in the past and applying them to specific problems encountered today.
This is an approach that worked well in the past when knowledge and skills were expensive and difficult to acquire, and the problems we tackled later in our career were similar to those encountered at the start. Society has spent centuries reorganising work and dividing it into ever more narrowly defined specialisations to enable individuals to focus on, and develop expertise in, specific jobs.
Take the case of the Brunels in the 1800s: Marc, who built the first tunnel under the Thames (Marc Brunel was, in the early 1800s, the engineer responsible for the first tunnel to be dug under a substantial river), and his son, Isambard, creator of the Great Britain (Isambard Brunel built the SS Great Britain, launched in 1843, the longest ship in the world at her time and the first iron steamer with a screw propeller). Both Marc and Isambard roamed across architecture and civil and mechanical engineering, designing everything from buildings and manufacturing processes through railways to steam engines and ships, covering most of the technologies we associate with the industrial revolution.
Over time all these technologies became increasingly complicated and entailed, requiring you to acquire more and more knowledge and skills before you could be productive and contribute your own ideas and findings. The ground covered by the two Brunels has since been divided into a range of highly specialised disciplines, each with its own narrowly defined education and credentialing process.
Digital technology, however, is changing our relationship with knowledge and, consequently, with expertise. The pithy version of this is “it’s not what you know, it’s what you can google”. By allowing us to easily capture and transmit knowledge, and by providing new means of communicating with our peers, the growth of digital technology is tipping the balance of power from narrowly defined expertise to more broadly defined capability. Knowledge is available on demand via online resources and social media while skills are being captured in software packages, shifting what used to be stocks into flows (see “The shift from stocks to flows” at PEG). The generalist is no longer at a disadvantage to the specialist, as most (if not all) specialist knowledge and skills are available on demand.
I heard a nice example of this a while ago when I was listening to Film Buff’s Forecast (on RRR). The show was interviewing a director who also lectured at a local university. The director opined that the current graduating class had a much more sophisticated understanding of film, and were more sophisticated in their approach to their work, than he and his class were back in the early seventies. In his view this wasn’t because the current class were inherently smarter. It was because the majority of their time at university was invested in exploring the possibilities provided by film as a medium, and developing an understanding of what they might do within it. This is in contrast to the director’s class back in the early seventies, when the majority of a student’s time was spent finding, accessing, and internalising knowledge stocks.
The example the director gave was of a student being directed to some technique that Alfred Hitchcock used (unfortunately I don’t remember which technique was mentioned). Back in the seventies this would have implied many afternoons spent in the stacks at the library looking for film criticism that discussed the technique, so that the student could develop an understanding of it and know in which films the best (and worst) examples could be seen, followed by a search of the rep theatres for screenings of the key films.
That same understanding can be obtained via an afternoon on the couch browsing the internet with the following day spent streaming films from Netflix.
Today we invest our time exploring the problem we’re trying to solve, and the context we’re solving it in, rather than pouring most of our effort into finding the information we need.
We’re also increasingly finding ourselves asked to solve new problems, create new products and services, and, in some cases, even rethink how entire industries and sectors of the economy work. This is what we commonly refer to as digital disruption, even though that term fails to capture the full extent of the social change that is bearing down on us.
Take the construction industry for example. Technology has been used to streamline or automate many tasks, making today’s construction industry a different beast to the construction industry of our grandparents, but it is still an industry that adheres to a fundamentally craft-based paradigm, with skilled tradespeople working onsite to create bespoke buildings.
A range of technological and social changes are about to transform the construction industry from a craft-based paradigm to a flexible-manufacturing paradigm, skipping over the traditional industrial paradigm in the process.
My favourite example of this is Unitised Building (http://www.unitisedbuilding.com), who have developed a new construction process (as opposed to a technology) that enables them to construct a mid-rise building in a fraction of the time, and at a fraction of the cost, of a traditional approach. This building system is completely digitised: the building is designed in 3D modelling tools before the design is broken down and sent to numerically controlled machines for part fabrication and assembly on the shop floor. Assembled modules are trucked to the construction site, where one is lifted into place every eight minutes, after which the various connectors are snapped together and the gaps plastered over. A process that took months now takes weeks, and the cost is slashed in the process.
The shift from craft to flexible manufacturing has a dramatic impact on the skills required of the workforce, moving from deep expertise in building to general design, digital modelling and construction skills. The focus has shifted from needing people who can work within the established building system (people with deep expertise who can generalise experience and then apply these general solutions to specific problems) to people who can work to develop and improve a new building system (people with broader skills who can find new problems to be solved, and solutions to these new problems).
A similar trend can be seen across all sectors. We’re moving from working in the system that is a business, to working on the system. The consequence of this is that it’s becoming more important to have the general capabilities and breadth of experience that enable us to develop and improve the system in novel directions, than it is to have deep, specialised experience in working within the current system. There will always be a need for narrowly focused expertise in highly technical areas, but in the majority of cases the generalist now has an advantage over the specialist.
This raises an interesting conundrum. While you might not need to know as much as you did in the past, it’s not clear just how much you do need to know now. This is a particular problem for educators and firms as they want to arm the individuals under their care with the knowledge and skills required to be successful in the workplace. Teaching too little means that the individual will not be effective at what they do. Teaching too much implies that we are wasting the individual’s time (and money, in many cases).
Focusing on understanding how much to teach might be asking the wrong question though. In many cases the only person who can judge how much knowledge is enough will be the individual, as “how much is enough” will be determined by the problem that they are trying to solve and the context that they are trying to solve it in.
We need to break down the problem a bit more if we’re to understand what question we should be asking.
First, we do know that you need enough knowledge to be dangerous: to be conversant in the domain, to be able to understand and describe the problem, and to be able to interact and discuss what you are doing with those you are collaborating or working with. The film students mentioned above need to be able to understand the criticism that they are reading, knowing the key concepts, technical terms and idioms that form the language of film. Similarly for our flexible-manufacturing building system, where you would need to understand the basic language of building, digital design, and flexible manufacturing if you expect to be productive and contribute.
Second, we need to equip the individual with the tools they need to manage their own knowledge and their access to knowledge. If the only person who can determine how much knowledge is enough is the individual, then we need to empower them by providing them with the tools they need to manage knowledge for themselves.
This can be further broken down into the following.
You need to understand the limits of your current knowledge (or, put another way, you need to know when to go looking for new knowledge). This may be as simple as coming across new terms and concepts that you don’t understand, through to having the sensitivity to realise that your lack of progress in a task is due to the knowledge (the ideas and skills) that you’re applying being insufficient, and that you need to find a new approach based on different knowledge.
You need to be aware of what additional knowledge you might draw on, so that you can reach out and pull it in as needed. This is a process of eliminating the unknown unknowns: reading blogs, going to conferences, participating in communities of practice, and even having conversations at the water cooler, so that you’re aware of the other ideas out there in the community, and of the other individuals who are working in related areas. You can only draw on new knowledge if you’re aware that it exists, which means you must invest some time in scanning the environment around you for new ideas and fellow travellers.
You also need the habits of mind – the attitudes and behaviours – that lead you to reach out when you realise that your knowledge isn’t up to the task at hand, explore the various ideas that you’re aware of (or use this awareness to discover new ideas), and then pull in and learn the knowledge required.
Finally, you need to be working in a context where all this is possible. Too many work environments are set up in a way that prevents individuals from investing time in exploring what is going on around them (and eliminating unknown unknowns), from taking time out from the day-to-day to learn what they need to learn on demand, or from taking what they’ve learnt and doing something different (deviating from the defined, approved and rewarded process).
So the question we asked at the start of this post – how much do you need to know? – is clearly the wrong question to be asking.
Rather than focusing on trying to know (or teach) everything that might be relevant (the old competence model), we need to move up a level and focus on metacognition. This means providing people with the tools needed to manage knowledge on their own: fostering the sensitivity required to know when their knowledge and skills have run out, creating time and space so that they can invest in their own knowledge management, and encouraging the habits of mind that give them the ability and attitude to do something about it.
Image: Isambard Kingdom Brunel preparing the launch of ‘The Great Eastern’, by Robert Howlett