The hype for generative AI doesn’t seem to be dying off. This is unsurprising as—unlike the metaverse, blockchain, and crypto—the technology is providing demonstrable benefits. We’re clearly in the installation phase where mad experimentation is the rule rather than the exception.
A lot of the mad experimentation we’re seeing is focused on either integrating new things into an LLM, or on jamming an LLM into some existing solution to ‘revolutionise’ it. There’s some great stuff in there: a wealth of new LLM-powered creative tools is enabling us to unleash our artistic urges. On the other hand, integrating an LLM with an online learning platform is useful, but unlikely to be revolutionary.
One thing that is interesting, and might even be revolutionary, is using an LLM to drive a technical tool. We might, for example, turn a tax agent into a junior data scientist by providing them with something like Tableau GPT (picking just one of many examples in this space). The tool-trained LLM will lead our tax agent through the process of clustering (or otherwise analysing) data to find patterns. Similarly, a supply chain expert might use a suitably equipped LLM to create a real-time reporting dashboard, or an internal web application. We can unleash our inner Jeff Koons (an artist famous for relying on the technical ability of hired assistants) or Quentin Tarantino (who, thanks to advice from Terry Gilliam, realised that he could execute on his vision as long as he could explain it to a skilled crew). If we know enough to explain to the LLM what we’re trying to achieve, then it will manage the technical details for us, clustering the data or roughing out the template code.
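To picture how an LLM “drives” a tool: the host application describes the tool’s capabilities to the model, the model translates a plain-language request into a structured call, and the application executes it. The sketch below assumes OpenAI’s tool-calling API; the `cluster_data` tool, its parameters, and the prompt are hypothetical stand-ins for whatever a product like Tableau GPT actually wires in.

```python
from openai import OpenAI

client = OpenAI()

# Describe the (hypothetical) clustering capability to the model.
tools = [{
    "type": "function",
    "function": {
        "name": "cluster_data",
        "description": "Cluster rows of the current dataset and summarise the groups.",
        "parameters": {
            "type": "object",
            "properties": {
                "columns": {
                    "type": "array",
                    "items": {"type": "string"},
                    "description": "Numeric columns to cluster on.",
                },
                "k": {"type": "integer", "description": "Number of clusters."},
            },
            "required": ["columns", "k"],
        },
    },
}]

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{
        "role": "user",
        "content": "Group our clients by income and deduction size.",
    }],
    tools=tools,
)

# If the model decides the tool applies, it returns a structured call rather
# than prose; the host application runs it and hands the result back to the
# model to explain in plain language.
for call in response.choices[0].message.tool_calls or []:
    print(call.function.name, call.function.arguments)
```

The tax agent never sees any of this plumbing; they ask the question in their own words, and the tool works out how to answer it.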
The immediate opportunity this creates is to enable smaller, cross-functional teams. Not every team needs a dedicated data scientist to do data science, to take just one example. We can now address problems that we would previously have let go as too small to be worthwhile. Opportunities that were too risky to explore, given the cost involved, might now be approachable. Then, if a problem or opportunity grows too complex or too large for our LLM-supported workers, a suitably skilled expert can be brought into the project.
Adopting tools like Tableau GPT presents us with a choice. We can take the cost saving, using a smaller workforce to deliver the same volume of work. Or we can expand our addressable market, possibly even hiring, to pursue opportunities (both old and new) that we couldn’t in the past. It’s our choice which path we take.
There’s also a longer-term opportunity provided by generative AI: to change our approach to training and development.
The challenge with learning a new domain, such as data science, is in collecting enough practical knowledge to be able to do something useful. There’s a difference between knowing a thing (like k-means clustering) and being able to muster the tools and techniques to implement the thing you know. This lump-of-knowledge problem is why we’ve focused on training (classroom teaching) delivered before the job starts.
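To see the gap concretely: knowing what k-means does is one thing, while implementing it means mustering data loading, feature scaling, library APIs, and parameter choices. The sketch below shows that plumbing, assuming scikit-learn and a hypothetical `clients.csv` with numeric `income` and `deductions` columns.

```python
import pandas as pd
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Hypothetical input: a table of client records with numeric columns.
clients = pd.read_csv("clients.csv")
features = clients[["income", "deductions"]]

# Scale the features so no single column dominates the distance metric.
scaled = StandardScaler().fit_transform(features)

# Fit k-means with an assumed k of 4. Choosing k well is exactly the sort of
# judgement call a tool-equipped LLM (or an expert colleague) can help with.
model = KMeans(n_clusters=4, n_init="auto", random_state=0)
clients["segment"] = model.fit_predict(scaled)

# Summarise each cluster so a domain expert can interpret the groups.
print(clients.groupby("segment").mean(numeric_only=True))
```

None of these steps is conceptually hard, but each is a place where someone who knows the idea but not the toolchain will stall.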
Access to a tool-equipped LLM can help us over this hump by managing the block and tackle for us. If we understand what we want done and can explain it to the LLM, we can rely on the LLM to execute on our vision. This provides us with an opportunity to rethink our approach to training.
There have always been dissenters from the train-then-work paradigm. A great example is John Seddon’s work with call centres, which has focused on facilitating in-place learning at the expense of classroom teaching. Seddon’s approach is to quickly train new workers on the 20% of call types that represent 80% of the call volume, before transitioning them into a call centre designed to facilitate further learning in place and on demand.
A conventional approach to call routing is for workers to only handle calls they were trained for, with a worker encountering an unfamiliar call passing it to an expert for resolution. Seddon’s approach is to leave the call with the worker and bring in an expert (another worker who knows what to do), treating the call as a learning opportunity. The first time the worker encounters the new call type, the expert solves the problem in front of them. The second time, the worker has a go, with the expert cleaning up the details. The third call is handled by the worker while the expert provides quality assurance. (In practice these are three phases rather than three individual calls, but you get the idea.)
We could take a similar approach with technical domains.
Our tax agent, mentioned earlier, is trained to recognise when a suite of entry-level data science techniques can be useful, and we provide them with an LLM-powered data science tool. This enables the tax agent to engage with data science in their day-to-day work, when the opportunity arises. If they encounter a more challenging problem, they can call on an expert: first to guide them through the problem, later to provide quality assurance.
The key to learning is access to a stream of high-quality problems (learning opportunities), not fancy teaching technologies. LLMs help us juice this stream by reducing the effort required to learn and apply a new skill, making our tax agent productive from day one while providing a clear path to develop expertise in the new domain, should that be a direction they want to go.
Another way of putting this is that while LLMs might not revolutionise teaching, they might enable us to radically change learning.