The limits of generative AI

Whilst cruising the interwebs I came across the following, which nicely captures the limits of large language models (LLMs).

In the persuasive practice of Derrida, Paul de Man and others, it took language not as a reminder of secret structure but as the home of a recurring crisis of meaning, a place where interpretation learned that it could never end. It did not hold, as many of its detractors thought it did, that there was no reality apart from language, and it’s wrong to translate Derrida’s famous ‘Il n’y a pas de hors-texte’ as ‘there is nothing outside the text.’ A hors-texte is an unnumbered page in a printed book. Derrida is saying that even the unnumbered pages count, just as an outlaw, in French an hors-la-loi, has everything to do with the law, since it makes him what he is. More crudely, we might say that interpretation is theoretically endless, but this claim itself needs interpretation. Endless is not the same as pointless; and what is endless in theory is often stopped easily enough in practice. We may think – I do think – that the reasons for stopping are usually more interesting than the empty possibility of going on for ever, although then it would be worth asking whether those reasons are practical or theoretical.
Wood, Michael. 2016. “We Do It All the Time.” London Review of Books, February 4. https://www.lrb.co.uk/the-paper/v38/n03/michael-wood/we-do-it-all-the-time.

Continue reading

Distributed stupidity

There’s been a recent uptick in interest in the ethics of AI, and the challenge of AI alignment. Particularly given the troubles at OpenAI, the consequences of which are still appearing in the news. Many pundits think that we’re on the cusp of creating an artificial general intelligence (AGI), or that AGI is already here. There’s talk of the need for regulation, or even an “AI pause”, so that we can get this disruptive technology under control. Or, at least, prevent the extinction of humanity.

AGI is certainly a good foundation for building visions of dystopian futures (or utopian futures, if you choose), though we do appear to be reading a lot into the technology’s potential. Large language models (LLMs) are powerful tools and definitely surprising (for many), but (as we’ve written before) they don’t appear to be the existential threat many assume.

Continue reading

Even the effects already discovered are due to chance and experiment rather than to the sciences; for our present sciences are nothing more than peculiar arrangements of matters already discovered, and not methods for discovery or plans for new operations.

Aphorism VIII. Francis Bacon, Novum Organum, Book 1, 1620

Continue reading

Where will LLMs take us?

Not a week seems to pass by without some surprising news concerning large language models (LLMs). Most recently it was when an LLM trained for other purposes played chess at a reasonable level. This seemingly constant stream of surprising news has led to talk that LLMs are the next general-purpose technology—a technology that affects an entire economy—and will usher in a new era of rapid productivity growth. They might even accelerate global economic growth by an order of magnitude, as the Industrial Revolution did, providing us with a Fifth Industrial Revolution.

Continue reading

The coming wave

This book describes itself as the work of ‘the ultimate insider’. This seems rather apt, as it provides us with a glimpse of what the technocratic chattering class are saying about the current AI moment. Unfortunately it doesn’t provide us with insight into how this moment will play out, as the view from inside appears to be quite poor, lacking the perspective needed to really grapple with this question.

Continue reading

The trust deficit between workers and organizations isn’t personal. It’s systemic.

We have a new essay published by Deloitte Insights: The trust deficit between workers and organizations isn’t personal. It’s systemic. Trust is widely acknowledged as a key contributor to workplace performance. What is rarely acknowledged, however, is that there are both interpersonal and organisational aspects to trust. While the interpersonal trust between a manager and their subordinates is important, what is likely more important is how workers trust managers as representatives of the firm’s bureaucracy.

Continue reading

Forever ten years away

Why do some technologies always seem to be ten years away? We’re not talking about the science fiction dreaming of faster-than-light travel, or general AI and the singularity. Those ten years apply to technologies that forever seem to be just out of reach, just beyond our current technical capabilities, like nuclear fusion (as opposed to fission) or quantum computing. Researchers make incremental progress and we’re told that (once the technology works) it’s going to change everything, but despite this incremental progress, estimates of when the technology will be commercialised and so available to the public always seem to be in the ballpark of ‘ten years’.

Continue reading