The limits of generative AI

Whilst cruising the interwebs I came across the following, which nicely captures the limits of large language models (LLMs).

In the persuasive practice of Derrida, Paul de Man and others, it took language not as a reminder of secret structure but as the home of a recurring crisis of meaning, a place where interpretation learned that it could never end. It did not hold, as many of its detractors thought it did, that there was no reality apart from language, and it’s wrong to translate Derrida’s famous ‘Il n’y a pas de hors-texte’ as ‘there is nothing outside the text.’ A hors-texte is an unnumbered page in a printed book. Derrida is saying that even the unnumbered pages count, just as an outlaw, in French an hors-la-loi, has everything to do with the law, since it makes him what he is. More crudely, we might say that interpretation is theoretically endless, but this claim itself needs interpretation. Endless is not the same as pointless; and what is endless in theory is often stopped easily enough in practice. We may think – I do think – that the reasons for stopping are usually more interesting than the empty possibility of going on for ever, although then it would be worth asking whether those reasons are practical or theoretical.

Wood, Michael. 2016. “We Do It All the Time.” London Review of Books, February 4.

Large language models work on language, on text, and cannot refer to the hors-texte, the real, the experience that eludes our attempts to represent it, to reify it in words. This means that LLMs are easily caught in the post-structuralist crisis of meaning, leading to confabulation and so on. We can improve their performance by being more explicit, by including more detail in the prompt, by improving training data sets, or by integrating external knowledge sources. We can’t, however, ‘fix’ the problem.
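The last point can be made concrete. Retrieval-augmented prompting, sketched minimally below with a hypothetical in-memory knowledge base standing in for a real vector store and LLM API, only ever splices retrieved text into more text; the model still never touches anything outside language:

```python
# Minimal sketch of grounding a prompt with an external knowledge source.
# The knowledge base and keyword retrieval are hypothetical stand-ins; a real
# system would use embeddings and an LLM API, but the shape is the same:
# retrieved text is spliced into the prompt, so it remains text all the way down.

KNOWLEDGE_BASE = {
    "derrida": "Derrida's 'Il n'y a pas de hors-texte' concerns the unnumbered pages of a book.",
    "llm": "Large language models predict tokens from prior tokens.",
}

def retrieve(query: str) -> list[str]:
    """Return knowledge-base entries whose key appears in the query."""
    q = query.lower()
    return [text for key, text in KNOWLEDGE_BASE.items() if key in q]

def build_prompt(question: str) -> str:
    """Prepend retrieved context to the question -- still only language."""
    context = "\n".join(retrieve(question)) or "(no context found)"
    return f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"

print(build_prompt("What did Derrida mean?"))
```

However good the retrieval, the model's grounding is another layer of representation, which is why the crisis of meaning is mitigated rather than resolved.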