There’s been a recent uptick in interest in the ethics of AI and the challenge of AI alignment, particularly given the upheaval at OpenAI, the consequences of which are still appearing in the news. Many pundits think that we’re on the cusp of creating an artificial general intelligence (AGI), or that AGI is already here. There’s talk of the need for regulations, or even an “AI pause,” so that we can get this disruptive technology under control—or, at the very least, prevent the extinction of humanity.
AGI is certainly a good foundation for building visions of dystopian futures (or utopian futures, if you prefer), though we do appear to be reading a lot into the technology’s potential. Tools such as large language models (LLMs) are powerful and definitely surprising (to many), but (as we’ve written before) they don’t appear to be the existential threat many assume.