There’s been a recent uptick in interest in the ethics of AI and the challenge of AI alignment, particularly given the turmoil at OpenAI, the consequences of which are still appearing in the news. Many pundits think that we’re on the cusp of creating an artificial general intelligence (AGI), or that AGI is already here. There’s talk of the need for regulations, or even an “AI pause”, so that we can get this disruptive technology under control. Or, at least, prevent the extinction of humanity.
AGI is certainly a good foundation for building visions of dystopian futures (or utopian ones, if you prefer), though we do appear to be reading a lot into the technology’s potential. Large language models (LLMs) are powerful tools, and definitely surprising (for many), but (as we’ve written before) they don’t appear to be the existential threat many assume.
It’s always useful to remind oneself, at times like this, of Kranzberg’s Fourth Law:
Although technology might be a prime element in many public issues, nontechnical factors take precedence in technology-policy decisions.

Melvin Kranzberg1
New technology, any technology in fact, doesn’t drive us toward a particular future, and there is nothing inevitable about the AI-powered dystopias (or utopias) that AGI et al inspire. New technology provides us with new affordances, creating new opportunities, and it is what we decide to make of these opportunities that determines which future we find ourselves in.
The decisions we make matter more than the technology we use. The current wave of AI-enabled solutions is certainly powerful, but that power is not the cause for concern. What is a cause for concern is how we’re using all manner of technology (not just AI) to automate decisions, baking these decisions into bureaucratic processes that control people’s lives. And we’re doing this at scale.
We’re […] creating a [bureaucratic] landscape dominated by overlapping decisioning networks. It’s not that the individual decisions being automated are necessarily problematic on their own (though they may be, and we need guardrails to help ensure that this isn’t the case). Rather, problematic behavior often emerges when automated decisions are integrated and affect each other directly, something we might consider distributed stupidity—situations where emergent unintended consequences and clashes between automated decisions result in “smart” systems going bad.2
Unlike the challenge of AI alignment, distributed stupidity is already here, and it might even be endemic. From RoboDebt in Australia3 to the UK Post Office scandal,4 there’s clearly an algorithmic moral hazard that we’re blind to (or that we’re wilfully ignoring).
This is not a technology problem—it’s a human one. Regulating a particular technology (such as AI) will not help. Nor will improved guidelines (such as those proposed to ensure AI alignment) make much of a difference, as current examples of distributed stupidity were (in part) the result of decision makers ignoring policy, guidelines, and even laws.
One possible solution we like is round-tabling: getting the executives responsible for risk, legal, IT, privacy, and so on together to consider how a decision or initiative in one area might impact others, and thrashing it out collectively rather than relying on circular sign-offs, which can take forever. When automated decisioning is being considered, it makes sense to have the diverse lenses of an organisation inspecting it.
Another option we’ve seen suggested is to appoint a Chief Safety Officer (CSO) in significant organisations, both public and private. The role would be modelled on the Ship Safety Officer (SSO),5 who is responsible for identifying current and potential hazards to health, safety, and the environment, and who is accountable to the ship’s owners (and, through them, applicable policies and regulations) rather than to the executive.
One interesting quirk of the role is that, should the SSO declare an incident on a ship, or part of a ship, everyone including the captain must follow the directions of the SSO until the incident is over. A CSO could function in a similar way. The CSO would have the freedom to inspect all operations and projects in the organisation to determine whether they adhere to the organisation’s charter, policies, design and operational standards, and any relevant regulations. If they find a problem, then the organisation (or part of it) can be declared ‘unsafe’ and operations (or the project) halt until the problems are rectified.
A CSO halting work to rectify problems is a very blunt tool—akin to stop-work orders in construction.6 A more granular approach could be to treat management as a profession in the same manner as doctors and engineers are.7
As part of your training as an engineer, it is made clear that you need to follow best practice, and to keep a diary of how you applied it. Should you design a bridge, for example, which later collapses, then your diary is your only protection. If your diary shows that you followed a suitable design process, considered all relevant factors, made no errors in your calculations, and provided suitable instructions to the construction team, then the collapse is not your fault. If, on the other hand, you ignored best practice, dismissed reports pointing out defects in the design, used novel tools and techniques without first doing the due diligence to ensure that they were suitable, or made significant mistakes in your calculations, then you can be held personally liable for the collapse.8
Modern organisations are complex beasts: extended ecosystems of interrelated client, partner, supplier, and regulator relationships. This is also an environment saturated in technology: over the past 20 years we’ve transitioned from a business (and institutional) environment where we use digital tools to one defined by them. While the dominance of technology means that technology is what we first consider when trying to understand problems (failures, or even just biases), it’s the human decisions that are the root cause. These decisions need to be scrutinised, either by a diverse group of experts, or by an independent executive who is accountable to the community and not just the organisation. Or managers in significant organisations should be considered professionals, like engineers or doctors. A professional manager will be able to show how the organisation they helm follows best practice and adheres to applicable policy, process, and regulation. An unprofessional manager will find themselves liable for the problems caused by algorithmic moral hazard in their organisation: its distributed stupidity.
- Kranzberg, Melvin. “Technology and History: ‘Kranzberg’s Laws.’” Technology and Culture 27, no. 3 (1986): 544–60. https://doi.org/10.2307/3105385. ↩︎
- Evans-Greenwood, Peter, Rob Hanson, Sophie Goodman, and Dennis Gentilin. “A Moral License for AI: Ethics as a Dialogue between Firms and Communities.” Deloitte Insights, August 7, 2020. https://www2.deloitte.com/us/en/insights/focus/cognitive-technologies/artificial-intelligence-impact-on-society.html. ↩︎
- Australia. Report: Royal Commission into the Robodebt Scheme. Parliamentary Paper (Australia. Parliament) 2023. Brisbane: Royal Commission into the Robodebt Scheme, 2023. https://robodebt.royalcommission.gov.au/publications/report. ↩︎
- Malik, Kenan. “What Makes a Very British Miscarriage of Justice? Contempt for the ‘Little People.’” The Observer, January 14, 2024, sec. Opinion. https://www.theguardian.com/commentisfree/2024/jan/14/post-office-grenfell-windrush-scandals-contempt-lives-destroyed. ↩︎
- Cult of Sea. “Safety Officer Onboard – Definition, Duties and Powers,” December 17, 2020. https://cultofsea.com/safety/safety-officer-onboard/. ↩︎
- Procore. “Stop Work Orders: What Contractors Need to Know.” Accessed January 19, 2024. https://www.procore.com/library/stop-work-order. ↩︎
- … other than software engineers. ↩︎
- Viglucci, Andres. “Repercussions of Failed FIU Bridge in 2018 on Key Players and Where They Are Now.” Miami Herald, March 19, 2023. https://www.miamiherald.com/news/local/community/miami-dade/article272940335.html. ↩︎