We have a new essay published in Deloitte Insights, A moral license for AI: Ethics as a dialogue between firms and communities. This collaboration with CSIRO’s Data61 looks into the challenge of creating ethical AI, picking apart the problems and proposing a way forward. There’s a launch event on the 2nd of September, 2020, which you can register for via Zoom.
Initially, the focus of work on ethical AI was on regulating the technology, but this failed to bear fruit. The focus then shifted to defining the principles, requirements, technical standards, and best practices that, it's hoped, will result in ethical AI. While progress has been steady and there has been global convergence around principles for ethical AI, substantive differences remain over what these principles mean in practice.
The problem (as we discuss in Framing the challenge) is that ethics and AI is a bit like thermodynamics, in that you can’t win, you can’t break even, and you can’t leave the game. From the conclusion of that section:
We can’t win, because if we choose to frame “ethical” in terms of a single social world—an assumed secular society—then we must privilege that social world over others. We can’t break even, because even if we can find a middle ground, a bridge between social worlds, our technical solution will be rife with exceptions, corner cases, and problems that we might consider unethical. Nor can we leave the game, banning or regulating undesirable technologies, because what we’re experiencing is a shift from a world containing isolated automated decisions to one largely defined by the networks of interacting automated decisions it contains.
We’re caught between the impossible task of defining “fair” or “ethical” algorithmically (a blind spot for the technologists) and the assumption that everyone sees the same world as we do but simply approaches it with different values, when this is not necessarily the case (a blind spot for many social commentators). Rather than trying to create ethical AI, we need to address a different challenge: understanding when an imperfect solution in an (already) imperfect world is good enough that it is, on balance, preferable to the imperfect world on its own.
As we admit in the essay’s conclusion, while the essay is notionally about “ethical AI”, it never addresses the question of ethics and AI directly by attempting to define what is and isn’t ethical (an impossible task). Instead, it proposes that firms need to work with the communities they touch to obtain and maintain a moral license for AI. The essay outlines a framework for doing this, integrating ideas from social license to operate, requirements modelling, sociology, and general morphological analysis to guide a firm’s interactions with the communities it touches. Such a framework could also provide a starting point for regulating ethical AI.
The essay concludes by pointing out that:
Ethical AI—the development of regulation, techniques, and methodologies to manage the bias and failings of particular technologies and solutions—isn’t enough on its own. Ethics are the rules, actions, or behaviors that we’ll use to get there. Our goal should be moral AI. We must keep a clear view of our ends as well as our means. In a diverse, open society, the only way to determine if we should do something is to work openly with the community that will be affected by our actions to gain their trust and then acceptance for our proposal.