Unpacking Transparency to Support Ethical AI
Ensuring that developments in Artificial Intelligence (AI) are trustworthy is a challenge facing the public and policymakers alike. Transparency has been proposed as one mechanism for ensuring trustworthy AI, but it is unclear what transparency actually means, or how it can be achieved in practice.
There are five common strategies for providing transparency: risk management; openness; explainability; evaluation and testing; and disclosure and consent. While each of these strategies can be helpful on its own, transparency is most likely to engender trust when multiple strategies are used together, and when its limits are acknowledged alongside complementary mechanisms such as accountability. Building on an analysis of these transparency options and their limitations, two case studies illustrate how transparency can work in practice, and elucidate key questions to ask when considering how to provide meaningful transparency in support of trustworthy and ethical AI.
Science and Technology Innovation Program
The Science and Technology Innovation Program (STIP) brings foresight to the frontier. Our experts explore emerging technologies through vital conversations, making science policy accessible to everyone.