
Executive Summary

Ensuring that developments in Artificial Intelligence (AI) are trustworthy is a challenge facing the public and the policy community alike. Transparency has been proposed as one mechanism for ensuring trustworthy AI, but it is unclear what transparency actually is, or how it can be achieved in practice.

There are five common strategies for providing transparency: risk management; openness; explainability; evaluation and testing; and disclosure and consent. While each of these strategies can be helpful on its own, transparency is most likely to engender trust when multiple strategies are combined and when limitations, such as the relationship between transparency and accountability, are also considered. Building on this analysis of transparency options and limitations, two case studies illustrate how transparency can work in practice and elucidate key questions to ask when seeking to provide meaningful transparency in support of trustworthy and ethical AI.


