Successful AI Ethics & Governance at Scale: Bridging The Interpretation Gap

Principles that generalize require professionals who specialize


AI ethics and governance has become a noisy space.

As of September 2024, the OECD tracker lists over 1,800 national-level documents covering AI initiatives, policies, frameworks, and strategies (and there seem to be consultants and influencers opining on every one).

However, as Mittelstadt (2021) puts it, with the succinctness that only academic understatement can achieve: principles alone cannot guarantee ethical AI.

Despite the abundance of high-level guidance, there remains a notable gap between policy and real-world implementation. But why is this the case, and how should data science and AI leaders think about it?

In this series, I aim to advance the maturity of practical AI ethics and governance within organizations by breaking this gap into three components, drawing on research and real-world experience to propose strategies and structures that have worked for implementing AI ethics and governance capabilities at scale.

The first gap I cover is the interpretation gap, which arises from the challenge of applying principles expressed in vague language such as ‘human centricity’ and…