
Confronting Bias: BSA’s Framework to Build Trust in AI


Tremendous advances in artificial intelligence are quickly transforming expectations about how the technology may reshape the world and prompting important conversations about equity. While AI can be a force for good, there is a growing recognition that it can also perpetuate (or even exacerbate) existing social biases in ways that may systematically disadvantage members of historically marginalized communities. As AI is integrated into business processes that can have enormous impacts on people’s lives, there is a critical need to ensure that organizations are designing and deploying these systems in ways that account for the potential risks of unintended bias.

The Framework is a tool for ensuring that AI is accountable by design and can be used by organizations of all types to manage the risk of bias throughout a system’s lifecycle. Built on a vast body of research and informed by the experience of leading AI developers, the Framework:

  • Outlines a process for performing impact assessments to identify and mitigate potential risks of bias
  • Identifies existing best practices, technical tools, and resources for mitigating specific AI bias risks that can emerge throughout an AI system’s lifecycle
  • Sets out key corporate governance structures, processes, and safeguards that are needed to implement and support an effective AI risk management program

Download PDF

Download A4 PDF

Report Translations: Japanese, Korean

Summary Translation: Japanese


