CDT’s AI Governance Lab develops and promotes the adoption of robust, technically informed solutions for the effective regulation and governance of AI systems.
The Lab provides public interest expertise in rapidly developing policy and technical conversations to advance the interests of individuals whose lives and rights are impacted by AI. We place a particular emphasis on the rights and interests of historically marginalized people, who are often disproportionately harmed by poorly designed and implemented systems and face systemic barriers in countering those harms.
The AI Governance Lab is led by experts experienced in guiding the responsible development of AI products and services and in developing governance structures for AI-powered systems. They leverage CDT’s leadership in AI policy to engage directly with companies and multistakeholder initiatives, support public interest advocates, and guide policymakers on the effective governance of AI.
Statement of Core Activities
The Lab uses five modes of engagement to pursue its goals:
1. Developing, analyzing, prototyping, and amplifying best practices for AI governance. The Lab works in close collaboration with academic researchers, practitioners, and other stakeholders to define and promote implementable solutions.
2. Advocating for the adoption of responsible governance solutions through multistakeholder initiatives and direct-to-company engagement. The Lab works with practitioners to define, shape, and implement standards and norms around priorities like AI auditing and safety evaluation.
3. Supporting CDT’s policy teams in advising policymakers on how to achieve effective auditing, safety, and accountability solutions through legislation, regulation, funding, and government-endorsed best practices.
4. Strengthening public interest advocates, particularly civil rights organizations, researchers focused on disinformation, and other groups representing impacted communities.
5. Building bridges and access points for the research community. The Lab hosts fellowships and pursues collaborations with researchers working on topics including fairness, accountability, and transparency in AI, as well as information integrity and AI safety.
AI Governance Lab Leadership Team
Miranda Bogen
Founding Director, CDT AI Governance Lab
Kevin Bankston
Senior Advisor, AI Governance
CDT AI Governance Lab Advisory Committee
The AI Governance Lab Advisory Committee brings in-depth and cross-sectoral expertise in applied solutions that advance the responsible development and deployment of AI-powered systems.
Searchable Database of CDT’s Work on AI
The CDT AI Policy tracker brings together all of CDT’s work on AI, enabling quick review of CDT’s AI-related policy positions, archiving previous positions that may have changed, and promoting the discovery of common themes in CDT’s AI work.