
AI Fundamental Research - Managing AI Bias

A crucial principle, for both humans and machines, is to avoid harmful bias that can lead to discrimination and other harmful impacts. Achieving this goal requires designing and developing AI systems that take multiple sources of bias into account, with impact in mind. The purpose of NIST's work on AI bias is to enhance methods for bringing context into the evaluation of AI systems, across use cases and sectors, and to improve our understanding of negative impacts and harms.

Identifying and Managing AI Bias

Towards a Standard for Identifying and Managing Bias in Artificial Intelligence (NIST Special Publication 1270), released in March 2022, reflects public comments NIST received on its draft version. Part of a larger NIST effort to manage AI risks through trustworthy and responsible AI, the document offers guidance connected to the AI Risk Management Framework that NIST is developing.  

Managing AI/ML Bias in Context 

Automated decision-making is appealing because artificial intelligence (AI)/machine learning (ML) systems can produce more consistent, traceable, and repeatable decisions than humans; however, these systems may also have negative consequences, such as discriminatory outcomes. Harmful bias can manifest in AI/ML systems used to support automated decision making, leading to unfair results that negatively impact individuals, potentially ripple throughout society, and erode trust in AI-based technology and the institutions that rely on it.

A project carried out by NIST's National Cybersecurity Center of Excellence will develop recommended guidance and practices to promote fair and positive outcomes and benefit users of AI services in the credit underwriting domain. A final description of the project has been issued.

A Workshop on Mitigating Bias in AI explored related issues in depth and informs the project.

Past Activities

AI RMF Workshop 2 Day 3, Panels 9-13
Held March 29-31, 2022, the workshop addressed all aspects of the AI Risk Management Framework during its first two days; day 3 allowed for a deeper dive into issues related to mitigating harmful bias in AI.

NIST Proposes Approach for Reducing Risk of Bias in Artificial Intelligence (News Release 6/22/2021)

Bias in AI Workshop
NIST hosted a virtual workshop on August 18, 2020, to develop a shared understanding of what bias in AI is and how to measure it.

Bias in AI
NIST contributes to the research, standards, and data required to realize the full promise of artificial intelligence (AI) as an enabler of American innovation across industry and economic sectors. Working with the AI community, NIST seeks to identify the technical requirements needed to cultivate trust that AI systems are accurate and reliable, safe and secure, explainable, and free from bias. A key but still insufficiently defined building block of trustworthiness is bias in AI-based products and systems. That bias can be purposeful or inadvertent. By hosting discussions and conducting research, NIST is helping to move us closer to agreement on understanding and measuring bias in AI systems.

Contact Information: ai-bias [at] list.nist.gov


Created April 6, 2020, Updated June 6, 2023