
How to Identify and Change Bias in AI Development and Application

Georgia Tech Scheller Professor Deven Desai explores how to combat bias in AI development in his article for the MIT Sloan Management Review.
Artificial Intelligence (AI) has quickly integrated into society, from facial recognition software used in criminal investigations to human interactions with digital assistants and customer service bots. How AI is developed in the lab, as opposed to how it is used in the field, is the subject of an article by Deven Desai, associate professor of law and ethics at the Georgia Tech Scheller College of Business, and Ayanna Howard, dean of the College of Engineering at The Ohio State University, published in the MIT Sloan Management Review.

Desai and Howard use the context of bias in AI to draw a distinction between pursuing basic science and applying science to products and people. They note that the urgency of getting an AI product to market can lead to dangerous ethical problems, such as when a computer vision product misidentifies potential criminal suspects based on race. The authors offer suggestions for researchers and developers on how to identify AI bias and rectify it: look for trouble spots early in the hardware, employ third-party auditing services to provide technical accountability, use third-party testing to catch unintended outcomes before the rush to release a product, and establish an internal ethics committee with a direct reporting line to a C-level officer.
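
To make the auditing suggestion concrete, here is a minimal, hypothetical sketch of one check a third-party tester might run on a classifier: comparing false positive rates across demographic groups. The data, the group labels, and the 1.5x disparity threshold are illustrative assumptions, not taken from Desai and Howard's article.

```python
# Hypothetical sketch of a third-party bias audit: compare a classifier's
# false positive rate across demographic groups. All data and the 1.5x
# threshold below are illustrative, not from the article.

from collections import defaultdict

def false_positive_rates(records):
    """records: iterable of (group, predicted_positive, actually_positive)."""
    fp = defaultdict(int)   # false positives per group
    neg = defaultdict(int)  # actual negatives per group
    for group, predicted, actual in records:
        if not actual:
            neg[group] += 1
            if predicted:
                fp[group] += 1
    # Rate of negatives wrongly flagged as positive, per group
    return {g: fp[g] / neg[g] for g in neg if neg[g]}

# Illustrative audit log: (group, model flagged as a match?, true match?)
audit_log = [
    ("group_a", True, False), ("group_a", False, False), ("group_a", False, False),
    ("group_b", True, False), ("group_b", True, False), ("group_b", False, False),
]

rates = false_positive_rates(audit_log)
print(rates)  # e.g. {'group_a': 0.33..., 'group_b': 0.66...}

# Flag a disparity for human review if one group's false positive rate is
# far above another's (the 1.5x ratio is an arbitrary illustrative bar).
worst, best = max(rates.values()), min(rates.values())
if best > 0 and worst / best > 1.5:
    print("Disparity flagged for review")
```

In this sketch, the audit operates only on logged predictions and outcomes, which is why an outside party could run it without access to the model itself; that is one way third-party testing can catch unintended outcomes before release.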

Read the full article: “Taming AI’s Can/Should Problem.”
Featured in this Story
Deven Desai
Sue and John Staton Professorship in Business Law

