In a new paper, researchers from Carnegie Mellon University and Stevens Institute of Technology propose a new way of thinking about the fairness of AI decisions.
They draw on a well-established tradition known as social welfare optimization, which aims to make decisions fairer by weighing the overall benefits and harms to individuals. This approach can be used to evaluate the industry-standard AI fairness assessment tools, which compare approval rates across protected groups.
“In assessing fairness, the AI community tries to ensure equitable treatment for groups that differ in economic level, race, ethnic background, gender, and other categories,” explained John Hooker, professor of operations research at the Tepper School of Business at Carnegie Mellon, who coauthored the study and presented the paper at the International Conference on the Integration of Constraint Programming, Artificial Intelligence, and Operations Research (CPAIOR) on May 29 in Uppsala, Sweden. The paper received the Best Paper Award.
Imagine a situation where an AI system decides who gets approved for a mortgage or who gets a job interview. Traditional fairness methods might only ensure that the same percentage of people from different groups get approved, without considering how much each individual stands to gain or lose from the decision.
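To make the contrast concrete, here is a minimal sketch (not the paper's method, and using invented applicant data) of the two perspectives: a demographic-parity check that compares approval rates across groups, versus a simple utilitarian social-welfare score that sums the benefit delivered to approved individuals.

```python
# Hypothetical applicants: (group, benefit_if_approved, approved?)
applicants = [
    ("A", 5.0, True), ("A", 1.0, False), ("A", 3.0, True),
    ("B", 4.0, True), ("B", 2.0, False), ("B", 6.0, False),
]

def approval_rates(people):
    """Demographic parity view: compare approval rates across groups."""
    rates = {}
    for group in {g for g, _, _ in people}:
        members = [p for p in people if p[0] == group]
        rates[group] = sum(1 for _, _, ok in members if ok) / len(members)
    return rates

def social_welfare(people):
    """Welfare view: total benefit delivered to approved individuals."""
    return sum(benefit for _, benefit, ok in people if ok)

print(approval_rates(applicants))  # group A approved at 2/3, group B at 1/3
print(social_welfare(applicants))  # total benefit of the approvals made
```

Two decision policies can have identical approval rates yet deliver very different total benefit, which is the gap the welfare-based evaluation is meant to expose.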