U.S. Congress Targets Biases of Artificial Intelligence
U.S. Congress wants study into potential biases of face recognition technology.
MEMBERS of Congress have asked the Federal Trade Commission, the Federal Bureau of Investigation, and the Equal Employment Opportunity Commission whether their agencies have studied the biases of artificial intelligence algorithms used for commerce, surveillance, and hiring.
Senators Kamala Harris, Patty Murray, and Elizabeth Warren specifically asked the agencies to determine whether this technology could violate the Civil Rights Act of 1964, the Equal Pay Act of 1963, or the Americans with Disabilities Act of 1990.
“We are concerned by the mounting evidence that these technologies can perpetuate gender, racial, age, and other biases,” a letter to the FTC says. “As a result, their use may violate civil rights laws and could be unfair and deceptive.”
Earlier this year, Charles Isbell, executive associate dean at the Georgia Institute of Technology, testified about the biases he has seen over nearly 30 years of working in AI research.
“I was breaking all of [my classmate’s] facial recognition software because apparently all the pictures they were taking were of people with significantly less melanin than I have,” Isbell said.
Meanwhile, AI researcher Joy Buolamwini told Quartz that Congress’ move was a major step toward alerting federal agencies to the dangers of bias.
“Government agencies will need to ramp up their ability to scrutinize AI-enabled systems for harmful bias that may go undetected under the guise of machine neutrality,” Buolamwini said.