The Internet of Rich People’s Things
Object recognition algorithms sold by tech companies, including Google, Microsoft, and Amazon, perform worse when asked to identify items from lower-income countries.
These are the findings of a new study conducted by Facebook’s AI lab, which shows that AI bias can reproduce inequalities not only within countries but also between them.
In the study (which we spotted via Jack Clark’s Import AI newsletter), researchers tested five popular off-the-shelf object recognition algorithms — Microsoft Azure, Clarifai, Google Cloud Vision, Amazon Rekognition, and IBM Watson — to see how well each program identified household items collected from a global dataset.
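For a rough sense of what testing one of these off-the-shelf services involves, the sketch below queries Google Cloud Vision’s label detection for a single photo. It is an illustration only, not the researchers’ actual pipeline: the filename, the “soap” category check, and the matching logic are placeholders.

```python
# Illustrative sketch: ask one off-the-shelf service (Google Cloud Vision)
# to label a household photo, then check whether the ground-truth category
# appears among the returned labels. Filename and category are placeholders.
from google.cloud import vision


def label_image(path: str) -> list[str]:
    """Return the labels Cloud Vision assigns to the image at `path`."""
    client = vision.ImageAnnotatorClient()  # reads GOOGLE_APPLICATION_CREDENTIALS
    with open(path, "rb") as f:
        image = vision.Image(content=f.read())
    response = client.label_detection(image=image)
    return [label.description.lower() for label in response.label_annotations]


# A study like this would repeat the check across many households and
# compare hit rates by income level and country.
labels = label_image("household_photo.jpg")  # placeholder filename
print("soap" in labels, labels)
```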
The dataset included 117 categories (everything from shoes to soap to sofas) and a diverse array of household incomes and geographic locations (from a family in Burundi making $27 a month to a family in Ukraine with a monthly income of $10,090).
The researchers found that the object recognition algorithms made around 10 percent more errors when asked to identify items from a household with a $50 monthly income than items from a household making more than $3,500 a month. In absolute terms, the difference in accuracy was even greater: the algorithms were 15 to 20 percent better at identifying items from the US than items from Somalia and Burkina Faso.
These findings were “consistent across a range of commercial cloud services for image recognition,” write the authors….