AI algorithms far from neutral in India

Governments are increasingly using artificial intelligence and machine learning in decision-making. But are their underlying algorithms suitable for countries such as India? A recent study says they may not be, since they were designed for Western societies. For example, such algorithms may fail to recognize religious or caste biases and could treat oppressed minorities unfairly.

The study, by Nithya Sambasivan and colleagues at Google Research, US, is based on interviews with 36 academics from various fields and activists who work with marginalized communities.

Many of these algorithms assume that the available data are representative of society. But Indian datasets overrepresent people with internet access, who make up only about 50% of the population. As a result, a safety app that invites users to flag unsafe areas in a city may mark Dalit and Muslim neighbourhoods as unsafe, reflecting the prejudices of its predominantly middle- and upper-class users.
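That feedback loop is easy to see in a toy simulation. The Python sketch below is entirely hypothetical (the area names, incident rates, and the 80% user skew are invented for illustration, not taken from the study): it shows how reports from an unrepresentative user base can label one area unsafe even when, by construction, every area is equally safe.

import random

random.seed(0)

AREAS = ["area_A", "area_B"]                 # hypothetical areas, equally safe
TRUE_UNSAFE_RATE = {a: 0.10 for a in AREAS}  # same ground-truth incident rate

def sample_report(biased_reporter: bool) -> dict:
    """One crowd-sourced report; biased reporters over-flag area_B."""
    area = random.choice(AREAS)
    p = TRUE_UNSAFE_RATE[area]
    if biased_reporter and area == "area_B":
        p += 0.30                            # prejudice inflates the flag rate
    return {"area": area, "unsafe": random.random() < p}

# Assume 80% of app users come from one overrepresented demographic,
# mirroring the skew toward the online half of the population.
reports = [sample_report(biased_reporter=random.random() < 0.8)
           for _ in range(10_000)]

for area in AREAS:
    flags = [r["unsafe"] for r in reports if r["area"] == area]
    print(f"{area}: flagged unsafe in {sum(flags) / len(flags):.0%} of reports")

# Typical output: area_A is flagged in ~10% of reports, area_B in ~34%,
# even though both areas are equally safe by construction.

Any algorithm trained on such crowd-sourced labels inherits the reporters' skew rather than any real property of the areas themselves.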
