One of the most powerful ways Artificial Intelligence ‘learns’ is by using neural networks. A neural network is trained on a large number of examples where the result is already known, and it adjusts its internal weights until it produces the same results as the human ‘teacher’.
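As a rough illustration of that "learn from labeled examples" loop (not any particular production system), here is a tiny supervised-learning sketch in Python. The features, data, and model size are invented purely for illustration.

```python
# A minimal sketch of supervised learning: the model is shown examples with
# known answers ("labels") and adjusts its internal weights until its own
# answers match the teacher's as closely as possible.
# The data below is invented purely for illustration.
import numpy as np
from sklearn.neural_network import MLPClassifier

# Each row is one example; each column is a feature the model can see.
X_train = np.array([
    [25, 40_000],   # e.g. age, income
    [52, 95_000],
    [33, 60_000],
    [41, 30_000],
])
# The "known result" supplied by the human teacher (e.g. loan repaid or not).
y_train = np.array([0, 1, 1, 0])

# A small neural network that iteratively adjusts its weights
# until its predictions agree with the teacher's labels.
model = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
model.fit(X_train, y_train)

print(model.predict([[30, 55_000]]))  # prediction for a new, unseen example
```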
However, there’s a trap. If that source material contains biases – for example, modeling Police ‘stop and frisk’ records – then whatever biases are in the learning material will be baked into the resulting AI model. This is the subject of an article in Nature, “There is a blind spot in AI research,” and also of Cathy O’Neil’s much-praised book Weapons of Math Destruction, which raises not only that issue but also the problem of “proxies”.
Proxies, in this context, are data sources used in AI programs that are not the actual data of interest, but something that approximates it: for example, using zip code as a proxy for income or ethnicity.
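To make the proxy problem concrete, here is a small sketch with synthetic data (invented for illustration, not taken from the book or the Nature article). The protected attribute is deliberately excluded from the training features, yet because zip code is strongly correlated with it in the historical records, a model trained on zip code alone still reproduces the biased outcomes.

```python
# Sketch: zip code acting as a proxy. The synthetic data is invented for
# illustration only. The protected attribute ("group") never appears in the
# training features, but because it is correlated with zip code in the
# historical labels, the model recovers the bias anyway.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000

# Two neighborhoods (zip codes), each dominated by a different group.
zip_code = rng.integers(0, 2, size=n)                                  # 0 or 1
group = (rng.random(n) < np.where(zip_code == 1, 0.9, 0.1)).astype(int)

# Historical (biased) decisions: driven by group membership, not merit.
y = (rng.random(n) < np.where(group == 1, 0.8, 0.3)).astype(int)

# Train using ONLY zip code -- the protected attribute is excluded.
X = zip_code.reshape(-1, 1)
model = LogisticRegression().fit(X, y)

# The model still assigns very different outcomes by neighborhood,
# effectively re-encoding the group bias through the proxy.
print("P(positive | zip 0):", model.predict_proba([[0]])[0, 1])
print("P(positive | zip 1):", model.predict_proba([[1]])[0, 1])
```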
Based on O’Neil’s book, I’d say the authors of the Nature article are too late: institutionalized biases are already built into widely used algorithms in finance, housing, policing and criminal justice.
https://techcrunch.com/2017/07/21/why-the-future-of-deep-learning-depends-on-finding-good-data/