Magical thinking about machine learning won’t bring the reality of AI any closer

Date: 2018-08-05
Publisher: Guardian
Author: John Naughton

Unchecked flaws in algorithms, and even in the technology itself, should put a brake on the escalating use of big data.

Machine learning rapidly found its way into traffic forecasting, “predictive” policing (in which ML highlights areas where crime is “more likely”), decisions about prisoner parole, and so on. Among the rationales for this feeding frenzy are increased efficiency, better policing, more “objective” decision-making and, of course, more responsive public services.

Critics have pointed out that the old computing adage “garbage in, garbage out” also applies to ML: if the data from which a machine “learns” is biased, then its outputs will reflect those biases. And the effect could become generalised: we may have created a technology that – however good it is at recommending films you might like – morphs into a powerful amplifier of social, economic and cultural inequalities.

In all of this sociopolitical criticism of ML, however, one idea has gone unchallenged: that the technology itself is technically sound – in other words, that any problematic outcomes it produces are, ultimately, down to flaws in the input data. But it now turns out that this comforting assumption may also be questionable.
