Combating Technochauvinism

Programmers often make the mistake of substituting popular for good. This error has implications for all computational decision-making that involves subjective judgments of quality.

Namely: a human can perceive a difference between the concepts popular and good. A human can identify things that are popular but not good (like ramen burgers or racism) or good but not popular (like income taxes or speed limits) and rank them in a socially appropriate manner. (Of course, there are also things like exercise and babies that are both popular and good.) A machine, however, can only identify things that are popular using criteria specified in an algorithm. The machine cannot autonomously identify the quality of the popular items.

The Facebook algorithm prioritizes popularity. So does the YouTube algorithm. That’s why kids get all kinds of inappropriate recommendations from YouTube, and why the Facebook algorithm surfaces fake news or pages devoted to phony pharmaceuticals. These things are popular with users. Users are not necessarily humans, however. Fake followers and click fraud have been problems since the very beginning of the internet; bots can be used to make posts and videos appear more popular, allowing bad actors to game any recommendation system.
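The dynamic described above — a ranker that sees only engagement counts and cannot tell a bot click from a human one — can be sketched in a few lines of Python. Everything here (the function name, the post titles, the numbers) is hypothetical and illustrative; it is not any platform's actual algorithm.

```python
def rank_by_popularity(items):
    """Order items by raw engagement count - the only signal the machine has.

    Note what is missing: there is no notion of quality, accuracy, or
    appropriateness anywhere in this function. Popularity is the whole score.
    """
    return sorted(items, key=lambda item: item["engagements"], reverse=True)

# Hypothetical feed items.
posts = [
    {"title": "Well-sourced news story", "engagements": 1200},
    {"title": "Phony pharmaceutical ad", "engagements": 900},
]

# With honest engagement, the news story ranks first.
print([p["title"] for p in rank_by_popularity(posts)])

# A bad actor inflates engagement with bots. The ranker cannot distinguish
# a bot click from a human one, so the fake content rises to the top.
posts[1]["engagements"] += 5000
print([p["title"] for p in rank_by_popularity(posts)])
```

The second ranking is the failure mode the text describes: the algorithm faithfully measures popularity, and popularity is exactly the signal that bots and click fraud are able to manufacture.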

There is a particular mindset that says that algorithms are superior to human judgment. The same mindset argues that using technology is always the best strategy. I call this technochauvinism.

Read More at MIT Press
