Isn’t it great to live in an age when machines can do anything? Cars drive themselves, jetliners land themselves, and smartphones do just about everything but tuck us into bed.
Recognition software can read our moods and even catch us telling lies. (That’s a good thing, right?) Programs can analyze our handwriting and predict our likes, dislikes, and likely actions by tracking our digital footprints. Soon, Amazon may be filling orders for us that we haven’t even placed yet.
In the workplace, software programs may start deciding who gets hired or promoted based on models constructed from data gathered about the highest-performing employees. These models may include variables based on medical history, psychological markers, and digital clues to everything about us, including age, gender, political leanings, and sexual preference.
In a recent TED Talk, Zeynep Tufekci acknowledges that these programs may make decisions more objectively than humans do. But she cautions that machines trained to infer and predict are only as good as their programming, and will of necessity reflect the biases of their programmers, which could mean compounding bias rather than eliminating it.
What’s more, the algorithms that produce this kind of “machine learning” leave no room for human insight or intuition. It is all statistical analysis, which turns probabilities into absolutes with no check by human reasoning and no avenue of appeal to a higher authority.
The more troubling issue is our willingness to abdicate the responsibility implicit in free choice. In a culture that has long conflated judgment with judgmentalism, it’s hardly surprising how eager people are to reduce every decision to a binary option and thereby eliminate all shades of gray. And if that’s not enough, we can simply block any information that doesn’t conform to our way of thinking.