Artificial Intelligence and Racial Bias

   Artificial intelligence (A.I.) is a fast-growing technology drawing enormous attention and investment from the major tech companies. Much like the race to space, mastering A.I. is a multibillion-dollar pursuit, and all the usual tech giant suspects (Alphabet, Apple, Facebook, Microsoft, Amazon, and the rest) are major players. Mastering A.I. is expected to reap rewards for decades to come. However, Fortune Magazine exposes a rarely mentioned dark side of A.I.: the same technology expected to bring trillions of dollars in economic growth suffers from one of humanity's biggest shortfalls, bias.

  The July edition of Fortune's A.I. Special Report has a very interesting article titled "Unmasking A.I.'s Bias Problem" by Jonathan Vanian. Vanian points out that, for all the deep-learning techniques and sophisticated algorithms, you can't escape the human element: "But for all of their enormous potential, A.I.-powered systems have a dark side. Their decisions are only as good as the data that humans feed them." It's no wonder when you break the term down to its most basic definitions. Here's how Merriam-Webster's online dictionary defines the two keywords.

artificial adjective (1) humanly contrived…often on a human model
intelligence noun (1) : the ability to learn or understand or to deal with new or trying situations : reason; also : the skilled use of reason (2) : the ability to apply knowledge to manipulate one's environment or to think abstractly as measured by objective criteria (such as tests)

  Deirdre Mulligan, an associate professor at the University of California, sums it up: even the most powerful algorithms haven't been optimized for any definition of fairness. Computers, she explains, are optimized to perform tasks. When we step into the realm of thinking, reacting, and human-level decision making, we enter territory we may never master. Siri, Alexa, or Google Assistant can lull us into believing these technologies are neutral or innocuous. Quite the contrary.

   Fortune highlights an example where developers build an A.I. program to scan the characteristics of a company's best performers. If the deep learning this program draws from is based on the highest-ranking executives being white males, the program may simply accept the company's past as the template. Will it take into consideration that discriminatory practices may have limited the advancement of other races or genders? Instead of negating those biases, A.I. might amplify them.
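
   To make that concrete, here is a small, purely hypothetical sketch, built entirely on synthetic data (not Fortune's example or any real company's system), of how a model trained on biased historical "top performer" labels can reproduce that skew even when the group label is never handed to it directly:

```python
# Purely hypothetical sketch with synthetic data -- it shows how biased
# historical labels can leak into a model's predictions even when the
# protected attribute is left out of the inputs.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Synthetic protected attribute: 0 = group A, 1 = group B.
group = rng.integers(0, 2, size=n)

# A job-relevant skill score, distributed identically across both groups.
skill = rng.normal(0.0, 1.0, size=n)

# Historical "top performer" labels that favored group A regardless of skill,
# standing in for biased past promotion decisions.
top_performer = (skill + 1.5 * (group == 0) + rng.normal(0.0, 1.0, size=n)) > 1.0

# A seemingly neutral feature that happens to correlate with group membership
# (think of a proxy like zip code or alma mater).
proxy = group + rng.normal(0.0, 0.3, size=n)

# Note that 'group' itself is never used as an input feature.
X = np.column_stack([skill, proxy])
model = LogisticRegression().fit(X, top_performer)

predicted = model.predict(X)
for g, name in [(0, "group A"), (1, "group B")]:
    rate = predicted[group == g].mean()
    print(f"{name}: predicted top-performer rate = {rate:.2f}")
# The proxy feature lets the model learn the historical skew, so group A is
# "predicted" to be a top performer far more often than group B.
```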

   Does this mean we should abandon A.I.? Not in the least. If anything, we need to explore these technologies even more deeply. One area is what Fortune calls "black box" systems. Manufacturers claim proprietorship over the algorithms they develop, and those proprietary claims eliminate transparency. This lack of transparency means the public isn't able to see what data or deep-learning techniques form the basis for these products. Vanian sums it up this way: "More transparency and openness about the data that goes into A.I.'s black-box systems will help researchers spot bias faster and solve problems more quickly."
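
   Vanian's point about openness hints at the kind of simple check outsiders could run if a system's decisions were visible to them. The sketch below is only an illustration (the group labels, the toy decisions, and the 0.8 "four-fifths rule" threshold are my own assumptions, not details from the Fortune article): it compares selection rates across groups and flags a large gap.

```python
# Illustrative audit sketch -- the decisions, group labels, and the 0.8
# threshold are assumptions for this example, not from the article.
from collections import defaultdict

def selection_rates(decisions, groups):
    """Fraction of positive decisions for each group."""
    totals, positives = defaultdict(int), defaultdict(int)
    for decision, group in zip(decisions, groups):
        totals[group] += 1
        positives[group] += int(decision)
    return {g: positives[g] / totals[g] for g in totals}

def disparate_impact_ratio(decisions, groups):
    """Lowest group selection rate divided by the highest."""
    rates = selection_rates(decisions, groups)
    return min(rates.values()) / max(rates.values())

# Toy output from some opaque "black box" system:
decisions = [1, 1, 1, 1, 0, 1, 0, 0, 0, 0]
groups    = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

print(selection_rates(decisions, groups))          # {'A': 0.8, 'B': 0.2}
print(disparate_impact_ratio(decisions, groups))   # 0.25 -- well below 0.8
```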

   How ironic, or maybe how egotistical, to think that we can program a perfect intelligence free of the inherent biases we all have. I think we've seen this story before. It was the theme of Mary Shelley's famous novel, Frankenstein. Let's make sure the doctor picks the correct brain this time.