
It seems that everywhere one turns today, artificial intelligence (AI) is being added to daily life. Whether it be the arts, education, entertainment, search, or the workplace – AI is everywhere.
Those of us who are distinctly dubious about the claims being made for the current generation of AI – more appropriately labeled machine learning – can often feel like Cassandra of myth, fated never to be believed. At worst we are labeled luddites, rather than people who believe that technologies should earn their place in our lives and societies, not be instantly adopted on the word of people hoping to get rich who assure us that they work great and everything will be fine.
Ms. Schellmann’s exhaustive exploration of AI in the workplace is pretty damning.
It catalogs how Human Resource (HR) departments have been adopting technologies that are little understood by their users, who often labor under misapprehensions as to the scientific backing of the ideas behind these tools. The fundamental problem is often one of garbage in, garbage out – a phrase that has been with us since the dawn of the computer age. For more on this I recommend the excellent “Weapons of Math Destruction” by Cathy O’Neil, which I reviewed here. The majority of AI tools are black boxes that we cannot look inside to see how they work; the manufacturers consider the algorithms inside these black boxes proprietary intellectual property. Without being able to look inside the magic black box, it is often impossible to know whether an algorithm is inherently biased, is being trained on biased data, or is just plain wrong.
One of the things that comes up again and again in “The Algorithm” is the inability of AI – or of the people who program it – to tell the difference between correlation and causation. Just because a company’s best managers all played baseball does not mean that baseball should be a prerequisite for being a manager – particularly if it means that an AI would overlook someone who played softball, which is essentially the same sport. When one considers that men tend to play baseball and women tend to play softball, it is easy to see just how problematic these correlations can be.
The problems with correlation and causation are of course magnified when junk science is involved. Tone of voice, language usage, and facial expressions are being used to score candidates in virtual one-way interviews, and there is little to no science behind any of it. In one highly memorable section of the book, Ms. Schellmann speaks German to an AI tool that is assessing her customer service skills and quality of English, reading aloud from a Wikipedia entry. The tool rates her highly in both customer service and English even though she is speaking a different language and does not even try to answer the questions being asked.
Where the book falls down a little – though this probably says more about the sad state of business thinking – is on personality testing. The author seems to accept as scientifically valid the idea that employees can be categorized into one of a few simple types. You can read my review of “The Personality Brokers” by Merve Emre here for more on this nonsensical and dangerous business tool. As Ms. Schellmann rightly states in her takedown of how AI handles personality testing – a line that could just as well apply to all personality testing – “we’d be better off categorizing by star sign.”
It is disturbing just how much AI has already invaded the hiring practices of HR offices at large companies, and it gives one pause as these tools become more mainstream. It is true that the problem is often not the AI software itself but how the humans who wield such technologies choose to use them. There is also the problem of how hard it is for an employee to challenge a decision made by an algorithm – which by its very nature is a secret. The developers will often say that these tools should not be the final word in hiring or firing, but the knowing wink and smile behind these statements tells us everything we need to know.
Ms. Schellmann’s work is laser-focused on human resources, an area where bias has been, and often still is, a significant problem. The idea of a tool that can be used to eliminate bias – and the fact that companies want to use such tools – is not inherently bad; in fact, it is admirable. The problem is that bias in hiring is often unconscious, and tools wielded by people who are unaware of their own biases are likely to carry those same biases into the process. In addition, it is often difficult, if not impossible, for candidates or employees to challenge a manager’s decision that they feel has been affected by bias. How much more difficult is it when it is not a human making the decision or recommendation – a tool of which we cannot ask the most basic of questions: what were you thinking?
This is an important work for our time – hopefully one not fated, like Cassandra, to go unheeded.
