Archives for posts with tag: mathematics

It is a hard time to be a skeptic about Artificial Intelligence (A.I.), or to give it its more proper title in its current iteration, Machine Learning. What do I mean by hard time? Well, there are plenty who accuse those who do not wholly embrace A.I. tools of being modern Luddites, people against any kind of progress (that is a slanderous misinterpretation of the position the Luddites actually held, but I digress…). Just look at the stock market and all the money pouring into A.I. research, say the true believers. For those who have never heard of a bubble: I have a bridge to sell you.

Then we could ask, what do I mean by skeptic? This is a surprisingly nuanced question when it comes to Machine Learning. I believe Machine Learning can do some interesting and useful things in our world. However, I do not believe that we are in any way asking the right questions or placing the right guardrails to protect those without whom these machine learning tools would not exist. I’m talking about those whose work is used to train A.I., are given no credit, and stand to suffer the most from a race to the bottom to find a machine that can do a good enough job to replace a costly human and make someone else a billionaire. I’m not a skeptic about Machine Learning. I am skeptical about people and our seemingly limitless capacity to exploit any opportunity, disguise it as something else, and then abdicate any responsibility for the consequences.

Mathematician Marcus du Sautoy, in an entertaining book, The Creativity Code: Art and Innovation in the Age of AI, acts as a proponent of Machine Learning, while at the same time having a self-confessed existential crisis over whether he is being put out of a job as a mathematician by A.I. Ultimately, the book fails due to the author’s lack of an ethical framework for this discussion. Written in 2019 (that’s before the days of ChatGPT, kiddies), Mr. Du Sautoy uses Ada Lovelace as a jumping-off point for his existential exploration of all things Machine Learning.

Ada Lovelace, born in 1815, was an English writer and mathematician who is frequently called the first computer programmer. She was also a colleague of Charles Babbage, the inventor of the Difference Engine and proposer of its follow-up, the Analytical Engine. It is Lovelace who is credited with the intellectual leap of understanding that the Analytical Engine was not just a calculation machine: once a machine could manipulate numbers, it could be applied to all sorts of subjects where numbers could take the place of other values. She is also famously known for a quote that seems to pour scorn on A.I.

 “The Analytical Engine has no pretensions whatever to originate anything. It can do whatever we know how to order it to perform. It can follow analysis; but it has no power of anticipating any analytical relations or truths.”

Mr. Du Sautoy ultimately believes that Ada Lovelace was mistaken, but I feel this is more a matter of interpretation than of clinical fact. What the author does rightly acknowledge is that data is the fuel of A.I., and that access to data will probably be the oil of the 21st century. Where he fails is in not grappling with the consequences of data as fuel and its ethical ramifications. “Don’t worry about all those people in the Middle East; they don’t matter considering all that oil that’s right under their feet,” the writer seems to be saying.

In The Creativity Code, there is some interesting exploration of how the use of algorithms is teaching us how humans think about subjects and how we go about creativity: an attempt to unlock the human algorithm. It is particularly insightful to recognize that the creative leap is not to create new things, but to recognize when one of those new things may have value to others.

While Mr. Du Sautoy worries about his own profession, he is all too ready to write off whole armies of other creative people because he does not consider the work they do to have value, whether that is writing business articles and reports or composing royalty-free background music. He fails to realize that it is this “bread and butter” creative work that allows writers and composers to work on projects dearer to their hearts. The author seems to believe that this “drudge work” is holding them back from doing more interesting things. No, it’s the money these creatives charge that has an impact on the bottom line, and Machine Learning is cheaper. These creative people will not have more time for more interesting work. They will be unemployed. That the previous work of these creatives is used by machine learning as part of its training data, its fuel, is of course just salt in the wound.

Indeed, Mr. Du Sautoy blithely admits that he asked a Machine Learning tool to write a section of the book for him. In a fit of worry about plagiarism, he hunts down an almost identical article on the internet – but then keeps the section in the book, saying, “if I get sued for plagiarism, we can then agree that this is a bad idea.”

This book probably suffers from being a book of its time, written before there was seemingly endless hype and not enough skepticism surrounding Machine Learning. And that is such a shame, as the book is genuinely entertaining. The section on the game Go in particular raises some interesting questions. However, the lack of ethical awareness is unforgivable and tarnishes this otherwise interesting and entertaining volume.

There are few people who would argue with the statement that math is at the heart of most of our modern world. What is less well understood is what happens when that math goes wrong. And it does. All the time!

Matt Parker’s highly amusing and thought-provoking book is about math and computers, but what becomes clearer as the book goes on is that this is also a book about systems and how and why systems can fail. There are lots of examples of people adding up numbers incorrectly or taking shortcuts to make the math simpler, which in turn leads to devastating and sometimes lethal consequences. However, it is in the subtler applications of mathematics where “Humble Pi” really scores.

For example, looking at a 30- or 40-year-old kitchen appliance that is still in use is often accompanied by a phrase such as “they don’t make things today like they used to.” While this might seem obvious at first glance, given that we are talking about an appliance working well beyond its expected lifespan, it is actually an example of “Survivor Bias.” If we looked at how many of these appliances had been manufactured, and then at how many were still in daily use, the chances are we would recognize that this surviving appliance is an outlier and that the vast majority have long since broken down or been replaced. It is only because the survivor is rare that we remark on it at all; were such appliances still commonplace, nobody would comment. The appliance’s rarity generates a false narrative that can only be corrected by understanding the underlying math: how many were produced versus how many survive.
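The arithmetic behind this kind of survivorship reasoning is simple enough to sketch. The figures below are invented purely for illustration – suppose 100,000 appliances were built 35 years ago, and each year any given appliance has a 12% chance of breaking down or being replaced:

```python
# A back-of-the-envelope sketch of survivorship bias.
# All numbers here are invented for illustration only.
made = 100_000            # appliances manufactured ~35 years ago
annual_failure_rate = 0.12  # chance a given appliance fails in any year
years = 35

# Chance that any single appliance is still running today:
# it must survive every one of the 35 years.
survival_prob = (1 - annual_failure_rate) ** years
expected_survivors = made * survival_prob

print(f"Probability one appliance survives {years} years: {survival_prob:.2%}")
print(f"Expected survivors out of {made:,}: {expected_survivors:.0f}")
```

Under these made-up numbers, only around 1% of the appliances are still running – so the one humming away in a kitchen says more about the luck of that unit than about build quality.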

For managers there is much to take away from Humble Pi. Mr. Parker encourages us to look at systems like layers of sliced Swiss cheese. All systems should be made of multiple layers – the checks and balances of any good system. But it is important to understand that there are possibilities for mistakes in every layer of a system – the holes in the cheese. The challenge for designers of systems is to ensure that the holes in each layer do not align. The author uses the example of two different nurses in a hospital performing the same complicated drug calculation the same way, both making the same math mistake, and thereby causing a medical error.
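The Swiss-cheese idea can be put in rough numbers. In this sketch (the miss rates are invented for illustration), each layer of checks has some chance of missing an error; if the layers fail independently, the chance of an error slipping through every one shrinks multiplicatively, but if the holes line up, extra layers add nothing:

```python
# Illustrative sketch of the Swiss-cheese model of system failure.
# The miss probabilities below are invented for illustration only.

def slip_through_prob(layer_miss_probs):
    """Probability an error passes every independent layer of checks."""
    p = 1.0
    for miss in layer_miss_probs:
        p *= miss
    return p

# Three independent layers that each miss 5% of errors:
# only 0.05 * 0.05 * 0.05 = 0.0125% of errors get through.
independent = slip_through_prob([0.05, 0.05, 0.05])

# Two nurses doing the same calculation the same way: their
# "holes" align, so the second check catches nothing new.
aligned = 0.05

print(f"Independent layers let through: {independent:.4%} of errors")
print(f"Aligned holes let through:      {aligned:.4%} of errors")
```

The design lesson is the independence assumption: two checks only multiply their protection if they can fail in different ways.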

Related to this idea of errors being a natural part of a system is the impact that a lack of tolerance for errors has on new employee training. If managers terminate employees for making mistakes, the people left to train new employees are those who are much less likely to make mistakes. These are probably the worst people to train new employees, who are obviously more prone to making mistakes. If instead we teach employees to work within a system that can detect mistakes and provide feedback, a system where the holes do not line up, then we will have far fewer mistakes overall – even when people are new. As the book says, humans can be very resourceful in finding ways to make mistakes.

This is not just a book about rounding errors, or about why you should turn your computer off regularly. It is a book about what it means to be human in a world that relies on, and is built upon, mathematics – something humans are inherently not very good at. It is a fun and interesting read that will stay with you long after you put it down.