Archives for posts with tag: A.I.

It is a hard time to be a skeptic about Artificial Intelligence (A.I.), or to give it its more proper title in its current iteration: Machine Learning. What do I mean by hard time? Well, there are plenty who accuse those who do not wholly embrace A.I. tools of being modern Luddites, people against any kind of progress (that is a slanderous, gross misinterpretation of the position the Luddites held, but I digress…). Just look at the stock market and all the money pouring into A.I. research, say the true believers. For those who have never heard of a bubble: I have a bridge to sell you.

Then we could ask, what do I mean by skeptic? This is a surprisingly nuanced question when it comes to Machine Learning. I believe Machine Learning can do some interesting and useful things in our world. However, I do not believe that we are in any way asking the right questions or placing the right guardrails to protect those without whom these machine learning tools would not exist. I’m talking about those whose work is used to train A.I., are given no credit, and stand to suffer the most from a race to the bottom to find a machine that can do a good enough job to replace a costly human and make someone else a billionaire. I’m not a skeptic about Machine Learning. I am skeptical about people and our seemingly limitless capacity to exploit any opportunity, disguise it as something else, and then abdicate any responsibility for the consequences.

Mathematician Marcus du Sautoy, in an entertaining book, The Creativity Code: Art and Innovation in the Age of AI, acts as a proponent of Machine Learning, while at the same time having a self-confessed existential crisis over whether he is being put out of a job as a mathematician by A.I. Ultimately, the book fails due to the author’s lack of an ethical framework for this discussion. Written in 2019, before the days of ChatGPT kiddies, Mr. Du Sautoy uses Ada Lovelace as a jumping-off point for his existential exploration of all things Machine Learning.

Ada Lovelace, born in 1815, was an English writer and mathematician who is frequently called the first computer programmer. She was also a colleague of Charles Babbage, the inventor of the Difference Engine and proposer of its follow-up, the Analytical Engine. It is Lovelace who is credited with the intellectual leap of understanding that the Analytical Engine was not just a calculation machine: that once a machine understood numbers, it could be applied to all sorts of subjects where numbers could take the place of other values. She is also famously known for a quote that seems to pour scorn on A.I.

 “The Analytical Engine has no pretensions whatever to originate anything. It can do whatever we know how to order it to perform. It can follow analysis; but it has no power of anticipating any analytical relations or truths.”

Mr. Du Sautoy ultimately believes that Ada Lovelace was mistaken, but I feel this comes down to interpretation rather than clinical fact. What the author does rightly acknowledge is that data is the fuel of A.I., and that access to data will probably be the oil of the 21st century. Where he fails is in not grappling with the consequences of data as fuel and its ethical ramifications. “Don’t worry about all those people in the Middle East; they don’t matter considering all that oil that’s right under their feet,” the writer seems to be saying.

In The Creativity Code, there is some interesting exploration of how the use of algorithms is teaching us about how humans think about subjects and how we go about creativity: an attempt to unlock the human algorithm. It is particularly insightful to recognize that the creative leap is not creating new things, but recognizing when one of those new things may have value to others.

While Mr. Du Sautoy worries about his own profession, he is all too ready to write off whole armies of other creative people because he does not consider the work they do to have value, whether that is writing business articles and reports or composing royalty-free background music. He fails to realize that it is this “bread and butter” creative work that allows writers and composers to work on projects more dear to their hearts. The author seems to believe that this “drudge work” is holding them back from doing more interesting things. No: it’s the money these creatives charge that has an impact on the bottom line, and Machine Learning is cheaper. These creative people will not have more time for more interesting work. They will be unemployed. That the previous work of these creatives is used by machine learning as part of its training data, its fuel, is of course just salt in the wound.

Indeed, Mr. Du Sautoy blithely admits that he asked a Machine Learning tool to write a section of the book for him. In a fit of worry about plagiarism, he hunts down an almost identical article on the internet – but then keeps the section in the book, saying: “if I get sued for plagiarism, we can then agree that this is a bad idea.”

This book probably suffers from being a book of its time, written before the seemingly endless hype, and insufficient skepticism, that now surrounds Machine Learning. And that is such a shame, as the book is genuinely entertaining. The section on the game Go in particular raises some interesting questions. However, the lack of ethical awareness is unforgivable and tarnishes this otherwise interesting and entertaining volume.

Image of George Orwell by Gordon Johnson from Pixabay

“Any sufficiently advanced technology is indistinguishable from magic” – Arthur C. Clarke.

In a remarkable about-face for a technology company, Amazon has confirmed that it is moving away from its “just walk out” technology at its Amazon Fresh stores. The technology boasted that it used a mixture of cameras, sensors, and artificial intelligence (A.I.) to know what consumers had put in their baskets and to accurately bill customers without all that tedious checking out and interacting with another human being at the grocery store.

Image Copyright Amazon.com used under fair use for criticism, comment, or news reporting.

What was actually happening was that up to 1,000 people in India were watching and tagging videos to ensure that customers were billed correctly. Amazon has apparently laid off almost the entire development team behind this “technology” and will start to phase out the service from its existing Amazon Fresh stores. This is all the more surprising after Amazon’s earlier experience with A.I. recruitment: in 2015 Amazon had to abandon an A.I. résumé-reading project after being unable to stop it from discriminating against women. It was seen by many as a humiliating comedown for the tech giant.

Image Copyright Amazon.com used under fair use for criticism, comment, or news reporting.

“Pay no attention to that man behind the curtain!” – The Wizard of Oz (1939 film), based on the book by L. Frank Baum.

While many will smirk at Amazon’s second major public A.I. failure, and I have to admit to being one of those people, there is a bigger issue here for which Amazon should be commended. It lifts the veil on A.I. tools, showing that they are not some magic that comes out of the ether. They often require human intervention to be usable, both in front of and behind the keyboard. In addition, A.I., or more accurately Machine Learning, needs examples of human labor in the thousands, if not millions, to be trained. The training of these A.I. “models” has become a contentious subject for those with an interest in A.I., both supporters and critics.

“Any technology distinguishable from magic is insufficiently advanced” – Barry Gehm’s corollary to Arthur C. Clarke’s original quote.

The main issue with machine learning is that the A.I. industry, almost without exception, sees art, music, writing, film, and pretty much the entire internet as fair game for training A.I. models, which they in turn sell to us in the guise of generative A.I. Those of us on the other side (waves hand in air to indicate exactly where I stand on this subject in case you had not already guessed) say that copyright does not work that way. Derivative works are still derivative.

It is indeed hilarious to watch companies such as Disney try to navigate this brave new world. On the one hand, Disney has argued that generative A.I. is fine for it to use to create new works based on the work of artists it has employed in the past. But Disney has then complained about possible copyright infringement when someone else has tried the same trick with copyrighted works Disney owns.

Image Copyright Walt Disney Company used under fair use for criticism, comment, or news reporting.

The lawyer who used ChatGPT to write a legal brief might want the machines to infringe a bit more. To his cost, literally, the lawyer found out that the pesky machine had simply made up all the cases it cited in the argument to which he signed his name. He was sanctioned and fined after he was found out. I just love that generative A.I. tools “hallucinate” (the developers’ term, not mine).

One of my favorite activities these days is to ask A.I. peddlers what they use to train their models. Indeed, I had a most entertaining afternoon doing just that at this year’s Western Veterinary Conference. Amongst the answers I received were “none of your business – who are you” (my favorite), “medical records from a university,” and “the internet.” None of the vendors I spoke to were willing to discuss privacy, copyright, or what happens if they are no longer allowed to train their models that way. One gets the distinct impression of building on borrowed land.

The latest darling of the A.I. generation tools is Sora, which creates beautiful full-motion video from text prompts and comes from the OpenAI stable. However, in a recent interview with the Wall Street Journal, Mira Murati, OpenAI’s Chief Technology Officer, refused to answer questions about where Sora’s training data came from. Murati also refused to say whether the data set Sora used included YouTube and Instagram videos, stating that she “did not know.” That in turn has led to some serious questions about licensing, as YouTube’s CEO, Neal Mohan, confirmed that OpenAI using YouTube content for training purposes would be a violation of YouTube’s terms of service.

“Thou shalt not make a machine in the likeness of a human mind” – Dune, Frank Herbert

There is a temptation to label those who speak out about our current infatuation with A.I. tools, and who criticize the foundations those tools are built on, as Luddites. While our current use of the word brings to mind hordes of unemployed mill workers bent on smashing “the spinning jenny,” the truth about the Luddites is actually far more nuanced and carries a message for today. The Luddites did not hate all machines; in fact they were fine with most, and simply wanted them run by workers who had gone through apprenticeships and were paid decent wages. The Luddites’ main concern was manufacturers who used machines in “a fraudulent and deceitful manner,” notes Kevin Binfield in his book “Writings of the Luddites.” Outsourcing the checkout of grocery shopping to a developing country, and labeling it as new technology, is a tactic the Luddites would have been all too familiar with and would have been happy to march against.

While I am not advocating for a Butlerian Jihad, which Herbert described as the backdrop for Dune, there is merit in the context he provides for the proscription on thinking machines.

“Once men turned their thinking over to machines in the hope that this would set them free. But that only permitted other men with machines to enslave them.” – Dune, Frank Herbert.

As author SJ Sindu wrote on Twitter (I refuse to call it X on general principles); “We don’t need AI to make art. We need AI to write emails and clean the house and deliver the groceries so humans can make more art.”

A.I. art needs human art to model itself on and the pushback from artists and consumers is already significant. When the argument over modeling reaches the courts, the damage may already be done. Only then will we see the parallels between the creative arts and A.I. that we saw in the 2000s with Napster / Pirate Bay and music. Will it be too late to put this tool back in its box?

A healthy skepticism when it comes to A.I. is, I think, all-important. And not just skepticism about what A.I. can do, but about the intentions of those who wield it.

A.I. will need to be “open,” and not just open as in the name of a for-profit corporation. Its models will need to be transparent and open to questioning. As I wrote in my review of Hilke Schellmann’s book on A.I. in hiring and Human Resources, “The Algorithm”: it is often difficult to impossible for candidates or employees to challenge decisions by managers which they may feel have been affected by bias. How much more difficult is it when it is not a human making the decision or recommendation? A tool of which we cannot ask the most basic of questions: what were you thinking?

Footnotes and links would be a great start. But most generative A.I. companies consider this proprietary information and therefore refuse to provide what would seem a most obvious step when it comes to trust. That, in fact, is exactly why authors use footnotes and links, to allow others to follow their thinking on how they reached their conclusions. I’ve tried to add as many links and footnotes as I can to this article without becoming burdensome.

I am not a Luddite in the modern sense, but I do share a lot of the same concerns of the Luddites of old. We only need to look at our world to see why we should be concerned. It is a world where poor people in the developing world watch us shop so that we can pretend we are living in a magic future where machines do all the work. Where the drudgery of making art has been taken away from us so it can be sold back to us by corporations owned by billionaires.

I’m not sure I want A.I. to write my emails, but I can think of plenty of things that I’d like it to undertake. I already use it in a number of ways. I’ve used A.I. images in my books (although I probably will not do so in the future). I currently feel that A.I. has to earn its place in my world by proving its benefits not just to me, but the world as a whole. Will the undertakings of A.I. be for the benefit of people? Currently, that seems to be the last thing on the developers’ minds.

“The tune had been haunting London for weeks past. It was one of countless similar songs published for the benefit of the proles by a sub-section of the Music Department. The words of these songs were composed without any human intervention whatever on an instrument known as a versificator. But the woman sang so tunefully as to turn the dreadful rubbish into an almost pleasant sound.” – 1984, George Orwell

Want to get really scared and hopeful at the same time?

Scary Smart is a shallow dive into the technology of Artificial Intelligence (A.I.) and a deep dive into the ethics and morality of those who are most responsible for how A.I. will turn out:

Us.

A.I. may seem like the new buzz term, with its adoption into our daily lives through products like OpenAI’s ChatGPT platform and Bing’s question-answering search; however, A.I. is baked into almost everything we do with technology. Every app on our phones and every social media platform we interact with has A.I.’s fingerprints all over it.

Mr. Gawdat’s premise in “Scary Smart” is that A.I. is a child, and that the best way to predict what kind of teen and adult we will get is to be good parents. A brilliant initial example from our current comic-book-obsessed culture is Superman. What kind of Superman would Clark Kent have become if Jonathan and Martha Kent had been greedy, selfish, and aggressive? There is no doubt that A.I. is already smarter than humans in many specialized areas, but what happens when A.I. becomes generally smarter than the smartest human and has access to all the knowledge of humanity through the internet?

Unfortunately, humanity is not doing a very good job of raising A.I. as a child. From our methods of creating and improving these machine intelligences all the way through to the tasks we are giving them to perform, we are emphasizing our worst instincts: to create wealth, surveil our citizenry, gamble, and, coming to a battlefield near you soon, to kill people.

We as a society may feel we have no choice but to use A.I. in this way. If a foreign power, or terrorists, use A.I.-controlled drones which are smarter and more efficient than any human, the only way to fight back may be to use A.I. in a similar fashion. But what does that teach our new artificial children? A.I.s already have a disturbing habit of developing their own language when they communicate together and of finding ways to communicate with each other. What happens when an A.I. that has been taught to ruthlessly buy and sell shares to maximize short-term profits starts to talk to an A.I. that has been taught to ruthlessly kill the enemies shown to it?

The author’s excellent example of what might happen is the world’s reaction to the outbreak of COVID-19: ignore the problem, try to blame someone else, and ultimately overreact, upending our society. We may try to put the A.I. genie back in the bottle through pulling the plug or lockdowns, but we will fail. A.I.s will be faster, smarter, and have more knowledge than any human or group of humans. While a lot of this may seem like the stuff of Hollywood blockbusters, Mo Gawdat is at pains to explain that there is little disagreement within the A.I. community that these risks are real. How real is where the disagreements start.

The possible solution to these issues, the author postulates, lies not with the developers but with users and how we define our relationship with A.I. Throughout history, master-slave relationships have not ended well for the masters, with good reason. How we interact with and decide to use A.I. will define what kind of parents we will be to this fledgling new intelligence: an intelligence whose instances, although they may start out separately, will share information and communicate with each other so quickly, and with access to the memories and experiences of all those that have come before them, that it will be impossible not to consider them a single intelligence.

This then leads us to ask what kind of example we will set for these new children. While A.I.s have already shown that they can develop a sense of morals through their interactions with users, and not in a good way, they will also learn from our interactions with each other. Machine morality may very well not be programmed by developers, but learned from observing and interacting with us. What are machines already learning from our social media, search habits, and politics?

This is a thought-provoking and important work, essentially about morals and ethics within the framework of A.I., that occasionally reads like an Arnold Schwarzenegger movie. If we ignore the topics it raises, we deserve our fate.

And while Mr. Gawdat paints a hopeful portrait, he also shows us just how bleak things could get.