“Any sufficiently advanced technology is indistinguishable from magic” – Arthur C. Clarke.
In a remarkable about-face for a technology company, Amazon has confirmed that it is moving away from its “Just Walk Out” technology at its Amazon Fresh stores. Amazon boasted that the technology used a mixture of cameras, sensors, and artificial intelligence (A.I.) to know what consumers had put in their baskets and to bill them accurately without all that tedious checking out and interacting with another human being at the grocery store.
What was actually happening was that up to 1,000 people in India were watching and tagging videos to ensure that customers were billed correctly. Amazon has apparently laid off almost the entire development team behind this “technology” and will start to phase out the service from its existing Amazon Fresh stores. This is all the more surprising after Amazon’s experience with A.I. recruitment. In 2015 Amazon had to abandon an A.I. résumé-reading project because it could not stop the system from discriminating against women. That failure was seen by many as a humiliating comedown for the tech giant.
“Pay no attention to that man behind the curtain!” — The Wonderful Wizard of Oz, L. Frank Baum.
While many will smirk at Amazon’s second major public A.I. failure, and I have to admit to being one of those people, there is a bigger issue here for which Amazon should be commended. It is the lifting of the veil on A.I. tools, which are not some magic that comes out of the ether. They often require human intervention to be usable, both in front of and behind the keyboard. In addition, A.I., or more accurately machine learning, needs examples of human labor in the thousands, if not millions, to be trained. The training of these A.I. “models” has become a contentious subject for those with an interest in A.I., supporters and critics alike.
“Any technology distinguishable from magic is insufficiently advanced” – Barry Gehm’s corollary to Arthur C. Clarke’s original quote.
The main issue with machine learning is that the A.I. industry, almost without exception, sees art, music, writing, film, and pretty much the entire internet as fair game for training A.I. models, which they in turn sell to us in the guise of generative A.I. Those of us on the other side (waves hand in air to indicate exactly where I stand on this subject in case you had not already guessed) say that copyright does not work that way. Derivative works are still derivative.
It is indeed hilarious to watch companies such as Disney try to navigate this brave new world. On the one hand, Disney has tried to argue that generative A.I. is fine for them to use to create new works based on the work of artists they have employed in the past. But Disney has then complained about possible copyright infringement when someone else has tried the same trick with copyrighted works they own.
The lawyer who used ChatGPT to write a legal brief might want the machines to infringe a bit more. To his cost, literally, the lawyer found out that the pesky machine had simply made up all the cases it cited in the argument to which he signed his name. He was sanctioned and fined after he was found out. I just love that generative A.I. tools “hallucinate” (the developers’ term, not mine).
One of my favorite activities these days is to ask A.I. peddlers what they use to train their models. Indeed, I had a most entertaining afternoon doing just that at this year’s Western Veterinary Conference. Amongst the answers I received were “none of your business – who are you” (my favorite), “medical records from a university,” and “the internet.” None of the vendors I spoke to were willing to discuss privacy, copyright, or what happens if they are no longer allowed to train their models that way. One gets the distinct impression of building on borrowed land.
The latest darling of the A.I. generation tools is Sora, which creates beautiful full-motion video from text prompts and comes from the OpenAI stable. However, in a recent interview with the Wall Street Journal, Mira Murati, OpenAI’s Chief Technology Officer, refused to answer questions about where Sora’s data set for modeling came from. Murati also refused to say whether the data set that Sora used included YouTube and Instagram videos, stating that she “did not know.” That in turn has led to some serious questions about licensing, as YouTube’s CEO, Neal Mohan, confirmed that OpenAI using YouTube content for modeling purposes would be a violation of YouTube’s terms of service.
“Thou shalt not make a machine in the likeness of a human mind” – Dune, Frank Herbert.
There is a temptation to label those who speak out about our current infatuation with A.I. tools, and who criticize the foundations those tools are built on, as Luddites. While our current use of the word brings to mind hordes of unemployed mill workers bent on smashing the spinning jenny, the truth about the Luddites is actually far more nuanced and carries a message for today. The Luddites did not hate all machines; in fact they were fine with most and simply wanted them run by workers who had gone through apprenticeships and were paid decent wages. The Luddites’ main concern was manufacturers who used machines in “a fraudulent and deceitful manner,” notes Kevin Binfield in his book “Writings of the Luddites.” Outsourcing the checking out of grocery shopping to a developing country, and labeling it as new technology, is a tactic the Luddites would have been all too familiar with and would have been happy to march against.
While I am not advocating for a Butlerian Jihad of the sort Herbert described as the backdrop for Dune, there is merit in the context he provides for the proscription on thinking machines.
“Once men turned their thinking over to machines in the hope that this would set them free. But that only permitted other men with machines to enslave them.” – Dune, Frank Herbert.
As author SJ Sindu wrote on Twitter (I refuse to call it X on general principles): “We don’t need AI to make art. We need AI to write emails and clean the house and deliver the groceries so humans can make more art.”
A.I. art needs human art to model itself on, and the pushback from artists and consumers is already significant. By the time the argument over modeling reaches the courts, the damage may already have been done. Only then will we see the parallels between the creative arts and A.I. that we saw in the 2000s with Napster and The Pirate Bay and music. Will it be too late to put this tool back in its box?
A healthy skepticism when it comes to A.I. is, I think, all-important. And not just skepticism about what A.I. can do, but about the intentions of those who wield it.
A.I. will need to be “open,” and not just open as in the name of a for-profit corporation. Its models will need to be transparent and able to be questioned. As I wrote in my review of Hilke Schellmann’s book on A.I. in hiring and human resources, “The Algorithm”: “…it is often difficult to impossible for candidates or employees to challenge decisions by managers which they may feel have been affected by bias. How much more difficult is it when it is not a human making the decision or recommendation? A tool of which we cannot ask the most basic of questions: what were you thinking?”
Footnotes and links would be a great start. But most generative A.I. companies consider this proprietary information and therefore refuse to provide what would seem a most obvious step when it comes to trust. That, in fact, is exactly why authors use footnotes and links, to allow others to follow their thinking on how they reached their conclusions. I’ve tried to add as many links and footnotes as I can to this article without becoming burdensome.
I am not a Luddite in the modern sense, but I do share many of the same concerns as the Luddites of old. We only need to look at our world to see why we should be concerned. It is a world where poor people in the developing world watch us shop so that we can pretend we are living in a magic future where machines do all the work. Where the drudgery of making art has been taken away from us so it can be sold back to us by corporations owned by billionaires.
I’m not sure I want A.I. to write my emails, but I can think of plenty of things I’d like it to undertake. I already use it in a number of ways. I’ve used A.I. images in my books (although I probably will not do so in the future). I currently feel that A.I. has to earn its place in my world by proving its benefits not just to me, but to the world as a whole. Will the undertakings of A.I. be for the benefit of people? Currently, that seems to be the last thing on the developers’ minds.
“The tune had been haunting London for weeks past. It was one of countless similar songs published for the benefit of the proles by a sub-section of the Music Department. The words of these songs were composed without any human intervention whatever on an instrument known as a versificator. But the woman sang so tunefully as to turn the dreadful rubbish into an almost pleasant sound.” – 1984, George Orwell