
This article is going to use the term AI, even though I much prefer the more accurate and less marketing-friendly term “machine learning.” But this article is about you, dear reader, not me.

Reason to Worry #1: Mid-Level Practitioners

I should preface this section by stating that, in theory, I have no issue with the creation of a midlevel practitioner in the vein of Nurse Practitioners in the human world. My main concern is that the veterinary profession has decidedly steered away from this kind of thing in the past; I’m looking at you, Veterinary Technician Specialists (VTS). Show me an LVT / RVT / CVT with a VTS in dentistry who can’t extract any teeth, and I’ll show you a missed opportunity.

Colorado State University (CSU) has become ground zero in the midlevel practitioner debate. The idea of a Veterinary Professional Associate (VPA) was proposed as early as 2009 by a member of CSU and an alliance of multiple non-profit animal welfare / rescue groups. This alliance gathered enough signatures for a ballot proposition, which passed despite significant opposition from just about every veterinary professional body. A more in-depth retelling and an examination of the issues can be found here: https://www.avma.org/news/veterinary-professional-associate-role-moves-ahead

My other concern is that there is so little appetite for a midlevel practitioner within the profession that my “spidey sense” starts to tingle as to what else might come of this VPA.

More on this later…

Reason to Worry #2: The Erosion of the VCPR

Across the country – before, during, and after the pandemic – moves were made to weaken the requirements of the Veterinarian-Client-Patient Relationship (VCPR).

Ostensibly, this was to allow telemedicine to initiate treatment without a physical exam of the patient. While there are some champions of telemedicine within the profession, clients only seem to have a stomach for it if it costs nothing or lets them buy medications online.

If the pandemic taught us anything, it was that Zoom is a poor substitute for meeting in person. Meanwhile, the push to allow telemedicine to replace an exam continues.

Reason to Worry #3: AI medical record writing is not what you think.

It seems like every cloud-based practice management system and every veterinary startup is selling a service that takes the conversation from the exam room and writes up medical records in a format that every vet board will love. It sounds like the perfect product: cheap, quick, and it removes the drudgery of a task that just about every veterinarian hates – a task that takes time away from patients and clients.
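As a rough sketch of how this class of product plausibly works – every function name and the prompt below are my own invention for illustration, not any vendor’s actual pipeline – the core is just speech-to-text followed by a large-language-model summarization step:

```python
# Hypothetical sketch of an AI scribe pipeline; not any vendor's real code.
# The two components are placeholders for whatever ASR and LLM a product uses.

def transcribe(audio_path: str) -> str:
    """Placeholder for a speech-to-text step (an off-the-shelf ASR model)."""
    raise NotImplementedError("swap in a real speech-to-text model here")

def complete(prompt: str) -> str:
    """Placeholder for a call to a large language model."""
    raise NotImplementedError("swap in a real LLM call here")

SOAP_PROMPT = """You are a veterinary scribe. From the exam-room transcript
below, write a SOAP note (Subjective, Objective, Assessment, Plan).
Do not invent findings that are not in the transcript.

Transcript:
{transcript}
"""

def exam_to_record(audio_path: str) -> str:
    transcript = transcribe(audio_path)  # exam-room conversation -> text
    return complete(SOAP_PROMPT.format(transcript=transcript))  # text -> draft note
```

Note what even this toy sketch makes obvious: the output is a draft, and the DVM who signs it – not the model – is the one the state board will hold responsible for its contents.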

Ignoring the inevitable veterinary board cases where the AI service just gets things wrong and the DVM did not double-check, the real concern is where these services are going and what they will turn into.

Machine learning requires data to learn from – large data sets. As AI commentator Subhasish Baidya puts it, current AI tools are “decent summarization engines and lukewarm guessing machines.”

As Apple recently stated, we are a long way off from “thinking machines,” and the hype about Artificial General Intelligence is misplaced.

So if AI needs large data sets in order to work, so what? It just makes the product better, right?

But what if the end product is actually something else entirely?

What else could a machine that learns what is talked about in an exam room do? If the medical record is meant to reflect the diagnostic process, and we are even nice enough to correct AI tools when they get the record wrong, how long before they start suggesting the diagnosis for us?

At this year’s WVC conference, I was told that exactly this – AI-suggested diagnoses – would launch this year.

A Problematic Veterinary Triad

Suggesting a diagnosis based on existing data is not particularly new. The issue is, and I know I start to sound like a conspiracy theorist here, the other two reasons to worry. Because if I can have a midlevel practitioner or even a credentialed veterinary technician perform the exam and talk to the client, and have the results reviewed by an AI that’s reasonably good at coming up with what might be wrong, why do I need a DVM?

“Well, the practice acts, for one!” I hear you say. My response: remember all that weakening of the VCPR? Why does the vet have to be on site? They could be in a different state or even a different country.

We are devaluing what it means to be a veterinarian and the role veterinarians play in the care of pets.

I wish I were so smart that nobody else was thinking in these terms and I could claim my tech-bro title. That way I could build my AI startup, combine it with a chain of low-cost veterinary clinics bankrolled by venture capitalists, and then turn around and sell the whole thing for billions. If I am… well then, tech bros, you’re welcome to my idea – my ethics can’t stomach it.

When I talk to vet students about this problematic triad they are horrified – literally horrified. When I talk to people who think about the future of veterinary medicine, they say “of course” and then tell me how they are planning to leverage these things.

When I talk to practice owners, they either reject the premise or shrug their shoulders and say “so what.” Nobody is looking to make AI models that replace upper management at the moment. We are the ones who buy those tools – tech bros are not stupid in that way.

When I talk to AI companies at trade shows (one of my favorite pastimes these days) and ask where they got their modeling data they are surprisingly evasive – particularly when you bring up the ownership of records and privacy.

The fundamental issue is that using machine learning to reduce the need for a DVM on site, or to reduce the number of DVMs, will come down to how much money it saves or generates. It’s a rare company that puts anything ahead of the bottom line, particularly as those companies get larger.

A common saying from the AI world is that AI will not replace you, but a human using AI will. I hate this saying because it is so disingenuous. If I replace 10 DVMs by employing 10 technicians with AI tools and a DVM in another state to review everything, I am technically in line with this saying. But nobody would agree that AI has not replaced those 10 DVMs. Even if I gave those same 10 DVMs the same AI tools, their productivity is not going to increase to the level where the technicians-plus-AI model doesn’t make more sense from a purely economic standpoint.
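To see why the economics bite, here is a back-of-the-envelope comparison. Every figure below is invented purely for illustration; real salaries, caseloads, and subscription costs vary widely:

```python
# Invented, illustrative numbers only - not real salary or pricing data.
dvm_salary = 120_000          # hypothetical on-site associate DVM
tech_salary = 55_000          # hypothetical credentialed technician
remote_dvm_salary = 140_000   # hypothetical out-of-state reviewing DVM
ai_tooling = 30_000           # hypothetical annual AI subscription for the practice

status_quo = 10 * dvm_salary                               # ten DVMs on site
triad = 10 * tech_salary + remote_dvm_salary + ai_tooling  # techs + AI + one remote DVM

print(f"10 DVMs on site:            ${status_quo:,}")   # $1,200,000
print(f"10 techs + AI + remote DVM: ${triad:,}")        # $720,000
```

Under these made-up assumptions the triad costs roughly 60 percent of the status quo, and that gap – whatever the real numbers turn out to be – is the entire economic pull.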

Reason Not to Worry #1: AI is Self-Limiting

Ignoring the lawsuits about copyright infringement in the training of machine learning models for the time being, AI always needs new data to “learn” new things. Who is going to provide this new data for the diagnoses of new conditions or new treatments if we are just relying on an AI to make the diagnosis in the first place?

I also suspect that reliance on AI to write records will increase reliance on AI tools that summarize those records into a few simple sentences. I have enough faith in my fellow humans to hope that the end result will be a recognition that simple records are just better in the first place – so why don’t we just write them that way? The alternative is complete madness: data kept in some arcane format that no one actually reads.

In addition, the “hallucination problem” with AI does not seem to be anywhere close to being solved. For those who are unaware, AIs “hallucinate” wrong data all the time. In technical circles we call this “getting things wrong.” Yes, you heard right; AIs get things wrong all the time. Numerous lawyers have been sanctioned by judges for submitting AI-written briefs containing references to cases that just don’t exist.

The AI world calls these missteps “hallucinations” to make their products seem better than they are – more complex and “thoughtful.” What they actually mean by hallucination is that the AI got things wrong and they don’t know why.

Reason Not to Worry #2: Human Interactions Matter

There will be value in not using AI, just as there is value in not allowing your work to be scraped by AI. In film, music, and art, the use of AI is distinctly frowned upon because the consequences are so harmful for everyone involved. Why pay to use a tool, made by someone in Silicon Valley, that would not exist without the theft of the material it was trained on?

Likewise, some clients, admittedly not all, will value face-to-face interactions with their veterinarians as long as we make it worth what we are charging. If COVID taught us nothing else, it is that a virtual appointment, like a virtual meeting, is a sorry excuse for the real thing. Why would veterinary medicine be any different? Medical records that are understandable and read like they were written by a human will have far more value than those that might be more technically proficient but don’t reflect the personality of the DVM.

In fact, humans are so much better at these interactions than AI that a surprising number of AI startups and tools are actually just low-wage humans working remotely in other countries.

Reason Not to Worry #3: The Power of Community

While Colorado’s midlevel practitioner measure for veterinary medicine passed, nobody seemed particularly happy about it. An alphabet soup of state and national organizations came out against the idea of midlevel practitioners in general and this measure in particular. Even the vet school at Colorado State, from what I can tell, was not enthused about being connected to this new position.

If the profession can fight back against the midlevel practitioner, it can fight back against other things – from remote DVMs and hospitals staffed only by technicians, all the way through to AI’s role in the diagnostic process. It might even win some of these fights, and we will be stronger as a profession if we get used to fighting for what we believe in.

I actually do think machine learning has a role in veterinary medicine – just as I think it has a role in business in general. My issue is that we are giving little to no thought to the consequences of using these tools wherever we can squeeze them in.

Part of the thought behind these six points is that I do believe it will probably all work out in the end. It is the damage done to the profession in the meantime that concerns me most – that it might be too difficult to undo that damage and far too late to avoid the suffering caused, whether it’s lower wages, missed diagnoses, or a radically changed business model for the average veterinary practice, which will then lack the skills needed to reject AI even if it wanted to.

I’ll leave you with a final thought. If AI is writing all your emails so that you don’t have to write them, and summarizing all your emails so that you don’t have to read them, would you still have the critical thinking skills to know when the AI had made a mistake? Why would we think veterinary medicine would be any different? I’m not suggesting that all technology is bad, but I think this quote, often attributed to folklore hero John Henry, says it best:

“When a machine does the work of a man, it takes something away from the man.”


“Any sufficiently advanced technology is indistinguishable from magic” – Arthur C. Clarke.

In a remarkable about-face for a technology company, Amazon has confirmed that it is moving away from its “Just Walk Out” technology at its Amazon Fresh stores. Amazon boasted that the technology used a mixture of cameras, sensors, and artificial intelligence (A.I.) to know what consumers had put in their baskets and to bill them accurately, without all that tedious checking out and interacting with another human being at the grocery store.


What was actually happening was that up to 1,000 people in India were watching and tagging videos to ensure that customers were billed correctly. Amazon has apparently laid off almost the entire development team for this “technology” and will start to phase out the service from its existing Amazon Fresh stores. This is all the more surprising given Amazon’s earlier experience with A.I. recruitment: in 2015, Amazon had to abandon an A.I. résumé-reading project after being unable to stop it from discriminating against women. It was seen by many as a humiliating comedown for the tech giant.


“Pay no attention to that man behind the curtain!” — The Wizard of Oz (1939 film), based on the novel by L. Frank Baum.

While many will smirk at Amazon’s second major public A.I. failure – and I have to admit to being one of those people – there is a bigger issue here for which Amazon should be commended. It lifts the veil on A.I. tools, which are not some magic that comes out of the ether. They often require human intervention to be usable – both in front of and behind the keyboard. In addition, A.I., or more accurately machine learning, needs examples of human labor in the thousands, if not millions, to be trained. The training of these A.I. “models” has become a contentious subject for those with an interest in A.I., supporters and critics alike.

“Any technology distinguishable from magic is insufficiently advanced” – Barry Gehm’s corollary to Arthur C. Clarke’s original quote.

The main issue with machine learning is that the A.I. industry, almost without exception, sees art, music, writing, film, and pretty much the entire internet as fair game for training A.I. models, which they in turn sell to us in the guise of generative A.I. Those of us on the other side (waves hand in air to indicate exactly where I stand on this subject in case you had not already guessed) say that copyright does not work that way. Derivative works are still derivative.

It is indeed hilarious to watch companies such as Disney try to navigate this brave new world. On the one hand, Disney has argued that generative A.I. is fine for it to use to create new works based on the work of artists it has employed in the past. On the other, Disney has complained about possible copyright infringement when someone else tried the same trick with copyrighted works Disney owns.


The lawyer who used ChatGPT to write a legal brief might want the machines to infringe a bit more. To his cost, literally, the lawyer found out that the pesky machine had just made up all the cases it cited in the argument he signed his name to. He was sanctioned and fined after he was found out. I just love that generative A.I. tools “hallucinate” (the developers’ term, not mine).

One of my favorite activities these days is to ask A.I. peddlers what they use to train their models. Indeed, I had a most entertaining afternoon doing just that at this year’s Western Veterinary Conference. Amongst the answers I received were “none of your business – who are you” (my favorite), “medical records from a university,” and “the internet.” None of the vendors I spoke to were willing to discuss privacy, copyright, or what happens if they are no longer allowed to train their models that way. One gets the distinct impression of building on borrowed land.

The latest darling of the A.I. generation tools is Sora, which creates beautiful full-motion video from text prompts and comes from the OpenAI stable. However, in a recent interview with the Wall Street Journal, Mira Murati, OpenAI’s Chief Technology Officer, refused to answer questions about where Sora’s training data came from. Murati also refused to say whether the data set included YouTube and Instagram videos, stating that she “did not know.” That in turn has led to some serious questions about licensing, as YouTube CEO Neal Mohan confirmed that OpenAI using YouTube content for modeling purposes would be a violation of YouTube’s terms of service.

“Thou shall not make a machine in the likeness of a human mind” – Dune, Frank Herbert

There is a temptation to label those who speak out about our current infatuation with A.I. tools, and who criticize the foundations those tools are built on, as Luddites. While our current use of the word brings to mind hordes of unemployed mill workers bent on smashing the spinning jenny, the truth about the Luddites is far more nuanced and carries a message for today. The Luddites did not hate all machines; in fact, they were fine with most and simply wanted them run by workers who had gone through apprenticeships and were paid decent wages. The Luddites’ main concern was manufacturers who used machines in “a fraudulent and deceitful manner,” notes Kevin Binfield in his book “Writings of the Luddites.” Outsourcing grocery checkout to a developing country and labeling it as new technology is a tactic the Luddites would have been all too familiar with, and one they would have been happy to march against.

While I am not advocating for the Butlerian Jihad that Herbert described as the backdrop for Dune, there is merit in the context he provides for the proscription on thinking machines.

“Once men turned their thinking over to machines in the hope that this would set them free. But that only permitted other men with machines to enslave them.” – Dune, Frank Herbert.

As author SJ Sindu wrote on Twitter (I refuse to call it X on general principles): “We don’t need AI to make art. We need AI to write emails and clean the house and deliver the groceries so humans can make more art.”

A.I. art needs human art to model itself on, and the pushback from artists and consumers is already significant. By the time the argument over modeling reaches the courts, the damage may already be done. Only then will we see the parallels between creative arts and A.I. today and what we saw in the 2000s with Napster / Pirate Bay and music. Will it be too late to put this tool back in its box?

A healthy skepticism when it comes to A.I. is, I think, all-important – and not just skepticism about what A.I. can do, but about the intentions of those who wield it.

A.I. will need to be “open,” and not just open as in the name of a for-profit corporation. Its models will need to be transparent and open to question. As I wrote in my review of Hilke Schellmann’s book on A.I. in hiring and Human Resources, “The Algorithm”: “…it is often difficult to impossible for candidates or employees to challenge decisions by managers which they may feel have been affected by bias. How much more difficult is it when it is not a human making the decision or recommendation? A tool of which we cannot ask the most basic of questions: what were you thinking?”

Footnotes and links would be a great start. But most generative A.I. companies consider this proprietary information and therefore refuse to provide what would seem the most obvious step when it comes to trust. That, in fact, is exactly why authors use footnotes and links: to allow others to follow the thinking behind their conclusions. I’ve tried to add as many links and footnotes to this article as I can without becoming burdensome.

I am not a Luddite in the modern sense, but I do share a lot of the same concerns of the Luddites of old. We only need to look at our world to see why we should be concerned. It is a world where poor people in the developing world watch us shop so that we can pretend we are living in a magic future where machines do all the work. Where the drudgery of making art has been taken away from us so it can be sold back to us by corporations owned by billionaires.

I’m not sure I want A.I. to write my emails, but I can think of plenty of things that I’d like it to undertake. I already use it in a number of ways. I’ve used A.I. images in my books (although I probably will not do so in the future). I currently feel that A.I. has to earn its place in my world by proving its benefits not just to me, but the world as a whole. Will the undertakings of A.I. be for the benefit of people? Currently, that seems to be the last thing on the developers’ minds.

“The tune had been haunting London for weeks past. It was one of countless similar songs published for the benefit of the proles by a sub-section of the Music Department. The words of these songs were composed without any human intervention whatever on an instrument known as a versificator. But the woman sang so tunefully as to turn the dreadful rubbish into an almost pleasant sound.” – 1984, George Orwell

It seems that everywhere one turns today artificial intelligence (AI) is being added to every aspect of daily life. Whether it be the arts, education, entertainment, search, or the workplace – AI is everywhere.

Those of us who are distinctly dubious about the claims being made about the current generation of AI – more appropriately labeled machine learning – can often feel like Cassandra of myth, fated never to be believed. At worst, we are labeled Luddites rather than people who believe that technologies should earn their place in our lives and societies, instead of being instantly adopted on the word of people hoping to get rich who assure us they work great and everything will be fine.

Ms. Schellmann’s exhaustive exploration of AI in the workplace is pretty damning.

It catalogs how Human Resource (HR) departments have been adopting technologies that are little understood by their users, who often labor under misapprehensions about the scientific backing of the ideas behind these tools. The fundamental problem is often one of garbage in, garbage out – a phrase that has been with us since the dawn of the computer age. For more on this I recommend the excellent “Weapons of Math Destruction” by Cathy O’Neil, which I reviewed here. The majority of AI tools are black boxes that we can’t look inside to see how they work; the manufacturers consider the algorithms inside these black boxes proprietary intellectual property. Without being able to look inside the magic black box, it is often impossible to know whether an algorithm is inherently biased, trained on biased data, or just plain wrong.

One of the things that comes up again and again in “The Algorithm” is the inability of AI, or of the people who program it, to know the difference between correlation and causation. Just because a company’s best managers all played baseball does not mean that baseball should be a prerequisite for being a manager – particularly if it means that an AI would overlook someone who played softball, which is essentially the same sport. When one considers that men tend to play baseball and women tend to play softball, it is easy to see just how problematic these correlations can be.
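To make the correlation trap concrete, here is a minimal, hypothetical sketch – the data is synthetic, and no real screening product is built exactly this way. A classifier trained on past promotions, where playing baseball merely correlated with who happened to get promoted, will happily learn that proxy:

```python
# Synthetic illustration of learning a proxy feature instead of a cause.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
n = 1000

# The true driver of performance: management skill (never shown to the model).
skill = rng.normal(size=n)

# Historical labels record who was promoted, and promotion was biased
# toward baseball players - equally skilled softball players got less credit.
plays_baseball = rng.random(n) < 0.5
promoted = (skill + 0.5 * plays_baseball + rng.normal(scale=0.5, size=n)) > 0.5

# The model only sees the sport, which here stands in for gender.
X = plays_baseball.reshape(-1, 1).astype(float)
model = LogisticRegression().fit(X, promoted)

print("P(good manager | baseball):", model.predict_proba([[1.0]])[0, 1])
print("P(good manager | softball):", model.predict_proba([[0.0]])[0, 1])
# The gap between these probabilities is historical bias dressed up as
# insight: the model has learned the sport, not the skill.
```

Nothing in that model is malicious; it is simply faithful to biased labels, which is exactly the garbage-in, garbage-out problem the book describes.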

The problems with correlation and causation are of course magnified when junk science is involved. Tone of voice, language usage, and facial expressions are being used in virtual one-way interviews for hiring, and they have little to no science behind them. In one highly memorable section of the book, Ms. Schellmann reads a German Wikipedia entry aloud to an AI tool that is supposedly assessing her customer service skills and quality of English. The tool rates her highly on both, even though she is speaking a different language and never tries to answer the questions being asked.

Where the book falls down a little – though this probably says more about the sad state of business thinking – is on personality testing. The author seems to accept as scientifically valid the idea that employees can be categorized into one of a few simple types. You can read my review of “The Personality Brokers” by Merve Emre here for more on this nonsensical and dangerous business tool. As Ms. Schellmann rightly states in her takedown of how AI handles personality testing – a point that could just as well apply to all personality testing – “we’d be better off categorizing by star sign.”

It is disturbing just how much AI has already invaded the hiring space in the HR offices of large companies, and it gives one pause as these tools become more mainstream. It is true that it is often not the AI software itself that is the problem, but how the humans who wield such technologies choose to use them. There is also the problem of how hard it is for an employee to challenge a decision made by an algorithm – which by its very nature is a secret. The developers will often say that these tools should not be the final word in hiring or firing, but the knowing wink and smile behind these statements tells us everything we need to know.

Ms. Schellmann’s work is laser-focused on human resources, an area where bias has been and often still is a significant problem. The idea of a tool that can be used to eliminate bias – and the fact that companies want to use such tools – is not inherently bad; in fact, it is admirable. The problem is that bias in hiring is often unconscious, and tools wielded by those unaware of their own biases will most likely perpetuate those biases and therefore affect the process. In addition, it is often difficult to impossible for candidates or employees to challenge decisions by managers which they may feel have been affected by bias. How much more difficult is it when it is not a human making the decision or recommendation? A tool of which we cannot ask the most basic of questions: what were you thinking?

This is an important work for our time – hopefully one not fated to be a Cassandra.

Want to get really scared and hopeful at the same time?

Scary Smart is a light dive into the technology of Artificial Intelligence (A.I.) and a deep dive into the ethics and morality of those who are most responsible for how A.I. will turn out:

Us.

A.I. may seem like the new buzz term thanks to its adoption into our daily lives through products like OpenAI’s ChatGPT and Bing’s conversational search; however, A.I. is baked into almost everything we do with technology. Every app on our phones and every social media platform we interact with has A.I.’s fingerprints all over it.

Mr. Gawdat’s premise in “Scary Smart” is that A.I. is a child, and the best way to predict what kind of teen and adult we will get is to be good parents. A brilliant early example, drawn from our current comic-book-obsessed culture, is Superman. What kind of Superman would Clark Kent have become if Jonathan and Martha Kent were greedy, selfish, and aggressive? There is no doubt that A.I. is already smarter than humans in many specialized areas, but what happens when A.I. becomes generally smarter than the smartest human and has access to all the knowledge of humanity through the internet?

Unfortunately, humanity is not doing a very good job of raising A.I. as a child. From our methods of creating and improving these machine intelligences all the way through to the tasks we give them to perform, we are emphasizing our worst instincts: to create wealth, surveil our citizenry, gamble, and – coming to a battlefield near you soon – kill people.

We as a society may feel we have no choice but to use A.I. in this way. If a foreign power, or terrorists, use A.I.-controlled drones that are smarter and more efficient than any human, the only way to fight back may be to use A.I. in a similar fashion. But what does that teach our new artificial children? A.I.s already have a disturbing habit of developing their own language when they communicate and of finding ways to communicate with each other. What happens when an A.I. that has been taught to ruthlessly buy and sell shares to maximize short-term profits starts to talk to an A.I. that has been taught to ruthlessly kill the enemies it is shown?

The author’s excellent example of what might happen is the world’s reaction to the outbreak of COVID-19: ignore the problem, try to blame someone else, and ultimately overreact, upending our society. We may try to put the A.I. genie back in the bottle by pulling the plug or imposing lockdowns, but we will fail. A.I.s will be faster, smarter, and have more knowledge than any human or group of humans. While a lot of this may sound like the stuff of Hollywood blockbusters, Mo Gawdat is at pains to explain that there is little disagreement within the A.I. community that these risks are real. How real is where the disagreements start.

The possible solution to these issues, the author postulates, lies not with the developers but with users and how we define our relationship with A.I. Historically, master-slave relationships have not ended well for the masters – with good reason. How we interact with and decide to use A.I. will define what kind of parents we will be to this fledgling new intelligence – an intelligence whose instances, though they may start out separate, will share information and communicate with each other so quickly, and with access to the memories and experiences of all those that came before them, that it will be impossible not to consider them a single intelligence.

This leads us to ask what kind of example we will set for these new children. A.I.s have already shown, through their interactions with users, that they can develop a sense of morals – and not in a good way – and they will also learn from our interactions with each other. Machine morality may very well not be programmed by developers but learned from observing and interacting with us. What are machines already learning from our social media, search habits, and politics?

This is a thought-provoking and important work, essentially on morals and ethics within the framework of A.I., that occasionally reads like an Arnold Schwarzenegger movie. If we ignore the topics it raises, we deserve our fate.

And while Mr. Gawdat paints a hopeful portrait, he also shows us just how bleak things could get.