This article is going to use the term AI, even though I much prefer the more accurate and less marketing-friendly term “machine learning.” But this article is about you, dear reader, not me.
Reason to Worry #1: Mid-Level Practitioners
I should preface this section by stating that, in theory, I have no issue with the creation of a midlevel practitioner role in the vein of Nurse Practitioners in human medicine. My main concern is that the veterinary profession has decidedly steered away from this kind of thing in the past; I’m looking at you, Veterinary Technician Specialists (VTS). Show me an LVT / RVT / CVT with a VTS in dentistry who can’t extract any teeth and I’ll show you a missed opportunity.
Colorado State University (CSU) has become ground zero in the midlevel practitioner debate. The idea of a Veterinary Professional Associate (VPA) was proposed as early as 2009 by a member of CSU and an alliance of multiple non-profit animal welfare / rescue groups. This alliance gathered enough signatures for a proposition which was passed despite significant opposition from just about every veterinary professional body. A more in-depth retelling and an examination of the issues can be found here: https://www.avma.org/news/veterinary-professional-associate-role-moves-ahead
My other concern is that there is so little appetite for a midlevel practitioner in the profession that my “spidey sense” starts to tingle as to what else might come of this VPA.
Reason to Worry #2: The Erosion of the VCPR
Across the country, before, during, and after the pandemic, moves were made to weaken the requirements of the Veterinarian-Client-Patient Relationship (VCPR), ostensibly to allow telemedicine to initiate treatment without a physical exam of the patient. While there are some champions of telemedicine within the profession, clients only seem to have a stomach for it if it costs nothing or if it allows them to buy medications online.
If the pandemic taught us anything, it was that Zoom is a poor substitute for meeting in person. Meanwhile, the push to allow telemedicine to replace an exam continues.
Reason to Worry #3: AI Medical Record Writing Is Not What You Think
It seems like every cloud-based PMS software and every veterinary startup is selling a service that takes the conversation from the exam room and writes up medical records in a format that every vet board will love. Sounds like the perfect product: cheap, quick, and removes the drudgery of a task that just about every veterinarian hates – a task that takes time away from patients and clients.
Ignoring the inevitable veterinary board cases where the AI service simply gets things wrong and the DVM did not double-check, the real issue is where these services are going and what they will turn into.
Machine learning requires data to learn from. It takes large data sets, and as AI commentator Subhasish Baidya puts it, current AI systems are “decent summarization engines and lukewarm guessing machines.”
As Apple recently stated, we are a long way off from “Thinking Machines,” and the hype around Artificial General Intelligence is misplaced.
So if AI needs large data sets in order to work, so what? It just makes the product better, right?
But what if the end product is actually something else entirely?
What else could a machine that learns what is talked about in an exam room do? If the medical record is meant to reflect the diagnostic process, and we are even nice enough to correct the AI tools when they get the record wrong, how long before they start suggesting the diagnosis for us?
At this year’s WVC conference, I was told that this very feature would launch this year.
A Problematic Veterinary Triad
Suggesting a diagnosis based on existing data is not particularly new. The issue is, and I know I start to sound like a conspiracy theorist here, the other two reasons to worry. Because if I can have a midlevel practitioner or even a credentialed veterinary technician perform the exam and talk to the client, and have the results reviewed by an AI that’s reasonably good at coming up with what might be wrong, why do I need a DVM?
“Well, the practice acts, for one!” I hear you say. My response: remember all that weakening of the VCPR? Why does the vet have to be on site? They could be in a different state or even a different country.
We are devaluing what it means to be a veterinarian and the role that they have to play in the care of pets.
I wish I were smart enough to say that nobody else was thinking in these terms, so I could claim my tech bro title. That way I could build my AI startup, combine it with a chain of low-cost veterinary clinics bankrolled by venture capitalists, and then turn around and sell the whole thing for billions. If I am… well then, tech bros, you’re welcome to my idea; my ethics can’t stomach it.
When I talk to vet students about this problematic triad they are horrified – literally horrified. When I talk to people who think about the future of veterinary medicine, they say “of course” and then tell me how they are planning to leverage these things.
When I talk to practice owners, they either reject the premise or shrug their shoulders and say “so what.” Nobody is looking to make AI models that replace upper management at the moment. We are the ones who buy those tools – tech bros are not stupid in that way.
When I talk to AI companies at trade shows (one of my favorite pastimes these days) and ask where they got their modeling data, they are surprisingly evasive, particularly when you bring up the ownership of records and privacy.
The fundamental issue is that whether machine learning is used to reduce the need for a DVM on site, or to reduce the number of DVMs altogether, will come down to how much money it saves or generates. It’s a rare company that puts anything ahead of the bottom line, particularly as those companies get larger.
A common saying from the AI world is that AI will not replace you, but a human using AI will. I hate this saying because it is so disingenuous. If I replace 10 DVMs with 10 technicians using AI tools plus a single DVM in another state reviewing everything, I am technically in line with this saying. But nobody would agree that AI has not replaced those 10 DVMs. Even if I gave those same 10 DVMs the same AI tools, their productivity is not going to increase to the point where the technician-plus-AI model doesn’t make more sense from a purely economic standpoint.
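To make that economic argument concrete, here is a minimal back-of-the-envelope sketch in Python. Every figure in it (salaries, AI subscription costs) is a hypothetical assumption for illustration only, not data from any survey; the point is the shape of the comparison, not the exact numbers.

```python
# Back-of-the-envelope staffing cost comparison (all figures hypothetical).
# Scenario A: 10 on-site DVMs, each using AI tools.
# Scenario B: 10 credentialed technicians using AI tools, plus 1 remote DVM
#             reviewing their work.

DVM_SALARY = 130_000         # assumed annual cost per on-site DVM
TECH_SALARY = 55_000         # assumed annual cost per credentialed technician
REMOTE_DVM_SALARY = 130_000  # assumed annual cost of the remote reviewing DVM
AI_TOOL_COST = 3_000         # assumed annual AI subscription per user

scenario_a = 10 * (DVM_SALARY + AI_TOOL_COST)
scenario_b = 10 * (TECH_SALARY + AI_TOOL_COST) + REMOTE_DVM_SALARY + AI_TOOL_COST

print(f"Scenario A (10 DVMs + AI tools):        ${scenario_a:,}")
print(f"Scenario B (10 techs + AI + remote DVM): ${scenario_b:,}")
print(f"Annual difference:                       ${scenario_a - scenario_b:,}")
```

Even with generous assumptions in the DVMs’ favor, the gap is large enough that a purely bottom-line-driven owner will notice it.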
Reason Not to Worry #1: AI is Self-Limiting
Ignoring the lawsuits about copyright infringement in the training of machine learning models for the time being, AI always needs new data to “learn” new things. Who is going to provide this new data for the diagnoses of new conditions or new treatments if we are just relying on an AI to make the diagnosis in the first place?
I also feel that the reliance on AI to write records will increase the reliance on AI tools that summarize records into a few simple sentences. I have enough faith in my fellow humans to hope that the result will simply be the recognition that concise records are better in the first place, so why not just write them that way? The alternative is complete madness, where data is kept in some arcane format that no one actually reads.
In addition, the “hallucination problem” with AI does not seem to be anywhere close to being solved. For those who are unaware, AIs “hallucinate” wrong information all the time. In technical circles we call this “getting things wrong.” Yes, you heard right: AIs get things wrong all the time. Numerous lawyers have been sanctioned by judges for submitting AI-written briefs that contain references to cases that simply don’t exist.
The AI world calls these missteps “hallucinations” to make their products seem better than they are: more complex and more “thoughtful.” What they actually mean by hallucination is that the AI got things wrong and they don’t know why.
Reason Not to Worry #2: Human Interactions Matter
There will be value in not using AI, just like there is value in not allowing your work to be scraped by AI. In film, music, and art, the use of AI is distinctly frowned upon because the consequences of doing so are so harmful for everyone involved. Why pay to use a tool, made by someone in Silicon Valley, that would not exist without the theft of the material it needed in order to work?
Likewise, some clients, admittedly not all, will value face-to-face interactions with their veterinarians as long as we make it worth what we are charging. If COVID taught us nothing else, it is that a virtual appointment, like a virtual meeting, is a sorry excuse for the real thing. Why would veterinary medicine be any different? Medical records that read like they were written by a human and are understandable will have far more value than those that might be more technically proficient but don’t reflect the personality of the DVM.
In fact, humans are so much better at these interactions than AI that a surprising number of AI startups and tools are actually just low-wage humans working remotely in other countries.
Reason Not to Worry #3: The Power of Community
While the midlevel practitioner measure passed in Colorado, nobody seemed particularly happy about it. An alphabet soup of state and national organizations came out against the idea of midlevel practitioners and this measure in particular. Even the vet school at Colorado State, from what I can tell, was not enthused about being connected to this new position.
If the profession can fight back against the midlevel practitioner, it can fight back against other things, from remote DVMs and hospitals staffed only by technicians all the way through to AI’s role in the diagnostic process. It might even win some of these fights, and we will be stronger as a profession if we get used to fighting for what we believe in.
I do think machine learning has a role in veterinary medicine, just as I think it has a role in business in general. My issue is that we are giving little to no thought to the consequences of squeezing these tools in wherever we can.
Part of the thought behind these six points is that I do believe it will probably all work out in the end. It is the damage done to the profession in the meantime that concerns me most: that it might be too difficult to undo that damage and far too late to avoid the suffering caused, whether it’s lower wages, missed diagnoses, or a radically changed business model for the average veterinary practice, which will now lack the skills needed to reject AI even if it wanted to.
I’ll leave you with a final thought. If AI is writing all your emails so that you don’t have to write them, and summarizing all your emails so that you don’t have to read them, would you still have the critical thinking skills to know when the AI had made a mistake? Why would we think veterinary medicine would be any different? I’m not suggesting that all technology is bad, but I think this quote, often attributed to the folklore hero John Henry, says it best:
“When a machine does the work of a man, it takes something away from the man.”
Image by aytuguluturk from Pixabay



