
This article will use the term AI, even though I much prefer the more accurate and less marketing-friendly term “machine learning.” But this article is about you, dear reader, not me.

Reason to Worry #1: Mid-Level Practitioners

I should preface this section by stating that, in theory, I have no issue with the creation of a midlevel practitioner in the vein of Nurse Practitioners in human medicine. My main concern is that the veterinary profession has decidedly steered away from this kind of thing in the past; I’m looking at you, Veterinary Technician Specialists (VTS). Show me an LVT / RVT / CVT with a VTS in dentistry who can’t extract any teeth and I’ll show you a missed opportunity.

Colorado State University (CSU) has become ground zero in the midlevel practitioner debate. The idea of a Veterinary Professional Associate (VPA) was proposed as early as 2009 by a member of CSU and an alliance of multiple non-profit animal welfare and rescue groups. This alliance gathered enough signatures for a ballot proposition, which passed despite significant opposition from just about every veterinary professional body. A more in-depth retelling and an examination of the issues can be found here: https://www.avma.org/news/veterinary-professional-associate-role-moves-ahead

My other concern is that there is so little appetite for a midlevel practitioner within the profession that my “spidey sense” starts to tingle as to what else might come of this VPA.

More on this later…

Reason to Worry #2: The Erosion of the VCPR

Across the country, before, during, and after the pandemic, moves were made to reduce the requirements of the Veterinarian-Client-Patient Relationship (VCPR).

Ostensibly, this was to allow telemedicine to initiate treatment without the need for a physical exam of the patient. While there are some champions of telemedicine within the profession, clients only seem to have a stomach for it if it costs nothing or if it allows them to buy medications online.

If the pandemic taught us anything, it was that Zoom is a poor substitute for meeting in person. Meanwhile, the push to allow telemedicine to replace an exam continues.

Reason to Worry #3: AI medical record writing is not what you think.

It seems like every cloud-based practice management system (PMS) and every veterinary startup is selling a service that takes the conversation from the exam room and writes up medical records in a format that every vet board will love. It sounds like the perfect product: cheap, quick, and it removes the drudgery of a task that just about every veterinarian hates – a task that takes time away from patients and clients.

Ignoring the inevitable veterinary board cases where the AI service just gets things wrong and the DVM did not double-check, my real concern is where these services are going and what they will turn into.

Machine learning requires data to learn from. It takes large data sets, and as AI commentator Subhasish Baidya puts it, today’s AI systems are “decent summarization engines and lukewarm guessing machines.”

As Apple recently stated, we are a long way off from “Thinking Machines,” and the hype about Artificial General Intelligence is misplaced.

So if AI needs large data sets in order to work, so what? That just makes the product better, right?

But what if the end product is actually something else entirely?

What else could a machine that learns what is talked about in an exam room do? If the medical record is meant to reflect the diagnostic process, and we are even nice enough to correct the AI tools when they get the record wrong, how long before they start suggesting the diagnosis for us?

At this year’s WVC conference, I was told that this capability would launch this year.

A Problematic Veterinary Triad

Suggesting a diagnosis based on existing data is not particularly new. The issue is, and I know I start to sound like a conspiracy theorist here, the other two reasons to worry. Because if I can have a midlevel practitioner or even a credentialed veterinary technician perform the exam and talk to the client, and have the results reviewed by an AI that’s reasonably good at coming up with what might be wrong, why do I need a DVM?

“Well, the practice acts, for one!” I hear you say. My response: remember all that weakening of the VCPR? Why does the vet have to be on site? They could be in a different state or even a different country.

We are devaluing what it means to be a veterinarian and the role that they have to play in the care of pets.

I wish I were so smart that I could say nobody else was thinking in these terms and claim my tech bro title. That way I could launch my AI startup, combine it with a chain of low-cost veterinary clinics bankrolled by venture capitalists, and then turn around and sell the whole thing for billions. If I am… well then, tech bros, you’re welcome to my idea – my ethics can’t stomach it.

When I talk to vet students about this problematic triad they are horrified – literally horrified. When I talk to people who think about the future of veterinary medicine, they say “of course” and then tell me how they are planning to leverage these things.

When I talk to practice owners, they either reject the premise or shrug their shoulders and say “so what.” Nobody is looking to make AI models that replace upper management at the moment. We are the ones who buy those tools – tech bros are not stupid in that way.

When I talk to AI companies at trade shows (one of my favorite pastimes these days) and ask where they got their modeling data, they are surprisingly evasive – particularly when you bring up the ownership of records and privacy.

The fundamental issue is that using machine learning to reduce the need for a DVM on site, or to reduce the number of DVMs, will come down to how much money it saves or generates. It’s a rare company that puts anything ahead of the bottom line, particularly as those companies get larger.

A common saying from the AI world is that AI will not replace you, but a human using AI will. I hate this saying because it is so disingenuous. If I replace 10 DVMs by employing 10 technicians with AI tools and a DVM in another state to review everything, I am technically in line with this quote. But nobody would agree that AI has not replaced those 10 DVMs. And even if I gave the same 10 DVMs the same AI tools, their productivity is not going to increase to the level where the technicians and AI don’t make more sense from a purely economic standpoint.

Reason Not to Worry #1: AI is Self-Limiting

Ignoring the lawsuits about copyright infringement in the training of machine learning models for the time being, AI always needs new data to “learn” new things. Who is going to provide this new data for the diagnoses of new conditions or new treatments if we are just relying on an AI to make the diagnosis in the first place?

I also suspect that a reliance on AI to write records will increase the reliance on AI tools that summarize those records into a few simple sentences. I have enough faith in my fellow humans to hope that the end result will simply be the recognition that simple records are better in the first place, and that we should just write them that way. The alternative is complete madness: data kept in some arcane format that no one actually reads.

In addition, the “hallucination problem” with AI does not seem to be anywhere close to being solved. For those who are unaware, AIs “hallucinate” wrong information all the time. In technical circles we call this “getting things wrong.” Yes, you heard right: AIs get things wrong all the time. Numerous lawyers have been sanctioned by judges for submitting AI-written briefs that reference cases that simply don’t exist.

The AI world calls these missteps “hallucinations” to make their products seem better than they are – more complex and “thoughtful.” What they actually mean by hallucination is that the AI got things wrong and they don’t know why.

Reason Not to Worry #2: Human Interactions Matter

There will be value in not using AI, just as there is value in not allowing your work to be scraped by AI. In film, music, and art, the use of AI is already distinctly frowned upon because the consequences of using it are so harmful for everyone involved. Why pay to use a tool, made by someone in Silicon Valley, that would not exist without the theft of the very material it needed in order to work?

Likewise, some clients, admittedly not all, will value face-to-face interactions with their veterinarians as long as we make it worth what we are charging. If COVID taught us nothing else, it is that a virtual appointment, like a virtual meeting, is a sorry excuse for the real thing. Why would veterinary medicine be any different? Medical records that are understandable and read like they were written by a human will have far more value than those that might be more technically proficient but don’t reflect the personality of the DVM.

In fact, humans are so much better at these interactions than AI that a surprising number of AI startups and tools are actually just low-wage humans working remotely in other countries.

Reason Not to Worry #3: The Power of Community

While the midlevel practitioner proposition passed in Colorado, nobody seemed particularly happy about it. An alphabet soup of state and national organizations came out against the idea of midlevel practitioners in general and this measure in particular. Even the vet school at Colorado State, from what I can tell, was not enthused about being connected to this new position.

If the profession can fight back against the midlevel practitioner, it can fight back against other things, from remote DVMs and hospitals staffed only by technicians all the way through to AI’s role in the diagnostic process. It might even win some of these fights, and we will be stronger as a profession if we get used to fighting for what we believe in.

I do think machine learning has a role in veterinary medicine – just as I think it has a role in business in general. My issue is that we are giving little to no thought to the consequences of using these tools wherever we can squeeze them in.

Part of the thought behind these six points is that I do believe it will probably all work out in the end. It is the damage done to the profession in the meantime that concerns me most: that it might be too difficult to undo that damage and far too late to avoid the suffering caused, whether it’s lower wages, missed diagnoses, or a radically changed business model for the average veterinary practice, which will now lack the skills needed to reject AI even if it wanted to.

I’ll leave you with a final thought. If AI is writing all your emails so that you don’t have to write them, and summarizing all your emails so that you don’t have to read them, would you still have the critical thinking skills to know when the AI had made a mistake? Why would we think veterinary medicine would be any different? I’m not suggesting that all technology is bad, but I think this quote, often attributed to folklore hero John Henry, says it best:

“When a machine does the work of a man, it takes something away from the man.”

Image by aytuguluturk from Pixabay

Blood in the Machine cover

What comes to mind when you think of the term “Luddite?”

For the more historically minded of you, it might be that they were a British 19th-century grassroots movement that opposed, and smashed, the machinery that was costing them their jobs at the start of the Industrial Revolution.

More commonly, “Luddite” is used as an epithet for someone who refuses to embrace change, usually technological, or who insists on doing things the hard way when a simple technological solution exists. Reactionary idiots who were doomed and dumb. Malcontent losers.

These are both corruptions that were deliberately foisted on the public by those who had the most to gain by discrediting the movement: the State and the “big tech” entrepreneurs of their day.

In “Blood in the Machine: The Origins of the Rebellion Against Big Tech,” Brian Merchant does a most remarkable thing for a book on a historical subject. He places the events of the early 1800s in the context of today and the challenges we currently face when it comes to technology and work.

The first half of the book is a history of the Luddite rebellion: its early beginnings, with workers refusing to cooperate with inventors on the design of machinery clearly created to put them out of work, through civil disobedience and protest, and ultimately to the very brink of civil war. While the first half occasionally highlights just how close some of the challenges facing 19th-century weavers are to modern-day concerns, it is the second half of the book that focuses on the “gig economy,” A.I., and other forms of modern automation.

What becomes clear throughout the book is that the Luddites were not sheep afraid of change. This was a nuanced, decentralized movement with clear goals; it was willing to embrace technology and change, but wanted workers’ needs and livelihoods taken into consideration. Weavers were artisans who worked for themselves, set their own hours, and involved the whole family in their work – but on their own terms. The industrialized mills that replaced them employed mostly women and children working long hours for low pay and producing a lower-quality product that was “good enough.”

A theme that crops up in both the 19th-century and 21st-century sections is the replacement of skilled workers with cheaper, lower-skilled workers. Mr. Merchant also spotlights the outsized role that venture capitalists play in this dynamic – financing a cheaper alternative until the existing industry is driven to the point of bankruptcy, then either raising prices or lowering the wages of those now forced to work for the bright and shiny new thing: Uber and Lyft, I’m looking at you.

The Luddites were met with brutal resistance. Factories became fortresses and soldiers were garrisoned in every northern town. This was a time when Britain was fighting a deeply unpopular war with France and heading into another with America. Dozens of Luddites were hanged, mostly for the breaking of machinery, and those who took the Luddite oath were often transported to Australia – a life sentence at the time. All for opposing profit over people.

While warning of the impact of disruptive change, both in the past and the present day, the author also notes how people are already pushing back against the same type of change the Luddites fought over two centuries ago: the strikes, organizing, and protests by Uber and Lyft drivers to be classified as employees rather than contract workers; the organizing at Amazon during COVID-19 over safety concerns; the Hollywood writers’ strike over the use of A.I. technology.

These are not isolated incidents.

They form a pattern of technology being imposed on people without thought as to its impact, and of technology that is supposed to alleviate work instead degrading it. The very lexicon of Silicon Valley points to this: “disruption,” “move fast and break things,” “revolutionize.” To ignore these warning signs could quite possibly doom us to repeat the mistakes of the past.

There is often, from both Hollywood and the media, a hysteria that “the robots are coming for your job.” As Brian Merchant points out, the robots are not coming for anything. It is the people who run companies and implement technologies who decide the impact those technologies will have on people’s jobs, and ultimately their lives. This needs to be a discussion of its own, separate from the equally necessary discussions of how machine learning models are trained and how venture capital distorts the business landscape. All these discussions are related, but we have real choices ahead that we will all need to make.

It is interesting to reflect on what might have been if the Luddites had won. There would still have been an industrial revolution, but perhaps the antagonistic relationship between management and employees, whether real or perceived, might have had a very different starting point. We can’t change what happened to the Luddites, but all the indicators suggest we have a similar opportunity ahead of us now.

This is a book for our times and a warning about one possible future.