Professionals are encouraged to be open-minded while remaining aware of the ethical risks.
The British Veterinary Association (BVA) has released an eight-point policy statement outlining its position on the use of artificial intelligence (AI) in the veterinary profession.
Designed to support the profession across clinical practice, education, research, epidemiology, admin and practice management, the eight principles advise vets on how to:
1. Use AI as a tool to support, not replace, the vet.
2. Understand how AI technologies work and feel confident in using them.
3. Actively participate in the design, development, and validation of AI tools for animal health and welfare.
4. Understand how an AI system was trained and the contexts in which bias may appear.
5. Be confident in understanding how AI technologies are advancing and adapt to potentially rapid changes in the tools available.
6. Ensure data privacy and client consent.
7. Oversee AI use in clinical practice and be responsible for final decisions.
8. Be able to easily access what data was used and explain how an AI tool reached its conclusion.
The position statement calls on vets to keep a positive open-minded approach to AI, while remaining aware of its potential ethical implications.
Its recommendations include a call for veterinary workplaces to develop AI policies, undertake risk assessments, and develop resources to help team members understand how AI tools work.
It also calls on the wider sector to set clear rules as to how veterinary AI systems should be governed, establish proper regulations for their use, and for AI tech developers to be more transparent.
BVA president Dr Rob Williams said: “The AI revolution is here to stay and brings with it both important opportunities and challenges for the veterinary profession. Having a positive and open-minded approach that views AI as a tool to support vets and the wider vet team is the best way forward to make sure that the profession is confident applying these technologies in their day-to-day work.
“The general principles developed in BVA’s new policy position offer a timely and helpful framework for all veterinary workplaces considering the safe and effective use of AI technologies”.
He continued: “Vets must also be involved in the development process for AI tools as early and as frequently as possible so the profession can lead from the front when applying these emerging technologies, to ensure we continue to deliver on our number one priority of supporting the highest levels of animal health and welfare.”
According to data from BVA's Voice of the Veterinary Profession survey, one in five vets (21%) are already using AI for tasks such as data interpretation, diagnostic testing and time saving. However, overreliance on AI undermining human skills, and results being interpreted without context or follow-up checks, were highlighted as potential risks.
To address this, the BVA has created a risk pyramid classifying the risks of AI use in veterinary settings from ‘minimal’ to ‘unacceptable’. It has also produced a set of questions vets should ask software companies when undertaking risk assessments.
Dr Williams added: “We know that the degree of risk in AI use exponentially increases with the degree of autonomy an AI tool has. This risk pyramid is a handy reference for vets looking to incorporate AI in their work, with tasks lower down the pyramid such as marketing or clerical tasks able to be undertaken with more confidence of safety than those closer to the top, such as automated diagnosis or clinical decision making.
“As use cases move closer to the top, the importance of following the principles set out in BVA’s policy position becomes more critical as the impacts on animal health and welfare, professional standards, and people will be more significant. I’d urge all colleagues to take a look at this risk pyramid alongside the general principles.”
BVA’s full position statement on the use of AI in the veterinary profession is available at bva.co.uk.
Image © Raker/Shutterstock.