Artificial Intelligence in Healthcare – what are your professional obligations?

Sep 12, 2024 | Health Blog

On 22 August 2024, the Australian Health Practitioner Regulation Agency (Ahpra) published guidance to practitioners explaining how existing responsibilities in National Boards’ codes of conduct apply when practitioners use Artificial Intelligence (AI) in practice.

AI technology is rapidly being integrated into many areas of healthcare, including the diagnosis and treatment of patients and clients. New AI tools, such as medical scribes, are becoming established within practices to assist in the preparation of clinical documentation and to support workload management and efficiency.

However, as AI advances rapidly and new tools continue to emerge, its safe use in healthcare raises unique practical and ethical issues. Ahpra and the National Boards have identified the following key principles to highlight the existing professional obligations that apply when health practitioners use AI in their practice.

Accountability

A practitioner remains responsible for delivering safe, quality care and for ensuring their own practice meets the professional obligations set out in their Code of Conduct. Practitioners must apply human judgment to any output of AI. The guidelines specify that TGA approval of a tool does not change a practitioner’s responsibility to apply human oversight and judgment, and all tools and software should be tested by the user or organisation to ensure they are fit for purpose prior to their use in clinical practice.

Understanding

Health practitioners using AI in their practice must have the requisite understanding of the AI tool to use it safely and in a way that meets their professional obligations. At a minimum, the practitioner should review the product information for an AI tool, including the populations on which it was trained and tested, its intended use, its limitations, and the clinical contexts in which it should not be used. The guidelines also note the importance of understanding how data is used and stored.

Transparency

Patients and clients should be informed about the use of AI, and health practitioners should consider any concerns raised. The degree of transparency a health practitioner provides will depend on how and when AI is being used. For example, if a practitioner uses AI to record consultations, they will need to provide patients with information about how the AI works and how it may affect them, including how their personal information is collected and used.

Informed consent

Health practitioners need to involve patients in the decision to use AI tools that require input of their personal data or where a patient’s data is required for their care. Practitioners must obtain informed consent from the patient and note the patient’s response in the health record. The guidance materials emphasise that informed consent is particularly important for AI tools that record private conversations, such as consultations, as there may be criminal implications if consent is not obtained before recording.

Ethical and legal issues

The guidelines address further professional obligations in each Board’s Code of Conduct (or equivalent) that are relevant to the use of AI in practice, including:

  • ensuring confidentiality and privacy of the patient/client as required by privacy and health record legislation by checking that data is collected, stored, used and disclosed in accordance with legal requirements, and privacy is not inadvertently breached;
  • supporting the health and safety of Aboriginal and Torres Strait Islander people and all patients/clients from diverse backgrounds by understanding the inherent bias that can exist within data and algorithms used in AI applications and only using them when appropriate;
  • complying with any relevant legislation and/or regulatory requirements;
  • being aware of any governance arrangements established by an employer, hospital or practice to oversee the implementation, use and monitoring of AI to ensure ongoing safety and performance, including your role and responsibilities; and
  • holding appropriate professional indemnity insurance arrangements and consulting your provider if you’re unsure if AI tools used in your practice are covered.

As the use of AI in healthcare grows to improve many aspects of patient care, the importance of implementing strong ethical and legal governance cannot be overstated. Before introducing AI technology within their practice, practitioners should review Ahpra’s guidance materials, in addition to any legislative and policy requirements that may be relevant.

The full guidelines can be found here.

Written by Tom Gillard and Gemma McGrath
