AI Regulation & Guideline Updates - March 2026
This article summarises recent updates from international regulatory bodies on AI-enabled medical devices and clinical decision support software. It aims to help researchers, clinicians, and developers understand the practical implications of evolving guidance from the TGA and FDA. For comprehensive information, we recommend visiting the regulators' websites directly.
Therapeutic Goods Administration (TGA) AI Regulation Update
Australia’s TGA is an important reference point for medical device regulation in New Zealand. Last month, it updated its guidance on AI and medical device software regulation. It clarified that the intended purpose, not the underlying technology, is the sole determinant of whether software is regulated as a medical device. If a tool is intended for diagnosis, prevention, monitoring, prediction, prognosis, or treatment of disease, it is considered a medical device and must be included in the Australian Register of Therapeutic Goods (ARTG).
Changing Scope
The TGA has also highlighted manufacturers' obligations to monitor how software updates change a product's regulatory status. Updates that introduce scope creep, altering the intended use or clinical performance, may cause the product to meet the definition of a medical device.
For example, a digital scribe that records and summarises clinician–patient conversations would not meet the definition of a medical device. However, if a later update adds features that suggest diagnoses or treatments not discussed in the consultation, the intended purpose changes. The developer would need to seek approval for the new intended use and include the product in the ARTG.
The guidance also addresses off-label use, noting that when a new use is identified, manufacturers must either implement controls to prevent it or revise the intended purpose and seek TGA approval. Institutions and users must clearly understand the scope and intended use of AI tools. A health service that deploys a large language model (LLM) for research administration may be operating within acceptable boundaries. However, if it starts using a non-certified LLM, such as ChatGPT, for patient triage, that use would constitute a regulatory violation in Australia.
Clinical Decision Support Systems
If you’ve been following the regulatory scene, you’ll be aware that drawing a line between an administrative tool and a medical device has been the subject of debate. The FDA and TGA have recently updated their guidance on clinical decision support systems (CDSS), aiming to ensure regulatory oversight is proportionate to clinical risk and clarifying when exemptions apply to lower-risk medical device software.
The guidance from Australia’s Therapeutic Goods Administration states that if a CDSS meets all of the following criteria, it does not need to be included in the ARTG:
(a) is intended by its manufacturer to be for the sole purpose of providing or supporting a recommendation to a health professional about preventing, diagnosing, curing or alleviating a disease, ailment, defect or injury in persons; and
(b) is not intended to directly process or analyse a medical image or signal from another medical device (including an in vitro diagnostic device); and
(c) is not intended to replace the clinical judgement of a health professional in relation to making a clinical diagnosis or decision about the treatment of patients.
CDSS Exemption Example
A Clinical Information System (CIS) includes a computerised clinical scoring tool, which digitises the McIsaac criteria for assessing tonsillopharyngitis. A GP enters patient data, such as age, presence or absence of fever, and cough, into the system. The software uses these inputs to determine the probability score according to the referenced scoring tool and outputs the recommended treatment pathways. The GP uses their clinical judgement to decide which, if any, of the recommended options to follow.
Exemption Explanation: This CDSS is exempt. The scoring tool is evidence-based and transparent (the scoring tool is referenced and published) and supports clinical decision-making without replacing the GP’s judgement. It does not process medical images or signals from other medical devices, and its recommendations can be independently verified by the clinician.
The guidance highlights the importance of transparency in such systems, sometimes called “glass box” models, because the decision-making process can be clearly seen and the accuracy of the information reviewed by the clinician, allowing them to apply their judgement. Opaque “black box” systems used to generate recommendations cannot be exempt from ARTG inclusion. This is particularly relevant for systems leveraging deep neural networks or other architectures that are not easily explainable.
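To make the “glass box” point concrete, the sketch below shows how a scoring tool like the one in the example could be implemented in a handful of reviewable rules. It encodes the published McIsaac (modified Centor) criteria; the field names, thresholds, and recommendation strings are our own illustrative assumptions, not part of the TGA guidance or any particular CIS, and a real system would surface the referenced scoring tool and leave the decision with the clinician.

```python
"""Minimal sketch of a transparent ("glass box") clinical scoring tool.

Illustrative only: it encodes the published McIsaac (modified Centor)
criteria as simple rules a clinician can inspect and verify. The
pathway text and thresholds below are placeholders, not endorsed
clinical guidance.
"""

from dataclasses import dataclass


@dataclass
class McIsaacInputs:
    age_years: int
    temperature_over_38c: bool
    cough_absent: bool
    tender_anterior_cervical_nodes: bool
    tonsillar_swelling_or_exudate: bool


def mcisaac_score(inputs: McIsaacInputs) -> int:
    """Compute the McIsaac score from the published criteria."""
    score = 0
    score += 1 if inputs.temperature_over_38c else 0
    score += 1 if inputs.cough_absent else 0
    score += 1 if inputs.tender_anterior_cervical_nodes else 0
    score += 1 if inputs.tonsillar_swelling_or_exudate else 0
    # Age adjustment: 3-14 years +1, 15-44 years 0, 45 years and over -1.
    if 3 <= inputs.age_years <= 14:
        score += 1
    elif inputs.age_years >= 45:
        score -= 1
    return score


def suggested_pathways(score: int) -> str:
    """Placeholder pathway text; the GP applies their own judgement."""
    if score <= 1:
        return "Low score: testing/antibiotics usually not indicated."
    if score <= 3:
        return "Intermediate score: consider throat swab or point-of-care test."
    return "High score: consider testing and/or treatment per local guidance."


if __name__ == "__main__":
    patient = McIsaacInputs(
        age_years=9,
        temperature_over_38c=True,
        cough_absent=True,
        tender_anterior_cervical_nodes=False,
        tonsillar_swelling_or_exudate=True,
    )
    s = mcisaac_score(patient)
    print(f"McIsaac score: {s} -> {suggested_pathways(s)}")
```

Because every rule and weight is visible and traceable to the referenced criteria, a clinician (or a regulator) can check each recommendation, which is exactly the property the exemption relies on.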
Model Cards
Last year, the FDA released draft guidelines on lifecycle management and marketing for AI-enabled software. While a finalised version has not yet been published, one recommendation is gaining traction: developers providing “Model Cards”, often described as nutrition labels for AI models. These cards summarise key information, including intended use, user, workflow, model architecture, training data, performance measures, limitations, and product lifecycle management.
What’s in the tin? – Model Cards for AI Models (“Nutritional Label”)

The Coalition for Health AI (CHAI) is one group that has developed a Model Card template. This version has recently been adopted by HL7 International in their AI Transparency on FHIR Implementation Guide, which provides a structured approach to understanding how AI algorithms are used in the production or manipulation of health data, supporting transparency and responsible AI use.
Their model card template and schema are available on GitHub.
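As a rough illustration of what such a card captures, the sketch below defines a simple model-card structure in Python. The field names mirror the elements listed above (intended use, user, workflow, architecture, training data, performance, limitations, lifecycle management); they are our own assumptions and do not reproduce the CHAI schema or the HL7 FHIR implementation guide, and all example values are hypothetical.

```python
"""Illustrative model-card sketch (not the CHAI or HL7 FHIR schema).

A real implementation should follow the published CHAI template and
schema; this simply shows the kind of information a card carries.
"""

from dataclasses import dataclass, field, asdict
import json


@dataclass
class ModelCard:
    model_name: str
    intended_use: str
    intended_user: str
    clinical_workflow: str
    model_architecture: str
    training_data: str
    performance_measures: dict[str, float] = field(default_factory=dict)
    limitations: list[str] = field(default_factory=list)
    lifecycle_management: str = ""

    def to_json(self) -> str:
        """Serialise the card so it can be shipped alongside the model."""
        return json.dumps(asdict(self), indent=2)


if __name__ == "__main__":
    # All values below are placeholders for a hypothetical model.
    card = ModelCard(
        model_name="example-sepsis-risk-v1",
        intended_use="Flag adult inpatients at elevated risk of sepsis",
        intended_user="Ward clinicians",
        clinical_workflow="Alert reviewed by a clinician; no autonomous action",
        model_architecture="Gradient-boosted trees over routine vital signs",
        training_data="De-identified EHR data from participating hospitals",
        performance_measures={"AUROC": 0.85, "sensitivity": 0.80},
        limitations=["Not validated in paediatric populations"],
        lifecycle_management="Quarterly performance monitoring and retraining review",
    )
    print(card.to_json())
```

Keeping the card in a structured, machine-readable form is what allows it to travel with the model and be surfaced in downstream systems, which is the intent behind the FHIR-based transparency work mentioned above.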
This article was published on 23 March 2026. Regulatory changes and guidelines updates after this date may affect its accuracy.
Content Manager:
Nathan Baker, AI in Health Research Network