Across hospitals in the United States, the United Kingdom, and Australia, AI systems are being used to analyse patient scans, flag abnormalities, prioritise emergency cases, and in some instances recommend treatment pathways. Most patients have no idea this is happening. The consent forms they sign before treatment do not describe it in any language that would alert them. The doctors using these tools often do not volunteer the information.
This is not a hypothetical future scenario. It is the current state of medicine, and the regulatory frameworks that govern it are years behind the technology.
What AI Is Actually Doing in Hospitals Right Now
The applications range from impressive to alarming depending on how you look at them. Systems developed by Google’s DeepMind have been used in the UK’s NHS to detect eye disease, including diabetic retinopathy, from retinal scans with accuracy that rivals specialist ophthalmologists. A system called Sepsis Watch at Duke University Hospital has been credited with reducing sepsis mortality by flagging at-risk patients earlier than human nurses typically would.
These are genuine achievements. The problem is not that AI is being used in medicine. The problem is the gap between where the technology performs reliably and where it gets deployed anyway, because it is cheaper and faster than hiring specialists.
In radiology, AI tools are being used to pre-screen chest X-rays and CT scans before a human radiologist reviews them. In practice, this means the AI’s flags shape what the radiologist pays attention to. Findings the AI did not flag get less scrutiny. This is a form of automation bias — humans deferring to machine outputs in ways that reduce rather than augment their independent judgement.
The Failure Modes Nobody Is Advertising
AI diagnostic tools are trained on datasets that are not representative of all patient populations. Multiple studies have found that dermatology AI systems perform significantly worse on darker skin tones because the training data was overwhelmingly composed of images of lighter-skinned patients. Cardiac risk prediction models trained primarily on male patient data underperform when applied to women.
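To see how that skew hides in plain sight, consider a minimal sketch in Python. Every number in it is invented for illustration, not drawn from any real system: the point is that when an under-represented group is a small fraction of the test set, a single headline metric can look healthy while performance on that group is far worse.

```python
# Hypothetical illustration: an aggregate metric can mask subgroup failure.
# All figures here are invented to show the arithmetic, not measured.
import numpy as np

rng = np.random.default_rng(42)

def simulate_detections(n_cases, sensitivity):
    """Simulate a screening model on n_cases truly-positive patients,
    detecting each with probability `sensitivity`."""
    return (rng.random(n_cases) < sensitivity).astype(int)

# Assume the model catches 92% of disease in the well-represented group
# but only 65% in the under-represented one (both figures hypothetical).
group_a = simulate_detections(900, 0.92)   # 90% of the test set
group_b = simulate_detections(100, 0.65)   # 10% of the test set

print(f"Overall sensitivity: {np.concatenate([group_a, group_b]).mean():.2f}")
print(f"Group A sensitivity: {group_a.mean():.2f}")
print(f"Group B sensitivity: {group_b.mean():.2f}")
# The overall figure lands near 0.89, close enough to group A's that the
# failure on group B is invisible unless metrics are reported per group.
```

Nothing stops a vendor from reporting only the first number.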
When these systems fail, they do not fail with uncertainty. They fail with confidence. An AI that flags a benign lesion as malignant, or misses a cancer in a patient whose presentation does not match its training data, does so without any indication that it is operating outside its reliable range.
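There is a mechanical reason for that confidence. Most classifiers turn raw scores into probability-like outputs through a softmax, and nothing in that arithmetic encodes whether the input resembles the training data. A minimal sketch, with invented logits standing in for a real model’s outputs:

```python
# Hypothetical illustration: a softmax score says which class the model
# prefers, not whether the input looks like anything it was trained on.
import numpy as np

def softmax(logits):
    """Convert raw model scores into a probability-like distribution."""
    shifted = np.exp(logits - logits.max())   # subtract max for stability
    return shifted / shifted.sum()

# Invented scores for a familiar case, similar to the training data...
familiar = np.array([4.0, 0.5, -1.0])
# ...and for a case far outside it. Nothing in the model's arithmetic
# caps how extreme these scores can be on unfamiliar inputs.
unfamiliar = np.array([5.5, -0.5, -2.0])

print(softmax(familiar))     # ~[0.96, 0.03, 0.01]: confident, and justified
print(softmax(unfamiliar))   # ~[1.00, 0.00, 0.00]: more confident, baseless
```

Methods for flagging out-of-distribution inputs exist, but they are add-ons rather than defaults, and nothing obliges a deployed diagnostic tool to use one.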
The Consent Problem
When you sign a consent form before a procedure, you are consenting to treatment by the medical team as described. You are almost certainly not being told that an algorithm trained on a dataset with known demographic biases is part of the diagnostic pipeline, or that the radiologist reviewing your scan is doing so after an AI has already told them what to look for.
Medical ethicists have been raising this issue for several years. The response from hospitals has generally been to argue that AI is a tool like any other, and that doctors using calculators or decision-support software do not need to obtain specific consent for each tool they use. This argument is technically defensible but practically inadequate given the scale of influence these systems now have.
What You Can Do
Ask your doctor directly whether AI tools are being used in your diagnosis or treatment planning. Request a second human opinion on any AI-flagged finding before agreeing to an invasive intervention. This is not paranoia. AI is already doing work that was previously reserved for licensed professionals, and the accountability frameworks have not caught up. In that gap, the only person reliably looking out for you is you.