
Ensuring Responsible and Ethical AI for Medical Record Reviews

Influenced by AI’s rapid movement into popular culture, questions of ethics and responsibility in the use of AI are quickly becoming a regular part of the conversation.

Published on:
September 9, 2024

The potential for artificial intelligence (AI) to transform industries, streamline workflows, and deliver a competitive edge has been much discussed in recent years. Influenced by AI’s rapid movement into popular culture, questions of ethics and responsibility in the use of AI are quickly becoming a regular part of the conversation. How can organizations keep patient health information (PHI) safe while using AI? How can organizations make sure AI is accurate? How can professionals know when AI is safe, or ethical, to use?

Understanding the answers to these questions begins with understanding that AI is only a tool. It is up to the end user to commit to responsible and ethical AI use, and to use their professional judgment in the same way they would if AI were not being leveraged. Using AI technology can massively speed up the process of sorting medical documents used for claims. However, it’s important to note that these tools do not replace the human professional – they merely make the ‘tools of the trade’ more easily accessible and much quicker to use.

What does ethical AI mean in medical records and claims processing?

Artificial intelligence is a machine process that adopts characteristics of human intelligence, such as learning (collecting data and understanding what to do with that information), reasoning (using logic or rules), and self-correction. Most importantly, AI’s role in processing medical records or generating summaries goes back further than Large Language Models (LLMs) like ChatGPT. AI tools have been in development since the 1950s, and many of the first iterations of AI sought to replicate the work of human doctors or other academic experts.

Further developments such as natural language processing (NLP), machine vision and speech recognition were all efforts to make the “human machine” think, hear, or see. While all of these features can enhance task automation, productivity and decision-making, organizations need to ensure these systems adhere to ethical standards and societal values – and, in the claims industry, standards around keeping sensitive information (such as patient health information, or PHI) safe.

Why are there so many concerns about AI privacy?

Understanding where patient data goes and how it is used in an extremely large AI model is difficult, even for experts. However, the basics of how these models handle data, and what that means for privacy, are not impossible to grasp. In a hypothetical example, say the following sentence appears in a patient medical record:

“On February 3, patient John Doe saw Dr. Nancy Drew for a minor head injury he sustained from a collision that occurred while driving a truck at work.”

When your human brain reads a phrase, it pulls out the most important words and associations. If you’re an adult reading in your native language or a language you know well, you’ve read plenty of sentences and repeated this process thousands of times: you pull out the patient’s name, his head injury, the fact that he was in a collision, or even (if you’re used to seeing workers’ compensation claims) that John was at work.

AI mimics a similar analysis. An LLM doesn’t “read” the sentence, but it does break it down into parts: “February,” “3,” “patient,” “collision,” and so on. The model decides which parts are most important and gains context about the sentence as a whole, so it can answer questions later. Who is being treated, Nancy or John? The patient. John. Will John be okay? Probably.
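To make that first step concrete, here is a minimal Python sketch that splits the example sentence into word and punctuation pieces. It is purely illustrative: real LLMs use subword tokenizers learned from large volumes of text, and nothing here reflects any particular vendor’s pipeline.

```python
import re

# The hypothetical medical-record sentence from above.
sentence = ("On February 3, patient John Doe saw Dr. Nancy Drew for a minor "
            "head injury he sustained from a collision that occurred while "
            "driving a truck at work.")

# A simplified stand-in for tokenization: keep words and punctuation as
# separate pieces. Real models split text into learned subword units instead.
tokens = re.findall(r"\w+|[^\w\s]", sentence)

print(tokens[:8])
# ['On', 'February', '3', ',', 'patient', 'John', 'Doe', 'saw']
```

From pieces like these, the model learns which parts matter most in context, the same kind of weighting the reading analogy below gets at.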

With the same sentence, a ten-year-old elementary school student might skim over the doctor’s name and location. If you ask them what happened, they might tell you John crashed his truck at work. If you ask a five-year-old, they might only be able to tell you John got hurt. Unless either child is an extremely prolific reader, chances are that neither of them has seen many sentences similar to this one. They need experience and examples before they can pull out the key details the way an adult does.

Artificial intelligence models and LLMs require the same learning process and huge amounts of data to be able to answer any questions with the same insight a human professional would have. Some of this data is patient health data, which must be kept private. This is where privacy becomes a concern. 

How can claims professionals ensure PHI is kept safe?

The safest way to use AI while keeping patient information protected is deidentification: before sharing a document like the example above with an AI tool, any personal information in the claims documents, such as the patient’s name and any details that could identify them, is replaced. For example,

“On a recent date, the patient saw a doctor for a minor head injury sustained from a collision that occurred while driving a vehicle at work.”

Or even

“On [date], the patient saw a family doctor for injuries sustained in a work-related incident involving a vehicle.”

Here, the amount of patient data going into the model is kept limited, and what does go in is anonymous. Details such as dates, names, injury specifics, patient gender, and vehicle type are removed. Choosing a HIPAA-compliant provider and using a human reviewer helps ensure the AI platform does not store or inadvertently reveal information about the patient.
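As a rough illustration of how deidentification can work, the sketch below swaps a few identifying details for neutral placeholders using hand-written patterns. The patterns, placeholders, and function name are hypothetical; real deidentification tools rely on trained entity-recognition models, name and medical dictionaries, and human review rather than a handful of regular expressions.

```python
import re

# Hypothetical redaction rules: (pattern, replacement). A real system would
# cover far more identifier types (addresses, record numbers, phone numbers, etc.).
REDACTIONS = [
    (r"\b(January|February|March|April|May|June|July|August|September|"
     r"October|November|December)\s+\d{1,2}\b", "[date]"),
    (r"\b(Dr|Mr|Ms|Mrs)\.\s+[A-Z][a-z]+(\s+[A-Z][a-z]+)?", "a doctor"),
    (r"\bpatient\s+[A-Z][a-z]+\s+[A-Z][a-z]+\b", "the patient"),
]

def deidentify(text: str) -> str:
    """Replace identifying details with neutral placeholders."""
    for pattern, replacement in REDACTIONS:
        text = re.sub(pattern, replacement, text)
    return text

record = ("On February 3, patient John Doe saw Dr. Nancy Drew for a minor "
          "head injury he sustained from a collision that occurred while "
          "driving a truck at work.")

print(deidentify(record))
# On [date], the patient saw a doctor for a minor head injury he sustained
# from a collision that occurred while driving a truck at work.
```

Even in this toy pass, details like the pronoun “he” and the vehicle type slip through, which is exactly why a human reviewer stays part of the workflow.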

Regulatory Landscape for AI

AI regulations are evolving, as are regulations in the broader claims processing field. The EU’s GDPR focuses on data privacy, while the US has published a non-binding Blueprint for an AI Bill of Rights. In Canada, Bill C-27 includes the Artificial Intelligence and Data Act (AIDA) and updated privacy laws. Quebec’s Law 25 requires transparency and privacy assessments for AI tools.

As the regulatory landscape evolves, the businesses looking to adopt these tools will need to evolve, too. Responsible AI use maintains the ethical decision-making and human-centered workflows of your profession – whether it’s medicine, insurance, or law. While artificial intelligence tools can dramatically speed up the way we work, human professionals and decision makers still need to be involved, making responsible, ethical choices that govern AI use.

Kristen Campbell
Content Writer

Kristen is the co-founder and Director of Content at Skeleton Krew, a B2B marketing agency focused on growth in tech, software, and startups. She has written for a wide variety of companies in the fields of healthcare, banking, and technology. In her spare time, she enjoys writing stories, reading stories, and going on long walks (to think about her stories).
