Responsible Use of AI for the Medical Claims Industry

Reliable, consistent, and accurate AI use is a reality, but responsible AI use is up to the user and the practices that they choose. Here are a few of our recommendations for ethical AI use.

Published on:
September 3, 2024

Much of the news in the past few years has centered on generative artificial intelligence (AI). Generative AI became an “overnight sensation” in 2022 with the high-profile release of ChatGPT. This led to extensive public discussion of AI, including the possibility of the technology replacing human professionals such as lawyers or doctors.

Headlines aside, there are many ways to utilize AI that live up to the impact it is making. Today, companies use AI to streamline administrative tasks and automate, condense, and speed up repetitive work – leading to faster processing times and better outcomes all around.

Unfortunately, some of the “newness” surrounding AI means claims providers are still trying to wade through the noise. Artificial intelligence tools seemed to crop up overnight, with features so refined they made the tools feel almost like magic. However, artificial intelligence is anything but new, and many of its earliest uses were actually developed specifically for the medical field.

The first recorded discussion of artificial intelligence dates to Alan Turing’s 1947 research into “intelligent machinery,” which raised a new question: can machines think? Further interest in that question led to many new terms, including machine learning and thinking machines, before early researchers settled on calling the field “AI.” AI technology was soon tasked with assisting medical experts, and some of the earliest AI systems were created to complement the work of doctors.

Reliable, consistent, and accurate AI use is a reality, but responsible AI use is up to the user and the practices that they choose. Here are a few of our recommendations for ethical AI use, whether for creating medical reports or for claims as a whole, and some do’s and don’ts for how AI tools should be used.

Do: Understand the difference between “extractive” and “abstractive” AI tools

AI tools use machine learning to pull information from reports by classifying their objective data. Objective data refers to dates, titles, provider names, or other details on each page that tell the machine learning model what the document is or could be. This objective data is then used to sort and organize the file, either when generating the report or by the human professional processing the claim.
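To make the idea concrete, here is a minimal, hypothetical Python sketch of what pulling objective data off a page might look like. The field names, keyword list, and regular expressions are illustrative only; a production system relies on trained models, not hand-written rules like these.

```python
import re
from dataclasses import dataclass

@dataclass
class PageMetadata:
    """Objective data pulled from a single page (hypothetical fields)."""
    date: str | None
    provider: str | None
    doc_type: str | None

# Hypothetical keyword map: a real system would use a trained classifier,
# not a hand-written lookup like this.
DOC_TYPE_KEYWORDS = {
    "radiology report": "imaging",
    "discharge summary": "discharge",
    "progress note": "progress_note",
}

def extract_objective_data(page_text: str) -> PageMetadata:
    """Pull a date, a provider name, and a document-type hint from raw page text."""
    lowered = page_text.lower()
    date_match = re.search(r"\b\d{4}-\d{2}-\d{2}\b", page_text)
    provider_match = re.search(r"Provider:\s*(.+)", page_text)
    doc_type = next((label for kw, label in DOC_TYPE_KEYWORDS.items() if kw in lowered), None)
    return PageMetadata(
        date=date_match.group(0) if date_match else None,
        provider=provider_match.group(1).strip() if provider_match else None,
        doc_type=doc_type,
    )

print(extract_objective_data("Progress Note\nProvider: Dr. Lee\nVisit date: 2016-05-02"))
```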

In a machine learning model, extractive summarization identifies the important components of the original text, selects them, and provides that selection to the user. Wisedocs’ medical summaries are an example of extractive summarization: the important sentences are pulled out of a report verbatim, without the model generating any content of its own. Amazon review summaries are another example of extractive summaries: they give the user a selection of the most relevant reviews, verbatim, with the option of expanding the page to see the full list.
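As a rough illustration, the toy Python function below scores sentences by word frequency and returns the top few verbatim; nothing in the output is generated, which is the defining property of an extractive summary. This is a teaching sketch, not Wisedocs’ actual model.

```python
import re
from collections import Counter

def extractive_summary(text: str, max_sentences: int = 3) -> list[str]:
    """Return the highest-scoring sentences verbatim; nothing new is generated."""
    sentences = [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]
    freq = Counter(re.findall(r"[a-z']+", text.lower()))

    def score(sentence: str) -> float:
        tokens = re.findall(r"[a-z']+", sentence.lower())
        return sum(freq[t] for t in tokens) / (len(tokens) or 1)

    top = sorted(sentences, key=score, reverse=True)[:max_sentences]
    return sorted(top, key=sentences.index)  # keep original reading order

report = (
    "Patient seen for left arm pain after a fall. "
    "X-ray confirmed a closed fracture of the left radius. "
    "A cast was applied and the patient tolerated the procedure well. "
    "Follow-up is scheduled in six weeks."
)
print(extractive_summary(report, max_sentences=2))
```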

In abstractive summarization, the model goes a step further and “reads” the document before summarizing it in its own words. Some of the phrases in the summary do not appear in the original text, but they should still preserve the meaning of what was said. Because the wording is generated, an appropriate level of human oversight is necessary to confirm that the meaning really is preserved. Features like Wisedocs’ interactive timeline view are examples of abstractive summarization.
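For illustration only, the sketch below runs a publicly available summarization model from the Hugging Face transformers library over a made-up passage and flags the generated draft for human review. The model and parameters are generic examples, not the model behind Wisedocs or any other platform.

```python
# Generic illustration using the Hugging Face `transformers` library and a
# public summarization model; not the model behind any specific product.
from transformers import pipeline

summarizer = pipeline("summarization", model="facebook/bart-large-cnn")

report_text = (
    "The patient presented with a fractured left radius after a fall from a tree. "
    "Imaging confirmed a closed fracture, a cast was applied, and a follow-up "
    "visit was scheduled in six weeks to assess healing."
)

# The model rewrites the passage in its own words (abstractive), so the draft
# may contain phrasing that never appears in the source text.
draft = summarizer(report_text, max_length=40, min_length=10, do_sample=False)[0]["summary_text"]

# Because the wording is generated, route it to a human reviewer before use.
print("DRAFT (needs human review):", draft)
```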

Don’t: Use AI to replace experts

AI tools are capable of generating reports and summaries, indexing thousands of pages of documents, and removing duplicates from patient files. This is hugely effective when looking to save manual time – but it does not, and should not, replace the work of human experts. 

Say an AI tool understands that patient number 00777777 is named Johnny, and that Southern Hospital is a medical facility. Johnny visited Southern Hospital once in 2002, twice in 2004, and five times in 2016 after he broke his arm falling out of an apple tree. The AI tool sees that the patient file contains ultrasounds, visit notes, and follow-ups for the break, and sorts these documents chronologically, placing the documents from each visit into a final report that makes it easy for a human reader to see the data at a glance.

Notice that in this example, nowhere is the AI tool being used to assess Johnny’s health or offer an expert opinion: it is just organizing his medical records for human review and analysis. The artificial intelligence tool isn’t “making anything up” – it’s simply recognizing dates, patient numbers, or hospital visits and pulling the information directly from the pages. Based on what it finds, it puts the documents in order for the human professional. This is an example of extractive summarization.
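A stripped-down sketch of that sorting step might look like the Python below. The record list is invented for the example; the point is only that the tool arranges what it has already recognized rather than interpreting it.

```python
from datetime import date

# Toy version of the sorting step: the tool has already recognized dates and
# document types, and here it simply arranges them for a human reviewer.
records = [
    {"doc_type": "follow-up note", "visit_date": date(2016, 7, 1)},
    {"doc_type": "ultrasound", "visit_date": date(2004, 3, 15)},
    {"doc_type": "visit note", "visit_date": date(2002, 9, 8)},
    {"doc_type": "x-ray report", "visit_date": date(2016, 5, 2)},
]

for record in sorted(records, key=lambda r: r["visit_date"]):
    print(record["visit_date"], record["doc_type"])
```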

AI automation of administrative tasks simply brings the legal or medical expert the information they need faster, so they can start the 100% human task of arguing a case, writing a medical opinion, or researching a patient’s file without the burden of administrative work.

The final output (the opinion, report, or research document) remains the expert’s original work – which is exactly how it should be! The final report is still established with the expert’s unique knowledge in their area of expertise. The only difference is that the expert doesn’t have to hunt through files for the document they’re looking for – so the process is handled in a fraction of the time. 

Today’s professionals are able to combine the benefit of AI’s speed with the creativity and critical thinking that belongs to humans alone. AI is not replacing the professional’s opinion, and should not be used that way. 

Do: Protect patient privacy and PHI when using AI

Trustworthy AI partners should be HIPAA compliant: they use de-identification of users’ data and personal health information (PHI), on-premises model deployments, strategic partnerships with Large Language Model (LLM) providers, and a feedback-testing loop to ensure that what the model does is correct, each and every time. This feedback-testing is similar to the Technology Assisted Review (TAR) method that many legal cases have used (or have even been required to use) for e-discovery.
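One common building block is de-identifying text before it leaves a controlled environment. The sketch below is a deliberately minimal, hypothetical example using simple pattern matching; real de-identification pipelines use trained entity recognizers and are audited far more rigorously.

```python
import re

# Minimal de-identification sketch: swap obvious identifiers for placeholder
# tokens before any text leaves a controlled environment. Real pipelines use
# trained entity recognizers and are audited far more thoroughly than this.
PATIENT_ID = re.compile(r"\b\d{8}\b")
DATE = re.compile(r"\b\d{4}-\d{2}-\d{2}\b")
KNOWN_NAMES = ["Johnny"]  # in practice, names are detected by an NER model

def deidentify(text: str) -> str:
    text = PATIENT_ID.sub("[PATIENT_ID]", text)
    text = DATE.sub("[DATE]", text)
    for name in KNOWN_NAMES:
        text = text.replace(name, "[NAME]")
    return text

print(deidentify("Patient 00777777 (Johnny) was seen at Southern Hospital on 2016-05-02."))
```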

Do: Keep humans in the loop

LLMs like ChatGPT make decisions and “think” based on huge amounts of data and statistical probability. If “Ontario, CA” refers to Ontario, Canada in 99% of the examples a model sees, it may categorize the next “Ontario, CA” the same way – even if the document refers to Ontario, California. A human expert placed at the right point in the process would correct this error right away based on the nuanced context in the document, but the model would suggest an incorrect response if left to work unsupervised.
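One simple way to keep a human in the loop is to accept only high-confidence predictions and queue everything else for review. The Python sketch below illustrates that pattern; the threshold, field names, and confidence score are invented for the example and do not describe any particular product’s workflow.

```python
CONFIDENCE_THRESHOLD = 0.95  # illustrative value, not from any real system
review_queue: list[dict] = []

def route_prediction(field: str, prediction: str, confidence: float) -> str | None:
    """Accept high-confidence predictions; queue everything else for a person."""
    if confidence >= CONFIDENCE_THRESHOLD:
        return prediction
    review_queue.append({"field": field, "suggested": prediction, "confidence": confidence})
    return None  # nothing is committed until a reviewer signs off

# "Ontario, CA" usually means Ontario, Canada in the training data, but this
# document is about Ontario, California, so the low score sends it to review.
print(route_prediction("location", "Ontario, Canada", confidence=0.62))
print(review_queue)
```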

Say you tell ChatGPT that you have been driving on the highway for hours and have just pulled into a gas station with an empty fuel tank. You pull up to the pump, grab your wallet, and then go inside to get a can of…

In the past, some models may well have said “gasoline.” Why? The LLM pulls its answer from the billions of words in its dataset. Given “highway,” “gas station,” “pump,” “fuel tank,” and “driving,” it picks the most likely response.

Medical record summary creation, done this way, is not prone to these errors. Extractive summaries don’t “make up” answers, feedback-testing rigorously sets parameters to fit each client’s use case, and human supervision makes sure outputs meet standards at every stage. A “human in the loop” supervision process leads to high-quality, rigorous results that physicians and other professionals can rely on.

Using AI responsibly in the medical and legal fields

Trustworthy AI platforms for medical data are not the same as ChatGPT - and that is a very good thing. Accuracy, responsibility, and quality are built directly into the workflows, which means medical assessment professionals should have no fear when using AI tools to help streamline their workday. 

AI automation means human experts finish their work in a fraction of the time, which leads to better patient outcomes and faster claims. This not only enhances efficiency, but also empowers experts to use their skills and insights more effectively. These time savings improve service quality and allow experts to spend more of their time doing work they find meaningful and enjoyable.

Kristen Campbell
Content Writer

Kristen is the co-founder and Director of Content at Skeleton Krew, a B2B marketing agency focused on growth in tech, software, and startups. She has written for a wide variety of companies in the fields of healthcare, banking, and technology. In her spare time, she enjoys writing stories, reading stories, and going on long walks (to think about her stories).
