Whether ChatGPT is your new best friend, you’re afraid the machines are going to take over the world, or you fall somewhere in between, there’s no denying that AI has become a permanent part of the conversation. In the EMS space, we’ve seen more agencies enabling the use of AI to write ePCR narratives. While the efficiency of such a tool is attractive, EMS operators need to weigh several factors to avoid documentation and compliance problems. In other words, AI is not a magic wand; it deserves the same professional skepticism you would apply to any advanced technology.
Here are five guidelines for benefiting from AI tools, whose output depends first and foremost on the data they receive.
1. Complement the Data with Observations
It seems obvious, but despite its seemingly ‘natural’ ability to produce detailed descriptions, AI can’t put information into a narrative that doesn’t exist elsewhere in the chart. As a result, an AI-generated narrative will almost never paint a thorough and complete picture of the patient’s condition and transport on its own.
Put another way, the data sections of the ePCR typically offer no opportunity to include the additional elements that make up a high-quality narrative. For example, using a combination of data sections, an EMS crew can document a patient’s pain level, where the pain is located, and possibly when it began, but not the nature or quality of the pain, or whether anything makes it better or worse. Data sections, from which AI draws its input, don’t include what the patient said or any statements from bystanders.
That means AI can’t describe the events leading to the illness or injury, articulate the crew’s findings, or offer a detailed assessment. To paint a complete picture of the patient encounter, crews need to enhance AI-generated narratives with additional comments and observations based on their own assessment and treatment of the patient.
2. Review Narratives for Relevance
AI is programmed to pull certain data and put it into a particular format, but it can’t discern what information is relevant to the patient’s condition and transport. For example, if a patient has a significant medical history listed in the data, AI will include all of it in the narrative, whether or not it relates to the reason the patient is being transported on that date of service. Irrelevant information clutters the narrative, making it longer and less useful for determining why the patient needed transport that day.
For this reason, crews should always review and edit AI-generated narratives to eliminate redundant or irrelevant information and produce a clean, accurate record.
3. Review Narratives for Accuracy
When it comes to AI, output is only as good as the input used to generate it. Because AI-generated narratives pull from the data sections of the ePCR, any errors, inconsistencies, or missing information in the chart will automatically be reflected in the narrative.
Relying solely on AI to write a narrative eliminates the chance to catch and correct those errors. Human review is imperative to make sure every account is error-free.
For example, one common mistake we see in the field is a crew member selecting medication administration via IO instead of IV in the treatments section. An AI-generated narrative will simply repeat the incorrect IO entry; but when the provider describes their actual experience treating the patient in the narrative, they have a chance to catch the error and correctly recount giving the medication via IV. Careful proofreading and review by a human are essential to catch errors that would slip through when relying solely on AI.
4. Remember: Even AI Can Glitch
As we’ve said, AI isn’t magic; it’s technology. Even without data entry errors, any tech can generate significant errors on its own. We’ve seen cases where information in AI-generated narratives wasn’t consistent with the data fields; notably, one narrative referred to the patient by different genders throughout the report. AI, like all technology, can glitch, and a single glitch can ripple through the rest of the record.
Proofreading is therefore essential to accurate documentation. That has always been true for narratives written by humans, and it’s just as true for AI-generated narratives.
5. Safeguard Legality and Billing Compliance
Agencies are responsible for creating and maintaining complete and accurate documentation of every patient encounter, both as the patient’s medical record and for compliant claim submissions. As such, the repercussions of inaccuracy can be costly. If there is missing, inaccurate, or false information in the narrative, “I used AI” won’t be a defense in a lawsuit, an audit, or a False Claims Act investigation.
We’ve seen agencies that enabled AI with the best intentions, yet their documentation quality decreased, in some cases significantly. Poor-quality documentation brings downstream potential for increased compliance risk. This underscores the need for human input and review when implementing AI tools.
The Bottom Line
Crews should not expect AI-generated narratives to be faster or “done for them.”
To mitigate risk, we recommend that if you choose to enable AI, you start with only a small subset of crew members, preferably those who find documentation challenging. Then develop training for everyone based on what you discover during the trial period.
Most importantly, remember—and impart to your crew—that you cannot take a hands-off approach if you choose to enable AI. Like any new skill, it will require training, monitoring, and feedback.