You can’t scan the news or social media these days without hearing about artificial intelligence in healthcare. In fact, advancements in healthcare AI seem to come by leaps and bounds with each passing day.
But nursing homes and assisted living providers need to understand not just how AI can improve the quality of resident care and operations, but also the legal issues surrounding AI use in their facilities.
The advancements
More and more, facilities are gearing up to use AI in long-term care. This comment from the early May McKnight’s article “Rising AI and machine learning investments powering senior care advancements” sums it up well:
“AI is transforming all industries and healthcare is no exception,” said Amber Schiada, Senior Director, Head of Americas Work Dynamics and Industries Research at JLL. “The impact of AI and [Machine Learning] on senior housing goes hand-in-hand with the impacts on the broader healthcare industry including for patients: better personalized care, medication tracking, and wellness tracking and for the business: improved data analysis, cost savings, and more efficient workplaces.”
For the growing problem of Alzheimer’s disease, one research team in Hong Kong developed an AI model that uses genetic information to predict a patient’s risk of developing Alzheimer’s before symptoms occur. Another researcher developed an AI method to screen seniors for mild cognitive impairment and early dementia by analyzing their voices. Pretty amazing advancements.
But as with anything new, problems come with the territory. Some researchers have found that AI can make errors that raise reliability concerns. Hopefully those gaps will be closed as the technology continues to advance. In any event, sound contracting with AI vendors is needed to protect the facility against liability arising from the use of AI.
AI and privacy
One of providers’ biggest concerns with AI is privacy. AI runs on massive amounts of data, and all of that data must be harnessed and, more importantly, kept private and secure. That security may require more advanced technology to protect AI models and functions from cyberattacks and breaches.
In contracting, it is best to explore in depth what a vendor can offer, beyond simply being told the vendor is HIPAA compliant. Facilities need to focus on AI vendors’ actual privacy and security compliance. While HIPAA will require a robust business associate agreement (BAA) as part of your AI contracting, that alone may not be enough to protect the facility.
After all, a BAA satisfies only the letter of the HIPAA regulations. What may be more important is finding out how well equipped an AI vendor actually is, which may require facilities to look behind the standard BAA: Does the vendor have a HIPAA compliance plan? Are safeguards in place? Is the vendor HIPAA certified? Does it carry adequate cyber liability insurance to cover potential breaches?
Again, HIPAA does not require facilities to police or audit their business associates, but to avoid liability, a little homework on your vendors can go a long way. Do you know where the data that powers the AI comes from, and do you know where it goes? Facilities need to make sure that AI service providers agree in their contracts to appropriate obligations protecting the confidentiality of data and limiting its use and dissemination. Furthermore, adequate indemnification provisions will be key to securing an AI partner that protects your facility from liability.
AI and intellectual property
In addition to privacy, intellectual property concerns should also be addressed in proper AI contracting. AI systems often learn from a wide variety of data sources. Facilities need to make sure that service providers selling or licensing an AI product have the appropriate rights and consents to use data from all of those sources. For example, if an AI vendor does not have the appropriate rights to use certain information, a facility could face infringement or misappropriation claims from a third party. Making sure your contract addresses these intellectual property concerns is, therefore, important for facilities using AI technology.
Where did my indemnification go?
Finally, when contracting for any type of healthcare technology, make sure you look for more than the mere word “indemnification” in your contracts. Oftentimes, the most important part of an AI or healthcare technology contract is not the indemnification provisions themselves, but the “Limitation on Liability” provisions.
Sometimes vendors say they have indemnification in their contracts, but on closer review, the limitations on liability strip away that protection through broad carve-outs or low caps that drastically limit the AI vendor’s exposure. Make sure the service provider accepts the brunt of responsibility and liability under your contract to protect your facility.
Taking heed of your AI contracts is important to the successful use of AI in your facility. Paying attention to the above issues is just the beginning. Don’t let the glimmer of this fascinating AI technology blind you to the potential liability lurking in poorly drafted contracts.
Neville M. Bilimoria is a partner in the Chicago office of the Health Law Practice Group and member of the Post-Acute Care And Senior Services Subgroup at Duane Morris LLP; [email protected].
The opinions expressed in McKnight’s Long-Term Care News guest submissions are the author’s and are not necessarily those of McKnight’s Long-Term Care News or its editors.