You can scarcely read the news these days without seeing an item about ChatGPT. The loquacious artificial intelligence application may be the buzziest technology story healthcare has seen in 20 years or more.
AI used to sit on a five- or ten-year horizon for most hospitals. Suddenly, it's about five minutes away.
If you’ve been wondering how ChatGPT might be used to improve healthcare delivery, we’d like to offer answers to three of the big questions that we’ve been hearing.
Question: What are some examples of ways that ChatGPT could be used in a hospital setting, specifically?
Answer: There are multiple ways hospitals might use ChatGPT:
- Virtual Assistant: ChatGPT can be used as a virtual assistant to answer common patient and visitor questions, such as directions to a specific department, hospital policies, and appointment scheduling.
- Medical Diagnosis Assistance: ChatGPT can assist doctors and nurses with diagnosing patients by providing relevant information about symptoms and possible conditions.
- Clinical Decision Support: ChatGPT can be integrated with electronic medical records to provide real-time clinical decision support to healthcare professionals.
- Patient Education: ChatGPT can provide patients with educational information about their conditions, treatments, and post-operative care instructions.
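To make the first of those use cases a bit more concrete, here is a minimal sketch of what a patient-and-visitor FAQ assistant might look like under the hood. It assumes the OpenAI Python library (v1 or later) with an API key in the environment; "Example Hospital," the system prompt, and the nurse-line guardrail are illustrative placeholders, not a production design, and nothing like this should touch protected health information without the safeguards discussed in the next answer.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM_PROMPT = (
    "You are a virtual front-desk assistant for Example Hospital. "
    "Answer questions about visiting hours, directions, parking, and how to "
    "schedule appointments. Never give medical advice; refer clinical "
    "questions to the hospital's nurse line."
)

def answer_visitor_question(question: str) -> str:
    """Send a single visitor question to the model and return its reply."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",   # placeholder model name; any chat model works
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": question},
        ],
        temperature=0.2,         # keep answers conservative and consistent
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(answer_visitor_question("What are the visiting hours for the ICU?"))
```

In practice, the same basic pattern (a carefully scoped system prompt wrapped around the model) sits underneath most of the use cases in the list above; the hard work is in the guardrails and integrations, not the call itself.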
Question: What are the biggest hurdles to wider use of ChatGPT by hospitals and health systems?
Answer: There are several hurdles to wider use of ChatGPT by hospitals and health systems, including:
- Data privacy and security: Healthcare organizations must ensure that patient data is secure and protected, which can be a challenge when using AI technology.
- Regulation and compliance: There are numerous regulations and standards that healthcare organizations must adhere to, such as HIPAA and GDPR, which can limit the use of AI technology.
- Integration with existing systems: Hospitals and health systems have complex IT infrastructures, and integrating ChatGPT into existing systems can be challenging and time-consuming.
- Cost: Implementing and maintaining AI technology can be expensive, and hospitals and health systems must carefully consider the cost-benefit of adopting the technology.
- Lack of understanding and trust: Some healthcare professionals may be wary of using AI technology, as they may lack understanding of its capabilities and potential benefits.
Question: For a hospital with $100 million in annual revenue, how much money could be saved by using ChatGPT?
Answer: It is difficult to determine exactly how much money a hospital with $100 million in annual revenue could save by using ChatGPT as it would depend on several factors, such as the specific use case, the current processes in place, and the implementation and maintenance costs of the technology. However, in general, the use of AI-powered tools like ChatGPT can help streamline processes and reduce manual labor, potentially leading to time and cost savings for the hospital. A detailed cost-benefit analysis would need to be performed to determine the specific financial impact on the hospital.
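ChatGPT is right that the answer depends on assumptions, which is exactly why an explicit, simple model helps. The sketch below shows the shape of a back-of-envelope calculation; every figure in it is a placeholder to be replaced with a hospital's own operational and vendor data, not an estimate from us or from ChatGPT.

```python
# Back-of-envelope ROI model. All inputs are placeholders, not real estimates;
# substitute figures from your own operational and vendor data.

hours_automated_per_week = 200    # placeholder: staff hours offset by automation
loaded_hourly_cost = 35.0         # placeholder: fully loaded cost per staff hour ($)
implementation_cost = 250_000     # placeholder: one-time integration/build cost ($)
annual_run_cost = 60_000          # placeholder: licensing, hosting, monitoring ($)

annual_labor_savings = hours_automated_per_week * 52 * loaded_hourly_cost
first_year_net = annual_labor_savings - implementation_cost - annual_run_cost
steady_state_net = annual_labor_savings - annual_run_cost

print(f"Annual labor savings: ${annual_labor_savings:,.0f}")
print(f"First-year net:       ${first_year_net:,.0f}")
print(f"Steady-state net:     ${steady_state_net:,.0f}")
```

Even this toy version makes the point ChatGPT hinted at: the answer swings entirely on the inputs, which is why a hospital-specific cost-benefit analysis has to come first.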
True Confessions
As healthcare strategy consultants, we’ve been telling clients for years that artificial intelligence would change the way hospitals care for patients – but we never thought it would change the way we write blog posts.
In fact, all three of the answers above were generated word-for-word by ChatGPT. There was no human editing. We just typed three questions into the chat bar and got each answer in ten seconds or less.
My byline appears on this blog post, but more than half of it was written by a machine. To be honest, the whole process was a little disconcerting.
Microsoft just made a reported $10 billion investment in OpenAI, the company behind ChatGPT, in a bet on making the technology more factually reliable and commercially viable. In two to three years, I think it will revolutionize telemedicine and electronic health records. But for routine administrative and communications tasks, I believe hospitals will find ways to adopt ChatGPT before the year is out.
Ignore at Your Own Risk
ChatGPT is breathtaking technology, but you can't take its accuracy for granted. For one thing, the model was trained on a data set that stops in 2021, so it doesn't account for anything more recent.
More troubling still is the issue of “hallucination,” meaning that ChatGPT may confidently assert things that simply aren’t true. For instance, when we asked, “How can Ascendient help hospitals and health systems that want to include ChatGPT or other forms of AI in their strategic planning?”, we got a slick-sounding answer:
“Ascendient, as an AI solutions provider, can help hospitals and health systems that want to include ChatGPT or other forms of AI in their strategic planning in several ways…”
The problem is, we’ve never claimed to be an “AI solutions provider.” ChatGPT literally made up that service line out of thin air. Clearly, artificial intelligence is not yet all-knowing (which is comforting, I suppose).
Here’s the bottom line for me: Being an early adopter of ChatGPT might have some risks – but ignoring the technology is riskier still.