Improving Clinician and Patient Experience with Generative AI

PLUS: OpenAI Sued for Defamation

Welcome to the 12th issue of the AstroFeather AI newsletter!

This week was full of eye-catching headlines and product releases. The healthcare industry is implementing generative AI (GenAI) tools to reduce the administrative burden on clinicians, and OpenAI was recently sued for defamation!

On the other side of the industry, Adobe is integrating its Firefly AI image generator into as many applications as possible, WordPress now has a GenAI assistant, and around 4,000 US jobs have been lost due to AI. You’ll find these trending stories and more covered in this issue!

I hope you enjoy reading this week’s updates. If you have any helpful feedback, feel free to reply to this email or contact me directly [ LinkedIn ].

Thanks - Adides Williams, Founder @ AstroFeather

In today’s recap (10 min read):

  • Improving Clinician and Patient Experience with Generative AI.

  • ChatGPT Hallucinations Lead to OpenAI’s First Defamation Lawsuit.

  • Product Previews and Launches.

  • Company Announcements and News Throughout the Industry.

Must-Read News Articles and Updates

Update #1. Improving Clinician and Patient Experience with Generative AI (GenAI).

Image credit: Carbon Health

The latest: US-based healthcare chain Carbon Health has launched an AI-powered hands-free medical charting system that can capture patient visits and automatically generate near-complete notes in minutes, directly within its proprietary electronic health record (EHR) software. The system is designed to reduce clinician workload and improve the patient visit experience.

How it works: The foundation of the AI hands-free charting system is OpenAI's GPT-4 large language model (LLM), which is used to summarize transcribed notes from the patient visit and provide takeaways for the patient as part of their care plan. According to Carbon Health, its AI notes assistant is safe to use with sensitive patient data (HIPAA compliant) and is only used with the patient's consent.

  • At the beginning of the visit, the physician presses "record" on the device to begin recording the patient encounter.

  • Once the recording is complete, the raw audio is transcribed using Amazon Transcribe Medical.

  • The system combines the transcript with patient data pulled from the EHR, including lab results, physician notes, and diagnosis codes.

  • The notes assistant then uses GPT-4 to generate summaries and key takeaways of the patient visit, based on the transcript and the retrieved patient data.

  • As a final step, the provider must review and approve all AI-generated summaries.
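Carbon Health hasn’t published its implementation, but the five steps above can be sketched as a simple Python pipeline. Every function here (transcribe_audio, fetch_ehr_context, summarize_visit, chart_visit) is a hypothetical stand-in for the real components (Amazon Transcribe Medical, the EHR, and the GPT-4 summarization call), not an actual Carbon Health or AWS API:

```python
# Hypothetical sketch of the hands-free charting workflow described above.
# All functions are placeholder stubs, not real Carbon Health or AWS APIs.

def transcribe_audio(raw_audio: bytes) -> str:
    """Stand-in for Amazon Transcribe Medical."""
    return "Patient reports mild headache for three days; no fever."

def fetch_ehr_context(patient_id: str) -> dict:
    """Stand-in for pulling labs, prior notes, and diagnosis codes from the EHR."""
    return {"labs": ["CBC normal"], "dx_codes": ["R51.9"]}

def summarize_visit(transcript: str, ehr_context: dict) -> dict:
    """Stand-in for the GPT-4 call that drafts the note and patient takeaways."""
    return {
        "note": f"Summary: {transcript} (context: {ehr_context['dx_codes']})",
        "takeaways": ["Hydrate and rest", "Return if symptoms worsen"],
        "approved": False,  # drafts always start unapproved
    }

def chart_visit(raw_audio: bytes, patient_id: str) -> dict:
    """End-to-end flow: record -> transcribe -> merge EHR data -> summarize."""
    transcript = transcribe_audio(raw_audio)
    context = fetch_ehr_context(patient_id)
    draft = summarize_visit(transcript, context)
    # The provider must review and approve the draft before it enters the record.
    return draft

draft = chart_visit(b"raw-audio-bytes", "patient-123")
print(draft["approved"])  # drafts require explicit provider sign-off
```

The key structural point is the final step: the draft always comes back unapproved, mirroring the requirement that a provider review and approve every AI-generated summary before it becomes part of the chart.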

The results: AI hands-free charting has been deployed at scale and made available to Carbon Health's 600+ clinicians, who have reported improvements in charting efficiency, accuracy, and completeness:

  • On average, the system generates a complete chart in less than four minutes.

  • 88% of AI-generated text is accepted by the provider without edits.

  • Automated records are 2.5 times more detailed than manually entered records.

Driving the news: Carbon Health's platform is just one of several that use generative AI (GenAI) tools to reduce the burden of administrative tasks, make it easier to search across multiple data sources at once, and improve the patient experience. Examples include healthcare-focused GenAI platforms from Google Cloud, Hyro, Hippocratic AI, and Microsoft subsidiary Nuance Communications:

[#1] Mayo Clinic and Google Cloud recently teamed up to bring GenAI search tools to the healthcare industry. Mayo Clinic will be testing a new service called Enterprise Search on Generative AI App Builder, which will allow its medical professionals to quickly find patient information using custom chatbots.

A recent demo of the tool, dubbed "Gen App Builder," shows that the generative search can: 1) access information from multiple internal and external data sources; 2) generate summaries for a single data source; and 3) deliver insights and findings across a collection of data sources.

[#2] Hyro recently raised $20 million in a Series B round to continue development of its HIPAA-compliant conversational AI platform. A demo of the Hyro AI system shows that it can automatically process calls and texts, answer common questions, and handle tasks like booking appointments.

Hyro also has a "smart routing" feature that allows it to "intelligently" decide whether to automatically complete a task, send a link to self-service via text, or route an inquiry to the right department.

[#3] Hippocratic AI has raised $50 million in seed funding to develop a safety-focused large language model (LLM) designed specifically for healthcare. The technology is a text-generating model aimed at tasks such as explaining benefits and billing, providing dietary advice and medication reminders, answering pre-operative questions, and onboarding patients.

To evaluate its AI's "bedside manner," Hippocratic developed a benchmark to test the model for empathy, and it reportedly scored the highest across all categories of models tested, including GPT-4.

[#4] Microsoft subsidiary Nuance Communications announced its AI-based clinical documentation platform, Dragon Ambient eXperience (Nuance DAX), in March this year. Like Carbon Health's system, DAX provides a voice-enabled, hands-free note-taking experience.

The patient visit is recorded, transcribed, summarized and translated into a clinical note using GPT-4 and automatically entered into the EHR for final physician approval.

Why it matters: Healthcare workers are burdened by limited staff and time-consuming tasks such as transcribing notes, searching for information, responding to physician inbox messages, and other non-patient-facing administrative work. At the same time, clinicians must deliver high-quality care while managing their practices and complying with regulatory requirements. As a result of this increasingly stressful workload (exacerbated by the COVID-19 pandemic), the prevalence of clinician burnout has reached alarming levels.

As discussed in the examples above, hospital systems are exploring the use of GenAI tools to automate monotonous administrative tasks to save time, combat clinician burnout, and improve the overall quality of patient care. So far, these GenAI tools have made it easier to search patient (and non-patient) data, transcribe notes, update charts, and schedule appointments. Over time, I hope these platforms (and others in development) will have a lasting positive impact on care delivery and the clinician and patient experience.


Update #2. ChatGPT Hallucinations Lead to OpenAI’s First Defamation Lawsuit.

Image credit: AndreyPopov/Getty Images

The latest: Georgia-based radio host Mark Walters is suing OpenAI after ChatGPT allegedly published false information about him. In this "first-of-its-kind" lawsuit, Walters alleges that ChatGPT damaged his reputation by publishing false allegations that he embezzled money from a gun rights nonprofit.

Here’s what happened: According to Walters’ complaint, the story begins with Fred Riehl, a journalist and editor-in-chief of the gun publication AmmoLand, who was doing research for an article about a real-life court case in Washington state.

  • Riehl asked ChatGPT to summarize a Washington federal court case, Second Amendment Foundation v. Ferguson, which accuses state Attorney General Bob Ferguson of abusing his power to suppress the activities of the Second Amendment Foundation (SAF).

  • However, according to Walters' lawsuit, ChatGPT provided Riehl (the journalist) with a summary stating that Walters (the radio host) was being sued by the SAF for "defrauding and embezzling funds" while serving as the foundation's treasurer and chief financial officer.

  • Here's where things take an interesting but entirely predictable turn. Everything ChatGPT allegedly generated for Riehl about the SAF case and Walters was completely fabricated. Walters never worked for SAF, did not defraud or embezzle funds from the foundation, and is not mentioned anywhere in SAF's actual 30-page lawsuit.

  • Finally, Riehl (the journalist) decided not to use any of ChatGPT's information in his article after confirming that ChatGPT's claims were false. Walters (the radio host), however, filed the lawsuit against OpenAI anyway, claiming that ChatGPT's allegations about him were "false and malicious" and exposed him to "public hatred, contempt, or ridicule.”

Driving the news: While Walters' case is believed to be the first of its kind, there are at least three recent events where ChatGPT hallucinations have raised similar concerns.

  • Australia: Regional Mayor Brian Hood made headlines in April when he said his lawyers were preparing to sue OpenAI over false allegations made by ChatGPT. Hood's lawyers claimed that ChatGPT had falsely implicated him in a foreign bribery scandal involving a subsidiary of the Reserve Bank of Australia.

  • USA (New York): New York attorney Steven Schwartz faces possible sanctions after filing a court brief that included six non-existent cases. Schwartz relied on ChatGPT to help prepare a court filing that was later discovered to contain fake court cases, complete with fake court decisions, quotes, and internal citations. The federal judge presiding over the case called the situation an "unprecedented circumstance."

  • USA (Texas): In a possibly related event, Judge Brantley Starr in Texas added a new rule for his courtroom called "Mandatory Certification Regarding Generative Artificial Intelligence," which requires all lawyers to certify either that a) no part of their filing was created using GenAI tools such as ChatGPT, or that b) any AI-generated portion of their filing has been verified for accuracy.

Why it matters:

Understanding the limitations of chatbots: Walters' lawsuit against OpenAI and the related events above highlight a key limitation of conversational chatbots (such as ChatGPT, Bard, and Bing Chat): their tendency to "hallucinate" responses that are untrue. As users come to rely on these tools for tasks such as text summarization and document preparation, they should understand that the tools are not infallible and can produce false information, and that AI-generated output must be verified for accuracy before being shared widely, to avoid spreading misinformation.

Liability for OpenAI: Looking at the Walters case, one wonders whether OpenAI will be held liable for ChatGPT's output, and if so, what such a legal precedent might mean for AI companies.

In general, a defamation claim requires showing both that 1) the defendant knew the statements made were false (or likely to be false) but disregarded that fact and negligently published them anyway, and that 2) the defamed person suffered actual damages (such as loss of business opportunities) as a result of the defendant's false statements.

In Walters' case, it doesn't appear that OpenAI was ever put on notice that ChatGPT was making false statements about him, with demands that OpenAI (ChatGPT) immediately stop publishing the false information. Furthermore, because Riehl (the journalist) didn't publish the article with ChatGPT's fabricated information, it may be difficult for Walters to prove that he suffered any damages. Thus, Walters' case may fail if his lawyers are unable to prove that he suffered damages or that OpenAI knowingly published false information about him.


Update #3. Product Previews and Launches.

Adobe Firefly for the Express App and Enterprise: Adobe recently announced the integration of its Firefly AI image generator into the Adobe Express App, allowing users to easily design and enhance a variety of media, including flyers, banners, TikTok videos, and Instagram Reels. Demos on the Express App page show the "all-in-one" app using Firefly to remove backgrounds and convert simple text prompts into images and text effects.

Adobe is also making Firefly available to its enterprise customers, allowing them to customize the model with their own branded assets. According to Adobe, Firefly produces commercially safe images by training the model on images from Adobe's stock image library.

Image credit: Adobe

WordPress Now Has an AI Assistant: WordPress brings GenAI to its platform with the launch of the Jetpack AI Assistant. The AI plug-in helps users create and edit content (including blog posts) in a variety of tones and styles, and can translate text into 12 languages.

A demo of the Jetpack AI Assistant shows the tool being used to: 1) summarize blog posts; 2) adjust the tone of text; and 3) generate a blog post from a single text prompt.

Image credit: WordPress

ChatGPT for iPad: OpenAI has updated ChatGPT for iPad with a full-screen interface, drag-and-drop support, Siri support, and Shortcuts integration.

The drag-and-drop functionality lets users easily transfer messages from the chat interface to other applications; Siri support lets users access ChatGPT from anywhere with voice commands; and the Shortcuts integration lets users automate ChatGPT queries.

OpenAI has clearly been making incremental improvements to the iOS and iPadOS app, and the company has stated that it plans to release an Android version soon.

Image credit: OpenAI

Glean Chatbot for Enterprises: Palo Alto-based startup Glean just launched Glean Chat, a conversational AI chatbot assistant designed for enterprise environments. Glean Chat can answer questions and analyze information from enterprise data sources in real time.

Glean Chat has many features and can: 1) analyze documents across Slack, Google Docs, Microsoft Office, and more; 2) provide answers and summaries of internal company documents; and 3) generate content such as marketing emails and customer communications.

Image credit: Glean

Tafi Launches Text-to-3D Engine: Tafi has released a text-to-3D character engine that can generate tens of billions of 3D character variations in minutes. According to the company, the 3D character engine uses a vast 3D dataset derived from its proprietary Genesis character platform.

Users can use natural language input to create any character they can imagine and then export to a variety of platforms including Unreal, Unity, Blender, Autodesk 3ds Max, and Autodesk Maya.

Image credit: Tafi

Instagram’s Leaked AI Chatbot: Instagram is apparently testing an AI chatbot feature that will allow users to choose from 30 different AI personalities. According to a screenshot shared by leaker Alessandro Paluzzi on Twitter, the chatbot will be able to answer questions, offer advice, and help users compose messages.

While there has been no official announcement from Meta, the company behind Instagram, this feature would be in line with previous statements about the company's AI ambitions. Earlier this year, CEO Mark Zuckerberg said the company was focused on creating “AI personas that can help people in a variety of ways.”

Image credit: Alessandro Paluzzi


Update #4. Company Announcements and News Throughout the Industry.

EU calls for labeling of AI content: EU Commissioner Věra Jourová is urging big tech companies such as Google, TikTok, and Meta to “clearly label” AI-generated content and build safeguards to prevent the spread of disinformation. Jourová suggests that generative AI services, such as Microsoft's Bing and Google's Bard, need to be designed with “necessary safeguards” to prevent malicious actors from using them to generate disinformation.

Some big tech companies have already begun to address the issue of AI-generated images. For example, Google announced a feature that allows users to see if an image is AI-generated, and Twitter has also announced that it is expanding its Community Notes feature to allow users to fact-check images by adding additional context about an image’s origin.

Image credit: Getty Images

AI impact on jobs market: AI was responsible for nearly 4,000 US job losses in May, according to a report from outplacement firm Challenger, Gray & Christmas. US companies announced more than 80,000 job cuts in May, a 20% increase from April. Of the 80,000 layoffs in May, ~5% were attributed to AI.

The report suggests that companies are shedding human employees in favor of AI to save money and impress shareholders. One recent example is the National Eating Disorders Association (NEDA), which closed its phone helpline and laid off its small staff of specialists in favor of an AI chatbot called Tessa.

Boston Dynamics Robot Dog Upgrade: Boston Dynamics has unveiled updates to its robot dog, Spot, including the ability to use handles to open doors on its own. Spot can use its upgraded articulated arm accessory to open doors, as well as grasp, pick up, and carry a variety of objects.

A video from Boston Dynamics shows Spot's additional upgrades including: 1) the ability to monitor temperatures and detect invisible air and gas leaks in systems; 2) a physical emergency stop button; and 3) improved ability to catch itself if it slips.

Image credit: Boston Dynamics


Thanks for reading this issue of the AstroFeather newsletter!

I’m always looking for ways to improve and would love to hear your constructive feedback about the format and content of the newsletter. You can reply to this email, and I’ll be sure to respond.

See you in the next issue!

If you enjoy AstroFeather weekly content, be sure to share this newsletter!

Adides Williams, Founder @ AstroFeather (astrofeather.com)
