
Can ChatGPT improve healthcare access?

Artificial intelligence can be used to break down language barriers between health practitioners and residents and help to personalise individual healthcare plans.

ChatGPT, a chatbot that generates responses based on a vast dataset of words and natural human dialogue, gained over 100 million users in the first two months after its launch in late 2022.

Associate Professor Beena Ahmed, from UNSW's School of Electrical Engineering and Telecommunications, said ChatGPT could play a critical role in optimising access to healthcare services.

Yet she also acknowledged the challenges of the technology, including protecting sensitive data.

"It's got its deficiencies, and we should be wary of them," Professor Ahmed said.  

Aged Care Insite spoke with Professor Ahmed about how ChatGPT could improve relationships between health practitioners and patients through better communication.

ACI: Can ChatGPT be used in healthcare settings?

BA: It's a generative AI system, meaning that you can ask it questions, and it will give you responses.

So, within the healthcare system, it can be used for various purposes.

You can use ChatGPT to explain technical or medical terms to a patient in more straightforward language.

When a healthcare worker is struggling to communicate with a patient because of a language barrier, you could ask ChatGPT to translate medical information into various languages.

Or when a nurse is trying to discuss a very difficult topic with a patient, the nurse can ask ChatGPT to describe the situation in more sympathetic language.
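
In practice, requests like these can also be scripted. The sketch below is a minimal, illustrative example using OpenAI's Python client; the model name, prompt wording and sample sentence are assumptions for illustration, not a recommended clinical setup.

```python
# Minimal sketch: asking a chat model to rewrite clinical text in plain,
# sympathetic language and translate it. The model name, prompts and
# sample sentence are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

medical_text = (
    "The patient presents with benign paroxysmal positional vertigo; "
    "canalith repositioning is recommended."
)

response = client.chat.completions.create(
    model="gpt-4",  # assumed model name; use whichever model is available
    messages=[
        {
            "role": "system",
            "content": (
                "Rewrite clinical text in plain, sympathetic language a "
                "patient can understand, then translate the result into "
                "Vietnamese."
            ),
        },
        {"role": "user", "content": medical_text},
    ],
)

print(response.choices[0].message.content)
```

Any output from a request like this would still need to be checked by a healthcare worker before it reaches a patient, a caveat Professor Ahmed returns to below.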

It can produce anything from general text to pamphlets, brochures and videos for patients to read or view.

So, those functions are helpful to improve the relationship between a healthcare worker and patient. 

A healthcare worker can also use ChatGPT to create text to communicate within the healthcare system. It can write emails, prepare summary reports and create an information sheet based on a patient's records.

It can produce discharge papers a person could use for Medicare or insurance claims and create transcripts of conversations.

ChatGPT can also prevent miscommunication within teams that could otherwise lead to issues.

Now, the caveat here is that somebody always has to check whether ChatGPT's output is correct.

It can prefill everything and prepare anything, but a healthcare worker has to review it before it goes out.
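
That prefill-then-review workflow can be made explicit in software. The sketch below, again using OpenAI's Python client with assumed prompts and placeholder notes, drafts a handover summary but only releases it once a human approves it.

```python
# Sketch of the prefill-and-review workflow described above: the model
# drafts a summary, and a healthcare worker must approve it before it
# goes anywhere. Prompts, model name and notes are illustrative.
from openai import OpenAI

client = OpenAI()


def send_report(text: str) -> None:
    # Hypothetical placeholder for the real downstream step, e.g. filing
    # the report in the records system.
    print("Report sent:\n" + text)


shift_notes = (
    "08:00 observations stable. 11:30 resident reported dizziness after "
    "breakfast; reviewed by RN. 14:00 family phoned re: care plan."
)

draft = client.chat.completions.create(
    model="gpt-4",  # assumed model name
    messages=[
        {
            "role": "system",
            "content": "Summarise these nursing notes as a brief handover report.",
        },
        {"role": "user", "content": shift_notes},
    ],
).choices[0].message.content

print(draft)

# The human check is the critical step: nothing is sent until a
# healthcare worker has reviewed the draft.
if input("Approve and send this report? [y/N] ").strip().lower() == "y":
    send_report(draft)
```

Sending real patient or resident data to an external service raises the data-protection questions discussed later in this interview, so any real deployment would need de-identification and governance around it.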

ChatGPT can also support personalised medicine by generating treatment plans based on an individual's health data and medical history.

So, it's a time-saver for administrative tasks, which is very useful for healthcare workers who are overworked and short-staffed.

It can also be used in a similar way to create educational materials and resources.

How do you think ChatGPT will evolve in the future?

I don't see it being used for diagnostic purposes.

ChatGPT can be useful in an assistive mode – for example, for administrative documentation or for mining the literature for information.

But it'll be a long time before we can use it for diagnostic purposes because, at the moment, there is no way of analysing massive amounts of medical data over long periods of time.

So far, the AI used in ChatGPT doesn't have that immense power to comprehensively evaluate and interpret such information.

Some medical researchers are building individual AI systems to analyse specific images for certain diseases, such as types of cancer. 

But that's just one medical issue in one group of patients in a small number of hospitals.

What would be transformative would be for all the medical and health data currently being collected to be brought together and for an AI system to be built to analyse everything as a whole.

We look at ChatGPT and see what it's capable of producing by using tremendous amounts of data to predict and generate text. 

Nothing like that is happening in the medical field, even though we've probably collected the equivalent amount of data.

What downsides are there to using ChatGPT in a healthcare context?

The single biggest issue for me is: how do we ensure data protection for all the medical information we might collect?

I think the government or another independent body needs to take control and implement guidelines that everyone must follow.

At the moment, the data belongs to whoever is collecting it, and that is very often a tech company that can then just sell that information to the highest bidder.

Another huge risk is that data can be incorrect and biased, giving someone in the medical field the wrong information.

The last thing you want to do as a healthcare professional is make a wrong diagnosis or give the wrong advice.

There's also a cultural sensitivity aspect to ChatGPT because it's predominantly trained on English text.

The majority of the internet text also comes from the US, so it's very skewed. 

Then you can have gender bias too, not just cultural. But OpenAI, the company that created ChatGPT, is not the only one working on models like this.

There's Google's version, Bard, and the Chinese search engine Baidu is also working on one called Ernie.

So, ChatGPT is not going to be the only solution – there will be others that can cover other information written in different languages.

Like with every technology, we can't stop progress from happening. But what users should be aware of is that we can't blindly trust technology like this.

It's got its deficiencies, and we should be wary of them.
