New tool is canvassing concerns to enable better care

A new survey allows clients and workers to give their opinions on quality with ease, offering researchers and providers an unobtrusive new tool for improving services.

The researchers behind a survey tool that measures quality of life in community care have held a public seminar to detail its potential to contribute to understanding changes in the delivery of age services.

The team, comprising researchers from Macquarie University and the Australian Health Services Research Institute (AHSRI) at the University of Wollongong, was awarded an Australian Research Council linkage grant to develop the Australian Community Care Outcomes Measure (ACCOM), which aims to ensure that clients get the assistance and services important to them.

Macquarie lead professor Michael Fine says: “Because it is difficult at present to assess the impact of providing care, it is easy for staff, clients or family carers to lose hope, or to become distressed about the problems faced by those who need help. This is where outcome measures come into play. They provide a standard way of measuring improvement, stability or deterioration in the most important aspects of an individual’s life.”

When consumers complete the ACCOM, providers receive information on a range of issues, such as whether people feel they have control over their daily activities, whether they're getting enough food and whether their social participation feels meaningful to them.

Case managers answer the same questions about the client and can compare their perception of a client’s quality of life with the client’s own feelings.

Researcher Dr Beatriz Cardona, from Macquarie University’s Department of Sociology, says case managers can use this information to think about the care plan and improvements to how the service is being delivered.

The survey also includes an open-ended question, which allows consumers to elaborate on issues that are important to them or that aren't specifically addressed elsewhere.

The researchers have partnered with community aged-care service providers The Whiddon Group, BaptistCare, Community Options Australia and KinCare, who all assist with making ACCOM meaningful and practical.

Karn Nelson, executive general manager of strategy and research at Whiddon, says seeing a comparison between clients’ feelings and case managers’ perceptions has been valuable for staff.

“It gives them a real indicator of how well they’re meeting client needs and where there are gaps and differences,” Nelson explains. “They can home in on those areas where they see a need, broaden the conversation with the client and understand why there is a difference.”

The research team has also been working in partnership with professor Ann Netten from the University of Kent, who was involved in creating the Adult Social Care Outcomes Toolkit (ASCOT), a UK-developed measure that several countries have adopted.

ASCOT was designed to capture information about an individual’s social care-related quality of life and forms the basis for the Australian tool.

Whiddon was involved in the ASCOT trial and the group has deployed it across its residential aged-care facilities. Nelson says the group has learned much about the tool and how it helps improve care planning.

“We’ve been trialling it in residential aged care in four services and it’s part of our initial care planning conversations, where we [engage] residents, their family members and dedicated care staff members, and use it as a conversation,” Nelson explains. “We call it a circle of care methodology. Working in each of those domains, it gives us a structure to discuss their broad needs and desires and how well we’re supporting them. It gives our RNs, who are conducting these ASCOT conversations, a good structure for doing this.”

Here, Aged Care Insite sits down with Michael Fine and Adam Stebbing, from Macquarie University’s Department of Sociology, to discuss the development of the ACCOM and how it may come to be applied.

ACI: What does the Australian Community Care Outcomes Measure focus on?

MF: The tool homes in on the wellbeing of consumers over time, and that then becomes a measure of service quality. It can be used to monitor service efficiency and effectiveness. To do that, it breaks things down simply into three broad areas: one we need to ask the consumer about, and two [involving] data the services already hold.

We ask the consumer nine questions about their quality of life and one about their self-rated health. There’s also one other question where they can say whatever they like about things. That’s the consumer input area.

The first [set of data that comes from the services] is the demographic background. How old are they? Do they live alone? What sort of home do they live in? Is it their own? Is it rented? Is it an apartment or a house? [We learn] a little bit about their cultural background. The demographic data is always held on clients.

The other [set of data the services provide] comes from initial assessments; it’s about clients’ need for care and their capabilities. What things do they need help with? What things can they manage to do themselves? What are the areas where they could improve? That data is also already held.

We then use that to compare like with like; for example, [we don't want to] compare quality of life across different kinds of clients, because then you'd find that services with high-functioning clients also had high quality-of-life results, while services dealing with highly dependent clients had perhaps lower quality of life. We can pull out those services with high-dependency clients, or clients living alone, and compare those clients with others in the same circumstances.
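
To make that concrete, here is a minimal sketch of a like-with-like comparison in Python (the service names, column names and scores are invented for illustration; they are not the real ACCOM fields or data):

    import pandas as pd

    # Invented example records: one row per client, with the quality-of-life
    # score from the survey and the needs data the service already holds.
    df = pd.DataFrame({
        "service":     ["A", "A", "B", "B", "C", "C"],
        "dependency":  ["high", "low", "high", "high", "low", "low"],
        "lives_alone": [True, False, True, True, False, False],
        "qol_score":   [2.1, 3.4, 2.6, 2.3, 3.5, 3.3],
    })

    # Stratify before comparing: each service is benchmarked only against
    # clients in the same circumstances (here, the same dependency level).
    print(df.groupby(["dependency", "service"])["qol_score"].mean().unstack())

Without the stratification step, a service with mostly high-functioning clients would look better than one serving highly dependent clients, regardless of the quality of its care.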

The tool surveys case managers and clients about their views on care and quality of life. How is that information then used?

AS: We’ve got two main uses for that information. The first is research. At the moment, we’re in the midst of a pilot study with the ACCOM. We’ve done two rounds of data collection from about 200 clients. We’re interested in measuring the validity of the tool – how accurately it measures wellbeing and quality of life. Are its measures sensitive to differences within the population?

We’re also interested in reliability, which has to do with consistency of the data. Early results are pleasing.
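
One common way to check that kind of consistency is a test-retest comparison: if the same people answer the same questions in two rounds and their circumstances haven't changed, their scores should correlate strongly. A minimal sketch with invented scores (an illustration of the general technique, not the study's actual analysis):

    import pandas as pd

    # Hypothetical overall scores for the same five clients in two rounds.
    round1 = pd.Series([3.1, 2.4, 3.8, 2.9, 3.3])
    round2 = pd.Series([3.0, 2.6, 3.7, 2.8, 3.4])

    # A correlation near 1.0 suggests the tool measures consistently.
    print(round1.corr(round2))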

The second major use of our data is care planning. During the data collection process, case managers don’t have any access to their clients’ responses; however, once we have both sets of data, with the clients’ permission we let the case managers see the information. It gives the case manager and client the opportunity to compare their responses and discuss any differences. We hope the tool can be useful for measuring outcomes and informing care planning as well.
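
As an illustration of that comparison step, the sketch below flags the areas where a case manager's ratings diverge from the client's own (the domain names and scores are invented; the real ACCOM items aren't reproduced here):

    import pandas as pd

    # Hypothetical paired ratings for one client on a 0 (poor) to 4 (good) scale.
    domains = ["control", "food", "social", "safety", "occupation"]
    client  = pd.Series([3, 4, 1, 4, 2], index=domains)
    manager = pd.Series([3, 4, 3, 4, 3], index=domains)

    # Non-zero gaps become prompts for the care-planning conversation.
    gaps = manager - client
    print(gaps[gaps != 0])  # e.g. "social 2": manager rates it higher than the client does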

You discussed some of the early results of research into the use of the tool. What was revealed about how case managers’ and clients’ views aligned?

AS: Overall, there was general consistency in how case managers and clients responded to the quality-of-life measures that we included, which was pleasing.

What was also pleasing is that case managers and clients both agreed clients overall had relatively high quality of life, which suggests the support they’re receiving is effective.

You mentioned the results were quite positive in terms of how valid or reliable the tool is. What was revealed about that and the ease of use amongst staff and consumers?

AS: We might start with the issue of usability. Overall, the results exceeded our expectations. We thought we’d be doing a pretty good job if we had a tool that could be completed within 10–15 minutes. Looking at the results of two rounds of data from both clients and case managers, we’re seeing that over 90 per cent of our respondents were able to complete it within 10 minutes. Even more pleasingly, about three-quarters of case managers were able to do it within five minutes.

With both clients and case managers, what we tended to see was that in round two, it took them less time to fill out the tool than in round one. On top of that, clients and case managers also reported that the tool was easy to use. That was great.

How might the tool be used in the future?

MF: It’s useful both for service delivery and for research, and indirectly it can inform policy and help underpin an approach to both value and quality. It helps the individual consumer and individual staff members or case managers work out what’s working and what’s not, where the gaps in service provision are, and so forth.

It’s good at that individual consumer level, and at the moment it promotes a real discussion between the case manager and the recipient. As we move beyond care-package recipients to other community-care recipients, it can also be useful in the home support program.

[The tool is] valuable when you aggregate the data, and this is probably its main use. When you aggregate the data and look at how well a service is running, you can benchmark it against the way all other services of a similar kind are going across Australia, or across a state. If you’re doing well, you can perhaps use those results in some form of marketing, or you might like to use that for staff rewards or staff support programs. If there are problems, you use the data as a diagnostic tool to identify the areas to improve and ways to enhance your service provision.
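
A minimal sketch of that aggregate benchmarking (the service names and figures are invented for illustration):

    import pandas as pd

    # Invented mean quality-of-life scores for comparable services.
    scores = pd.Series({"our_service": 3.1, "peer_1": 2.8,
                        "peer_2": 3.3, "peer_3": 2.9})

    peer_mean = scores.drop("our_service").mean()
    gap = scores["our_service"] - peer_mean
    # A positive gap supports the marketing/reward use; a negative gap
    # flags where to start diagnosing service problems.
    print(f"our service {scores['our_service']:.2f} vs peers {peer_mean:.2f} ({gap:+.2f})")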

It’s also useful alongside financial data. You can use it to compare the efficiency and effectiveness of different services against financial benchmarks. How well do the outcomes for clients compare across different service types and different approaches? That will help foster innovation. You can see its use for researchers; these are the sorts of things we need to look at: What are the best approaches to move towards? What new, innovative ways of providing care are there?

We’ve got a measure that’s easy to use, captures consumer views as well as staff views, and does it in a way that is not too interventionist. It’s not going to disrupt service delivery, and it’s not too intrusive for consumers or staff members. One of the problems with much existing research is that it’s costly because it’s quite intrusive. So this will help us examine service provision by drawing on data that’s routinely collected.
