About half of the residential aged care sector participates in the National Quality Indicator Program using MOA Benchmarking's platform. Of those services, roughly half also use our individual-level survey collection tools, while the rest rely on internal systems or third-party tools for primary data collection.

Recently, a few questions came through about refusal rates and how they've been changing over time. Some time ago I reported that refusal rates jumped quite sharply after the first couple of QoL/QCE rounds. Now that Filip Reierson has revisited these data, we can see they've largely plateaued. That pattern is shown on the first page.

For services where we have individual row-level data, meaning they're using MOA's collection tools, refusal rates are noticeably lower than for services using other systems for primary collection. The size of that difference is not trivial; it was certainly bigger than I expected, on the order of 20%. At first glance, there's no obvious reason why simply using a different collection tool should produce materially lower refusal rates. So we dug a bit further.

Where it gets more interesting is when you split the data by how the survey is collected (see page 2). The second chart breaks down proxy responses for services using MOA tools versus non-MOA tools. Services using the MOA tools have a much higher proportion of surveys completed by proxies. Importantly, resident self-completion and interviewer-facilitated responses are virtually the same across both groups. The difference in refusal and non-completion is therefore being driven almost entirely by differences in proxy completion.

That raises an obvious question about what's driving the difference. "Non-MOA" covers a wide range of systems and approaches, so it's unlikely there's a single explanation. But whatever the mix of reasons, proxy participation clearly plays a large role in overall refusal rates. Facilitating proxy responses well therefore has a real impact on overall participation.

And at the scale we're talking about, that matters. These analyses are based on around 100,000 surveys in a single quarter, roughly 50,000 in each group. A difference of 10 percentage points at that volume is about 5,000 surveys; it materially changes the completeness and representativeness of the data.

For me, the takeaway isn't just about refusal rates. It's a reminder that seemingly small design and workflow decisions in data collection systems can have very large downstream effects on what we end up measuring, and who gets counted at all.

If you want to go deeper into the detail, Filip has written up the full analysis here: 🔗 https://lnkd.in/gZm4m2xi
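PS: for anyone wanting to run a similar breakdown on their own row-level data, here's a minimal sketch in pandas. The file and column names (quarterly_surveys.csv, service_group, outcome) are hypothetical placeholders, not MOA's actual schema; this just illustrates the shape of the analysis.

    # Sketch only: assumes one row per survey, with outcome in
    # {"self", "interviewer", "proxy", "refused"} and service_group
    # marking MOA vs non-MOA collection tools.
    import pandas as pd

    surveys = pd.read_csv("quarterly_surveys.csv")

    # Share of each completion mode within each group.
    breakdown = (
        surveys.groupby("service_group")["outcome"]
               .value_counts(normalize=True)
               .unstack()
    )
    print(breakdown)

    # The four shares sum to 1 within each group, so if self-completion
    # and interviewer-facilitated shares match across groups, any gap in
    # refusals must be mirrored by an opposite gap in proxy completion.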