How Did We End Up with “NDIS Independent Assessments 2.0”? (And Why We MUST Fight It Again)
Written by Kate Hoad (Occupational Therapist; Sister of NDIS Participant with multiple and complex disabilities, and Mother to 3 children with disabilities)
When I first heard the rumblings of something called "Support Needs Assessments" in the NDIS during the push to pass the National Disability Insurance Scheme Amendment (Getting the NDIS Back on Track No. 1) Act 2024, my heart rate shot up and it immediately became a little harder to breathe. I felt déjà vu, for I immediately recognised this as a thinly-veiled, renewed push for "independent assessments". We have been here before, participants and providers, fighting tooth and nail in the Coalition years to prevent a mechanism that would strip participants of choice, reduce the complexity of disability to tick-box metrics, blunt the power of lived experience, and funnel more control to the bureaucrats. Against that backdrop, the resurgence of this model deserves not just criticism, but outright resistance.
Below is my take — partly a warning of what's to come, partly a rallying cry — about how we got here, why “version 2.0” is just as dangerous (if not more so), and why the only truly fair way forward lies in a robust, participant‑centred model, not algorithmic shortcuts.
The Original Fight: Why We Fought, and Why We Won (For Now)
The original 2019 Independent Assessment proposal was relatively blunt: force all (or most) NDIS participants to undergo independent assessments administered by contracted allied health professionals, completely unknown to them, then feed those scores into a formula that would generate a "typical support package." The idea was to replace (or at least heavily circumscribe) the existing model, in which participants or their treating professionals submit evidence (reports, assessments, treating histories) for NDIA consideration, which then informs the person sitting in NDIS headquarters who gets the final "yay or nay".
The backlash was immediate, fierce, and justified. Among the key objections:
Dehumanisation and loss of control. Many feared a system that treats disability as a static, measurable function rather than a lived, fluctuating reality. A 2–3 hour “interview” by a stranger cannot capture context, variations over time, or hidden complexities. (1) (2)
Resistance from key voices. Bruce Bonyhady, one of the architects of the NDIS, publicly warned against “robo‑planning” and said the changes undermined the scheme’s founding values. (4)
Algorithmic “black box” risk. Once you reduce supports to numerics and feed them into packages by typology, you lose flexibility and nuance. People become “personas”, reduced to categories, rather than individuals, each with their own specific human needs and desires.
Political pushback and intergovernmental breakdown. Disability Ministers across states refused to sign on; ultimately, the proposal was shelved, at least in that form. (5)
Lack of transparency, consultation, and evidence. The NDIA’s process was heavily criticised in the Joint Standing Committee inquiry for being opaque, rushed, and weak in community consultation. (3)
When Linda Reynolds, who was the Minister at the time, announced that the "independent assessments in their current form will not proceed," it marked a victory, albeit an incomplete one. The phrase "in their current form" implies much without explicitly stating it. The key takeaway from this part of the history is that we demonstrated it is possible to stop such a reckless overhaul, but continued vigilance was necessary.
But the Problem Was Never Solved — Only Paused
Return to the present day (2024/2025) and it was dangerous to think the matter closed. Those words, "in their current form", haunt us. Back to the National Disability Insurance Scheme Amendment (Getting the NDIS Back on Track No. 1) Act 2024, enacted on 3 October 2024.

Support Needs Assessments were announced as part of a tranche of legislative changes pushed through parliament in 2024.
The fundamental tensions that led to the suggestion of Independent Assessments remained: the NDIA complains of too many reports (especially long, expert reports), difficulties in reading or processing them, bias, inconsistency, and cost. Indeed, in Senate Estimates (27 February 2025), the then NDIS CEO reportedly said, "To be really frank about it, my staff can't read 280 page reports that they get".
Firstly, I'm not aware of any therapist submitting 280-page reports to the NDIS. Most Occupational Therapists I know would consider a 30 to 40-page report to be more than sufficient. Secondly, isn't that kind of their job? Isn't it their responsibility to review the provided evidence to determine the reasonable and necessary supports required for the individual?
In addressing the NDIA's claim, I will be the first to admit that my initial response was: Well, if you wanted more concise reports, you should have provided therapists with clearer guidance on what is required in reports.
This is something that has endlessly frustrated therapists over the history of the National Disability Insurance Scheme (NDIS). As therapists, we came from systems where the guidelines and templates were very clear (e.g., MASS/CAETI) to a system of ambiguity and uncertainty, leaving us grappling for clarity in our professional practice. It's a landscape marked by vague directives that have posed significant challenges for many practitioners, who have simply done their best with what they've been given. Then, when the NDIA keeps saying "not enough evidence for X or Y", we of course try to give them more evidence so that our clients can get what they need.
My thoughts now, however, are more along the lines that it is a dangerously incomplete idea:
Indeed, the NDIA should offer more precise and stringent templates, expectations, and formats to ensure reports are functional. However, this does not justify replacing the reports from a participant's own clinicians with a tick-and-flick exercise run by an outsider with no allied health training.
Furthermore, when the NDIA claims to be "overwhelmed" by the amount and variety of submitted evidence, it suggests a preference for reducing the complexity of evidence rather than dedicating sufficient resources to meet its obligations effectively. This is an issue of politics and resources, not an unavoidable technical problem.
What “Independent Assessments 2.0” Looks Like — And Why It’s Worse
The renewed model, now called "Support Needs Assessment", has several alarming features, in striking parallel to the original, halted version:
Standardised assessment protocols. (Note: exactly what will be used for the under-16 cohort is not yet known; for now, we are talking only about the over-16 cohort.) The proposed tool for the over-16 cohort is the I‑CAN version 6 (Instrument for Classification and Assessment of Support Needs). The I-CAN is meant to generate "support needs" scores that will then ultimately feed into formulaic bundles. (6) It would appear, however, that the government chose it without the consultation and co-design it promised people with disability.
Another of the more pressing concerns with the new assessment tools, such as I‑CAN, is the emergence of a genuine "black box" risk. In these systems, personal data—like age, type of disability, and location—are entered, and a decision or support package is spat out, but the logic underpinning these outcomes remains opaque (hence the 'Black Box').
This complete lack of transparency means individuals and their families are often left in the dark about how or why a particular decision was made, making it difficult to challenge or appeal if the outcome seems unfair or incorrect. Furthermore, any errors or biases embedded in the algorithm can persist undetected, undermining trust in the process.
The issue is compounded when scoring tools directly result in individuals being placed into standardised support packages based on their scores, rather than acknowledging their unique needs and goals. For example, if someone receives a score of '6' and is assigned to Package B, it suggests that their needs are deemed average for that score, and they are provided with a predetermined set of services. This method risks overlooking the diversity and complexity of individual situations, especially for those whose needs do not align neatly with established categories.
The shift to scoring systems like I-CAN threatens to reduce individual needs into oversimplified "typical packages", employing hidden logic that exerts downward pressure on funding and care for those who need it most. This is called shrinkage pressure, where the system is intentionally designed to compress support outcomes towards the average. By channelling individuals into standardised service bundles, the system effectively overlooks outliers (particularly those with above-average needs). Consequently, individuals requiring more support find their needs forcibly adjusted downwards to align with what the system considers typical. In practice, the average becomes the baseline, leaving those with complex or fluctuating needs underfunded, and therefore at risk.
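To make the "score of 6 goes to Package B" concern concrete, here is a minimal, purely hypothetical sketch in Python of how a score-to-package pipeline flattens very different situations into the same bundle. This does not reflect the actual I-CAN scoring logic (which has not been made public); the thresholds, package names, and example people are all invented for illustration.

```python
# Hypothetical illustration only: the real I-CAN algorithm is not public.
# All thresholds and package names below are invented.

def assign_package(score: int) -> str:
    """Bucket a single support-needs score into a standardised package."""
    if score <= 3:
        return "Package A"
    elif score <= 6:
        return "Package B"
    else:
        return "Package C"

# Two people in very different circumstances can land on the same score...
people = [
    ("episodic condition, assessed on a good day", 6),
    ("stable, moderate support needs", 6),
]

# ...and therefore the identical, averaged bundle: the "shrinkage" effect,
# where everything between the thresholds collapses to one typical package.
for description, score in people:
    print(description, "->", assign_package(score))
```

The point of the sketch is that once the bucket boundaries are fixed, all the context that produced the score (a good day, masking, existing informal supports) is discarded, and anyone above the midpoint of a bucket is effectively funded below their actual need.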
A further glaring problem is that there is no evidence I‑CAN has been validated for certain NDIS populations (e.g. Autistic folk). (7).
Downplaying or excluding treating‑professional reports.
People with Disability Australia (PWDA) has flagged that the I‑CAN tool process must “allow participants to submit reports from their own therapists, clinicians, specialists,” and guarantee “meaningful rights of review at every stage.”
Yet funding periods, also introduced in the October 2024 changes, make it near-impossible for many participants to access therapist-provided reports, as most plan periods simply do not contain enough funding to allow this. This works alongside other mechanisms in recently discussed policy drafts and guidelines: talk of making therapy reports "stated supports" (for which the NDIA dictates the terms) and of limiting participants' ability to commission independent expert reports. The needs assessment provision in s32L of the NDIS Act states that: (2) the tool (e.g. tick-box questionnaires) MUST be used; (4a) the reports the NDIA requests MUST be used; (4b) information held on the participant's file MAY be used.
In effect, it would appear that the NDIA desires less external evidence, not better external evidence.
Singular and/or time‑bounded interviews. The assessment is still likely to be limited to a 2–3 hour session or questionnaire, which is particularly ill‑suited for many under the scheme. See the examples below:
For someone with intellectual disability or limited expressive ability, a structured interview may fail to pick up the "hidden effort", strategies, supports, cues, or scaffolding they need daily.
For psychosocial or episodic conditions (bipolar, chronic pain, PTSD, etc.), capacity fluctuates. A single session may capture a "good day" and underrepresent needs.
For people with communication, cognitive, or sensory impairments, interacting with a generic assessor unfamiliar with adaptations, AAC, or neurodiversity may lead to misinterpretation.
Comorbid complexity: many participants have overlapping physical, cognitive, sensory, and emotional factors. A reductive tool may undercount cross-domain impacts.
Environmental, assistive technology, and compensatory strategies matter. Someone may appear less "impaired" if they have supports, but removing supports or not factoring them in may penalise them.
In short, many participants' needs cannot be captured well within a system that assumes a fixed scale of "functional capacity".
Poor oversight, restricted appeals, and “no challenge” to the assessment itself. Some critics point out that the support needs assessment outcome cannot be appealed directly — only the subsequent plan or access decision can be contested.
Assessor capacity and experience problems. As previously mentioned, the Independent Assessment pilots showed assessors rated their own training poorly, and participants expressed doubts about whether assessors knew their disability type well. This is only going to be far worse with non-allied health professionals leading Support Needs Assessments.
Re‑engineering the participant’s power. The new model shifts power further toward the NDIA than it already was, and away from the participant and their team of treating professionals, arguably violating the original vision of choice, control and co‑design.
Together, these problems magnify the risk to the vulnerable. Version 1.0 was bad; version 2.0 threatens to be a more polished but more insidious version.

Frankly, many OT assessments (or interprofessional assessments) that we already do are richer in contextual reasoning, long‑term observations, qualitative narrative, and goal alignment than what such an I‑CAN interview can yield. We know this from lived experience in the provider world every day.
The Case for Participants' Providers Continuing to Do the Assessment Work
If you want a fair, nuanced, robust evidence base from which to determine appropriate and needs-based funding, here is why the participant’s provider / treating teams remain the ideal source:
Deep contextual knowledge. The provider knows the history, the day-to-day patterns, the risks, the compensations, and lived priorities.
Continuity. Providers see participants over time. That longitudinal perspective is crucial to credible assessment.
Cross-disciplinary insight. A good provider team can coordinate OT, physio, speech, psychology, etc., to each provide input on their own area of practice, further triangulating assessment complexity rather than depending on a single, watered-down generalist assessor.
Flexible reporting structure. Providers can tailor reports to highlight what matters to the participant (environmental adjustments, subtle barriers, non-obvious burdens) which a generic form will most certainly not prompt for. There are reasons why allied health professionals are classified into, and must work under, the professional codes of conduct for their specific disciplines. It is why I, as an OT, am NOT best placed to comment on someone's dysphagia. The risks here are scary.
Accountability and challenge. Reports by providers can be critiqued, interrogated, cross-referenced, and questioned; something far harder when assessments are a black box.
Responsibility and incentive. Providers have a reputational and professional stake in just and accurate outcomes. They are more likely to engage with nuance than a "stranger for hire".
So ultimately, if the NDIA complains its staff are overwhelmed by varying reports, that would seem to be a staffing, capacity, and design problem, not an inherent fatal flaw in participant-driven reports. The solution should not be to eliminate rich reports; the solution is better systems.
If the NDIA wants consistency, it can publish clearer guidance on what it expects in a report (domains, metrics, summarised functional statements, goal-mapping) before trying to eliminate those reports altogether. It can construct appropriately detailed but malleable standardised templates and require structured executive summaries.
But above all, the NDIA would be better placed to allocate sufficient numbers of well-trained staff to accurately read and analyse the complexity of the lives of the amazing people for whom they are deciding potentially life-altering funding.
References: (1) https://theconversation.com/dehumanising-and-a-nightmare-why-disability-groups-want-ndis-independent-assessments-scrapped-156941