Update to our current scoring approach

Page last updated: 19 May 2025

Published: 19 May 2025

Since 2 December 2024, we no longer score and report findings for registered health and care providers at evidence category level – we now only score at quality statement level. There is no change to the way we aggregate quality statement scores to reach a rating for a key question. We still consider all the relevant evidence categories when assessing a quality statement to enable us to reach a judgement. This supports consistency in our approach and our judgements, and ensures evidence is corroborated. We then use professional judgement to score the quality statement, based on the evidence we have assessed and the scoring criteria.

However, the technology we use has not developed at the same pace as our methods. Currently, our IT system still requires us to score at evidence category level to enable us to give scores at quality statement level. Until this can be changed, we have implemented a workaround process, which we refer to internally as 'the hybrid approach'. The way we carry out the assessment itself is the same, but how we record the information in our system and how the draft report is presented are different.

Producing scores

Even though we are no longer scoring at evidence category level, our IT system currently requires inspectors to assign a nominal score to one evidence category for each quality statement. This score will be the same as the quality statement score, but it is not a judgement and has no influence on the quality statement score. It is only a workaround in our IT system and is essential to enable us to progress through the system to produce the report.

Moderation

When we previously scored at evidence category level, we sometimes ‘moderated’ a quality statement score. We did this where the evidence category scores produced a quality statement score that did not reflect our professional judgement. This happened because our scoring model treated all evidence categories as equal when calculating the quality statement score, but it is not always appropriate to weight all evidence as equal.

In those cases, we moderated our judgement at the quality statement level to reflect the impact of our findings across the evidence categories. As we now make our judgement at quality statement level, there is no need for a process to moderate quality statement scores to reflect professional judgement.

However, as our system still requires the inspector to insert nominal evidence category scores (as described above), the draft inspection report may indicate that ‘moderation’ has taken place. Entering a score against only one evidence category does not always generate the required score at quality statement level. One reason for this is that the IT system requires evidence category scores from previous assessments to be pulled through, which may influence the quality statement score. We therefore often still need to use the ‘moderation’ functionality to amend the quality statement score to ensure it is correct in the system and the report.

In these cases, the draft report will include a standard paragraph to explain that the quality statement scores have been moderated. This wording only appears in the draft report – there will be no reference to moderation in the final, published report.

Such references are the result of a technical lag in the transition from our previous approach (which we are working to address) and are purely for the purpose of the system workaround.

The only scenario where we do not use moderation as a workaround step is where there is no change to the quality statement score. In this case, the moderation text will not appear against the relevant quality statement.

Producing the report

To produce a report using our IT system, we currently need to use the workaround approach. We select one evidence category for each quality statement and give it a score, which should be the same as the quality statement score. This step enables us to give a score for the quality statement based on all the evidence reviewed. As this is simply a current workaround in our IT system, that score may not reflect our findings about the specific evidence category entered in the system.

Important: Although the heading for an evidence category and a score will show in the draft report that we send you, this is just a system workaround and you should disregard it. The title and score for evidence categories will not appear in the final published report, as they are not relevant to how we assigned a score to the quality statement and we do not use these evidence category scores for any internal purpose.

Character limitation in the factual accuracy portal

The move to scoring at quality statement level has an impact when checking the factual accuracy of the draft report. In the factual accuracy portal, there is only a single text field to submit comments against each quality statement. Text fields in the portal are limited to 2,000 characters (including spaces).

If you have reached the character limit, you can attach a supplementary document with additional comments on factual accuracy to the portal. If you do this, you need to enter some text into each relevant text field within the portal (even if it is just a reference to the attached document). This is important to ensure the system opens the fields for editing. Read more about the factual accuracy check.
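If you draft comments outside the portal, a quick way to check whether they will fit is to count characters including spaces before pasting. The sketch below illustrates this, assuming only what the guidance states (a 2,000-character limit counting spaces); the function name and constant are our own illustration, not part of the portal.

```python
# Illustrative check that a draft comment fits a portal text field.
# The 2,000-character limit (including spaces) comes from the guidance;
# PORTAL_LIMIT and fits_portal_field are hypothetical names for this sketch.
PORTAL_LIMIT = 2000

def fits_portal_field(comment: str, limit: int = PORTAL_LIMIT) -> bool:
    """Return True if the comment fits in a single portal text field.

    len() counts every character, spaces included, which matches how
    the limit is described.
    """
    return len(comment) <= limit

# A comment at exactly the limit fits; one character over does not.
print(fits_portal_field("x" * 2000))  # True
print(fits_portal_field("x" * 2001))  # False
```

If a comment exceeds the limit, the remainder can go in the supplementary document described above, with a short pointer to it entered in the portal field.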