4 May 2026 · CareTime
Care England has warned that AI-generated CQC policies and procedures put inspection ratings at risk. The concern is not AI itself — it is the use of generative AI to produce documentation that is meant to reflect a service's real, dated, accountable practice. Inspectors recognise generic AI-written policies on sight: they are too general, sometimes cite outdated legislation, use passive voice, and rarely identify named responsibilities. Care homes that rely on generative AI to write their compliance documents are increasingly flagged as Well-Led concerns.
This is a different question from whether technology has a place in CQC evidence at all. Tools that record what actually happened — call logs, dated incident reports, real Morning Briefs — produce the kind of factual, time-stamped evidence inspectors trust. Tools that generate plausible-sounding text about what a service supposedly does are now under explicit regulator-adjacent scrutiny.
If you are using AI in your service in 2026, the safer side of the line is: AI to record and surface real events, not AI to write policies on your behalf.
Care England published guidance noting that a growing number of providers are using generative AI tools — ChatGPT, Claude, off-the-shelf "policy generator" products — to draft core CQC documentation including policies, procedures, and audits ahead of inspection. The trade association's position is that this carries significant risks to a CQC rating. Their stated concerns:
Inspectors are increasingly reading these signals and downgrading services accordingly.
Three things are converging in 2026:
A service that walks into a 2026 inspection with generic, AI-drafted policies and no factual record of what actually happened day to day is exposed on both sides.
The Well-Led quality statement asks providers to act on the best information about risk, performance, and outcomes. Inspectors look for:
Generative AI cannot supply the first three. It can occasionally help draft the fourth, but only after the underlying facts have been recorded somewhere by the service itself.
CareTime's Silent Guard is built deliberately around the recording side of the line. It does not write policies. It does not generate retrospective narratives. It does the following, and only the following:
Every line of that record is a real event with a real timestamp. If a CQC inspector asks "How does the home know it is responding to family contact appropriately?", a manager can show the actual call log and the actual brief. There is no text written by a model. There is no policy claim that cannot be backed.
This is the use of AI a regulator can read without flinching. It is also the use of AI a Well-Led inspection can credit, because it shows ongoing oversight and a genuine information loop.
If your home has used ChatGPT, Claude, or a "policy generator" product to write CQC documents in the last 12 months, the priority before a 2026 inspection is:
Is CareTime an AI policy writer? No. CareTime records real events on your phones and produces factual, dated logs and summaries. It does not draft policies, procedures, or audit narratives.
Can Silent Guard's call logs be used as CQC evidence? Yes. Call logs are dated, factual, and produced by a defined system, which is exactly the kind of evidence inspectors look for under Well-Led oversight and the Responsive "listening to and responding to feedback" KLOE.
Does Care England's warning mean AI cannot be used at all in care? No. Care England's specific concern is generative AI being used to write compliance documentation. Tools that record real events, surface real data, or assist a person reviewing real evidence are a different category and not the subject of the warning.
When does the new CQC adult social care framework go live? Consultation closes 12 June 2026. Final framework expected summer 2026. Implementation toward the end of 2026. Most services continue under the current Single Assessment Framework until then.
If you want a factual record of every call into your home before your next inspection, Silent Guard's £49, 30-day pilot is the simplest way to start. No phone system changes. No AI-written policies. Just a daily, dated record of what actually happened.
CareTime's Silent Guard is available now for a 30-day pilot: £49, a one-page pilot letter, and exit by reply email.
Join the 30-Day Pilot