Monitoring UK bank accounts for benefits fraud would be ‘huge blow to privacy’
Ministers have been warned not to resurrect Conservative plans to tackle welfare fraud by launching mass algorithmic surveillance of bank accounts.
Disability rights, poverty, pensioner and privacy groups fear the government is poised to deliver a “snooper’s charter” by using automation, and possibly artificial intelligence, to crack down on benefit cheats and mistakes, which cost £10bn a year. They fear it will mean a “huge blow for privacy in the UK”.
In a letter this week to Liz Kendall, the secretary of state for work and pensions, they warned that requiring banks to scan accounts for suspicious behaviour would be a severe “intrusion into the nation’s privacy, with potentially punitive consequences for vulnerable individuals”.
Keir Starmer last week announced a fraud, error and debt bill to make banks share data on account holders that “may show indications of potential benefit overpayments”.
Details are yet to be published. But the government is concerned that welfare fraud is becoming more sophisticated and that, without new legal powers, it cannot keep pace with the changing nature of fraud or tackle it robustly enough. It believes requiring banks to share claimants’ data with the Department for Work and Pensions to help it tackle benefit fraud could save £1.6bn over five years.
The previous Conservative bill did not make it through parliament before the July general election. Aimed at increasing public and business confidence in AI tools, it was welcomed by some, including the technology industry and the information commissioner.
It also sought to facilitate the flow and use of personal data for law enforcement and national security purposes, but aspects of the bill that focused on privacy rights and automated decision-making were strongly contested.
Labour’s new bill could compel banks and other third parties to trawl the accounts of the entire population to target welfare recipients for monitoring. By the government’s own estimate, it would stop only about 3% of the total amount lost to fraud and error.
Such mass financial surveillance powers would be “disproportionate”, according to the signatories of the letter to Kendall, which included leaders of Disability Rights UK, Age UK, Privacy International, Child Poverty Action Group and Big Brother Watch.
“Imposing suspicionless algorithmic surveillance on the entire public has the makings of a Horizon-style scandal – with vulnerable people most likely to bear the brunt when these systems go wrong,” they wrote to Kendall, referring to the Post Office’s Horizon accounting software, which led to post office operators being wrongly convicted and imprisoned. “Pensioners, disabled people, and carers shouldn’t have to live in fear of the government prying into their finances.”
The warning comes amid the widening use of artificial intelligence across government departments, with about 70% of them estimated to be piloting or planning to use AI, according to the National Audit Office, the spending watchdog.
Welfare algorithms are far from faultless. It emerged in the summer that DWP software had wrongly flagged more than 200,000 people for investigation into suspected fraud and error.
The Department for Work and Pensions has been approached for comment.