UK Lords push bill to tame rogue algorithms in public sector

Tan KW
Publish date: Tue, 10 Sep 2024, 06:09 PM

The UK Parliament's second chamber is set to launch a bill designed to regulate the use of algorithms and automated decision-making by public bodies.

There are currently no legal obligations on public authorities to be transparent about when and how they use algorithms to automate decision-making without human intervention or oversight, according to Lord Clement-Jones, Liberal Democrat peer and former chair of the Lords Select Committee on AI, who is proposing the bill.

"The Post Office/Horizon scandal demonstrates the painful human cost when there aren't proper checks in place to challenge these automated systems," he said. The introduction of an EPoS and accounting system in the Post Office led to hundreds of staff being erroneously prosecuted for fraud and false accounting.

"Too often in the UK we legislate when the damage has already been done. We need to be proactive, not reactive, when it comes to protecting citizens and their interactions with new technologies. We need to be ahead of the game when it comes to regulating AI. We simply cannot risk another Horizon scandal," Lord Clement-Jones said in a statement.

The UK government has been hovering around a position on automated decision-making for some time, without actually landing on one.

In December 2021, the government's Centre for Data Ethics and Innovation (CDEI) published a "roadmap" designed to create an AI assurance industry to support the introduction of automated analysis, decision making, and processes.

Elsewhere in Whitehall, the Central Digital and Data Office drew up an algorithmic transparency standard for government departments and public sector bodies. The standard, produced with the CDEI, would be piloted by several public sector organizations and developed further based on feedback, the office said.

In September 2021, the Department for Digital, Culture, Media & Sport (DCMS) launched a consultation suggesting it could water down individuals' rights, afforded under the EU's GDPR (which the UK still uses), to challenge decisions made about them by artificial intelligence. The paper on updating data protection laws after leaving the EU argued that the need "to provide human review" of AI decisions "may, in future, not be practicable or proportionate."

The proposals resulted in the Data Protection and Digital Information Bill, which has passed its committee and report stages in the House of Commons, but has yet to pass through the House of Lords. Since then, the UK changed government with the election of Labour in July.

Under Lord Clement-Jones' proposals, if an automated decision affects social security benefits, immigration, or similar, a citizen would have the right to access information on why the decision was made so they could challenge it. The proposed bill would require public authorities to publish impact assessments of any automated or AI algorithms they use to make decisions and to keep a transparency register of automated decision-making. It would also oblige the government to provide an independent dispute resolution service for people who want to challenge decisions made about them.

In its pre-election manifesto, Labour said it would ensure the safe development and use of AI models by introducing binding regulations on the handful of companies developing the most powerful AI models and by banning the creation of sexually explicit deepfakes.

Law firm Clarkslegal said reform to UK GDPR - the UK's adoption of EU data protection law after Brexit - was unlikely under Labour. However, it said legislation to regulate AI was more likely.

"In their manifesto, they have pledged to create a new Regulatory Innovation Office which is intended to help update regulation and co-ordinate this across sectors," the law firm said in a blog. "Further, the manifesto proposes to introduce binding regulation intended to ensure 'the safe development and use of AI models'. They have said that this will apply to the handful of companies developing the most powerful AI models. The manifesto is silent on when regulations will be enacted which suggests that this is not something to be expected in the first few months."

In a statement sent to The Register, the Department for Science, Innovation & Technology said:

“Technology has huge potential to improve public services, but we know it's important to maintain the right safeguards including, where appropriate, human oversight and other forms of governance.

“We will respond to Lord Clement-Jones' Private Member’s Bill in due course.” ®

 

https://www.theregister.com//2024/09/10/uk_lords_algorithms_bill/
