Identifying benefits cheats or blatant discrimination?

By: Emma Gibson, WLiAI Steering Committee

 

When disabled people living in Manchester noticed that many in their community were receiving letters through the door saying they were being investigated for benefit fraud, they smelt a rat.

And now the Greater Manchester Coalition of Disabled People, together with the campaigning group Foxglove, has launched a legal challenge against the Department for Work and Pensions algorithm responsible for sending those letters. They claim that disabled people are being unfairly flagged as high risk of benefit fraud simply because they are registered as disabled.

Why does this matter? Disabled people face many barriers to paid employment and have higher living costs, so they are more likely to live in poverty. When an investigation for benefit fraud starts, it can result in all payments being suspended, leaving people unable to pay for essentials like food, energy and rent.

And this is not the first time that this has happened. In 2020, Dutch campaigners won a court case which forced their government to back-track on using a database, algorithm and associated surveillance to target people suspected of benefit fraud. The Dutch court said that it could not understand the ‘decision tree’ behind the SyRI algorithm, and that the subjects of the fraud investigations were unable to protect themselves or hold the power of the algorithm to account. Under Article 8 of the European Convention on Human Rights, which protects private and family life from interference by public authorities, the court ordered an immediate halt to SyRI. FOI requests revealed that SyRI had been deployed primarily in lower-income areas. Are you seeing a pattern here?

The DWP algorithm is similarly accused of a lack of transparency, and of operating like a ‘black box’. In Foxglove’s legal letter to the DWP they argue that:

  • That the Government needs to release information about how the algorithm works, since both the European Convention on Human Rights and the GDPR give people a right to transparency.

  • That under Article 22 of the GDPR you can object to a solely automated decision being made about you, and that you have a right to human intervention in that decision.

  • And finally, that public bodies have an obligation to apply their Public Sector Equality Duty, to advance and protect the opportunities of citizens with protected characteristics. Foxglove therefore wants to know whether a Data Protection Impact Assessment has been carried out on the algorithm.

Algorithms are being used with increasing frequency by public bodies to make critical decisions about people’s lives, with no guarantees that our human biases are not being incorporated into this automated decision making.
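To make that concrete, here is a minimal, hypothetical sketch in Python. It is not the DWP system, and every name and number in it is an invented assumption; it simply shows how a scoring rule that inherits a skew from historical investigation data ends up flagging one group far more often than another, which is exactly the kind of pattern an impact assessment should surface.

```python
# Hypothetical sketch: how a bias inherited from skewed historical data
# gets reproduced by an automated fraud-flagging rule. All names and
# numbers are illustrative assumptions, not the real DWP algorithm.

from dataclasses import dataclass
import random

@dataclass
class Claimant:
    disabled: bool
    flagged: bool = False

def risk_score(c: Claimant) -> float:
    base = random.random()  # stand-in for legitimate risk signals
    # Assumed bias: past investigations targeted disabled claimants more
    # often, so the learned score is inflated for that group.
    return base + (0.3 if c.disabled else 0.0)

def flag_for_investigation(c: Claimant, threshold: float = 0.8) -> bool:
    return risk_score(c) > threshold

random.seed(0)
population = [Claimant(disabled=(i % 5 == 0)) for i in range(10_000)]
for c in population:
    c.flagged = flag_for_investigation(c)

def flag_rate(group) -> float:
    group = list(group)
    return sum(c.flagged for c in group) / len(group)

disabled_rate = flag_rate(c for c in population if c.disabled)
other_rate = flag_rate(c for c in population if not c.disabled)

# A ratio far from 1 indicates a disparate impact on one group.
print(f"Flag rate, disabled claimants:     {disabled_rate:.1%}")
print(f"Flag rate, non-disabled claimants: {other_rate:.1%}")
print(f"Disparate impact ratio: {disabled_rate / other_rate:.2f}")
```

In this toy setup, disabled claimants are flagged at roughly two and a half times the rate of everyone else, even though the underlying "risk signals" are random; the disparity comes entirely from the assumed skew baked into the score.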

 

Algorithms are already used to decide where to deploy police and to assess whether someone is at risk of re-offending. In the UK, the Government is currently looking at allowing private companies to use AI to analyse health data collected by the National Health Service. Campaigners argue that these algorithms often discriminate against ethnic minorities and immigrant communities.

In Women Leading in AI’s 2019 report, 10 Principles of Responsible AI, we called for the introduction of a new ‘certificate of fairness for AI systems’ to show that they had been checked for bias and any discriminatory impacts at the design phase. We also called for mandatory Algorithmic Impact Assessments for organisations deploying AI systems that have a significant effect on individuals. And we argued that there should be a mandatory requirement for public sector organisations using AI to inform citizens that decisions are being made by machines, explain how the decision is reached, and set out what would need to change for individuals to get a different outcome.

What’s critical is that all algorithms used by the public sector are transparent; can be independently scrutinised; can be held to account by the people affected by them; and can be challenged when there is no ‘human in the loop’.

We need to make sure that our Governments are not out-sourcing decisions about our lives to unaccountable and biased technology, which will make life harder for those who already have the odds stacked against them.
