
More and more government decisions are partly based on data
The government is using algorithms on an ever-larger scale, but supervision needs to improve, writes the Court of Audit. Citizens' privacy must also be better safeguarded, and the government must keep a closer eye on the algorithms it uses.
Algorithms are used, among other things, to send letters automatically, but also to estimate who is entitled to benefits. In all cases, a civil servant is also involved.
Predictive algorithms
The NOS previously reported that the government is deploying smart algorithms on a large scale. The Court of Audit's investigation confirms this: the Tax and Customs Administration, for example, uses algorithms that predict which tax returns may be suspicious.
Although an algorithm does not decide on its own that, for example, a payment must be stopped, that does not mean algorithms are risk-free. There is a chance, for instance, that they will discriminate unintentionally, the Court of Audit writes.
Critics have long warned against algorithms that operate fully autonomously and make decisions without human intervention. But that currently happens only in simple processes, such as sending letters, the investigation shows.
Government interests
According to the Court of Audit, the government gives too much weight to its own interests when using algorithms, and too little to the interests of individual citizens. Not only is privacy sometimes poorly protected; government bodies also give too little thought to, for example, the ethics of algorithms.
In the childcare benefits affair, algorithms also came under scrutiny: they are said to have played a role in singling out, for example, people with dual nationality. More attention should be paid to such risks.
The Court of Audit does see one advantage: the algorithms the government uses can be analyzed. Nowhere is there a 'black box' that operates independently and that no one understands, the Court of Audit says. Critics had feared this.