The UK government is using AI in all areas, and it is going about as well as you might expect.

The UK government is using deep learning algorithms, collectively referred to as AI, to help make decisions in a variety of areas, including welfare benefit claims, fraud detection, and even passport scanning. As a recent investigation suggests, this is already having major repercussions for everyone concerned.

If you are wondering what kind of AI is being discussed here, consider image upscaling. The systems employed by the government are not, in principle, that dissimilar to the one Nvidia developed for its DLSS super-resolution technology.

DLSS's model is trained on millions of very high resolution frames from hundreds of games. Given a low-resolution frame, the algorithm can therefore predict what that frame is most likely to look like once upscaled.

DLSS uses fairly standard routines to make the jump from, say, 1080p to 4K, then runs an AI algorithm to correct errors in the resulting image. However, as with all such systems, the quality of the final result depends largely on what is fed into the algorithm and on the data set the model was trained against.
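To make that two-stage idea concrete, here is a minimal sketch in Python: a conventional bicubic resize to the target resolution, followed by a small neural network that predicts a residual correction. The `ResidualCorrector` network, its architecture, and its (untrained) weights are hypothetical stand-ins for illustration; this is not Nvidia's actual DLSS model.

```python
# Minimal sketch of DLSS-style upscaling: a standard (non-AI) resize,
# then a learned residual correction. All names and weights here are
# illustrative assumptions, not Nvidia's real pipeline.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ResidualCorrector(nn.Module):
    """Tiny CNN that learns to clean up artefacts left by naive upscaling."""
    def __init__(self, channels: int = 3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(32, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(32, channels, kernel_size=3, padding=1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Predict a correction and add it to the naive upscale.
        return x + self.net(x)

def upscale(frame: torch.Tensor, scale: int, model: nn.Module) -> torch.Tensor:
    # Stage 1: standard interpolation from, e.g., 1080p to 4K.
    coarse = F.interpolate(frame, scale_factor=scale, mode="bicubic",
                           align_corners=False)
    # Stage 2: the trained model fixes errors the interpolation introduced.
    return model(coarse)

# Usage: a dummy 1080p RGB frame upscaled 2x to 4K dimensions.
model = ResidualCorrector()  # in practice, loaded with trained weights
frame_1080p = torch.rand(1, 3, 1080, 1920)
frame_4k = upscale(frame_1080p, scale=2, model=model)
print(frame_4k.shape)  # torch.Size([1, 3, 2160, 3840])
```

The point of the split is that the interpolation does the cheap geometric work, while the learned stage only repairs the artefacts it leaves behind, and what it repairs them into is whatever its training frames looked like. That is exactly why the training data matters so much.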

The Guardian's investigation into the UK government's use of AI highlights what happens when both of these aspects are problematic. For example, the newspaper reported that the Home Office was using AI to read passports at airports.

According to the Guardian, the Home Office's own internal assessment found that the algorithm flags a disproportionate number of people from Albania, Greece, Romania, and Bulgaria. If the model was trained on data that already over-represented certain nationalities, the AI will make similarly biased calls.
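The mechanism is easy to demonstrate. The sketch below trains an ordinary classifier on synthetic data in which historical decisions deliberately over-flagged one group; everything here, including the data and the choice of a logistic regression model, is an invented illustration, not anything resembling the Home Office's actual system.

```python
# Illustration of bias propagation: a model trained on labels that
# over-flagged one group reproduces that skew on new data. All data
# below is synthetic and the scenario is hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Feature 0: membership of a particular group (about 10% of people).
group = rng.random(n) < 0.10
# Feature 1: a genuinely predictive signal, independent of group.
signal = rng.normal(size=n)

# Historical labels: mostly driven by the signal, but past decisions
# also flagged the group far more often -- the bias we bake in.
label = (signal + 2.0 * group + rng.normal(scale=0.5, size=n)) > 1.5

X = np.column_stack([group.astype(float), signal])
model = LogisticRegression().fit(X, label)

# Score two fresh populations with identical 'signal' distributions.
test_group = np.column_stack([np.ones(1000), rng.normal(size=1000)])
test_other = np.column_stack([np.zeros(1000), rng.normal(size=1000)])
print(f"flag rate, over-flagged group: {model.predict(test_group).mean():.1%}")
print(f"flag rate, everyone else:      {model.predict(test_other).mean():.1%}")
```

Even though the genuinely predictive signal is identically distributed in both test populations, the model flags the over-represented group several times more often, because that is exactly what its training labels taught it to do.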

It is not uncommon to hear in the news that government agencies have made serious mistakes by over-relying on AI. The hype surrounding the potential of artificial intelligence has led to tools like ChatGPT being treated as among the most important inventions of the age, yet they can easily produce highly questionable, even shocking, results.

The UK government naturally defends its use of AI, stating that in the case of welfare benefit claims the final decision is made by a person. But is that person making a decision based on the output of an algorithm, or do they go back and check everything again? If the latter, then the use of AI is a complete waste of time and money.

But if it is the former, and the AI was trained on biased information, then the final decision made by a flesh-and-blood person will also be biased. Even seemingly innocuous use cases, such as identifying which people are most at risk during a pandemic, are affected by this.

Thus, no government is going to turn its back on deep learning now, because it has the potential to be used for all kinds of things, both good and bad. What is needed is more transparency around the algorithms in use, along with access to the code and data sets for independent experts, to ensure the systems are being used fairly and appropriately.

In the UK, such a move has already been made, but because "completion of an algorithm transparency report for all algorithm tools is encouraged" rather than required, there is little incentive or legal pressure for any organisation to actually do so.

This may change in time, but until then we would like to see an extensive training programme for all government employees who use AI in their jobs, so that they understand not just how to use it but also its limitations, and can question its algorithmic output.

We are all biased in one way or another, and we must remember that AI is no different.
