The Ethics of Using AI
Many of us have seen what Artificial Intelligence is capable of. It is beginning to transform the way data is processed and analysed, paving the way for an increasingly automated future. Whilst this has its benefits, there are a number of concerns from both experts and the general public surrounding the negative impacts AI could have as it becomes more integrated into society.
It’s no secret that automation can take jobs. Companies are constantly looking for ways to reduce costs, and automating certain processes is becoming an ever more viable option. Unsurprisingly, the manufacturing industry has always been ahead of the game in this respect, with the first industrial robot being mass-produced in 1961.
However, as AI has advanced, industries that previously couldn’t be automated are now seeing new technology capable of doing so. Consider courier and delivery services. Until recently, working as a delivery driver was fairly secure employment, but with the emergence of self-driving vehicles, it’s clear these jobs are slowly but surely coming under threat.
The main counter-argument to concerns like these is usually based around the concept of integration. The companies developing these systems state that their technology is not always designed to replace jobs, but rather to save employees time by carrying out the particularly difficult or monotonous tasks. It’s also worth noting that new technology does create new jobs. The question is whether the jobs created can equal, or even outweigh, the ones lost.
Bias is a hot topic within the AI community, and seems to be a problem for even the largest of companies. These issues generally arise from imbalanced training data. For example, Microsoft’s facial recognition technology was found to have an error rate of 20.8% when identifying women with darker skin tones. The system had little trouble identifying people from other demographic groups, which highlighted a key challenge data scientists face when developing these algorithms. In simple terms, training data is information gathered specifically to teach an AI system to differentiate between certain categories; in Microsoft’s case, the data consisted of thousands of faces. So where did they go wrong? Microsoft’s data was not diverse enough: it lacked images of women with darker skin tones, meaning the algorithm had less experience identifying that particular group. The issue has since been addressed, but it nonetheless emphasises the importance of good-quality data.
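To make the idea concrete, here is a toy sketch (nothing to do with Microsoft’s actual system) of how an imbalanced training set skews error rates between groups. The two groups, the feature distributions, and the simple threshold classifier are all invented for illustration: group B’s features are shifted relative to group A’s, and because group A dominates the training data, the learned decision threshold fits A well and B poorly.

```python
import random

random.seed(0)

def make_samples(n, shift):
    """Generate (feature, label) pairs; `shift` moves the group's
    feature distribution, standing in for demographic variation."""
    data = []
    for _ in range(n):
        label = random.randint(0, 1)
        base = random.uniform(0.2, 1.2) if label else random.uniform(-1.2, -0.2)
        data.append((base + shift, label))
    return data

# Imbalanced training set: 900 samples from group A, only 100 from group B.
train = make_samples(900, shift=0.0) + make_samples(100, shift=0.8)

# "Train" a one-feature threshold classifier: the midpoint of the class means.
mean0 = sum(x for x, y in train if y == 0) / sum(1 for _, y in train if y == 0)
mean1 = sum(x for x, y in train if y == 1) / sum(1 for _, y in train if y == 1)
threshold = (mean0 + mean1) / 2

def error_rate(samples):
    """Fraction of samples the threshold classifier gets wrong."""
    wrong = sum((x > threshold) != bool(y) for x, y in samples)
    return wrong / len(samples)

# Evaluate on a fresh, balanced test set for each group.
print(f"group A error: {error_rate(make_samples(1000, 0.0)):.1%}")
print(f"group B error: {error_rate(make_samples(1000, 0.8)):.1%}")
```

Running this, group A’s error rate stays near zero while group B’s climbs well above it, even though the classifier was trained on data from both: the minority group simply carried too little weight in the training set, which is precisely the failure mode described above.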
As AI starts making more important decisions on our behalf, we must prioritise reducing bias to prevent poor choices being made. In some parts of America, controversial AI systems are being used to assign defendants a risk score before they go to trial. This score influences decisions such as whether they are held in jail before being tried, and how severe their sentence should be. It’s a prime example of why bias needs to be eliminated (or at least significantly reduced) to ensure fair assessments are made.
The expression ‘with great power comes great responsibility’ is particularly relevant to AI. We are seeing systems capable of truly astounding feats, and as a result, must consider the possibilities of this technology being misused with malicious intent.
Deepfakes have become an area of particular concern. These systems, typically built on Generative Adversarial Networks (GANs), can generate images, video, or audio of people with incredible accuracy. As you may have guessed, they could become an extremely effective way of spreading misinformation and fake news. Deepfakes of both Barack Obama and Donald Trump have demonstrated how convincing this technology can be. They aren’t spot on just yet, but at the rate they’re improving, it won’t be long until it’s borderline impossible to distinguish between what’s genuine and what’s generated.
It is clear we stand to gain a lot from AI, and it is an area we should all be excited about. However, proper regulation and rigorous testing are essential to ensuring this technology is as safe and reliable as possible.
Written by: Mark Ajder