Artificial intelligence is the stuff of nightmares and science fiction stories. What if it goes rogue and thinks we are dispensable? It’s not as far-fetched as you might think.
Artificial intelligence can automate many of the repetitive things we do every day. It can even drive for us and recognize different human faces.
The problem?
Artificial intelligence is only as unbiased as the humans who program it, and humans carry bias. How can we ensure AI is programmed in an ethical and unbiased way?
The Proven History of Bias In AI
People have tried several times to program AI to take over lower-level tasks, with the promise of freeing us up to handle higher-order work.
Amazon famously tried this with its hiring process. It fed resumes into an artificial intelligence algorithm and told it which candidates had been successful hires. The result: the AI not only refused to consider women applicants for jobs, it also kicked out any resume that listed women as references.
There are more dangerous instances of biased programming in artificial intelligence. A 2019 study found that driverless cars were better at detecting pedestrians with lighter skin tones. The data fed to the algorithm contained three times as many light-skinned people as dark-skinned people, so the AI learned to detect lighter-skinned pedestrians quickly but struggled to identify people with darker skin tones.
There has also been a lot of talk lately about facial recognition software used by police departments. Some cities and states are banning the practice, but places like Orlando, Florida, and Washington County, Oregon, have already started using the software.
It has many of the same problems as the pedestrian detection software in autonomous vehicles: the programming is biased and often misidentifies people with darker skin tones. The ACLU compared 25,000 mugshots with photos of members of Congress and found 28 false matches, 39% of which were people of color. Despite these known flaws, the technology is used to scan police body camera footage as well as security footage.
The Purpose Of Ethical AI
Artificial intelligence can potentially make our lives easier. If we can figure out a way to program AI to be ethical, we can actually use the technology to save lives. Driverless cars are estimated to free up as much as 250 million hours of commuting time, save $234 billion in public costs from accidents, and prevent up to 90% of traffic fatalities. But that is, of course, only if they are programmed correctly.
There isn't even a consensus on how driverless cars should react in situations that could lead to death or injury. Only about three-quarters of people believe a driverless car should save as many lives as possible, and there is no agreement that human life is more valuable than property or other considerations.
People almost unanimously believe that autonomous vehicles should spare the lives of children, while far less preference is given to the lives of criminals or animals. What's more, very few people are actually willing to spend the money to buy a car programmed to minimize harm.
How Can We Promote Ethical AI?
As the old saying goes, garbage in, garbage out.
If we want artificial intelligence to be less biased, we have to understand the inner workings of human bias and take the time to ensure it doesn't carry over into the AI's training data.
Training the AI to give more weight to underrepresented examples, such as darker skin tones, or to ignore attributes like gender entirely, could help make the algorithms less biased. Being more careful about the data fed into the system and auditing the output for disparities will be crucial steps moving forward; a rough sketch of both ideas follows below. Subtle human bias is multiplied when it becomes part of an algorithm, and once the AI is left to its own devices, that can become a serious problem.
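To make that concrete, here is a minimal sketch of what ignoring a sensitive attribute, reweighting training examples, and auditing output by group might look like in practice. It assumes a hypothetical hiring dataset (the file applicants.csv and its gender and hired columns are invented for illustration) and uses scikit-learn's built-in class weighting as one simple reweighting approach; a real system would need a far more careful treatment.

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Hypothetical dataset: a "gender" column (sensitive), a "hired" label,
# and numeric features describing each applicant.
df = pd.read_csv("applicants.csv")

# 1. Ignore the sensitive attribute when training; keep it only for auditing.
sensitive = df["gender"]
X = df.drop(columns=["gender", "hired"])
y = df["hired"]

X_train, X_test, y_train, y_test, s_train, s_test = train_test_split(
    X, y, sensitive, test_size=0.25, random_state=0
)

# 2. Reweight examples so the underrepresented outcome class isn't drowned out.
model = LogisticRegression(class_weight="balanced", max_iter=1000)
model.fit(X_train, y_train)

# 3. Audit the output: compare accuracy across groups to spot disparities.
preds = model.predict(X_test)
for group in s_test.unique():
    mask = (s_test == group).to_numpy()
    print(group, round(accuracy_score(y_test[mask], preds[mask]), 3))
```

The point is less the specific model than the workflow: decide up front which attributes the system may not use, compensate for imbalance in the data it does use, and keep measuring how it performs for each group after it is deployed.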
If we want to fully benefit from AI, we need to do the work on the front end to make sure it behaves ethically. Learn more about ethical artificial intelligence from the infographic below.
Are we ready for a world where the machines can make their own decisions?
Source: Cyber Security Degrees