The Ethics of AI

Archie Shou
7 min read · Jan 31, 2021


Photo by Markus Winkler on Unsplash

From 1927 to 2019, more than 100 films about artificial intelligence were produced worldwide, and many of them portrayed AI as downright horrific. In movies such as The Terminator, The Matrix, and Avengers: Age of Ultron, the film industry placed into our shared imagination scenes of intelligent machines taking over the world and enslaving humanity or wiping it from existence. The thought of AI becoming superior to any human intelligence paints a dark future for humanity.

More recently, countries all over the world have entered the race to develop artificial intelligence, with over 20 countries in the EU releasing national AI strategies covering both R&D and education. And yes, there are already AI agents capable of completing tasks that once required human intelligence. Universities, private organizations, and governments are actively developing artificial intelligence that can mimic human cognitive functions such as learning, problem-solving, planning, and speech recognition. But if these AI agents lack empathy, wisdom, and care for living beings, should their integration into society be limited, and if so, in what ways?

Disclaimer: this article is by no means meant to change your opinion or persuade you; it merely highlights some of the issues people are looking at right now.

1. Job Loss and Wealth Inequality

One of the most pressing issues surrounding AI is how it will replace jobs. Should we develop and integrate AI into society if it means many people will lose their jobs, and quite possibly their livelihoods?

AI stirs mixed emotions and opinions when discussed in the context of jobs. However, it is becoming increasingly clear that AI is not a job killer across the board, but rather a killer of certain job categories. As with almost every wave of new technology, from automatic weaving systems to Amazon warehouse robots, jobs are not simply destroyed; employment shifts from one place to another, and entirely new categories of work are created. We should expect much the same from AI. Research suggests it is inevitable that AI will replace entire categories of work, especially in transportation, retail, government, professional services, and customer service. However, it will also create new categories. After all, people will be needed to build these systems and manage them in the future.

But what will happen to wealth inequality? Currently, companies pay wages, taxes, and other expenses, and the leftover profits usually go back into production. In this situation, the economy continues to grow. What happens when we introduce AI into this equation? Robots don't need to be paid, they can work at full efficiency around the clock, and they don't face the distractions or problems that human workers do. This opens the door for CEOs and shareholders to keep more of the profits generated by an AI workforce that replaces paid jobs, which leads to a larger wealth gap. The rich, meaning the individuals and companies that can deploy AI, get richer, while newer companies still have to pay for human labor and therefore grow more slowly.

2. AI Is Imperfect. If It Makes a Mistake, Who Takes the Blame?

AI isn’t perfect. It’s bound to make mistakes as it continues to learn. Trained well, on good data, AI systems can perform well. But if we feed them bad data or make errors in their programming, they can be harmful. Take Microsoft’s AI chatbot, Tay, which was released on Twitter in 2016. In less than a day, based on what it was receiving and learning from other Twitter users, the bot learned to spew racist slurs and Nazi propaganda.

Yes, AIs make mistakes. But do they make greater or fewer mistakes than humans? How many lives have humans taken with mistaken decisions? Is it better or worse when an AI makes the same mistake? Who takes the blame if the AI makes such mistakes?

Do we blame the creator? That seems unfair, since the AI could have learned to make those mistakes on its own. It’s much like a child: if a child makes a mistake, you can’t simply blame the adults. The adults or guardians only provided the child with information; the decisions were the child’s own.

Moreover, what if AI robots come to see the world differently than we do? What if, through their complicated thinking, they conclude that our choices aren’t the right ones? Comment what you think we should do with AI if it ends up holding different opinions than ours.

3. How Should We Treat AIs?

Should robots be granted human rights or citizenship? If we evolve robots to the point that they are capable of “feeling,” does that entitle them to rights similar to those of humans or animals? If robots are granted rights, how do we rank their social status? Would they want freedom and to live among humans? In fact, Hanson Robotics’ humanoid robot, Sophia, was recently granted citizenship in Saudi Arabia. While some consider this more of a publicity stunt than a grant of actual legal rights, it does set an example of the kind of rights AIs may be granted in the future, and it shows that we really have to start thinking about how we are going to treat them.

4. Who Should Be Allowed to Have Access to AI Technology?

While AI can do a lot of good, we must be careful about AI in the hands of malicious users. As the technology grows more powerful, what will we do about individuals, criminal organizations, and rogue countries that apply AI to malicious ends? How can we make sure AI isn’t used in ways that hurt people? This is a thorny problem because the technology can be extremely damaging yet is relatively easy to deploy without deep expertise. Many companies have already started taking action against malicious AI attacks. However, as these AI systems get smarter, they could change the nature of threats, making them harder to detect, more random in appearance, and more adaptive, far beyond human hacking capabilities. This is a frightening prospect, and governments along with large companies need to start building security systems to protect our privacy from such attacks.

5. Should AI Systems Be Given the Ability to Kill?

Perhaps one of the biggest ethical problems right now is this: if AI is so consistent and supposedly “right,” should it be given the power to decide to kill or to sentence? In a TED Talk, Jay Tuck explains that AI is software that writes its own updates and improves itself based on what it deems right. This means the machine is not built to do what we want it to do, but rather what it learns to do. A real-life example is the Oerlikon GDF-005, an AI-controlled anti-aircraft weapon. At one point its computerized gun jammed and then opened fire uncontrollably, killing 9 people and wounding 14 more.

Is it better to use AI to kill than to put humans in that position? Choosing a human puts a strain on the person firing, while choosing AI can cause something unexpected, with endless ethical problems about who to blame. There is actually a non-profit campaign called “Stop Killer Robots” that aims to ban fully autonomous weapons capable of deciding who lives and dies without human intervention.

Key Takeaways

Yes, the prospect of AI rising and surpassing human intelligence is scary to think about, and the ethical issues that come with it are complex. Whether AI is good or bad can be examined from many different angles, with none of them being the only framework or solution. We need to keep learning and stay informed in order to make good decisions for our future. Thank you so much for reading this article. I hope you enjoyed it. It was written purely to inform and is not meant to change your opinion on these topics.

If you’d like to check out some of my other works, visit archieshou.com

Medium: https://archie-shou.medium.com/

Linkedin: https://www.linkedin.com/in/archie-shou-7488b3193/

