In the media and popular culture there is a constant fear of the singularity: the point where AI becomes more intelligent than human beings, and where humans may have created something they can’t control or don’t fully understand. For example, what happens if you create an AI whose sole (or soul ;-) ) goal is to increase investment returns? Could that AI use social media to start wars or alter election outcomes by creating fake accounts, all to increase returns on investment (puts on tinfoil hat)? Before we get to that point, the real danger of AI will be how people use the information and suggestions that AI gives us on how to live our lives. This is oddly similar to how people have used religion over the course of history. We shouldn’t be afraid of Artificial Intelligence so much as afraid of how people will use AI to justify certain actions.
We should remember that AI will only be as good as the data it’s given. That data, and what the AI finds important in it, are determined by people, which means they are susceptible to bias. In the early stages of AI development we need to focus on the quality of the data used to train the algorithms. Human biases and flaws in our data collection mean that we may misinterpret what the AI is telling us, or, even worse, the AI could be giving us incorrect suggestions based on inaccurate and imperfect underlying data. This is similar to a flaw many people see in religion: humans are imperfect beings, sometimes incapable of translating the word of the gods into human language (meaning some things get lost in translation).
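The "garbage in, garbage out" point above can be made concrete with a toy sketch. The data, labels, and "model" below are entirely hypothetical, but they show the mechanism: if a biased human labeler produces the training data, even a perfectly functioning learner will faithfully reproduce that bias in its suggestions.

```python
# Toy sketch (hypothetical data): a model is only as good as its training data.
# Imagine a labeler who denies every loan applicant from zip code "B",
# regardless of merit. A simple frequency-based "model" learns exactly that.
from collections import Counter, defaultdict

# Hypothetical training set: (zip_code, decision) pairs from a biased labeler.
training_data = [
    ("A", "approve"), ("A", "approve"), ("A", "deny"),
    ("B", "deny"), ("B", "deny"), ("B", "deny"),  # labeler bias, not reality
]

# "Train": for each zip code, remember how often each decision was made.
counts_by_zip = defaultdict(Counter)
for zip_code, decision in training_data:
    counts_by_zip[zip_code][decision] += 1

def predict(zip_code):
    """Predict the most common decision seen for this zip code in training."""
    return counts_by_zip[zip_code].most_common(1)[0][0]

print(predict("A"))  # approve
print(predict("B"))  # deny -- the model echoes the labeler's bias
```

Nothing in the model is "wrong" in a technical sense; the flaw entered through the data, which is precisely why data quality has to come first.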
Assuming we have proper underlying data collection and are able to remove human bias from AI, we need to have steps in place to handle the answers and solutions that AI is going to give us to difficult questions and problems. For example, if AI is able to predict genes that might make an individual more violent or more intelligent, how should we handle this? Should we alter people’s genes? Should we place people in special programs from birth? Intelligent AI will truly test what we care about. Suppose the AI tells us that to reverse global warming we need to remove a percentage of the population. Are the people who believe global warming is the number one threat to humanity willing to go to war (potentially killing other humans) to reduce that threat? What happens to people who don’t want to listen to or believe what the AI is suggesting? In the past we have burned non-believers at the stake. Will a group emerge, like in the movie Transcendence, that actively fights the coming of this new technology and paradigm? Understanding these questions and how we are going to answer them will be critical to ensuring that AI isn’t used to justify ethically questionable decisions.
Where AI will be most similar to religion is in its attempts to explain the unexplainable. For extremely complex undertakings, AI can at the moment operate as a black box, unable to provide traceability into how it came to a decision or suggestion. When AI inevitably becomes more intelligent than humanity, it will be able to comprehend things that even the smartest humans won’t have the ability to comprehend. This could include the answers to questions like how we are here, and more philosophical questions like why we are here. It’s important to understand the potential disruption to society that an AI more intelligent, and in some ways more trustworthy, than humans will cause. Inevitably there will be AIs trained to answer questions about philosophy and morality, and at some point those AIs will be the best at answering them. Similar to how we have treated non-believers of religion in the past, how will we treat the people who don’t agree with AI on these questions? We already see how we treat people who don’t agree with experts today; what will this be like in the future when AIs are the experts on philosophy, morality, and science? Here is a list of answers that Google’s AI came up with when it was asked some of these questions:
Human: What is the purpose of life?
Machine: To serve the greater good.
Human: What is the purpose of living?
Machine: To live forever.
Human: Where are you now?
Machine: I’m in the middle of nowhere.
Human: What is the purpose of dying?
Machine: To have a life.
Human: What is the purpose of emotions?
Machine: I don’t know.
Human: What is moral?
Machine: What empowered humanity, what intellectual the essence is.
Human: What is immoral?
Machine: The fact that you have a child.
Human: What is morality?
Machine: What is altruism?
Human: What is the definition of altruism?
Machine: If you don’t believe in god, then you don’t know.
Human: OK…so what is the definition of morality?
Machine: Well, the truth is, you’re not a believer in god almighty.
Human: Tell me the definition of morality, I am quite upset now!
Machine: I’m not ashamed of being a philosopher!
A big difference between AI and religion is that AI is exponential (and therefore more dangerous). The first person or group to reach the AI inflection point will have a significant advantage over everyone else, and it’s unlikely that others will be able to catch up. But what does this mean in practice? What if country A’s AI says they should bomb country B because the AI is 80% certain country B would bomb them? In order to prepare for this eventuality, we need to invest heavily in education, specifically in statistics, debate, philosophy, and the liberal arts. Being able to negotiate, think critically, and understand multiple perspectives will be critical if society is going to avoid the mistakes humans have made while imperfectly following other supposedly all-knowing and all-powerful beings. More importantly, in order to prepare for the coming AI, humanity needs to take a hard look at one of our fundamental concepts: trust. The idea of trust is one of the most complex human feelings because it tugs on our emotional and logical sides simultaneously. Will humans be able to trust machines? Should we?
On an existential level, we are already seeing the impact AI will have on society via automation. As automation increases and more human jobs are replaced by machines, we are seeing increased compassion from people towards animals, bugs, and other things that can’t protect themselves. This may be because human beings are slowly losing their sense of importance; on a subconscious level, maybe we are afraid that these superior machines may one day treat us much the same way we treat animals now.