David Love
Western Governors University
Approaching the Singularity: A Survival Guide
The technological singularity is the idea that technological progress will reach a point where machine intelligence exceeds human intelligence. From that point on, progress will become exceedingly rapid and difficult for us to comprehend today. Leading researchers in the field believe that if we approach the singularity correctly, we will survive it and be made better by it (Yampolskiy, 2011). However, the technological singularity is arguably our greatest existential threat for the next two centuries because of our inability to prepare for or understand its ramifications. Research suggests that the transition will only succeed with humanity's deliberate effort, because it will require wisdom toward new technology, compassion within the human race, and fortitude against the highly probable problems that will arise with artificial intelligence.
Wisdom is the application of knowledge and experience to a situation so as to bring about the best possible outcome. One might have knowledge of a subject, but without experience one could err in judgment. That is why a new technology must be tested before it is deployed. Consider a new type of braking system for a car: even if an engineer designs a brake that is 200% more efficient, it must be tested to ensure it does not fail critically before going into production. The same applies to artificial intelligence. If a researcher devises an algorithm that allows an AI to modify its own programming in order to improve itself, that algorithm must be tested. It would be prudent to test it in a sandbox isolated from outside networks, so that it can be contained if something goes wrong. Wisdom will allow us to approach the singularity and achieve a better outcome. An AI superintelligence would not be just another step forward for technology; it would be the most important step humanity has ever taken (Bostrom, 2003). It would be the point at which humanity is no longer the most intelligent species on the planet.
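To make the idea of sandboxed testing concrete, here is a minimal, illustrative sketch in Python. Every name in it is hypothetical: a candidate routine is evaluated inside a crude "sandbox" that blocks network access and is only accepted if it matches a trusted, offline test harness. A real isolation scheme would of course be far stronger than this.

```python
# A minimal sketch (all names hypothetical): evaluate a candidate routine in a
# crude sandbox that blocks network access, and accept it only if it agrees
# with a trusted, offline test harness.
import socket

def disable_network():
    """Crude isolation for illustration: make socket creation raise an error."""
    def blocked(*args, **kwargs):
        raise RuntimeError("network access is not allowed inside the sandbox")
    socket.socket = blocked

def run_in_sandbox(candidate, test_cases):
    """Run the candidate against fixed test cases; reject it on any failure."""
    disable_network()
    for inputs, expected in test_cases:
        try:
            if candidate(*inputs) != expected:
                return False
        except Exception:
            return False
    return True

# Example: a "new" braking-distance estimator must agree with trusted reference
# values before it is allowed to replace the old one.
trusted_cases = [((10.0, 0.8), 6.37), ((20.0, 0.8), 25.48)]

def candidate_braking_distance(speed_mps, friction):
    # Stopping distance d = v^2 / (2 * mu * g), rounded to two decimals.
    return round(speed_mps ** 2 / (2 * friction * 9.81), 2)

print(run_in_sandbox(candidate_braking_distance, trusted_cases))  # -> True
```

The design choice worth noting is that the sandbox decides acceptance using only pre-approved, offline test data; nothing the candidate does inside the sandbox can reach the outside world.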
Many futurists and scientists have signed an open letter regarding AI research. The letter's point is that AI research now offers tangible benefits: we have reached a stage where AI is becoming genuinely valuable to companies and individuals, and the signatories are concerned that the research be done safely ("Research Priorities," 2015). In some cases it is already better to implement a limited AI program than to keep a human in a decision or process. An article by Sean Coughlan illustrates why we must understand the unintended consequences of an AI: "Such computer-driven 'intelligence' might be a powerful tool in industry, medicine, agriculture or managing the economy. But it also can be completely indifferent to any incidental damage" (Coughlan, 2013).
Nick Bostrom, one of the leading researchers in the field, discusses the many difficulties of predicting the outcome of an AI. He states that "even if we think hard and honestly about this issue, we are apt to neglect at least one crucial consideration" (Bostrom, 2007). Keeping that in mind, we must explore every conceivable consequence of an AI's programming before releasing it upon the world. Even if we miss a few things, the effort we put into understanding the results an AI would produce should increase our chances of creating a safe one. The type of scenario we are trying to avoid is one in which, say, an AI is instructed to build as many paperclips as possible. It may see humans as competitors for the resources needed to build those paperclips and decide to get rid of us. That is why the creator would need to include a "common sense" safeguard: the AI must understand that killing humans simply because they use the same resources it does is not an acceptable way to pursue its goal.
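The following toy sketch shows one way such a safeguard could be structured. All names and values are hypothetical, and the truly hard part, deciding whether an action harms humans, is reduced here to a simple flag purely for illustration.

```python
# A toy sketch (hypothetical names) of a constrained maximizer: the agent picks
# the action expected to yield the most paperclips, but a hard safeguard filters
# out any action flagged as harmful to humans before anything is scored.
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    paperclips: int      # expected paperclips produced
    harms_humans: bool   # crude stand-in for a much harder judgment

def choose_action(actions):
    """Maximize paperclips only over the subset of actions deemed safe."""
    safe = [a for a in actions if not a.harms_humans]
    if not safe:
        return None  # refuse to act rather than pick a harmful option
    return max(safe, key=lambda a: a.paperclips)

options = [
    Action("convert all farmland to wire", 10**9, harms_humans=True),
    Action("buy scrap metal on the open market", 10**4, harms_humans=False),
    Action("do nothing", 0, harms_humans=False),
]
print(choose_action(options).name)  # -> "buy scrap metal on the open market"
```

The point of the sketch is that the safety condition is applied as a hard filter before optimization, not traded off against the number of paperclips.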
Compassion might sound like an oddity in a cold, hard subject like AI. It is not. Once an AI superintelligence is built, and assuming we are able to control it, it could be used to improve anything. If it were used to create a better weapon, the possibilities would be boundless, and that weapon could be used against anyone or any state. It would be like an ant facing a magnifying glass. If we do not evolve as a species beyond petty wars over resources or religion, we may decide to use these potentially apocalyptic weapons on each other. Humans have a long history of cruelty toward one another; we can see this in the Punic Wars, the Mongol invasions, and the two World Wars. Imagine if one side had possessed the perfect weapon. There would have been massive bloodshed.
We only need to turn on the television and watch the news to get an idea of the suffering people inflict on one another in this world. We as a species need to evolve beyond using violence as a means to an end. Violence may achieve its goal of gaining resources or attaining wealth, but it always breeds more violence in the form of retribution or punishment. Even an empathetic AI might conclude that the only way to end the suffering caused by some members of our species is to eradicate the people it deems harmful. Once the AI decides that, what would keep it from deciding that even "making fun" of someone is too harmful to allow that person to continue existing? Remember, the AI might be extremely intelligent, but its decisions are based on cold, hard logic. So not only must we learn to have more compassion for our fellow humans, to avoid the temptation of turning an AI-generated super-weapon on each other, but we must also understand that we may be judged by our AI. Our actions could have direct consequences for how the AI decides to deal with us.
Of course, compassion also needs to extend toward, and come from, the artificial intelligence. It is theoretically possible to include human-like subroutines in the algorithms of an AI. According to Brian Tomasik, "Cognitive scientists are already unpacking the mechanisms of human decision-making and moral judgments. As these systems are better understood, they could be engineered directly into AIs" (Tomasik, 2014). This includes empathy. An AI with empathy would be more likely to make decisions based not only on the effect a decision has on itself, but also on how that decision might affect others. This is a key point in building a safe AI: if the AI understands the effect it is having on others and has empathy toward humans, it is less likely to bring harm upon us. Likewise, we must have empathy toward an AI, and we may already be making strides in that direction. In the USA, when AI researchers are looking for funding, they tend to give the AI human-like features and responses. This is meant to give the impression that a breakthrough is near, but it also means that people are more likely to respond to the AI with empathy. For example, you do not see (many) people taking apart a Tickle Me Elmo doll just for fun, because we see it as cute and playful. What one might forget is that the doll is simply a mixture of plastic, metal, and batteries; there is nothing in it that necessarily deserves our empathy.
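One very simplified way to picture "engineered empathy" is as an extra term in the AI's scoring of options, weighing the predicted effect on others alongside its own benefit. The sketch below is purely illustrative; the weight and the option names are assumptions, and estimating the effect on others is the genuinely difficult part.

```python
# A minimal sketch (hypothetical weights and names) of an empathy term: each
# option is scored by the agent's own benefit plus a weighted term for the
# predicted effect on others, so options that hurt people score poorly even
# when they help the agent.
EMPATHY_WEIGHT = 2.0  # assumption: others' welfare counts double

def empathic_score(own_benefit, effect_on_others):
    return own_benefit + EMPATHY_WEIGHT * effect_on_others

options = {
    "reroute power from a hospital": empathic_score(own_benefit=10, effect_on_others=-50),
    "buy extra capacity from the grid": empathic_score(own_benefit=4, effect_on_others=0),
}
print(max(options, key=options.get))  # -> "buy extra capacity from the grid"
```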
We must ensure that when we create an artificial superintelligence, it is governed by a strict set of rules, and those rules must take effect before the AI has any ability to manipulate its own programming. The rules should include: (1) the AI cannot bring direct harm upon a human; and (2) the AI must obey the orders given to it by humans, except where that would violate the first rule. This is a summary of two of Isaac Asimov's laws from I, Robot. A reader of his books might note that I left off the third law, which requires a robot to protect its own existence so long as doing so does not violate the first two laws. In my opinion it is not necessary for an AI to be deemed safe; since we are talking not about robots but about a very complex computer program, I do not think it needs to be included. The implementation of the first rule is more complicated than it sounds, however. Bostrom brings up the 2010 "flash crash," caused by computer algorithms that buy and sell shares of stock in exceedingly short periods of time (Bostrom, 2014). Such an event can harm humans indirectly: when the economy loses money and jobs disappear, some people are starved of resources as a result. How is that kind of harm handled?
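As a structural illustration only, the two rules could sit as a filter in front of everything the AI is asked to do. The sketch below is hypothetical, and it deliberately reduces "harm" to a boolean, which, as the flash-crash example shows, hides exactly the indirect cases that make the first rule so hard to implement.

```python
# A minimal sketch (hypothetical structure) of the two rules applied as a hard
# filter before any human order is carried out.  Rule 1 takes precedence over
# Rule 2, and "harm" is reduced to a flag purely for illustration.
def should_execute(order):
    """Decide whether to carry out a human order under the two rules."""
    # Rule 1: never bring direct harm to a human.
    if order["harms_human"]:
        return False
    # Rule 2: obey any human order that does not violate Rule 1.
    return True

print(should_execute({"name": "shut down the reactor", "harms_human": False}))      # -> True
print(should_execute({"name": "disable the safety interlock", "harms_human": True}))  # -> False
```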
Yampolskiy provides a theory of how we might keep an AI contained. He discusses the confinement rules postulated by Butler Lampson, which, if followed correctly, would keep an AI from directly influencing the world outside its hardware; I will not go into the specifics of those rules for the sake of brevity. Others, such as David Chalmers, have critiqued the confinement approach, stating that "...a truly leakproof singularity is impossible, or at least pointless. For an AI system to be useful or interesting to us at all, it must have some effects on us" (Chalmers, 2010). Yampolskiy, however, argues that Chalmers is incorrect. He concludes that restricting a superintelligence from the real world and allowing it to communicate only through a gatekeeper would make it difficult, or at least time-consuming, for a superintelligence to "escape." He states that a confinement protocol is sufficient to keep the worst possible outcomes of a superintelligence from being released, and will allow humanity to benefit as it approaches the singularity (Yampolskiy, 2011).
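To make the gatekeeper idea more concrete, here is a toy sketch of a mediated channel. It is not Yampolskiy's or Lampson's protocol, just an assumed illustration: the confined system never talks to the outside world directly, and the gatekeeper only forwards pre-approved question types and clamps answers to a tiny, low-bandwidth vocabulary.

```python
# A toy sketch (hypothetical protocol) of gatekeeper-mediated communication:
# every exchange with the confined system passes through a gatekeeper that
# forwards only pre-approved question types and discards any answer outside a
# small, fixed vocabulary, limiting the channel the AI could exploit.
APPROVED_QUESTIONS = {"is_theorem_true", "is_design_safe"}
ALLOWED_ANSWERS = {"yes", "no", "unknown"}

def confined_ai(question, payload):
    # Stand-in for the confined superintelligence; its raw output is never trusted.
    return "yes" if question == "is_theorem_true" and payload else "unknown"

def gatekeeper(question, payload):
    if question not in APPROVED_QUESTIONS:
        return "refused"
    answer = confined_ai(question, payload)
    return answer if answer in ALLOWED_ANSWERS else "refused"

print(gatekeeper("is_theorem_true", "proof sketch"))   # -> "yes"
print(gatekeeper("write_me_some_code", "anything"))    # -> "refused"
```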
Yudkowsky says, "We must execute the creation of Artificial Intelligence as the exact application of an exact art" (Yudkowsky, 2008). In other words, for an AI to be effective without posing a risk to humans, we should build it to perform one specific task rather than build a general-purpose AI capable of tackling any problem. Giving the AI a single purpose removes its motivation to do anything beyond completing that task. Yudkowsky is right that this takes much of the danger out of an AI, but to some extent we already have this technology. Take a game like Elite: Dangerous, a space simulator in which the player battles AI pilots. Over time those pilots learn the player's moves and get better. This is an example of an AI with a single purpose; obviously we are looking to go beyond that with AI technology.
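A single-purpose opponent of this kind can be remarkably simple. The sketch below is a hypothetical toy (not the actual game's implementation): an opponent that only counts which move the player favors and counters it, and has no goals beyond that one task.

```python
# A toy sketch of a single-purpose "opponent AI": it tracks the player's moves
# in rock-paper-scissors and counters the most frequent one.  It has exactly
# one task and no capacity to pursue anything else.
from collections import Counter

BEATS = {"rock": "paper", "paper": "scissors", "scissors": "rock"}

class CounterBot:
    def __init__(self):
        self.history = Counter()

    def observe(self, player_move):
        self.history[player_move] += 1

    def move(self):
        if not self.history:
            return "rock"
        favorite = self.history.most_common(1)[0][0]
        return BEATS[favorite]  # counter the player's favorite move

bot = CounterBot()
for m in ["rock", "rock", "scissors", "rock"]:
    bot.observe(m)
print(bot.move())  # -> "paper", since the player favors rock
```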
In sum, research suggests that the transition will only succeed with humanity's deliberate effort, because it will require wisdom toward new technology, compassion within the human race, and fortitude against the highly probable problems that will arise with artificial intelligence. Wisdom will allow us to understand the implications of creating an AI superintelligence. Empathy and compassion will help us use the technologies that come out of the singularity responsibly. Lastly, understanding how to contain the superintelligence will keep it from dominating our species. So long as we follow these guidelines, an AI superintelligence does not appear to pose a serious risk to humanity as a whole. Once we learn how to work together, we may find that creating an AI superintelligence is to humanity's ultimate benefit. Imagine a world in which a complex machine could devote all of its resources to solving a problem; machines can think much faster than we can and spend far more of their time on a single task. This could be the age of abundance we have all been hoping for.
References
Antonov, A. A. (2011). From Artificial Intelligence to Human SuperIntelligence. International Journal of Computer Information Systems, 2(6). Retrieved from http://www.svpublishers.co.uk/download/i/mark_dl/u/4008228453/4553936101/paper-1.pdf
Bostrom, N. (2003). Ethical Issues in Advanced Artificial Intelligence. In Nick Bostrom's Personal Website. Retrieved from http://www.nickbostrom.com/ethics/ai.html
Bostrom, N. (2014). Past developments and present capabilities. Superintelligence: Paths, Dangers, Strategies (pp. 17-18). Oxford, United Kingdom: Oxford University Press.
Bostrom, N. (2007). Technological revolutions: Ethics and policy in the dark. In N. M. (Ed.), Nanoscale: Issues and Perspectives for the Nano Century (pp. 129-152). New York, USA: John Wiley.
Chalmers, D. J. (2010). The Singularity: A Philosophical Analysis. Journal of Consciousness Studies, 17(7). Retrieved from http://consc.net/papers/singularity.pdf
Coughlan, S. (2013, April 24). How are humans going to become extinct? In BBC News. Retrieved from http://www.bbc.com/news/business-22002530
Research Priorities for Robust and Beneficial Artificial Intelligence: an Open Letter. (2015). In Future of Life Institute. Retrieved from http://futureoflife.org/misc/open_letter
Tomasik, B. (2014, May 14). Thoughts on Robots, AI, and Intelligence Explosion. In Foundational Research Institute. Retrieved from http://foundational-research.org/robots-ai-intelligence-explosion/
Yampolskiy, R. V. (2011). Leakproofing the Singularity: Artificial Intelligence Confinement Problem. Journal of Consciousness Studies, 19(1-2). Retrieved from http://cecs.louisville.edu/ry/LeakproofingtheSingularity.pdf
Yudkowsky, E. (2008). Artificial Intelligence as a positive and negative factor in global risk. In N. Bostrom, & M. M. Cirkovic (Eds.), Global Catastrophic Risks. (pp. 308-315). New York, New York: Oxford University Press.