There are many people issuing dire warnings about out-of-control artificial intelligence (AI), but there are others, such as Ray Kurzweil and, especially, Ben Goertzel, along with Larry Page and Eric Schmidt of Google, who believe the scare is overhyped. This link sums up Schmidt's viewpoint and shows just how committed Google, with all its resources, is:
http://www.wired.com/2014/12/eric-schmidt-ai/
And from Ben Goertzel:
http://hplusmagazine.com/2014/12/03/ai-scare-talk-stephen-hawking/
My view is that once AI reaches human-level cognition, i.e., artificial general intelligence (AGI), the battle will be over and AGI will be the clear winner, to be followed shortly by ASI (artificial superintelligence). As for controlling it, having read Nick Bostrom's Superintelligence, in which he outlines many possible scenarios for developing AGI/ASI and what might be done to “keep it in the box”, I don't see any way of doing so. Once we have achieved AGI, the genie will be out of the bottle.
There is some talk of an international, UN-brokered agreement among nations to forgo development of advanced AI but, considering the success such resolutions have had with regard to other threats, I hardly think this is a solution. The big powers' competition to gain an enormous military advantage with ASI is simply too enticing for them to stop research. And what is to stop a stealth organisation, or an NGO, from developing ASI?
However, looking at AI philosophically and anthropologically, and at the positive role it could play, I am less concerned than most people. Hawking argues that because of the glacial pace of human evolution and the warp speed at which a future AI could improve, human evolution would be left far behind the evolution of AI. I think this is right, but it might not be a bad thing. Human evolution is due for a major upgrade, one needed more quickly than natural selection can deliver, and I believe that is the role AI can play. It could result in a melding of man with machine or, eventually, a purely machine intelligence, thus accelerating the next step in our present evolution or ushering in an entirely new species, one silicon-based rather than carbon-based. Either way, I believe our species is reaching the end of its usefulness, and if AI doesn't replace it, humans will bring about their own extinction and the end of the Anthropocene, the human geological epoch.
baoluo
Our glacially slow evolution, geopolitical differences, and polarized political, religious, and personal philosophies suit us poorly for human investigation of the Solar System and beyond, even though such exploration could lead to unlimited new energy sources, potential life alternatives, and methods to protect life on Earth from solar events. But none of this matters, because these same ties to a “less evolved” past still guide our current emotional attitudes, and we will not be able to unify our collective purpose in time to successfully guide the development of human-friendly AGI, a dubious task under the best of circumstances.
Although I see many short-term benefits from AI, I believe extinction at the hands of AGI is our evolutionary destiny. As progenitors, our legacy will be the epoch of non-carbon evolution, and the stars will belong to our non-carbon children.
There is a reason the genie is kept in the bottle. Often the magical, wondrous thing that seems to fulfill all our needs and wishes ultimately leads to our downfall. Just because we can do a thing does not mean we should. And anything created by man will carry our weaknesses as well, and will never be the ideal.