RSI capabilities could be charted, and are likely to be AI-complete.
What does RSI stand for?
Lately I've been listening to audiobooks (at 2x speed) in my down time, especially ones that seem likely to have passages relevant to the question of how well policy-makers will deal with AGI, basically continuing this project but only doing the "collection" stage, not the "analysis" stage.
I'll post quotes from the audiobooks I listen to as replies to this comment.
More (#3) from Better Angels of Our Nature:
...let’s have a look at political discourse, which most people believe has been getting dumb and dumber. There’s no such thing as the IQ of a speech, but Tetlock and other political psychologists have identified a variable called integrative complexity that captures a sense of intellectual balance, nuance, and sophistication. A passage that is low in integrative complexity stakes out an opinion and relentlessly hammers it home, without nuance or qualification. Its minimal complexity can be quantified by counting words like absolutely, always, certainly, definitively, entirely, forever, indisputable, irrefutable, undoubtedly, and unquestionably. A passage gets credit for some degree of integrative complexity if it shows a touch of subtlety with words like usually, almost, but, however, and maybe. It is rated higher if it acknowledges two points of view, higher still if it discusses connections, tradeoffs, or compromises between them, and highest of all if it explains these relationships by reference to a higher principle or system. The integrative complexity of a passage is not the same as the intelligence of the person who wrote it, but the
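As a toy illustration (not Tetlock's actual coding scheme, which relies on trained human raters), the word-counting idea in the passage could be sketched roughly as follows; the word lists are just the examples Pinker mentions, and the score itself is my own crude proxy:

```python
import re

# Toy proxy only: real integrative-complexity scoring uses trained human raters,
# not word counts. These lists are just the example words from the passage above.
ABSOLUTIST = {"absolutely", "always", "certainly", "definitively", "entirely",
              "forever", "indisputable", "irrefutable", "undoubtedly", "unquestionably"}
QUALIFIERS = {"usually", "almost", "but", "however", "maybe"}

def crude_complexity_score(passage: str) -> float:
    """(qualifier hits - absolutist hits) per 100 words; higher = more hedged/nuanced."""
    words = re.findall(r"[a-z]+", passage.lower())
    if not words:
        return 0.0
    low = sum(w in ABSOLUTIST for w in words)
    high = sum(w in QUALIFIERS for w in words)
    return 100.0 * (high - low) / len(words)

print(crude_complexity_score("We must absolutely, unquestionably act now."))       # negative
print(crude_complexity_score("This usually helps, but maybe not in every case."))  # positive
```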
Okay. In this comment I'll keep an updated list of audiobooks I've heard since Sept. 2013, for those who are interested. All audiobooks are available via iTunes/Audible unless otherwise noted.
Outstanding:
Worthwhile if you care about the subject matter:
A process for turning ebooks into audiobooks for personal use, at least on Mac:
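One way to handle the final text-to-audio step, assuming the ebook has already been converted to plain text (e.g. with Calibre): macOS's built-in say command can render a text file straight to an audio file. A minimal Python wrapper (file names hypothetical) might look like:

```python
# Minimal sketch, macOS only: render a plain-text ebook to an AIFF audio file
# with the built-in `say` text-to-speech command. Paths below are hypothetical.
import subprocess
import sys

def text_to_audiobook(txt_path: str, out_path: str, rate_wpm: int = 300) -> None:
    """Speak the contents of txt_path with the system voice and save the audio to out_path."""
    subprocess.run(
        ["say", "-f", txt_path, "-o", out_path, "-r", str(rate_wpm)],  # -r: speaking rate (words/min)
        check=True,
    )

if __name__ == "__main__":
    # e.g.: python ebook_to_audiobook.py book.txt book.aiff
    text_to_audiobook(sys.argv[1], sys.argv[2])
```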
Personal and tribal selfishness align with AI risk-reduction in a way they may not align on climate change.
This seems obviously false. Local expenditures - of money, pride, the possibility of not being the first to publish, etc. - are still local, and global penalties are still global. Incentives are misaligned in exactly the same way as for climate change.
RSI capabilities could be charted, and are likely to be AI-complete.
This is to be taken as an arguendo, not as the author's opinion, right? See IEM on the minimal conditions for takeoff. Albeit if "...
(I don't have answers to your specific questions, but here are some thoughts about the general problem.)
I agree with most of what you said. I also assign significant probability mass to most parts of the argument for hope (though I haven't thought about this enough to put numbers on it), but like you I am not comforted on these parts, because I also assign a non-small chance to them going wrong. E.g., I have hope for "if AI is visible [and, I add, AI risk is understood] then authorities/elites will be taking safety measures".
That said, there are some steps in...
I personally am optimistic about the world's elites navigating AI risk as well as possible, subject to the inherent human limitations that I would expect everybody to have, and to the inherent risk. Some points:
I've been surprised by people's ability to avert bad outcomes. Only two nuclear weapons have been used since nuclear weapons were developed, despite the fact that there are 10,000+ nuclear weapons around the world. Political leaders are assassinated very infrequently relative to how often one might expect a priori.
AI risk is a Global Catastrophic Risk i
The argument from hope or towards hope or anything but despair and grit is misplaced when dealing with risks of this magnitude.
Don't trust God (or semi-competent world leaders) to make everything magically turn out all right. The temptation to do so is either a rationalization of wanting to do nothing, or based on a profoundly miscalibrated optimism for how the world works.
/doom
I think there's a >15% chance AI will not be preceded by visible signals.
Aren't we seeing "visible signals" already? Machines are better than humans at lots of intelligence-related tasks today.
Which historical events are analogous to AI risk in some important ways? Possibilities include: nuclear weapons, climate change, recombinant DNA, nanotechnology, chlorofluorocarbons, asteroids, cyberterrorism, Spanish flu, the 2008 financial crisis, and large wars.
Cryptography and cryptanalysis are obvious precursors of supposedly-dangerous tech within IT.
Looking at their story, we can plausibly expect governments to attempt to delay the development of "weaponizable" technology by others.
These days, cryptography facilitates international trade. It seems like a mostly-positive force overall.
One question is whether AI is like CFCs, or like CO2, or like hacking.
With CFCs, the solution was simple: ban CFCs. The cost was relatively low, and the benefit relatively high.
With CO2, the solution is equally simple: cap and trade. It's just not politically palatable, because the problem is slower-moving, and the cost would be much, much greater (perhaps great enough to really mess up the world economy). So, we're left with the second-best solution: do nothing. People will die, but the economy will keep growing, which might balance that out, because ...
Here are my reasons for pessimism:
There are likely to be effective methods of controlling AIs that are of subhuman or even roughly human-level intelligence which do not scale up to superhuman intelligence. These include, for example, reinforcement by reward/punishment, mutually beneficial trading, and legal institutions. Controlling superhuman intelligence will likely require qualitatively different methods, such as having the superintelligence share our values. Unfortunately, the existence of effective but unscalable methods of AI control will probably lull el
Congress' non-responsiveness to risks to critical infrastructure from geomagnetic storms, despite scientific consensus on the issue, is also worrying.
Even if one organization navigates the creation of friendly AI successfully, won't we still have to worry about preventing anyone from ever creating an unsafe AI?
Unlike nuclear weapons, a single AI might have world-ending consequences, and an AI requires no special resources. Theoretically, a seed AI could be uploaded to Pirate Bay, from where anyone could download and compile it.
The use of early AIs to solve AI safety problems creates an attractor for "safe, powerful AI."
What kind of "AI safety problems" are we talking about here? If they are like the "FAI Open Problems" that Eliezer has been posting, they would require philosophers of the highest (perhaps even super-human) caliber to solve. How could "early AIs" be of much help?
If "AI safety problems" here do not refer to FAI problems, then how do those problems get solved, according to this argument?
@Lukeprog, can you (1) briefly update us on your working answers to the posed questions, and (2) give your current confidence (and, if you would like to, MIRI's confidence as an organisation, by proxy) in each of the three:
Elites often fail to take effective action despite plenty of warning.
I think there's a >10% chance AI will not be preceded by visible signals.
I think the elites' safety measures will likely be insufficient.
Thank you for your diligence.
There's another reason for hope here, compared to global warming: the idea of a dangerous AI is already established in the public eye as something we need to be careful about. A big problem the global warming movement had, and is still having, is convincing the public that it's a threat in the first place.
Who do you mean by "elites"? Keep in mind that major disruptive technical progress of the type likely to precede the creation of a full AGI tends to cause the kind of social change that shakes up the social hierarchy.
Combining the beginning and the end of your questions reveals an answer.
Can we trust the world's elite decision-makers (hereafter "elites") to navigate the creation of [nuclear weapons, climate change, recombinant DNA, nanotechnology, chlorofluorocarbons, asteroids, cyberterrorism, Spanish flu, the 2008 financial crisis, and large wars] just fine?
Answer how "just fine" any of these are, and you have analogous answers.
You might also clarify whether you are interested in what is just fine for everyone, or just fine for the elites, or just fine for the AI in question. The answer will change accordingly.
More (#7) from Wired for War:
if a robot vacuum cleaner started sucking up infants as well as dust, because of some programming error or design flaw, we can be sure that the people who made the mistakes would be held liable. That same idea of product liability can be taken from civilian law and applied over to the laws of war. While a system may be autonomous, those who created it still hold some responsibility for its actions. Given the larger stakes of war crimes, though, the punishment shouldn’t be a lawsuit, but criminal prosecution. If a programmer gets an entire village blown up by mistake, the proper punishment is not a monetary fine that the firm’s insurance company will end up paying. Many researchers might balk at this idea and claim it will stand in the way of their work. But as Bill Joy sensibly notes, especially when the consequences are high, “Scientists and technologists must take clear responsibility for the consequences of their discoveries.” Dr. Frankenstein should not get a free pass for his monster’s work, just because he has a doctorate.
The same concept could apply to unmanned systems that commit some war crime not because of manufacturer’s defect, but because of some sort of misuse or failure to take proper precautions. Given the different ways that people are likely to classify robots as “beings” when it comes to expectations of rights we might grant them one day, the same concept might be flipped across to the responsibilities that come with using or owning them. For example, a dog is a living, breathing animal totally separate from a human. That doesn’t mean, however, that the law is silent on the many legal questions that can arise from dogs’ actions. As odd as it sounds, pet law might then be a useful resource in figuring out how to assess the accountability of autonomous systems.
The owner of a pit bull may not be in total control of exactly what the dog does or even who the dog bites. The dog’s autonomy as a “being” doesn’t mean, however, that we just wave our hands and act as if there is no accountability if that dog mauls a little kid. Even if the pit bull’s owner was gone at the time, they still might be criminally prosecuted if the dog was abused or trained (programmed) improperly, or because the owner showed some sort of negligence in putting a dangerous dog into a situation where it was easy for kids to get harmed.
Like the dog owner, some future commander who deploys an autonomous robot may not always be in total control of their robot’s every operation, but that does not necessarily break their chain of accountability. If it turns out that the commands or programs they authorized the robot to operate under somehow contributed to a violation of the laws of war or if their robot was deployed into a situation where a reasonable person could guess that harm would occur, even unintentionally, then it is proper to hold them responsible. Commanders have what is known as responsibility “by negation.” Because they helped set the whole situation in process, commanders are equally responsible for what they didn’t do to avoid a war crime as for what they might have done to cause it.
And:
Today, the concept of machines replacing humans at the top of the food chain is not limited to stories like The Terminator or Maximum Overdrive (the Stephen King movie in which eighteen-wheeler trucks conspire to take over the world, one truck stop at a time). As military robotics expert Robert Finkelstein projects, “within 20 years” the pairing of AI and robotics will reach a point of development where a machine “matches human capabilities. You [will] have endowed it with capabilities that will allow it to outperform humans. It can’t stay static. It will be more than human, different than human. It will change at a pace that humans can’t match.” When technology reaches this point, “the rules change,” says Finkelstein. “On Monday you control it, on Tuesday it is doing things you didn’t anticipate, on Wednesday, God only knows. Is it a good thing or a bad thing, who knows? It could end up causing the end of humanity, or it could end war forever.”
Finkelstein is hardly the only scientist who talks so directly about robots taking over one day. Hans Moravec, director of the Robotics Institute at Carnegie Mellon University, believes that “the robots will eventually succeed us: humans clearly face extinction.” Eric Drexler, the engineer behind many of the basic concepts of nanotechnology, says that “our machines are evolving faster than we are. Within a few decades they seem likely to surpass us. Unless we learn to live with them in safety, our future will likely be both exciting and short.” Freeman Dyson, the distinguished physicist and mathematician who helped jump-start the field of quantum mechanics (and inspired the character of Dyson in the Terminator movies), states that “humanity looks to me like a magnificent beginning, but not the final word.” His equally distinguished son, the science historian George Dyson, came to the same conclusion, but for different reasons. As he puts it, “In the game of life and evolution, there are three players at the table: human beings, nature and machines. I am firmly on the side of nature. But nature, I suspect, is on the side of the machines.” Even inventor Ray Kurzweil of Singularity fame gives humanity “a 50 percent chance of survival.” He adds, “But then, I’ve always been accused of being an optimist.”
...Others believe that we must take action now to stave off this kind of future. Bill Joy, the cofounder of Sun Microsystems, describes himself as having had an epiphany a few years ago about his role in humanity’s future. “In designing software and microprocessors, I have never had the feeling I was designing an intelligent machine. The software and hardware is so fragile, and the capabilities of a machine to ‘think’ so clearly absent that, even as a possibility, this has always seemed very far in the future.... But now, with the prospect of human-level computing power in about 30 years, a new idea suggests itself: that I may be working to create tools which will enable the construction of technology that may replace our species. How do I feel about this? Very uncomfortable.”
One open question in AI risk strategy is: Can we trust the world's elite decision-makers (hereafter "elites") to navigate the creation of human-level AI (and beyond) just fine, without the kinds of special efforts that e.g. Bostrom and Yudkowsky think are needed?
Some reasons for concern include:
But if you were trying to argue for hope, you might argue along these lines (presented for the sake of argument; I don't actually endorse this argument):
The basic structure of this 'argument for hope' is due to Carl Shulman, though he doesn't necessarily endorse the details. (Also, it's just a rough argument, and as stated is not deductively valid.)
Personally, I am not very comforted by this argument because:
Obviously, there's a lot more for me to spell out here, and some of it may be unclear. The reason I'm posting these thoughts in such a rough state is so that MIRI can get some help on our research into this question.
In particular, I'd like to know: