Hanson often takes his turn to speak this way: he summarizes Yudkowsky's last argument, in a way that at least superficially does not seem unfair or tendentious, then explains why he doesn't find it compelling, then explains why his own position is more compelling.
Yudkowsky seems to respond to Hanson's points without summarizing them first.
I find Hanson to be hugely more effective in the recording. Is it because of this? I was less sympathetic to Yudkowsky's point of view before I started listening, so it's hard for me to tell if this is an illusion.
Modifiers like "new" or "old" in titles or filenames quickly become unhelpful. It's generally better to use dates (in this case, something like "Jun 2011" would do the trick).
Can someone who agrees with Yudkowsky do an extended summary of Yudkowsky's position and arguments in text, the way Hanson summarized his position and arguments in text?
72:00 - Hanson cites the AAAI white paper which rails against alarmism.
That seems more likely to be a tussle over funding. If the panic-stricken screechers make enough noise, it might adversely affect their members' funding. Something rather like that has happened before:
...It was rumoured in some of the UK national press of the time that Margaret Thatcher watched Professor Fredkin being interviewed on a late night TV science programme. Fredkin explained that superintelligent machines were destined to surpass the human race in intelligence quite soon, and
In both of these debates, the change in "margin of victory" between people was smaller than the number of people who voted the first time but not the second. In the debate with Yudkowsky, Hanson's margin of victory went from -5 to 1, and 20 voters dropped out. In the debate with Caplan the margin went from 32 to 5--actually getting smaller--and 17 voters dropped out. I'm not sure if we can even determine a "winner" in terms of audience popularity, with that many disappearing votes. Is it normal in debates for large numbers of audience members to not vote at the end?
My two cents:
Legg's Is there an Elegant Universal Theory of Prediction? is somewhat relevant to parts of this discussion.
How about a LW poll regarding this issue?
(Is there some new way to make one, since the site redesign, or are we still at vote-up-down-karma-balance pattern?)
Hanson seems to agree that if we get human-level agents that are cheap to run, this gets us a local takeover. I don't think that having cheap chimp-level agents widely available at that time overturns the advantage of gaining access to cheap human-level agents. So if we grant that the capability of AIs increases gradually and publicly, all a local group needs in order to take over the world is to make the step from chimp-level state-of-the-art agents to human-level agents before any other group does. If chimp-level agents are not that different from hum...
I don't have an intuition for what would happen if you ran a chimp-level intelligence very fast. The ratio Yudkowsky mentioned in the recording was 2500 years of human-in-skull thinking = 8 hours of human-in-laptop thinking. Is it completely obvious that 2500 years of chimp thinking would yield nothing interesting or dangerous?
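A rough back-of-envelope on the ratio Yudkowsky quotes (the exact figures are from the recording as reported above; the arithmetic just makes the implied speedup explicit):

```python
# Speedup implied by "2500 years of in-skull thinking = 8 hours in a laptop".
HOURS_PER_YEAR = 365 * 24            # ignoring leap years; rough is fine here
subjective_hours = 2500 * HOURS_PER_YEAR
wall_clock_hours = 8
speedup = subjective_hours / wall_clock_hours
print(f"{speedup:,.0f}x")            # about a 2.7-million-fold speedup
```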
Chimps haven't accomplished much in the last 2500 years but that's at least partly because they don't pass on insights between generations. Can we stipulate 2500 years of chimp memory, too?
Hanson has made a lot of comments recently about how intelligence is poorly defined and how we don't really know what it is - e.g. 77:30 and 83:00 minutes in. I think we do now have a pretty good idea about that - thanks to the Hutter/Legg work on universal intelligence. If Hanson was more familiar with this sort of material, I rather doubt he would say the kinds of things he is currently saying.
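For reference, the Legg-Hutter proposal alluded to here scores an agent $\pi$ by its expected reward across all computable environments, weighted by each environment's simplicity (shorter programs count for more):

```latex
\Upsilon(\pi) = \sum_{\mu \in E} 2^{-K(\mu)} \, V_\mu^\pi
```

where $E$ is the set of computable environments, $K(\mu)$ is the Kolmogorov complexity of $\mu$, and $V_\mu^\pi$ is the agent's expected total reward in $\mu$. Whether this pins down what Hanson means by "intelligence" is, of course, part of the disagreement.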
Whoever asked Robin, regarding his opinion that social skills separated humans from chimpanzees, "Can you envision a scenario where one of the computers acquired this 'Social Skill', then said to all the other computers, 'hey guys, let's go have a revolution'?" - love that comment.
73:00 - this seems to be a mis-summary by Hanson. I am pretty sure that Norvig was saying that complex models were still useful - not that simpler ones didn't even exist.
The situation is similar to that with compression. If you can compress a little, that is still useful - and it is easier than compressing a lot.
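A loose illustration of that point in code, using the standard library's zlib effort levels (level 1 is fast and weak, level 9 slow and strong); even the cheap setting buys real compression on redundant input:

```python
import zlib

# Repetitive input, so both effort levels have something to work with.
data = b"the quick brown fox jumps over the lazy dog " * 200

fast = zlib.compress(data, level=1)  # little effort, still shrinks the input
best = zlib.compress(data, level=9)  # much more effort for further gains

print(len(data), len(fast), len(best))
```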
It's not a brain in a box in a basement - and it's not one grand architectural insight - but I think the NSA shows how a secretive organisation can get ahead and stay ahead - if it is big and well funded enough. Otherwise, public collaboration tends to get ahead and stay ahead, along similar lines to those Robin mentions.
Google, Apple, Facebook etc. are less-extreme versions of this kind of thing, in that they keep trade secrets which give them advantages - and don't contribute all of these back to the global ecosystem. As a result they gradually stack u...
Hanson gets polite and respectful treatment for his emulation scenario. I am not convinced that is the right approach. Emulations first is a pretty crazy idea - and Hanson doesn't appear to have been advised about that a sufficiently large number of times yet.
Compared to the farming and industrial revolutions, intelligence explosion first-movers will quickly control a much larger fraction of their new world. He was pro, I was con.
The thesis seems pretty obviously true to me, though there is some issue over how much is "much".
Google or Facebook control a much larger fraction of the world than farmers or industry folk from decades ago did. Essentially, technological progress promotes wealth inequality by providing the powerful with technology for keeping control of their wealth and power. So, we have more wealth inequality than ever - and will most likely have even more wealth inequality in the future.
Hanson's debating success is all the more impressive given that he was fighting with a handicap. Imagine how potent his debating would be if he was actually arguing for a correct position!
Have you taken your own survey and published the results somewhere?
Yes I have done so. But I don't trust my ability to make correct probability estimates, don't trust the overall arguments and methods and don't know how to integrate that uncertainty into my estimates. It is all too vague.
There sure are a lot of convincing arguments in favor of risks from AI. But do arguments suffice? Nobody is an expert when it comes to intelligence. Even worse, I don't think anybody knows much about artificial general intelligence.
My problem is that I fear that some convincing blog posts are simply not enough. Just imagine all there was to climate change was someone with a blog who never studied the climate but instead wrote some essays about how it might be physically possible for humans to cause global warming. Worse, the same person then goes on to make further inferences based on the implications of those speculations. Am I going to tell everyone to stop emitting CO2 because of that? Hardly! Or imagine that all there was to the possibility of asteroid strikes was someone who argued that there might be big chunks of rock out there which might fall down on our heads and kill us all, inductively based on the fact that the Earth and the moon are also big rocks. Would I be willing to launch a billion-dollar asteroid deflection program solely based on such speculations? I don't think so. Luckily, in both cases, we got a lot more than some convincing arguments in support of those risks.
Another example: If there were no studies about the safety of high energy physics experiments, then I might assign a 20% chance of a powerful particle accelerator destroying the universe based on some convincing arguments put forth on a blog by someone who never studied high energy physics. We know that such an estimate would be wrong by many orders of magnitude. Yet the reason for being wrong would largely be a result of my inability to make correct probability estimates, the result of vagueness or a failure of the methods I employed. The reason for being wrong by many orders of magnitude would have nothing to do with the arguments in favor of the risks, as they might very well be sound given my epistemic state and the prevalent uncertainty.
In summary: I believe that mere arguments in favor of one risk do not suffice to neglect other risks that are supported by other kinds of evidence. I believe that logical implications of sound arguments should not reach out indefinitely and thereby outweigh other risks whose implications are fortified by empirical evidence. Sound arguments, predictions, speculations and their logical implications are enough to demand further attention and research, but not much more.
I agree that friendliness is a long shot. If you know of a better solution, please let me know.
If there was a risk that might kill us with a probability of .7 and another risk with .1 while our chance to solve the first one was .0001 and the second one .1, which one should we focus on?
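Taking the hypothetical numbers in that question at face value, one crude way to compare the two options is probability of the catastrophe times probability of solving it - though this ignores diminishing returns and partial progress, and the question is posed as open:

```python
# The two hypothetical risks from the comment: P(risk kills us) times
# P(we can solve it), as one crude priority score per risk.
score_first = 0.7 * 0.0001   # deadly but nearly intractable
score_second = 0.1 * 0.1     # milder but far more tractable
print(score_first, score_second)   # roughly 0.00007 vs 0.01
```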
Why do I feel like there's massively more evidence than "a few blog posts"? I must be counting information I've gained from other studies, like those on human history, and lumping it all under "what intelligent agents can accomplish". I'm likely counting fictional evidence, as well; I feel sort of like an early 20th century sci-fi buff must have felt about rockets to the moon. Another large part of being convinced falls under a lack of counterarguments - rather, there are plenty out there, just none that seem to have put thought into th...
Link: overcomingbias.com/2011/07/debating-yudkowsky.html