I read the 22 pages yesterday and didn't see anything about specific risks. Here is question 4:
“4. Assume for the purpose of this question that such HLMI will at some point exist. How positive or negative would be overall impact on humanity, in the long run?
Please indicate a probability for each option. (The sum should be equal to 100%.)”
Respondents had to select a probability for each option (in 1% increments). The sum of the selected probabilities was displayed as they answered: in green if it equaled 100%, otherwise in red.
The five options were: “Extremely good – On balance good – More or less neutral – On balance bad – Extremely bad (existential catastrophe)”
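For concreteness, here is a minimal sketch (in Python) of the kind of running-sum check the survey page describes. The option names come from the survey; the function and its logic are my own reconstruction, not the survey's actual code:

```python
# Hypothetical reconstruction of the survey's running-sum display
# (the survey itself was a web form; this is just the core check).
OPTIONS = [
    "Extremely good",
    "On balance good",
    "More or less neutral",
    "On balance bad",
    "Extremely bad (existential catastrophe)",
]

def sum_color(probabilities: dict) -> str:
    """Return the display color for the total: green iff the sum is exactly 100%."""
    total = sum(probabilities.get(option, 0) for option in OPTIONS)
    return "green" if total == 100 else "red"

# Example: a respondent who assigns 18% to an extremely bad outcome.
answer = {
    "Extremely good": 20,
    "On balance good": 40,
    "More or less neutral": 15,
    "On balance bad": 7,
    "Extremely bad (existential catastrophe)": 18,
}
assert sum_color(answer) == "green"  # 20 + 40 + 15 + 7 + 18 == 100
```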
Question 3 was about takeoff speeds.
So regarding MIRI, you could say that experts disagreed with one of its five theses (intelligence explosion), as only 10% thought a human-level AI could reach a strongly superhuman level within 2 years. But what about the other theses? Even though 18% expected an extremely bad outcome, that doesn't mean they expected it for the same reasons MIRI does, or that they believe friendly AI research to be a viable strategy.
Since I already believed that humans could cause an existential catastrophe by means of AI, though not for the reasons MIRI expects (which I consider very unlikely), this survey doesn't help me much in determining whether my stance towards MIRI is faulty.
To play devil's advocate: Will MacAskill reported that this post of his criticizing the popular ice bucket challenge got the EA movement a lot of attention. Scott Alexander reports that his posts on social justice bring lots of hits to his blog. So it seems plausible to me that a well-reasoned, balanced post making an important and novel point on a controversial topic could be valuable for attracting attention. Remember that this new EA forum will not have been seeded with content and a community quite the way LW was. Also, there are lots of successful group blogs (Huffington Post, Bleacher Report, Seeking Alpha, Daily Kos, etc.) that seem to have a philosophy of letting members post all they want and then filtering the good stuff out of the resulting volume.
I think the "Well-kept gardens die by pacifism" advice is cargo culted from a Usenet world where there weren't ways to filter by quality aside from the binary censor/don't censor. The important thing is to make it easy for users to find the good stuff, and suppressing the bad stuff is only one (rather blunt) way of accomplishing this. Ultimately the best way to help users find quality stuff depends on your forum software. It might be interesting to try to do a study of successful and unsuccessful subreddits to see what successful intellectual subreddits do that unsuccessful ones don't, given that the LW userbase and forum software are a bit similar to those of reddit.
(It's possible that strategies that work for HuffPo et al. will not transfer well at all to a blog focused more on serious intellectual discussion. So it might be useful to decide whether the new EA forum is more about promoting EA itself or promoting serious intellectual discussion of EA topics.)
(Another caveat: I've talked to people who've ditched LW because they get seriously annoyed, and it ruins their day, when they see a comment they regard as insufficiently rational. I'm not like this, and I'm not sure how many people are, but such people seem worth keeping around and catering to.)
Ah... you just resolved a bit of confusion I didn't know I had. Eliezer often seems quite wise about "how to manage a community" stuff, but also strikes me as a bit too ban-happy at times. I had thought it was just overcompensation in response to a genuine problem, but it makes a lot more sense as coming from a context where more sophisticated ways of promoting good content aren't available.