In response to MIRI strategy
Comment author: lukeprog 28 October 2013 06:24:28PM *  36 points [-]
  • Pamphlets work for wells in Africa. They don't work for MIRI's mission. The inferential distance is too great, the ideas are too Far, the impact is too far away.
  • Eliezer spent SIAI's early years appealing directly to people about AI. Some good people found him, but the audience was being filtered for "interest in future technology" rather than "ability to think," and thus when Eliezer would make basic arguments about e.g. the orthogonality thesis or basic AI drives, the responses he would get were basically random (except from the few good people). So Eliezer wrote The Sequences and HPMoR, and now the filter is "ability to think," or at least "interest in improving one's thinking," and these people, in our experience, are much more likely to do useful things when we present the case for EA, for x-risk reduction, for FAI research, etc.
  • Still, we keep trying direct mission appeals, to some extent. I've given my standard talk, currently titled "Effective Altruism and Machine Intelligence," at Quixey, Facebook, and Heroku. This talk explains effective altruism, astronomical stakes, the x-risk landscape, and the challenge of FAI, all in 25 minutes. I don't know yet how much good this talk will do. There's Facing the Intelligence Explosion and the forthcoming Smarter Than Us. I've spent a fair amount of time promoting Our Final Invention.
  • I don't think we can get much of anywhere with a 1-page pamphlet, though. We tried a 4-page pamphlet once; it accomplished nothing.
In response to comment by lukeprog on MIRI strategy
Comment author: pslunch 29 October 2013 03:43:33AM 6 points [-]

I would hesitate to use failures during "SIAI's early years" as evidence of how easy or difficult the task is. First, the organization seems far more capable now than it was at the time. Second, the landscape has shifted dramatically even in the last few years: limited AI is continuing to expand, and with it discussion of the potential impacts (most of it ill-informed, but still).

While I share your skepticism about pamphlets as such, I do tend to think that MIRI has a greater chance of shifting the odds away from UFAI through persuasion and education than by trying to build an FAI or doing mathematical research.

Comment author: pslunch 12 September 2013 04:37:29PM 1 point [-]

If FOOMing doesn't move us past the near- or barely-transhuman level too quickly, another policy area to consider could be immigration. Humans have a bad track record of responding to outgroups, and the patterns of those responses seem very similar across political and social conditions. Obviously this is just a piece of the puzzle, but it might be worth tossing into the mix.

Comment author: Eliezer_Yudkowsky 09 September 2013 05:25:56PM 5 points [-]

XiXiDu wasn't attempting or requesting anonymity - his LW profile openly lists his true name - and Alexander Kruel is someone with known problems (and a blog openly run under his true name). RobbBB might not have known offhand that Kruel is the same person as "XiXiDu", although this is public knowledge, nor realized that XiXiDu had the same irredeemable status as Loosemore.

I would not randomly out an LW poster for purposes of intimidation - I don't think I've ever looked at a username's associated private email address. Ever. Actually, I'm not even sure offhand whether our registration process requires or verifies one, since I was created as a pre-existing user at the dawn of time.

I do consider RobbBB's work highly valuable and I don't want him to feel disheartened by mistakenly thinking that a couple of eternal and irredeemable semitrolls are representative samples. Due to Civilizational Inadequacy, I don't think it's possible to ever convince the field of AI or philosophy of anything even as basic as the Orthogonality Thesis, but even I am not cynical enough to think that Loosemore or Kruel are representative samples.

Comment author: pslunch 10 September 2013 09:01:44PM 5 points [-]

Thank you for the clarification. While I have a certain hesitance to throw around terms like "irredeemable", I do understand the frustration with a certain, let's say, overconfident and persistent brand of misunderstanding, and how difficult it can be to maintain a public forum in its presence.

My one suggestion is that, if the goal was to avoid confusing RobbBB (whose comments are wonderfully high-quality, by the way), a private message might have been better. If the goal was more generally to minimize confusion for those of us who are newer or less versed in LessWrong lore, more description might have been useful ("a known and persistent troll" or whatever) rather than just a name from the enemies list.

Comment author: Eliezer_Yudkowsky 05 September 2013 11:37:36PM -1 points [-]

Warning as before: XiXiDu = Alexander Kruel.

Comment author: pslunch 09 September 2013 06:12:53AM *  4 points [-]

I'm confused as to the reason for the warning/outing, especially since the community seems to be doing an excellent job of dealing with his somewhat disjointed arguments. Downvotes, refutation, or banning in extreme cases are all viable forum-preserving responses. Publishing a dissenter's name seems at best bad manners and at worst rather crass intimidation.

I only did a quick search on him, and although some of the behavior was quite obnoxious, is there anything I've missed that justifies this?

Comment author: pslunch 09 September 2013 04:11:37AM 1 point [-]

Another major factor for grads or advanced undergrads is the research the professor is doing. This primarily comes into play during office hours (which are generally empty except before exams). Especially with established figures, office hours can be your only chance to get to know them (most are far too busy to take casual callers).

Even the worst lecturers are sometimes extraordinary one-on-one, and even when they're not, people doing interesting work tend to have far more contacts than average with other people doing interesting work. Show them that you're engaged and curious, and invisible doors will open to you (many of the most interesting positions are never posted, but are instead filled through recommendations).

Comment author: pslunch 05 September 2013 05:36:57AM 5 points [-]

As an irregular consumer of LW, I would find a fine-grained subforum system fantastically useful: I could scan the last couple of weeks or months of submissions on topics of interest without having to sort through a lot of posts.

But rather than determining categories first, it might be useful to do rough counts of the number of articles on each topic, posting frequency, and so on (see the sketch below). You want to make sure that you have critical mass before you split things apart. Given this, as other people have suggested, retaining an "All" category, especially for the front page, seems very useful.
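A minimal sketch of the kind of count I have in mind, in Python, assuming post data can be pulled as (title, tags, date) records; the records, tag names, and threshold here are all hypothetical:

    from collections import Counter
    from datetime import date

    # Hypothetical post records: (title, topic tags, posted date). In practice
    # these would come from a site export or database query.
    posts = [
        ("Why pamphlets fail", ["outreach"], date(2013, 9, 1)),
        ("Orthogonality thesis FAQ", ["ai-risk"], date(2013, 9, 5)),
        ("Bayes for beginners", ["rationality"], date(2013, 9, 20)),
        ("FAI open problems", ["ai-risk"], date(2013, 10, 2)),
    ]

    MIN_POSTS = 2  # critical-mass threshold; a real cutoff would be tuned by hand

    # Tally how many articles fall under each topic.
    counts = Counter(tag for _, tags, _ in posts for tag in tags)

    # Topics above the threshold could get their own subsection; everything
    # else stays under the catch-all "All" category.
    subsections = sorted(tag for tag, n in counts.items() if n >= MIN_POSTS)
    print("Topics with critical mass:", subsections)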

Comment author: pslunch 05 September 2013 05:26:24AM 2 points [-]

One factor that will be difficult to evaluate is how predictions have interacted with later events. Warnings can (at times) be heeded and risks avoided. Those most difficult cases might be precisely the ones of greatest interest, given your aim of shifting humanity's odds.

A related question is how much impact these predictions had (aside from their accuracy). Things like Limits to Growth or The Population Bomb were extremely influential in spite of their predictive failures (once again, leaving aside the hypothesis that they served as self-refuting prophecies).

Once you have a better sense of these cases, it will also be interesting to evaluate how responses developed. Were the authors or predictors influential in the resulting actions? You mention at least one case in the email thread where the author was shut out of later efforts due to the prediction (Drexler). I'd be curious to see how the triggers interacted with the resulting movements or responses (if any).