Eugine_Nier comments on Our Phyg Is Not Exclusive Enough - Less Wrong

25 [deleted] 14 April 2012 09:08PM




Comment author: Eugine_Nier 15 April 2012 09:28:46PM 6 points [-]

Note that as Eliezer says here

But of course, not all the rationalists I create will be interested in my own project—and that's fine. You can't capture all the value you create, and trying can have poor side effects.

Comment author: XiXiDu 17 April 2012 09:18:05AM *  -2 points [-]

Note that as Eliezer says here

But of course, not all the rationalists I create will be interested in my own project—and that's fine. You can't capture all the value you create, and trying can have poor side effects.

I expect I could find at least a dozen quotes where he contradicts himself there, if I cared to spend that much time looking for them. Here are just a few:

(Please read up on the context.)

I honestly don’t see how a rationalist can avoid this conclusion: At this absolutely critical hinge in the history of the universe—Earth in the 21st century—rational altruists should devote their marginal attentions to risks that threaten to terminate intelligent life or permanently destroy a part of its potential. Those problems, which Nick Bostrom named ‘existential risks’, have got all the scope. And when it comes to marginal impact, there are major risks outstanding that practically no one is working on. Once you get the stakes on a gut level it’s hard to see how doing anything else could be sane.

...

I think that if you’re actually just going to sort of confront it, rationally, full-on, then you can’t really justify trading off any part of that intergalactic civilization for any intrinsic thing that you could get nowadays...

...

...if Omega tells me that I’ve actually managed to do worse than nothing on Friendly AI, that of course has to change my opinion of how good I am at rationality or teaching others rationality,...

Given the evidence, I find it hard to believe that he does not care whether Less Wrong members believe that AI risk is the most important issue today. I also don't think he would call someone a rationalist who has read everything he wrote and decided not to care about AI risk.

Comment author: [deleted] 17 April 2012 10:21:35AM 1 point [-]

You've got selective quotation down to an art form. I'm a bit jealous.