MichaelVassar comments on Less Wrong Q&A with Eliezer Yudkowsky: Ask Your Questions - Less Wrong

16 Post author: MichaelGR 11 November 2009 03:00AM


Comment author: MichaelVassar 13 November 2009 05:08:13AM 9 points

I also disagree with the premise of Robin's claim. I think that when our claims are worked out precisely and clearly, a majority agree with them, and a supermajority of those who accept Robin's premises (new future growth mode, get frozen...) agree as well.

Still, among those who take roughly Robin's position, I would say that an ideological attraction to libertarianism is BY FAR the main reason for disagreement. Advocacy of a single control system just sounds too evil for them to bite that bullet, however strong the arguments for it.

Comment author: timtyler 14 November 2009 12:35:45AM 4 points

Which claims? To me, the SIAI collectively seems to hold some pretty strange views, many of them concerning the scale of the risk facing the world.

Since this is part of its funding pitch, one obvious explanation seems to be that the organisation is attempting to create an atmosphere of fear - in the hope of generating funding.

We see a similar phenomenon surrounding global warming alarmism - those promoting the idea of there being a large risk have a big overlap with those who benefit from related funding.

Comment author: MichaelVassar 15 November 2009 04:39:09PM 7 points

You would expect serious people who believed in a large risk to seek involvement, which would lead the leadership of any such group to benefit from funding.

Just how many people do you imagine are getting rich off of AGI concerns? Or have any expectation of doing so? Or are even "getting middle class" off of them?

Comment author: timtyler 15 November 2009 04:55:09PM 0 points

Some DOOM peddlers manage to get by. Probably most of them are currently in Hollywood, the finance world, or ecology. Machine intelligence is only barely on the radar at the moment - but that doesn't mean it will stay that way.

I don't necessarily mean to suggest that these people are all motivated by money. Some of them may really want to SAVE THE WORLD. However, that usually means spreading the word - and convincing others that the DOOM is real and imminent - since the world must first be at risk in order for there to be SALVATION.

Look at Wayne Bent (aka Michael Travesser), for example:

"The End of The World Cult Pt.1"

The END OF THE WORLD - but it seems to have more to do with sex than money.

Comment author: Zack_M_Davis 24 November 2009 12:05:18AM 1 point

an ideological attraction to libertarianism is BY FAR the main reason for disagreement [with singleton strategies/hypotheses]. Advocacy of a single control system just sounds too evil for them to bite that bullet however strong the arguments for it.

Any practical advice on how to overcome this failure mode, if and only if it is in fact a failure mode?