private_messaging comments on Thoughts on the Singularity Institute (SI) - Less Wrong

256 Post author: HoldenKarnofsky 11 May 2012 04:31AM




Comment author: private_messaging 16 May 2012 10:31:38PM *  -2 points [-]

The thread about EY's failure to make many falsifiable predictions is better ad hominem

I meant to provide priors for the expected value of communication with SI. Sorry, that can't be done in a non-ad-hominem way. There's been a video or two where Eliezer was called the "world's foremost expert on recursive self-improvement", which normally implies having made something self-improve.

the speculation about launching terrorist attacks on fab plants is a much more compelling display of potential risk to life and property.

Ah right, I should have also linked this one. I see it was edited, replacing 'we' with 'world government' and 'sabotage' with sanctions and military action. BTW, that speculation is by gwern; is he working at SIAI?

What probability would you give to FinalState's assertion of having a working AGI?

AGI is ill-defined. For something that would foom enough to pose a potential danger, infinitesimally small.

Ultimately: I think the risk to his safety is small and the payoff negligible, while the risk from his software is pretty much nonexistent.

Comment author: Vladimir_Nesov 16 May 2012 10:47:25PM *  8 points [-]

There's been video or two where Eliezer was called "world's foremost expert on recursive self improvement"

This usually happens when the person being introduced wasn't consulted about the choice of introduction.

Comment author: private_messaging 16 May 2012 11:18:06PM 0 points [-]

It nonetheless results in significant presentation bias, whatever the cause.

My priors, for one thing, were way off in SI's favour. My own cascade of updates was triggered by seeing Alexei say that he plans to make a computer game to earn money to donate to SIAI. Before that, I had sort of assumed that the AI discussions here were about some sort of infinite-power superintelligence out of sci-fi, not unlike Vinge's Beyond: an intellectually pleasurable game of wits (I even participated a little once or twice, along the lines of how you can't debug a superintelligence). I assumed that Eliezer had achievements from which he got the attitude (I sort of confused him with Hanson to some extent), and so on. I have looked into it more carefully since.