PhilGoetz comments on Ben Goertzel: The Singularity Institute's Scary Idea (and Why I Don't Buy It) - Less Wrong

Post author: ciphergoth 30 October 2010 09:31AM 32 points


Comment author: PhilGoetz 03 November 2010 10:27:49PM *  0 points

This post doesn't show up under "NEW", nor does it show up under "Recent Posts".

ADDED: Never mind. I forgot I had "disliked" it, and had "do not show an article once I've disliked it" set.

(I disliked it because I find it kind of shocking that Ben, who's very smart, and who I'm pretty sure has read the things that I would refer him to on the subject, would say that the Scary Idea hasn't been laid out sufficiently. Maybe some people need every detail spelled out for them, but Ben isn't one of them. Also, he is committing the elementary error of not considering expected value.

ADDED: Now that I've read Ben's entire post, I upvoted rather than downvoted this post. Ben was not committing the error of not considering expected value, so much as responding to many SIAI-influenced people who are not considering expected value. And I agree with most of what Ben says. I would add that Eliezer's plan to construct something that will provably follow some course of action - any course of action - chosen by hairless primates, is likely to be worse in the long run than a hard-takeoff AI that kills all humans almost immediately. Explaining what I mean by "worse" is problematic; but no more problematic than explaining why I should care about propagating human values.)

I also disagree about what the Scary Idea is - to me, the idea that the AI will choose to keep humans around for all eternity is scarier than that it will not. But that is something Eliezer either disagrees with, or has deliberately made obscure.

Comment author: timtyler 31 March 2011 11:33:00PM 0 points

to me, the idea that the AI will choose to keep humans around for all eternity is scarier than that it will not. But that is something Eliezer either disagrees with, or has deliberately made obscure.

Wouldn't it make sense to keep some humans around for all eternity - in the history simul-books? That seems to make sense - and not be especially scary.

Comment author: PhilGoetz 31 March 2011 11:45:35PM *  0 points

Sure. Tiling the universe largely with humans is the strong scary idea. Locking in human values for the rest of the universe is the weak scary idea. Unless the first doesn't imply the second, in which case I don't know which is more scary.

Comment author: timtyler 04 November 2010 08:27:00AM 0 points

It does now for me. Strange.

Comment author: PhilGoetz 04 November 2010 03:10:00PM 1 point

Oops. My mistake. It's a setting I had that I forgot about.

Comment author: ata 03 November 2010 10:35:15PM *  0 points

It doesn't?

It's off the front page of NEW/Recent Posts, as there have been more than ten other posts since it was posted, but it's still there.

Comment deleted 04 November 2010 02:12:40AM *
Comment author: ata 04 November 2010 02:30:34AM 0 points

Weird. It's there for me.

  • Qualia Soup, a rationalist and a skilled You Tube jockey by Raw_Power | 6
  • Value Deathism by Vladimir_Nesov | 21
  • Ben Goertzel: The Singularity Institute's Scary Idea (and Why I Don't Buy It) by ciphergoth | 24
  • Cambridge Meetups Nov 7 and Nov 21 by jimrandomh | 4
  • Making your explicit reasoning trustworthy by AnnaSalamon | 60