perpetualpeace1 comments on A question about Eliezer - Less Wrong

33 Post author: perpetualpeace1 19 April 2012 05:27PM

You are viewing a single comment's thread.

Comment author: perpetualpeace1 19 April 2012 07:09:56PM *  4 points

> I would guess EY sees himself as more of a researcher than a forecaster, so you shouldn't be surprised if he doesn't make as many predictions as Paul Krugman.

OK. If that is the case, then I think that a fair question to ask is what have his major achievements in research been?

But secondly, a lot of the discussion on LW and most of EY's research presupposes certain things happening in the future. If AI is actually impossible, then trying to design a friendly AI is a waste of time (or, alternatively, if AI won't be developed for 10,000 years, then designing a friendly AI is not an urgent matter). To put it bluntly, what evidence can EY offer that he's not wasting his time?

Comment author: Larks 19 April 2012 07:27:27PM 7 points

> If AI is actually impossible, then trying to design a friendly AI is a waste of time

No: if our current evidence suggests that AI is impossible, and does so strongly enough to outweigh the large downside of a negative singularity, then trying to design a friendly AI is a waste of time.

Even if it turns out that your house doesn't burn down, buying insurance wasn't necessarily a bad idea. What is important is how likely it looked beforehand, and the relative costs of the outcomes.
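The insurance analogy is an expected-value argument, and it can be made concrete with a quick calculation. The numbers below are entirely hypothetical, chosen only to illustrate the point: even when the "disaster" outcome is unlikely, insuring can still be the cheaper bet ex ante.

```python
# Expected-cost comparison for the insurance analogy.
# All figures are hypothetical, for illustration only.

p_fire = 0.01        # assumed yearly probability of a house fire
loss = 300_000       # assumed cost of losing the house
premium = 1_000      # assumed yearly insurance premium

# Without insurance, you bear the full loss with probability p_fire.
expected_cost_uninsured = p_fire * loss

# With insurance, you pay the premium regardless; the insurer covers the loss.
expected_cost_insured = premium

print(expected_cost_uninsured)  # 3000.0
print(expected_cost_insured)    # 1000
```

With these assumed numbers, insuring costs 1000 per year in expectation versus 3000 uninsured, so buying the policy was the right call beforehand even in the 99% of years when the house doesn't burn and the premium looks "wasted". The same structure is being claimed for AI risk: what matters is the probability and relative costs as judged in advance, not the outcome that actually obtains.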

Comment author: David_Gerard 19 April 2012 08:02:57PM 6 points

Claiming that AI is impossible in a world of physics is equivalent to claiming that intelligence is impossible in a world of physics; that would require human minds to work by dualism.

Of course, this is entirely separate from feasibility.

Comment author: JoshuaZ 19 April 2012 07:14:06PM *  3 points

> If AI is actually impossible, then trying to design a friendly AI is a waste of time

I would think that anyone claiming that AI is impossible bears a heavy burden of proof. However, if one instead claimed that a fast take-off is impossible or extremely unlikely, there would be a more substantive issue to discuss.