
heredami comments on [LINK] Why I'm not on the Rationalist Masterlist - Less Wrong Discussion

21 Post author: Apprentice 06 January 2014 12:16AM


Comments (866)


Comment author: [deleted] 08 January 2014 12:19:48AM *  7 points

More broadly, I'm skeptical of 'intelligence' in general. It doesn't seem like a useful term.

People here have tried to define intelligence in stricter terms. See Playing Taboo with “Intelligence”, where Muehlhauser defines 'intelligence' as an agent’s ability to achieve goals in a wide range of environments.

Your post seems to be more about free will than about intelligence as Muehlhauser defines it in the above article. Free will has been covered quite comprehensively on LessWrong, so I'm not particularly interested in debating it.

Anyway, if you define intelligence as the ability to achieve goals in a wide range of environments then it doesn't really matter if the AI's actions are just an extension of what it was programmed to do. Even people are just extensions of what they were "programmed to do by evolution". Unless you believe in magical free will, one's actions have to come from some source and in this regard people don't differ from paper clip maximizers.

What would yours be?

I just think there are good optimizers and then there are really good optimizers. Between these there aren't any sudden jumps, except when a FOOM happens and possibly the jump from unFriendly to Friendly. There isn't any sudden point at which the AI becomes sentient, and the question of how well the AI resembles humans is just a question of how well the AI can optimize toward resembling them.

Say we bet, you and I, on whether AI will happen in 50 years. What would you want me to accept as evidence that it had done so?

There are already some really good optimizers, like Deep Blue and other chess computers that are far better at playing chess than their makers. But you probably meant sentient AIs? I don't know exactly how sentience works, but I think something akin to the Turing test, showing how well the AI can behave like a human, is sufficient to show that an AI is sentient, at least for one subset of sentient AIs. To reach a FOOM scenario the AI doesn't have to be sentient, just really good at cross-domain optimization.

Comment author: WalterL 08 January 2014 03:53:19AM 0 points

I'm confused. You are looking for good reasons to believe that AI is not possible, per your post two above, but from your beliefs it would seem that you consider AI either to already exist (optimizers) or to be impossible (sentient AIs).

Comment author: [deleted] 08 January 2014 04:04:53AM 2 points

I don't believe sentient AIs are impossible, and I'm sorry if I gave that impression. But apart from that, yes, that is a roundabout version of my belief, though I would prefer the word "AI" be taboo'd in this case. This doesn't mean my way of thinking is set in stone; I still want to update my beliefs and seek ways to think about this differently.

If it was unclear, by "strong AI" I meant an AI that is capable of self-improving to the point of FOOM.