Eitan_Zohar comments on Leaving LessWrong for a more rational life - Less Wrong Discussion

33 [deleted] 21 May 2015 07:24PM

Comment author: Eitan_Zohar 23 May 2015 09:11:57PM *  0 points

Philosophy is getting too much flak, I think. It didn't take me much effort to realize that any correct belief we hold puts us, in that respect, on equal footing with the AI.

Comment author: [deleted] 24 May 2015 05:56:12PM 0 points

You're anthropomorphizing far too much. Thoughts that come easily to us may be very difficult for an AGI, if it is constructed in a different way.

Comment author: Eitan_Zohar 24 May 2015 06:04:40PM *  0 points

Um... I don't follow this at all. Where do I anthropomorphize? I think that the concept of 'knowledge' would be pretty universal.

Comment author: [deleted] 24 May 2015 06:22:28PM 0 points

Hrm. Maybe I'm reading you wrong? I thought you were making the common argument that any belief we arrive at, the AI would arrive at as well. My response was that because the AI runs on different mental machinery, it is entirely possible that there are beliefs we arrive at which the AI simply never considers, and vice versa. Were you saying something different?

Comment author: Eitan_Zohar 24 May 2015 06:44:24PM *  0 points

You, quite rightly, criticized the notion that an AI would have as much of an epistemic advantage over humans as we have over a cow. Any correct notions we hold are just that: correct. It's not as if there's some further truth in them that only a godlike intelligence could access.