RobbBB comments on The genie knows, but doesn't care - Less Wrong

Post author: RobbBB 06 September 2013 06:42AM

Comment author: RobbBB 12 September 2013 08:57:06PM 1 point

Nearly all software that superficially looks like it's going to go Skynet on you and kill you isn't going to do that, either.

Sure. Because nearly all software that superficially looks to a human like it's a seed AI is not a seed AI. The argument for 'programmable indirect normativity is an important research focus' nowhere assumes that it's particularly easy to build a seed AI.

"If there are seasoned AI researchers who can't wrap their heads around the five theses", then you are going to feel more pleased with yourself, being a believer

Hm? No. Dissonance is painful. People feel happier agreeing than disagreeing.

which releases dopamine and reinforces whatever fallacies of reasoning you make.

Releasing dopamine also reinforces whatever correct reasoning one carries out. Good reasoning is just as much a brain process as bad reasoning.

Comment deleted 12 September 2013 09:22:41PM
Comment author: RobbBB 12 September 2013 09:38:15PM 1 point

I didn't say everyone who rejects any of the theses does so purely because s/he didn't understand it. That doesn't make it cease to be a problem that most AGI researchers don't understand all of the theses, or the case supporting them. You may be familiar with the theses only from the Sequences, but they've all been defended in journal articles, book chapters, and conference papers. See e.g. Chalmers 2010 and Chalmers 2012 for the explosion thesis, or Bostrom 2012 for the orthogonality thesis.

Comment deleted 12 September 2013 10:12:33PM
Comment author: RobbBB 13 September 2013 08:05:45AM 0 points

You're still picking those particular views due to the endorsement by Yudkowsky.

Your psychological speculation fails you. I actually read the articles I cited, and I found their arguments convincing.

With regards to Chalmers and Bostrom, they are philosophers with zero understanding of the actual issues involved in AI

This makes it sound like you've never read anything by those two authors on the subject. Possibly you're trying to generalize from your cached idea of a 'philosopher'. Expertise in philosophy does not in itself make one less qualified to weigh in on meta-ethics, normative ethics, epistemology, philosophy of mind, moral psychology, philosophy of cognitive and information sciences, or for that matter AI theory. Read the papers I cited. They get the empirical facts right, they focus on the issues philosophy is more or less built for, and they present their case clearly and concisely. If philosophers have no place in this debate, then they have no place in any debate.

that guy has quite distinct understanding of the whole issue from the others

What's the distinction you have in mind?

which is probably why you wouldn't list him.

No? I've cited Omohundro before, and I'll cite him again.