Kawoomba comments on The genie knows, but doesn't care - Less Wrong

54 Post author: RobbBB 06 September 2013 06:42AM




Comment author: PhilGoetz 07 September 2013 08:51:53PM 17 points

Richard Loosemore is a professor of mathematics with about twenty publications in refereed journals on artificial intelligence.

I was at an AI conference--it may have been the 2009 AGI conference in Virginia--where Selmer Bringsjord gave a talk explaining why he believed that, in order to build "safe" artificial intelligences, it was necessary to encode their goal systems in formal logic so that we could predict and control their behavior. It had much in common with your approach. After his talk, a lot of people in the audience, including myself, were shaking their heads in dismay at Selmer's apparent ignorance of everything in AI since 1985. Richard got up and schooled him hard, in his usual undiplomatic way, on the many reasons why his approach was hopeless. You could've benefited from being there. Michael Vassar was there; you can ask him about it.

AFAIK, Richard is one of only two people who have taken the time to critique your FAI + CEV ideas, who have decades of experience trying to codify English statements into formal representations, building them into AI systems, turning them on, and seeing what happens. The other is me. (Ben Goertzel has the experience, but I don't think he's interested in your specific computational approach as much as in higher-level futurist issues.) You have declared both of us to be not worth talking to.

In your excellent fan-fiction Harry Potter and the Methods of Rationality, one of your themes is the difficulty of knowing whether you're becoming a Dark Lord when you're much smarter than almost everyone else. When you spend your time on a forum that you control and that is built around your personal charisma, moderated by votes that you are not responsible for but that you know will side with you in aggregate unless you step very far over the line, and you write off as irredeemable the two people you should listen to most, that's one of the signs. When you hold entrenched beliefs that are suspiciously convenient to your particular circumstances, such as that academic credentials should not adjust your priors, that's another.

Comment author: Kawoomba 08 September 2013 09:05:14AM 0 points

It had much in common with your approach. After his talk, a lot of people in the audience, including myself, were shaking their heads in dismay at Selmer's apparent ignorance of everything in AI since 1985. Richard got up and schooled him hard, in his usual undiplomatic way, in the many reasons why his approach was hopeless.

Which are?

(Not asking for a complete and thorough reproduction, which I realize is outside the scope of a comment, just some pointers or an abridged version. Mostly I wonder which arguments you lend the most credence to.)

Edit: Having read the discussion on "nothing is mere", I retract my question. There's such a thing as arguments disqualifying someone from any further discourse on a given topic:

As a result, the machine is able to state, quite categorically, that it will now do something that it KNOWS to be inconsistent with its past behavior, that it KNOWS to be the result of a design flaw, that it KNOWS will have drastic consequences of the sort that it has always made the greatest effort to avoid, and that it KNOWS could be avoided by the simple expedient of turning itself off to allow for a small operating system update ………… and yet in spite of knowing all these things, and confessing quite openly to the logical incoherence of saying one thing and doing another, it is going to go right ahead and follow this bizarre consequence in its programming.

... yes? Unless the ghost in the machine saves it ... from itself!