Richard_Loosemore comments on Debunking Fallacies in the Theory of AI Motivation - Less Wrong

Post author: Richard_Loosemore 05 May 2015 02:46AM


Comment author: TheAncientGeek 16 May 2015 11:42:51AM

Loosemore's claim could be steelmanned into the claim that the Maverick Nanny isn't likely: it requires an AI with goals, with hardcoded goals, with hardcoded goals that include a full explicit definition of happiness, and with a buggy full explicit definition of happiness. That's a chain of premises.

Comment author: Richard_Loosemore 16 May 2015 04:26:31PM

That isn't even remotely what the paper said. It's a parody.

Comment author: TheAncientGeek 16 May 2015 06:45:09PM

Since it is a steelman, it isn't supposed to be what the paper is saying.

Are you maintaining, in contrast, that the Maverick Nanny is flatly impossible?

Comment author: Richard_Loosemore 18 May 2015 08:09:13PM

Sorry, I may have been confused about what you were trying to say because you were responding to someone else, and I hadn't come across the 'steelman' term before.

I withdraw 'parody' (sorry!), but... that isn't quite the logical structure the paper was supposed to have.

It feels like you steelmanned it onto some other railroad track, so to speak.