TheAncientGeek comments on Debunking Fallacies in the Theory of AI Motivation - Less Wrong

Post author: Richard_Loosemore 05 May 2015 02:46AM




Comment author: Vaniver 17 May 2015 10:50:30PM

I will try to reply to this properly later.

Thanks, and take your time!

Don't forget that all of this analysis is supposed to be about situations in which we have, so to speak, "done our best" with the AI design. That is built into the premise: if there is a no-brainer change we can make to the design of the AI to guard against some failure mode, it is assumed that this change has been made.

I feel like this could be an endless source of confusion and disagreement; if we're trying to discuss what makes airplanes fly or crash, should we assume that engineers have done their best and made every no-brainer change? I'd rather we look for underlying principles, codify best practices, and come up with lists and tests.

Comment author: TheAncientGeek 18 May 2015 10:48:05AM

If we're trying to discuss what makes airplanes fly or crash, should we assume that engineers have done their best and made every no-brainer change?

If you are in the business of pointing out to them potential problems they are not aware of, then yes, because they can be assumed to be aware of the no-brainer issues.

MIRI seeks to point out dangers in AI that aren't the result of gross incompetence or deliberate attempts to weaponise AI: it's banal to point out that those could lead to danger.