
ChristianKl comments on Open thread, Aug. 10 - Aug. 16, 2015 - Less Wrong Discussion

5 Post author: MrMind 10 August 2015 07:29AM



Comment author: ChristianKl 10 August 2015 10:10:27PM 2 points

> This is turning out to be harder to get across than I figured. First you thought I thought an AI should keep its programmers awake until they died; now it should wirehead them? I'm not an orc.

You set two goals. One is to maximize fulfillment of expressed desires, which likely leads to wireheading. The other is to keep constant communication, which doesn't allow sleep.

> It isn't trying to figure out clever ways to get around your restriction, because it doesn't want to.

Controlling the information flow isn't getting around your restriction; it's the straightforward way of matching expressed desires with results. Otherwise a human might ask for two contradictory things, and the AGI can't fulfill both. The AGI has to prevent that case from arising to achieve a 100% fulfillment score.

You are not the first person to think that taming an AGI is trivial, but MIRI holds that taming an AGI is a hard task. That position is the result of deep engagement with the issue.

> Which you presumably come close to (since you are turning on an AI), but why not also have the red button? What does it hurt?

I don't object to a red button, but you didn't call for one at the start. Maximizing expressed desires isn't a red button.