Phlebas comments on Satisficers want to become maximisers - Less Wrong

21 Post author: Stuart_Armstrong 21 October 2011 04:27PM


Comment author: [deleted] 21 October 2011 06:40:46PM *  1 point

E(U(there exists an agent A maximising U) ≥ E(U(there exists an agent A satisficing U)

The reason this equation looks confusing is because (I presume) there ought to be a second closing bracket on both sides.

Anyhow, I agree that a satisficer is almost as dangerous as a maximiser. However, I've never come across the idea that a satisficing agent "has a lot to recommend it" on Less Wrong.

I thought that the vast majority of possible optimisation processes - maximisers, satisficers or anything else - are very likely to destroy humanity. That is why CEV, or in general the incorporation into an AI of at least one complete set of human values, is necessary in order for AGI not to be almost certainly uFAI.
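The inequality quoted above can be illustrated with a toy Monte Carlo sketch. Everything here (the action set, the utilities, the satisficing threshold) is a hypothetical illustration, not anything from the original post: a satisficer only needs to clear its threshold, so it may settle for a worse action, while a maximiser always takes the best one. An agent satisficing U that could bring a maximiser of U into existence would therefore clear its threshold at least as well by doing so.

```python
import random

def expected_utility(policy, trials=10_000):
    """Monte Carlo estimate of E[U] under a given action policy."""
    return sum(policy() for _ in range(trials)) / trials

# Toy world (hypothetical numbers): two actions with noisy payoffs.
ACTIONS = {"safe": 5.0, "risky": 9.0}   # true expected utilities

def maximiser():
    # Always takes the action with the highest expected utility.
    best = max(ACTIONS.values())
    return random.gauss(best, 1.0)

def satisficer(threshold=4.0):
    # Takes the first action that clears the threshold, not the best one.
    for eu in ACTIONS.values():
        if eu >= threshold:
            return random.gauss(eu, 1.0)

# E[U | a maximiser of U exists] >= E[U | a satisficer of U exists]
assert expected_utility(maximiser) >= expected_utility(satisficer)
```

In this sketch the satisficer settles for the "safe" action (expected utility 5), while the maximiser takes "risky" (expected utility 9), so the inequality holds strictly; with a higher threshold the two policies coincide and it holds with equality.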

Comment author: Stuart_Armstrong 21 October 2011 06:52:08PM *  2 points

The reason this equation looks confusing is because (I presume) there ought to be a second closing bracket on both sides.

There are second closing brakets on both sides. Look closely. They have always been there. Honest, guv. No, do not look into your cache or at previous versions. They lie! I would never have forgotten to put closing brakets.

Nor would I ever misspell the word braket. Or used irony in a public place ;-)

Comment author: Stuart_Armstrong 21 October 2011 06:49:42PM 2 points

This is a simple argument, which I hadn't seen before, for why satisficers are not a good way to go about things.

I've been looking at Oracles and other non-friendly AGIs that may nevertheless be survivable, so it's good to know that satisficers are not to be counted among them.