timtyler comments on Satisficers want to become maximisers - Less Wrong
Alternately, a satisficer could build a maximiser — for example, if you don't give it the ability to modify its own code into a maximiser, it could construct a separate one instead. It also might build a paperclip-making von Neumann machine that isn't anywhere near a maximiser, but is still insanely dangerous.
I notice that a satisficing agent isn't well-defined: what does it do when it has two different ways of satisfying its goals? It may be possible to make a safe one if you come up with a good enough answer to that question.
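A minimal sketch of the underspecification (hypothetical code, not from the post — the action names, utilities, and random tie-break are all illustrative assumptions):

```python
import random

def maximiser_action(actions, utility):
    # A maximiser picks the single highest-utility action.
    return max(actions, key=utility)

def satisficer_action(actions, utility, threshold):
    # A satisficer only requires utility >= threshold; which
    # satisfactory action it takes is left unspecified.
    good_enough = [a for a in actions if utility(a) >= threshold]
    if not good_enough:
        # One possible fallback when nothing satisfies (also unspecified).
        return maximiser_action(actions, utility)
    # The underspecified step: any element of good_enough "satisfies",
    # so here we just pick one at random.
    return random.choice(good_enough)

actions = ["wait", "make 10 paperclips", "tile the universe with paperclips"]
utility = {"wait": 0,
           "make 10 paperclips": 10,
           "tile the universe with paperclips": 10**6}.get

# Both paperclip actions clear the threshold; the definition alone
# doesn't say which is chosen.
print(satisficer_action(actions, utility, threshold=10))
```

Whatever rule fills in that `random.choice` line (lowest effort? lowest impact? highest utility among the satisfactory set?) is doing all the safety work — and the last option just recovers a maximiser.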
Yep. Coding "don't unleash (or become) a maximiser, or anything similar" is very tricky.
It may be. But encoding "safe" for a satisficer sounds like it's probably just as hard as constructing a safe utility function in the first place.