eli_sennesh comments on Why are we not starting to map human values? - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Generally we aim to come up with things humans will both like and endorse. Optimizing for "like" but not "endorse" leads to various forms of drugging or wireheading (even if Eliezer does disturb me by being tempted towards such things). Optimizing for "endorse" but not "like" sounds like carrying the dystopia we currently call "real life" to its logical, horrid conclusion.
How well-founded does a set of notes or thoughts have to be in order to be worth posting here?
(shrug) Well, OK. If I consider the set of plans A which maximize our values when implemented, and the set of plans B which we endorse when they're explained to us, I'm prepared to believe that the intersection of A and B is nonempty. And really, any technique with a chance worth considering of coming up with anything in A is sufficiently outside my experience that I won't express an opinion about whether it's noticeably less likely to come up with something in that intersection. So, go for it, I guess.
Depends on whom you ask. I'd say it's the product of (novel, relevant, concise, entertaining, coherent) that gets compared to a threshold; well-founded is a nice benny but not critical. That said, posts that don't make the threshold will frequently be berated for being ill-founded if they are.