gwern comments on Objections to Coherent Extrapolated Volition - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Comments (56)
Indeed? May I suggest reading http://www.wired.com/wiredscience/2011/08/spoilers-dont-spoil-anything/ (PDF) ?
I don't follow this one. Is this just making the argument-by-definition that an omniscient being couldn't be curious? The universe seems to place hard limits on how much computation can be done and how much storage can be accessed, so there will always be things an FAI will not know. (I can also appeal to more general principles here: Godel, Turing, Hutter or Legg's no-elegant-predictor results, etc.)
Er, what? If the agent isn't reaping greater payoffs, then it was simply mistaken (that happens sometimes) and can go back to not cooperating. If it regarded defecting as intrinsically good, then why did it ever start cooperating?
If this is the basic point, you're missing a lot of results more germane than what you put down: Openness and parasite load (or psilocybin), IQ and cooperation and taking normative-economics stances (besides the linked cites, Pinker had a ton of relevant material in the later chapters of Better Angels), etc.
I think XiXiDu is actually saying that if you model a given human, but with a changed context that flows from their inferred values (smarter, more the people we wished we were, etc.), you will wind up with a model of a completely different human whose values are not coherent with those of the source human, because our context is extremely important in determining what we think, know, want, and value.