hairyfigment comments on Why AGI is extremely likely to come before FAI - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
This almost has to be false. I personally think CEV sounds like the best direction I currently know about, but maybe the process of extrapolation has a hidden 'gotcha'. Hopefully a decision theory that can model self-modifying agents (like our extrapolated selves, perhaps, as well as the AI) will help us figure out what we should be asking. Settling on one approach before then seems premature, and in fact neither the SI nor Eliezer has done so.