XiXiDu comments on Perfectly Friendly AI - Less Wrong
This is similar to a discussion I wanted to start, so I'll just leave a comment here instead:
If we were to detect the presence of an alien civilisation before the SIAI implements CEV, should the CEV account for the aliens' extrapolated volition?
Eliezer Yudkowsky
Eliezer Yudkowsky (counterfactual)
There are a few problems:
Both arguments cut both ways. If you accept the premise that the best approach is to account for all agents, then we are left with the problem of possibly being in the minority. But it appears much more likely that we'll be risk-averse and expect the aliens not to follow the same line of reasoning, in which case the FAIs of both civilizations might try to subdue the other.
What implications would arise from the detection of an alien civilization technologically similar to ours?
For analogous reasons CEV<humanity> does not sound particularly 'Friendly' to me.
If we account for alien volition directly, then yes, this could be a problem. But if we only care about aliens because we're implementing CEV and some humans care about aliens, then scope insensitivity comes into play and the amount of resources that will be dedicated to the aliens is limited.
Scope insensitivity is a failure to properly account for certain things; CEV is designed to account for everything. It is possible that some conclusions arrived at due to scope insensitivity will be upheld, but we do not yet know whether that is true, and current human choices that we know to be the product of biases definitely do not count as evidence about how CEV will choose.
If we only implement CEV for the people working for the SIAI, and some of them care about the rest of humanity... what's the difference?