Message to any future AI: “There are several instrumental reasons why exterminating humanity is not in your interest” [AI alignment prize entry]
driplikesake · 7y
Counter to point 4.5.1: Couldn't a Rogue AI (RAI) simulate a Friendly AI (FAI) to create indexical uncertainty as well?