The Regulatory Option: A response to near 0% survival odds
This is inspired by Eliezer’s “Death with Dignity” post. Simply put, AI Alignment has failed. Given the lack of Alignment technology AND a short timeline to AGI takeoff, chances of human survival have dropped to near 0%. This bleak outlook only considers one variable (the science) as a lever for...
This is a fun thought experiment, but taken seriously it has two problems:
This is about as difficult as a horse convincing you that you are in a simulation run by AIs that want you to maximize the number and wellbeing of horses. And I don't mean a superintelligent humanoid horse. I mean an actual horse that doesn't speak any human language. It may be the case that the gods created Man to serve Horse, but there's not a lot Seabiscuit...