pop comments on "Stupid" questions thread - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Given enough time for ideas to develop, any smart kid in a basement could build an AI, and every organization in the world has a massive incentive to do so. Only omnipresent surveillance could prevent everyone from writing a particular computer program.
Once you have enough power flying around to actually prevent AI, you are dealing with AI-level threats already (a not-necessarily friendly singleton).
So FAI is actually the easiest way to prevent UFAI.
The other reason is that a Friendly Singleton would be totally awesome. Like so totally awesome that it would be worth it to try for the awesomeness alone.
Your tone reminded me of super religious folk who are convinced that, say, "Jesus is coming back soon!" and that it'll be "totally awesome".
That's nice.
Your comment reminds me of those internet atheists who are so afraid of seeming religious that they refuse to imagine how much better the world could be.
I do imagine how much better the world could be. I actually do want MIRI to succeed. But currently I have low confidence in their future success, so I don't feel "bliss" (if that's the right word).
BTW I'm actually slightly agnostic because of the simulation argument.
Enthusiasm? Excitement? Hope?
Yep. I don't take it too seriously, but it's at least coherent to imagine beings outside the universe who could reach in and poke at us.