Squark comments on Hawking/Russell/Tegmark/Wilczek on dangers of Superintelligent Machines [link] - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Exactly. So for building FAI to be a good idea we need to expect its benefits to outweigh the opportunity cost (we can spend the remaining time "partying" rather than developing FAI).
Neat. One way it might work is the FAI running much-faster-than-realtime WBEs, so that we gain a huge number of subjective years of life. This works for any inevitable impending disaster.