Squark comments on Hawking/Russell/Tegmark/Wilczek on dangers of Superintelligent Machines [link] - Less Wrong

Post author: Dr_Manhattan 21 April 2014 04:55PM




Comment author: Squark 24 April 2014 09:57:51AM

...a FAI gains you as much difference as available, minus the opportunity cost of FAI's development...

Exactly. So for building FAI to be a good idea, we need to expect its benefits to outweigh the opportunity cost (we could spend the remaining time "partying" rather than developing FAI).

For example, one possible benefit is a few years of a strongly optimized world, which might outweigh all of the moral value of past human history.

Neat. One way it might work is the FAI running much-faster-than-realtime WBEs (whole brain emulations), so that we gain a huge number of subjective years of life. This works for any inevitable impending disaster.