Comment author: Heighn 12 April 2016 04:56:16PM 0 points [-]

Agreed, especially when compared to http://www.fhi.ox.ac.uk/gcr-report.pdf.

Comment author: Heighn 12 April 2016 05:04:58PM 0 points [-]

Although, now that I think about it, this survey covers risks before 2100, so the 5% risk estimate for superintelligent AI might be that low because some of the respondents believe such AI will not arrive before 2100. Still, it seems in sharp contrast with Yudkowsky's estimate.

Comment author: Stuart_Armstrong 12 April 2016 03:23:28PM *  1 point [-]

He is a bit overconfident in that regard, I agree.

Comment author: Heighn 12 April 2016 12:49:27PM 0 points [-]

Commenting on the first myth: Yudkowsky himself seems pretty confident about this, judging from his comment here: http://econlog.econlib.org/archives/2016/03/so_far_my_respo.html. I know Yudkowsky's post was written after this LessWrong article, but it still seems relevant to mention.

Comment author: Heighn 06 March 2014 06:22:32PM 1 point [-]

By the same logic as Quantum Immortality, shouldn't we expect never to fall asleep, since we can't observe ourselves while asleep?

Comment author: Heighn 05 March 2014 07:11:45PM *  0 points [-]

I was thinking about this post and came up with the following thought experiment. Suppose, by some quantum mechanism, Bob has a 50% probability of falling asleep for the next 8 hours and a 50% probability of staying awake for the next 8 hours. By the same logic as QI, should Bob expect (with 100% certainty) to be awake after 2 hours, since he cannot observe himself being asleep? I would say no. But then, doesn't QI fail as a result?
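The tension in the thought experiment comes down to two different questions one can ask about the branches. A minimal simulation sketch (my own illustration, not anything from the discussion above; the 50/50 split and trial count are arbitrary assumptions) makes the distinction concrete:

```python
import random

# Illustrative sketch of Bob's thought experiment.
# Each trial flips a fair "quantum coin": Bob either stays awake
# or sleeps for the next 8 hours. We then ask two questions:
#   1. What fraction of all Bobs are awake after 2 hours?
#   2. What fraction of Bobs who can make an observation
#      (i.e., the awake ones) are awake after 2 hours?

random.seed(0)
trials = 100_000
awake = sum(random.random() < 0.5 for _ in range(trials))

p_unconditional = awake / trials   # the "outside" view: about 0.5
p_given_observer = awake / awake   # trivially 1.0: every observer
                                   # who can check is, by selection,
                                   # awake

print(f"P(awake)               = {p_unconditional:.3f}")
print(f"P(awake | can observe) = {p_given_observer:.3f}")
```

QI-style reasoning answers question 2 (every observation Bob makes finds him awake), while the intuition that Bob should expect a coin flip answers question 1; the thought experiment is pressing on whether the second number is the right thing for Bob to anticipate.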