ArisKatsaris comments on Less Wrong Q&A with Eliezer Yudkowsky: Video Answers - Less Wrong
20: What is the probability that this is the ultimate base layer of reality?
Eliezer gave a joke answer to this question, since it seems impossible to know.
However, I myself assign a significant probability that this is not the base level of reality. Theuncertainfuture.com tells me that I assign a 99% probability of AI by 2070, with the cumulative probability approaching 0.99 even before that date. So why would I be likely to be living as an original human circa 2000 when transhumans will be running ancestor simulations? I suppose it's possible that transhumans won't run them, but I would want to: my merged transhuman mind could then assimilate the knowledge gained from running a human consciousness of myself through interesting points in human history.
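To make the arithmetic behind this explicit (a rough sketch of the standard indifference calculation, with N an assumed parameter rather than anything I have measured): if each original history gives rise to N conscious ancestor simulations, and an observer cannot tell from the inside which kind they are, then P(base layer) ≈ 1/(N + 1), which goes to zero as N grows. Even a modest N, combined with a ~0.99 probability of AI, leaves little probability mass on this being the base layer.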
The Zero-One-Infinity Rule also makes it seem less likely that this is the base level of reality: if reality admits one universe (or one layer of simulation), it plausibly admits infinitely many. http://catb.org/jargon/html/Z/Zero-One-Infinity-Rule.html
It seems rather convenient that I am living in the most interesting period in human history. Not to mention that I have a lifestyle in the top 1% of all humans living today.
I believe this is a minority viewpoint here, so my rationalist calculus is probably wrong. Why?
The Zero-One-Infinity Rule hasn't been shown to apply to our reality, and even if it did, it would still permit "One".
Can you give us a list of the most-to-least interesting periods in human history? You have an Anglo name, and I think you're living in a particularly boring period of Anglo-American history. (If you had an Arab name, this might be an interesting period, though not as interesting as if you were an Arab in the time of Mohammed or the first few Caliphs.)
You don't actually know what you would want with a transhuman mind. If simulations are fully conscious (the only sort of simulation relevant to this argument), I think running them would be a particularly cruel thing for a transhuman mind to want.