Took the survey, even though I've mostly only lurked.
I don't know what an "ontologically basic mental entity" is. Also, I only left the Singularity question blank because I think its overall probability of happening is less than 50%.
Eclipse Phase is a sci-fi RPG dealing with AI, nanotech, biotech, mind copying, and other far-future issues, all played in a straight manner. By default, characters are part of an organization created to fight against existential risks, after they've become all too real.
I have run a game of Eclipse Phase at an RPG convention in Sydney. I found it to be a very cool game; the setting in particular is very interesting and varied. However, the rules are a little complex for people who just want to try it.
There are a lot of bits which don't quite fit into hard sci-fi - aliens, psychics, nanotech that works like magic. However, it's pretty easy to leave these out, except that it's difficult to know how realistic nanotech would actually work. I doubt we'll ever use it to create mundane things due to energy constraints, but I guess in the context of the game it works.
Where the game really shines is in dealing with mind uploading, "re-sleeving", virtual worlds and psychic surgery - you can copy minds and re-merge them, even edit them to some degree. It gives a lot of scope for games that work with meta-levels of reality and manipulation of minds. I would like to use it to explore possible transhuman futures if I ever get the time.
This is an incorrect description of the 5-and-10 problem. The description given is of a different problem (one of whose aspects is addressed in cousin_it's recent writeup; in that setting the problem is resolved by Lemma 2).
The 5-and-10 problem is concerned with the following (incorrect) line of reasoning by a hypothetical agent:
"I have to decide between $5 and $10. Suppose I decide to choose $5. I know that I'm a money-optimizer, so if I do this, $5 must be more money than $10, so this alternative is better. Therefore, I should choose $5."
It seems to me that any agent unable to solve this problem would be considerably less intelligent than a human.
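To make the flawed pattern concrete, here is a very loose toy sketch (my own illustration, not a formalization of the proof-theoretic version of the problem): an agent that "evaluates" an action by assuming its own optimality never actually compares payoffs, and so can happily return the $5 option. The function names and structure are hypothetical.

```python
def flawed_agent(options):
    """Toy illustration of the 5-and-10 style reasoning error.

    options: dict mapping action name -> payoff, e.g. {"five": 5, "ten": 10}
    """
    for action, payoff in options.items():
        # Spurious step: "suppose I choose this action; since I only choose
        # optimal actions, it must be optimal" -- the payoffs are never
        # actually compared, so the first option considered gets returned.
        assumed_optimal = True
        if assumed_optimal:
            return action  # can return "five", leaving money on the table


def sensible_agent(options):
    """A sane agent just compares the payoffs directly."""
    return max(options, key=options.get)


if __name__ == "__main__":
    opts = {"five": 5, "ten": 10}
    print(flawed_agent(opts))    # "five" -- the spurious choice
    print(sensible_agent(opts))  # "ten"
```

This caricature leaves out the interesting part (why a proof-based agent can actually derive the spurious "if I choose $5, then $5 ≥ $10" step), but it shows why the conclusion is absurd: the comparison between $5 and $10 is simply never made.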