https://www.fhi.ox.ac.uk/vacancies-for-research-assistants/ It was not up on the website at the time you asked, but it is up now.
There was some confusion in the comments to my original post “Newcomb, Bostrom, Calvin: Credence and the strange path to a finite afterlife” (http://lesswrong.com/r/discussion/lw/mxu/newcomb_bostrom_calvin_credence_and_the_strange/) which makes me think I was not nearly clear enough in the original. I am sincerely sorry for this. I am also really appreciative to everyone who left such interesting comments despite this. I have added some notes in an update to clarify my argument. I also responded to comments in a way that I hope will further illustrate som...
I agree with you about the inside / outside view. I also think I agree with you about the characteristics of the simulators in relationship to the simulation.
I think I just have a vaguely different, and perhaps personal, sense of how I would define "divine" and "god." If we are in a simulation, I would not consider the simulators gods. Very powerful people, but not gods. If they tried to argue that they were gods because they were made of a lot of organic molecules whereas I was just information in a machine, I would suggest it was a distinction without a difference. Show me an uncaused cause or something outside of physics and we can talk.
Sure, but how is that relevant? There are people who want to accelerate the destruction of the world because that would bring in the Messiah faster -- so what?
By analogy, what are some things that decrease my credence in thinking that humans will survive to a “post-human stage.” For me, some are 1) We seem terrible at coordination problems at a policy level, 2) We are not terribly cautious in developing new, potentially dangerous, technology, 3) some people are actively trying to end the world for religious/ideological reasons. So as I learn more about ...
I don't think this is true. The correct version is your following sentence:
A lot of people on LW do not
People on LW, of course, are not terribly representative of people in general.
LW is not really my personal sample for this. I have spent about a year working this into conversations. My sense is that in my experience something like 2/3 of people two-box. Nozick, who popularized the problem, said he thought the split was about 50/50. While it is again not representative, of the thousand people who answered the question in this survey, it was about equal ...
First of all, there seem to be lots of ways in which we might fail to develop such technology. We might go extinct, or our civilization might collapse, or something of the kind (outright extinction seems really unlikely, but collapse of technological civilization much more likely). It might turn out that computational superpowers just aren't really available -- that there's only so much processing power we have any realistic way of harnessing. It might turn out that such things are possible but we simply aren't smart enough to find our way to them.
Absolutely. I...
the proposal here
I just want to clarify in case you mean my proposal, as opposed to the proposal by jacobcannell. This is my reading of what jacobcannell said as well, but it is not at all a part of my argument. In fact, while I would be interested in reading jacobcannell’s thoughts on identity and the self, I share the same skeptical intuitions as other posters in this thread about this. I am open to being wrong, but on first impression I have an extremely difficult time imagining that it will be at all possible to simulate a person after they have die...
I hope I am not intercepting a series of questions when you were only interested in gjm's response, but I enjoyed your comment and wanted to add my thoughts.
I think the problem with this sort of argument is that it is like cooperating in the prisoner's dilemma hoping that superrationality will make the other player cooperate: it doesn't work.
I am not sure it is settled that it does not work, but I also do not think that most, or maybe any, of my argument relies on an assumption that it does. The first part of it does not even rely on an assumption tha...
I think this risks becoming an argument about the definition of a word, since we can mostly agree on the potential features of the set-up. But because I have a sense that this claim comes with an implicit charge of fideism, I'll take another round at clarifying my position. Also, I have written a short update to my original post to clarify some things that I think I was too vague on in the original post. There is a trade-off between being short enough to encourage people to read it and being thorough enough to be clear, and I think I under-wrote it a...
This is a really fascinating idea, particularly the aspect that we can influence the likelihood we are in a simulation by making it more likely that simulations happen.
Maybe? We can increase our credence, but I think whether or not it increases the likelihood is an open question. The intuitions seem to split between two-boxers and a subset of one-boxers.
That said, thank you for the secondary thought experiment, which is really interesting.
My opinion, for what it's worth, is that either version of A is very much more likely than either version of B for multiple reasons, and that widespread interest in ideas like the one in this post would give only very weak evidence for A over B. So enthusiastic takeup of the ideas in this post would justify at most a tiny increase in our credence in an afterlife.
I wonder if you might expand on your thoughts on this a bit more. I tend to think that the odds of being in a simulation are quite low as well, but for me the issue is more the threat of extinct...
Consider a different argument.
Our world is either simulated or not.
If our world is not simulated, there's nothing we can do to make it simulated. We can work towards other simulations, but those would not be us.
If our world is simulated, we are already simulated and there's nothing we can do to increase our chance of being simulated because it's already so.
I am guessing you two-box in the Newcomb paradox as well, right? If you don't, then you might take a second to realize you are being inconsistent.
If you do two-box, realize that a lot of people do not. A lot of p...
Simulations of long-ago ancestors..?
Imagine that you have the ability to run a simulation now. Would you want to populate it with people like you, that is, fresh people created de novo and possibly people from your parents' and grandparents' generations, or would you want to populate it with Egyptian peasants from 3000 B.C.? Homo habilis, maybe? How far back do you want to go?
What the simulation would be like depends entirely on the motivation for running it. That is actually sort of the point of the post. If people want to be in a certain kind of simulation, the...
I think you and I might be missing one another. Or that I am at least missing your point. Accordingly, my responses below might be off point. Hopefully they are not.
“Keep in mind that the "simulation hypothesis" is also known as "creationism". In particular it implies that there are beings who constructed the simulation, who are not bound by its rules, and who can change it at will. The conventional name for such beings is "gods".”
I don’t think that necessarily follows. Creationism implies divinity, and gods implies someth...
I am not sure it matters when it comes. Presumably, unless we go extinct some other way first, it will come at some point, and when it does, the technology is unlikely to be an obstacle. Once the technology exists, and probably before, we may need to figure out if and how we want to do simulations. If people have a clear, well-developed, and strong preference going into it (including potentially putting it into the AI as a requirement for its modeling of humanity, or it being a big enough “movement” to show up in our CEV) that will likely h...
When you say you believe this, do you mean you believe it to be the case, or you believe it to be a realistic possibility?
I stumbled across Tipler when reading up on the simulation argument, and it inspired further “am I being a crackpot” self-doubt, but I don’t think this argument looks much like his. Also, I am not really trying to promote it so much as to feel it out. I have not yet found any reason to think I am wrong about it being a possibility, though I myself do not “feel” it to be likely. That said, with stuff like this, I have no sense that intui...
I would not worry about that for three reasons: 1) I am very shy online. Even posting this took several days, and I did not look at the comments for almost a day after. 2) I am bringing this here first to see if it is worth considering, and also because I want input not only on the idea, but on the idea of spreading it further. 3) I would never identify myself with MIRI, etc., not because I would not want to be identified that way, but because I have absolutely not earned it. I also give everyone full permission to disavow me as a lone crackpot as needed sho...
Thank you for your comment, and for taking a skeptical approach towards this. I think that trying to punch holes in it is how we figure out if it is worth considering further. I honestly am not sure myself.
I think that my own thoughts on this are a bit like Bostrom's skepticism of the simulation hypothesis, where I do not think it is likely, but I think it is interesting, and it has some properties I like. In particular, I like the “feedback loop” aspect of it being tied into metaphysical credence. The idea that the more people buy into an idea, the more l...
I have been lurking around LW for a little over a year. I found it indirectly through the Simulation Argument > Bostrom > AI > MIRI > LW. I am a graduate of Yale Law School, and have an undergraduate degree in Economics and International Studies focusing on NGO work. I also read a lot, but in something of a wandering path that I realize can and should be improved upon with the help, resources, and advice of LW.
I have spent the last few years living and working in developing countries around the world in various public interest roles, trying to ...
My sense from talking with Professor Dafoe is that he is primarily interested in recruiting people based on their general aptitude, interest, and dedication to the issue rather than relying heavily on specific educational credentials.