The University of Cambridge Centre for the Study of Existential Risk (CSER) is hiring!

6 crmflynn 06 October 2016 04:53PM

The University of Cambridge Centre for the Study of Existential Risk (CSER) is recruiting for an Academic Project Manager. This is an opportunity to play a shaping role as CSER builds on its first year's momentum towards becoming a permanent world-class research centre. We seek an ambitious candidate with initiative and a broad intellectual range for a postdoctoral role combining academic and project management responsibilities.

The Academic Project Manager will work with CSER's Executive Director and research team to co-ordinate and develop CSER's projects and overall profile, and to develop new research directions. The post-holder will also build and maintain collaborations with academic centres, industry leaders and policy makers in the UK and worldwide, and will act as an ambassador for the Centre’s research externally. Research topics will include AI safety, bio risk, extreme environmental risk, future technological advances, and cross-cutting work on governance, philosophy and foresight. Candidates will have a PhD in a relevant subject, or have equivalent experience in a relevant setting (e.g. policy, industry, think tank, NGO).

Application deadline: November 11th. http://www.jobs.cam.ac.uk/job/11684/

The Global Catastrophic Risk Institute (GCRI) seeks a media engagement volunteer/intern

5 crmflynn 14 September 2016 04:42PM

Volunteer/Intern Position: Media Engagement on Global Catastrophic Risk

http://gcrinstitute.org/volunteerintern-position-media-engagement-on-global-catastrophic-risk/

The Global Catastrophic Risk Institute (GCRI) seeks a volunteer/intern to work on media engagement on global catastrophic risk, the risk of events that could harm or destroy global human civilization. The work would include two parts: (1) analysis of existing media coverage of global catastrophic risk and (2) formulation of strategy for media engagement by GCRI and our colleagues. The intern may also have opportunities to get involved in other aspects of GCRI.

All aspects of global catastrophic risk would be covered. Emphasis would be placed on GCRI’s areas of focus, including nuclear war and artificial intelligence. Additional emphasis could be placed on topics of personal interest to the intern, potentially including (but not limited to) climate change, other global environmental threats, pandemics, biotechnology risks, asteroid collision, etc.

The ideal candidate is a student or early-career professional seeking a career at the intersection of global catastrophic risk and the media. Career directions could include journalism, public relations, advertising, or academic research in related social science disciplines. Candidates seeking other career directions would also be considered, especially if they see value in media experience. However, we have a strong preference for candidates intending a career on global catastrophic risk.

The position is unpaid. The intern would receive opportunities for professional development, networking, and publication. GCRI is keen to see the intern benefit professionally from this position and will work with the intern to ensure that this happens. This is not menial labor; the position offers many opportunities for enrichment.

A commitment of at least 10 hours per month is expected. Preference will be given to candidates able to make a larger time commitment. The position will begin during August-September 2016. The position will run for three months and may be extended pending satisfactory performance.

The position has no geographic constraint. The intern can work from anywhere in the world. GCRI has some preference for candidates from American time zones, but we regularly work with people from around the world. GCRI cannot provide any relocation assistance.

Candidates from underrepresented demographic groups are especially encouraged to apply.

Applications will be considered on an ongoing basis until 30 September 2016.

To apply, please send the following to Robert de Neufville (robert [at] gcrinstitute.org):

* A cover letter introducing yourself and explaining your interest in the position. Please include a description of your intended career direction and how it would benefit from media experience on global catastrophic risk. Please also describe the time commitment you would be able to make.

* A resume or curriculum vitae.

* A writing sample (optional).

The Future of Humanity Institute is hiring!

13 crmflynn 18 August 2016 01:09PM

FHI is accepting applications for a two-year position as a full-time Research Project Manager. Responsibilities will include coordinating, monitoring, and developing FHI’s activities, seeking funding, organizing workshops and conferences, and effectively communicating FHI’s research. The Research Project Manager will also be expected to work in collaboration with Professor Nick Bostrom and other researchers to advance their research agendas, and will additionally be expected to produce reports for government, industry, and other relevant organizations.

Applicants will be familiar with existing research and literature in the field and have excellent communication skills, including the ability to write for publication. He or she will have experience of independently managing a research project and of contributing to large policy-relevant reports. Previous professional experience working for non-profit organisations, experience with effective altruism, and a network in the relevant fields associated with existential risk may be an advantage, but are not essential.

To apply please go to https://www.recruit.ox.ac.uk and enter vacancy #124775 (it is also possible to find the job by choosing “Philosophy Faculty” from the department options). The deadline is noon UK time on 29 August. To stay up to date on job opportunities at the Future of Humanity Institute, please sign up for updates on our vacancies newsletter at https://www.fhi.ox.ac.uk/vacancies/.

Comment author: Clarity 03 May 2016 01:17:41AM 1 point [-]

What sort of experience and education would make a candidate competitive for this position?

Comment author: crmflynn 04 May 2016 05:56:37PM 1 point [-]

My sense from talking with Professor Dafoe is that he is primarily interested in recruiting people based on their general aptitude, interest, and dedication to the issue rather than relying heavily on specific educational credentials.

Comment author: AlexMennen 03 May 2016 06:56:21AM 1 point [-]

Source?

Comment author: crmflynn 04 May 2016 05:52:43PM 1 point [-]

https://www.fhi.ox.ac.uk/vacancies-for-research-assistants/ It was not up on the website at the time you asked, but it is up now.

Paid research assistant position focusing on artificial intelligence and existential risk

7 crmflynn 02 May 2016 06:27PM

Yale Assistant Professor of Political Science Allan Dafoe is seeking Research Assistants for a project on the political dimensions of the existential risks posed by advanced artificial intelligence. The project will involve exploring issues related to grand strategy and international politics, reviewing possibilities for social scientific research in this area, and institution building. Familiarity with international relations, existential risk, Effective Altruism, and/or artificial intelligence is a plus but not necessary. The project is done in collaboration with the Future of Humanity Institute, located in the Faculty of Philosophy at the University of Oxford. There are additional career opportunities in this area, including in the coming academic year and in the future at Yale, Oxford, and elsewhere. If interested in the position, please email allan.dafoe@yale.edu with a copy of your CV, a writing sample, an unofficial copy of your transcript, and a short (200-500 word) statement of interest. Work can be done remotely, though being located in New Haven, CT or Oxford, UK is a plus.

Comment author: crmflynn 12 November 2015 09:18:38AM 0 points [-]

There was some confusion in the comments to my original post “Newcomb, Bostrom, Calvin: Credence and the strange path to a finite afterlife” (http://lesswrong.com/r/discussion/lw/mxu/newcomb_bostrom_calvin_credence_and_the_strange/) which makes me think I was not nearly clear enough in the original. I am sincerely sorry for this. I am also really appreciative of everyone who left such interesting comments despite this. I have added some notes in an update to clarify my argument. I also responded to comments in a way that I hope will further illustrate some of the trickier bits. This should make it more interesting to read and perhaps inspire some more discussion. I was vaguely tempted to repost the original with the update, but thought that was probably bad etiquette. My hope is that anyone who was turned off by it being unclear initially might take a second look at it and the discussion if it seems like an interesting topic to them. Thank you.

Comment author: Lumifer 10 November 2015 05:36:16PM *  1 point [-]

They did not really “create” this world so much as organized certain aspects of the environment. ... If I am in the environment of a video game, I do not think that anyone has created a different world, I just think that they have created a different environment by arranging bits of pre-existing world.

That's what creation is. The issue here is inside view / outside view. Take Pac-Man. From the outside, you arranged bits of existing world to make the Pac-Man world. From the inside, you have no idea that such things as clouds, or marmosets, or airplanes exist: your world consists of walls, dots, and ghosts.

and “defying” it is not really any more miraculous than me breaking the Mars off of a diorama of the solar system

Outside/inside view again. If I saw Mars arbitrarily break out of its orbit and go careening off somewhere, that would look pretty miraculous to me.

I think that for you, “gods” emerge as a being grows in power, whereas I tend to think that divinity implies something different not just in scale, but in type.

I agree about the difference in type. It is here: these beings are not of this world. The difference between you and a character in an MMORPG is a difference in type.

Re one/two-boxers, see my answer to the other post...

Comment author: crmflynn 12 November 2015 06:16:11AM 0 points [-]

I agree with you about the inside / outside view. I also think I agree with you about the characteristics of the simulators in relationship to the simulation.

I think I just have a vaguely different, and perhaps personal, sense of how I would define "divine" and "god." If we are in a simulation, I would not consider the simulators gods. Very powerful people, but not gods. If they tried to argue with me that they were gods because they were made of a lot of organic molecules whereas I was just information in a machine, I would suggest it was a distinction without a difference. Show me the uncaused cause or something outside of physics and we can talk.

Comment author: Lumifer 10 November 2015 05:59:45PM 0 points [-]

I have spent about a year working this into conversations. I feel as though the split in my experience is something like 2/3 of people two-box. Nozick, who popularized this, said he thought it was about 50/50.

Interesting. Not what I expected, but I can always be convinced by data. I wonder to what degree religiosity plays a part -- Omega is basically God, so do you try to contest His knowledge..?

can you concede that it is possible that there are people who might want to build simulations in the hope of being in one, even if you think it is foolish?

Sure, but how is that relevant? There are people who want to accelerate the destruction of the world because that would bring in the Messiah faster -- so what?

As a turn of phrase, I was referring to two types.

My issue with this phrasing is that these two (and other) types are solely the product of your imagination. We have one (1) known example of intelligent species. That is very much insufficient to start talking about "types" -- one can certainly imagine them, but that has nothing to do with reality.

I am asking you to adjust your credence based on new information.

Which new information?

Does the fact that we construct and play video games argue for the claim that we are NPCs in a video game? Does the fact that we do bio lab experiments argue for the claim that we live in a petri dish?

Why would the fact that some people believe in Salafi Jihadism and want to form a caliphate under ISIS be evidentially relevant to determining the future stability of Syria and Iraq?

You are conflating here two very important concepts, that is, "present" and "future".

People believing in Islam are very relevant to the chances of the future caliphate. People believing in Islam are not terribly relevant to the chances that in our present we live under the watchful gaze of Allah.

As a two-boxer, you should have no sense that people in your world creating simulations means any change in your world’s current status as simulated or unsimulated.

Correct.

However, you should also update your own credence from “why would I possibly be in a simulation” to “there is a reason I might be in a simulation.”

My belief is that it IS possible that we live in a simulation but it has the same status as believing it IS possible that Jesus (or Allah, etc.) is actually God. The probability is non-zero, but it's not affecting any decisions I'm making. I still don't see why the number of one-boxers around should cause me to update this probability to anything more significant.

Comment author: crmflynn 12 November 2015 06:04:25AM 0 points [-]

Sure, but how is that relevant? There are people who want to accelerate the destruction of the world because that would bring in the Messiah faster -- so what?

By analogy, what are some things that decrease my credence in thinking that humans will survive to a “post-human stage”? For me, some are: 1) We seem terrible at coordination problems at a policy level, 2) We are not terribly cautious in developing new, potentially dangerous, technology, 3) Some people are actively trying to end the world for religious/ideological reasons. So as I learn more about ISIS and its ideology and how it is becoming increasingly popular, since they are literally trying to end the world, it further decreases my credence that we will make it to a post-human stage. I am not saying that my learning information about them is actually changing the odds, just that it is giving me more information with which to make my knowledge of the already-existing world more accurate. It’s Bayesianism.

For another analogy, my credence for the idea that “NYC will be hit by a dirty bomb in the next 20 years” was pretty low until I read about the ideology and methods of radical Islam and the poor containment of nuclear material in the former Soviet Union. My reading about these people’s ideas did not change anything; however, their ideas are causally relevant, and my knowledge of this factor increases my credence in that possibility.

For one final analogy, if there is a stack of well-shuffled playing cards in front of me, what is my credence that the bottom card is a queen of hearts? 1/52. Now let’s say I flip the top two cards, and they are a 5 and a king. What is my credence now that the bottom card is a queen of hearts? 1/50. Now let’s say I go through the next 25 cards and none of them are the queen of hearts. What is my credence now that the bottom card is the queen of hearts? 1/25. The card at the bottom has not changed. The reality is in place. All I am doing is gaining information which helps me get a sense of location. I do want to clarify, though, that I am reasoning with you as a two-boxer. I think one-boxers might view specific instances like this differently. Again, I am agnostic on who is correct for these purposes.
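To make the arithmetic concrete, here is a quick Monte Carlo sketch of the same conditioning (Python, purely illustrative; the helper name is mine, not anything from the original discussion):

    import random

    def bottom_card_credence(num_revealed, trials=200000):
        # Illustrative only: estimate P(bottom card is the queen of hearts),
        # given that the queen was not among the first num_revealed cards flipped.
        hits = 0
        survived = 0
        for _ in range(trials):
            deck = list(range(52))  # card 0 stands in for the queen of hearts
            random.shuffle(deck)
            if 0 in deck[:num_revealed]:
                continue  # queen appeared among the flipped cards; discard this run
            survived += 1
            if deck[-1] == 0:
                hits += 1
        return hits / survived

    for n in (0, 2, 27):  # no flips, then 2 flips, then 2 + 25 = 27 flips
        print(n, round(bottom_card_credence(n), 3))
    # Expect roughly 1/52 ≈ 0.019, then 1/50 = 0.02, then 1/25 = 0.04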

Now to bring it back to the point, what are some obstacles to your credence that you are in a simulation? For me, the easy ones that come to mind are: 1) I do not know if it is physically possible, 2) I am skeptical that we will survive long enough to get the technology, 3) I do not know why people would bother making simulations.

One and two are unchanged by the one-box/Calvinism thing, but when we realize both that there are a lot of one-boxers, and that these one-boxers, when faced with an analogous decision, would almost certainly want to create simulations with pleasant afterlives, then I suddenly have some sense of why #3 might not be an obstacle.

My issue with this phrasing is that these two (and other) types are solely the product of your imagination. We have one (1) known example of intelligent species. That is very much insufficient to start talking about "types" -- one can certainly imagine them, but that has nothing to do with reality.

I think you are reading something into what I said that was not meant. That said, I am still not sure what that was. I can say the exact thing in different language if it helps. “If some humans want to make simulations of humans, it is possible we are in a simulation made by humans. If humans do not want to make simulations of humans, there is no chance that we are in a simulation made by humans.” That was the full extent of what I was saying, with nothing else implied about other species or anything else.

Which new information?

Does the fact that we construct and play video games argue for the claim that we are NPCs in a video game? Does the fact that we do bio lab experiments argue for the claim that we live in a petri dish?

Second point first. How could we be in a petri dish? How could we be NPCs in a video game? How would that fit with other observations and existing knowledge? My current credence is near zero, but I am open to new information. Hit me.

Now the first point. The new information is something like: “When we use what we know about human nature, we have reason to believe that people might make simulations. In particular, the existence of one-boxers who are happy to ignore our ‘common sense’ notions of causality, for whatever reason, and the existence of people who want an afterlife, when combined, suggests that there might be a large minority of people who will ‘act out’ creating simulations in the hope that they are in one.” A LW user sent me a message directing me to this post, which might help you understand my point: http://lesswrong.com/r/discussion/lw/l18/simulation_argument_meets_decision_theory/

People believing in Islam are very relevant to the chances of the future caliphate. People believing in Islam are not terribly relevant to the chances that in our present we live under the watchful gaze of Allah.

The weird thing about trying to determine good self-locating beliefs when looking at the question of simulations is that you do not get the benefit of self-locating in time like that. We are talking about simulations of worlds/civilizations as they grow and develop into technological maturity. This is why Bostrom called them “ancestor simulations” in the original article (which you might read if you haven’t; it is only 12 pages, and if Bostrom is Newton, I am like a 7th grader half-assing an essay due tomorrow after reading the Wikipedia page).

As for people believing in Allah making it more likely that he exists, I fully agree that that is nonsense. The difference here is that part of the belief in “Am I in a simulation made by people” relies CAUSALLY on whether or not people would ever make simulations. If they would not, the chance is zero. If they would, whether or not they should, the chance is something higher.

For an analogy again, imagine I am trying to determine my credence that the (uncontacted) Sentinelese people engage in cannibalism. I do not know anything about them specifically, but my credence is going to be something much higher than zero because I am aware that lots of human societies have practiced cannibalism. I have some relevant evidence about human nature and decision making that allows other knowledge of how people act to put some bounds on my credence about this group. Now imagine I am trying to determine my credence that the Sentinelese engage in widespread coprophagia. Again, I do not know anything about them. However, I do know that no human society has ever been recorded doing this. I can use this information about other peoples’ behavior and thought processes to adjust my credence about the Sentinelese. In this case, it gives me near certainty that they do not.

If we know that a bunch of people have beliefs that will lead to them trying to create “ancestor” simulations of humans, then we have more reason to think that a different set of humans have done this already, and we are in one of the simulations.

The probability is non-zero, but it's not affecting any decisions I'm making. I still don't see why the number of one-boxers around should cause me to update this probability to anything more significant.

Do you still not think this after reading this post? Please let me know. I either need to work on communicating this a different way or try to pin down where this is wrong and what I am missing….

Also, thank you for all of the time you have put into this. I sincerely appreciate the feedback. I also appreciate why and how this has been frustrating, re: “cult,” and hope I have been able to mitigate the unpleasantness of this at least a bit.

Comment author: Lumifer 05 November 2015 04:58:24PM 0 points [-]

I am guessing you two-box in the Newcomb paradox as well, right?

Yes, of course.

a lot of people do not

I don't think this is true. The correct version is your following sentence:

A lot of people on LW do not

People on LW, of course, are not terribly representative of people in general.

What matters, as an empirical matter, is that they exist.

I agree that such people exist.

If we want to belong to the type of species

Hold on, hold on. What is this "type of species" thing? What types are there, what are our options?

And if we find ourselves building large numbers of these simulations, it should increase our credence that we are in one.

Nope, sorry, I don't find this reasoning valid.

it will have evidential value still.

Still nope. If you think that people wishing to be in a simulation has "evidential value" for the proposition that we are in a simulation, for what proposition does the belief in, say, Jesus or astrology have "evidential value"? Are you going to cherry-pick "right" beliefs and "wrong" beliefs?

Comment author: crmflynn 10 November 2015 01:29:17PM 1 point [-]

I don't think this is true. The correct version is your following sentence:

A lot of people on LW do not

People on LW, of course, are not terribly representative of people in general.

LW is not really my personal sample for this. I have spent about a year working this into conversations. I feel as though the split in my experience is something like 2/3 of people two-box. Nozick, who popularized this, said he thought it was about 50/50. While it is again not representative, among the roughly one thousand people who answered the question in the PhilPapers survey, the split was about even (http://philpapers.org/surveys/results.pl). For people with PhDs in philosophy it was 458 two-boxers to 348 one-boxers. While I do not know what the actual number would be if there was a Pew survey, I suspect, especially given the success of Calvinism, magical thinking, etc., that there are a substantial minority of people who would one-box.

What matters, as an empirical matter, is that they exist.

I agree that such people exist.

Okay. Can you see how they might take the approach I have suggested? And if yes, can you concede that it is possible that there are people who might want to build simulations in the hope of being in one, even if you think it is foolish?

If we want to belong to the type of species

Hold on, hold on. What is this "type of species" thing? What types are there, what are our options?

As a turn of phrase, I was referring to two types. One that makes simulations meeting this description, and one that does not. It is like when people advocate for colonizing Mars: they are expressing a desire to be “that type of species.” Not sure what confused you here….

And if we find ourselves building large numbers of these simulations, it should increase our credence that we are in one.

Nope, sorry, I don't find this reasoning valid.

If you are in the Sleeping Beauty problem (https://wiki.lesswrong.com/wiki/Sleeping_Beauty_problem), and are woken up during the week, what is your credence that the coin has come up tails? How do you decide between the doors in the Monty Hall problem?
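For the Monty Hall case in particular, a short simulation shows the same kind of update at work (Python, purely illustrative; the function name is mine):

    import random

    def monty_hall(switch, trials=100000):
        # Illustrative only: estimate the win rate for staying vs. switching.
        wins = 0
        for _ in range(trials):
            car = random.randrange(3)
            pick = random.randrange(3)
            # The host opens a door that is neither your pick nor the car.
            opened = next(d for d in range(3) if d != pick and d != car)
            if switch:
                pick = next(d for d in range(3) if d != pick and d != opened)
            if pick == car:
                wins += 1
        return wins / trials

    print("stay:  ", monty_hall(switch=False))  # roughly 1/3
    print("switch:", monty_hall(switch=True))   # roughly 2/3

The car never moves; the host’s action just gives you information about where it is.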

I am not asking you to think that the actual odds have changed in real time; I am asking you to adjust your credence based on new information. The order of cards has not changed in the deck, but now you know which ones have been discarded.

If it turns out simulations are impossible, I will adjust my credence about being in one. If a program begins plastering trillions of simulations across the cosmological endowment with von Neumann probes, I will adjust my credence upward. I am not saying that your reality changes, I am saying that the amount of information you have about the location of your reality has changed. If you do not find this valid, what do you not find valid? Why should your credence remain unchanged?

it will have evidential value still.

Still nope. If you think that people wishing to be in a simulation has "evidential value" for the proposition that we are in a simulation, for what proposition does the belief in, say, Jesus or astrology have "evidential value"? Are you going to cherry-pick "right" beliefs and "wrong" beliefs?

Beliefs can cause people to do things, whether that be going to war or building expensive computers. Why would the fact that some people believe in Salafi Jihadism and want to form a caliphate under ISIS be evidentially relevant to determining the future stability of Syria and Iraq? How can their “belief” in such a thing have any evidential value?

One-boxers wishing to be in a simulation are more likely to create a large number of simulations. The existence of a large number of simulations (especially if they can nest their own simulations) makes it more likely that we are not at a “basement level” but instead are in a simulation, like the ones we create. Not because we are creating our own, but because it suggests the realistic possibility that our world was created a “level” above us. This is just about self-locating belief. As a two-boxer, you should have no sense that people in your world creating simulations means any change in your world’s current status as simulated or unsimulated. However, you should also update your own credence from “why would I possibly be in a simulation” to “there is a reason I might be in a simulation.” Similarly, if you were currently living in Western Iraq, you should update your credence from “why should I possibly leave my house, why would it not be safe” to “right, because there are people who are inspired by belief to take actions which make it unsafe.” Your knowledge about others’ beliefs can provide information about certain things that they may have done or may plan to do.
