Updating towards the simulation hypothesis because you think about AI

9 SoerenMind 05 March 2016 10:23PM

(This post was written in a rush and is very speculative, so it's not as rigorous or as full of links as a good post on this site should be, but I'd rather get the idea out there than never get around to it.)


Here’s a simple argument that could make us update towards the hypothesis that we live in a simulation. This is the basic structure:


1) P(involved in AI* | ¬sim) = very low

2) P(involved in AI | sim) = high


Ergo, if we fully accept the argument and its premises (ignoring e.g. model uncertainty), we should update strongly in favour of the simulation hypothesis.
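To make the structure concrete, here is a toy Bayesian update. The prior and both likelihoods are made-up illustrative numbers, not values the argument commits to:

```python
# Toy Bayes update for the argument's structure; all numbers are
# illustrative assumptions, not claims from the post.
prior_sim = 0.01              # hypothetical prior P(sim)
p_ai_given_not_sim = 1e-8     # premise 1: P(involved in AI | not-sim) very low
p_ai_given_sim = 0.5          # premise 2: P(involved in AI | sim) high

# Bayes' theorem: P(sim | involved in AI)
evidence = (p_ai_given_sim * prior_sim
            + p_ai_given_not_sim * (1 - prior_sim))
posterior_sim = p_ai_given_sim * prior_sim / evidence
print(posterior_sim)  # ~0.999998
```

Even a 1% prior on the simulation hypothesis gets pushed to near-certainty here, because what does the work is the enormous ratio between the two likelihoods, not the prior.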


Premise 1


Suppose you are a soul who will randomly awaken in one of at least 100 billion beings (the number of Homo sapiens who have lived so far), and probably many more. All you know about the world of these beings is that at some point a chain of events will lead to the creation of a superintelligent AI. This AI will then go on to colonize the whole universe, making its creation the most impactful event the world will ever see, by an extremely large margin.


Waking up, you see that you’re in the body of one of the first 1000 beings trying to affect this momentous event. Would you be surprised? Given that you were randomly assigned a body, you probably would be.
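For a rough sense of the numbers, using the post's own figures (both are rough estimates, not precise data):

```python
# Chance of a randomly placed soul landing among the first ~1000
# beings involved in AI, out of ~100 billion humans so far.
total_humans = 100e9      # the post's estimate of homo sapiens to date
involved_in_ai = 1000     # the first ~1000 beings trying to affect AI

p_involved = involved_in_ai / total_humans
print(p_involved)  # 1e-08, i.e. one in a hundred million
```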


(To make the point even stronger and slightly more complicated: Bostrom suggests using observer-moments, e.g. an observer-second, rather than whole beings as the fundamental unit of anthropics. You should be even more surprised to find yourself as an observer-second thinking about or even working on AI, since most of the observer-seconds in people's lives are not doing so. You reading this sentence may be such a second.)


Therefore, P(involved in AI* | ¬sim) = very low.


Premise 2

 

Given that we’re in a simulation, we’re probably in a simulation created by a powerful AI which wants to investigate something.


Why would a superintelligent AI simulate the people (and even more so, the 'moments’) involved in its creation? I have an intuition that there would be many reasons to do so. If I gave it more thought I could probably name some concrete ones, but for now this part of the argument remains shaky.


Another and probably more important motive would be to learn about (potential) other AIs. It may be trying to find out who its enemies are or to figure out ways of acausal trade. An AI created with the 'Hail Mary’ approach would need information about other AIs very urgently. In any case, there are many possible reasons to want to know who else there is in the universe.


Since you can’t visit them, the best way to find out is by simulating how they may have come into being. And since this process is inherently uncertain you’ll want to run MANY simulations in a Monte Carlo way with slightly changing conditions. Crucially, to run these simulations efficiently, you’ll run observer-moments (read: computations in your brain) more often the more causally important they are for the final outcome.
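A sketch of this allocation idea: if the simulator weights observer-moments by their estimated causal influence on the final outcome, compute gets concentrated on the influential ones. The categories and weights below are invented purely for illustration:

```python
# Hypothetical importance weights: estimated causal influence of each
# kind of observer-moment on the final AI's properties (invented numbers).
importance = {
    "early_safety_researcher": 0.6,
    "late_safety_researcher": 0.3,
    "capabilities_engineer": 0.1,
}

budget = 1_000_000  # total observer-moment simulations available
allocation = {who: round(budget * w) for who, w in importance.items()}
print(allocation)
# {'early_safety_researcher': 600000, 'late_safety_researcher': 300000,
#  'capabilities_engineer': 100000}
```

Under this toy allocation, a randomly chosen early-researcher moment is six times as likely to be a simulated one as a capabilities-engineer moment, which is the intuition behind weighting by causal importance.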


Therefore, the thoughts of people who are more causally connected to the properties of the final AI will be run many times, and that especially includes the thoughts of those who got involved first, as they may cause path changes. AI capabilities researchers would be less interesting to simulate because their work has less effect on the eventual properties of the AI.


If figuring out what other AIs are like is an important convergent instrumental goal for AIs, then a lot of minds created in simulations may be created for this purpose. Under SSA, the assumption that “all other things equal, an observer should reason as if they are randomly selected from the set of all actually existent observers [or observer moments] (past, present and future) in their reference class”, it would seem rather plausible that,

P(involved in AI | sim) = high


In short: the closer an observer-moment sits in the causal chain to the final AI's properties (as opposed to, say, generic capabilities research), the more often it will be run.


If you're reading this, you're probably one of those people who could have some influence over the eventual properties of a superintelligent AI, and as a result you should update towards living in a simulation that's meant to figure out how an AI came to be created.


Why could this be wrong?


I could think of four general ways in which this argument could go wrong:


1) Our position in the history of the universe is not that unlikely

2) We would expect to see something else if we were in one of the aforementioned simulations.

3) There are other, more likely, situations we should expect to find ourselves in if we were in a simulation created by an AI

4) My anthropics are flawed


I'm most confused about the first one. Everyone has some things in their life that are very exceptional by pure chance; I'm sure there's some way to deal with this in statistics, but I don't know it. In the interest of my own time I'm not going to elaborate further on these failure modes and will leave that to the commenters.


Conclusion

Is this argument flawed? Has it been discussed elsewhere? If so, please point me to it. And if it does make sense, what are the implications for those most intimately involved with the creation of superhuman AI?


Appendix


My friend Matiss Apinis (othercenterism) put the first premise like this:


“[…] it's impossible to grasp that in some corner of the Universe there could be this one tiny planet that just happens to spawn replicators that over billions of painful years of natural selection happen to create vast amounts of both increasingly intelligent and sentient beings, some of which happen to become just intelligent enough to soon have one shot at creating this final invention of god-like machines that could turn the whole Universe into either a likely hell or unlikely utopia. And here we are, a tiny fraction of those almost "just intelligent enough" beings, contemplating this thing that's likely to happen within our lifetimes and realizing that the chance of either scenario coming true may hinge on what we do. What are the odds?!"

Working at MIRI: An interview with Malo Bourgon

8 SoerenMind 01 November 2015 12:54PM

[Cross-posted on the EA Forum]

This post is part of the Working At EA Organizations series on the EA Forum.


The Machine Intelligence Research Institute (MIRI) does “foundational mathematical research to ensure that smarter-than-human artificial intelligence has a positive impact”. AI alignment is a popular cause within the effective altruism community and MIRI has made the case for their cause and approach. The following are my notes from an interview with Malo Bourgon (program management analyst and generalist at MIRI) which he reviewed before publishing.

Current talent needs

Since the size of the research team has just doubled, MIRI is not actively looking for new researchers for the next 6 months. However, if you are interested in working at MIRI you should still express your interest.

Researchers

In the foreseeable future, MIRI is planning to further grow the research team. The main ingredients for a good fit are interest in the problems that MIRI works on and strong talent in math and other quantitative subjects. Being further along the academic career path will therefore naturally help by teaching you more math, but it absolutely isn't necessary.


One of the best ways to evaluate your fit is to take a look at MIRI’s research guide. Look at MIRI’s problem areas and study the one that looks most interesting to you. In tandem, grab one of the textbooks to get acquainted with the relevant math. Contrary to what some EAs think, it’s not necessary to understand all of the research guide in order to start engaging with MIRI’s research. It’s a good indicator if you can develop an understanding of a specific problem in the guide and even more so if you can start contributing new ideas or angles of attack on that problem.

 

Fundraiser

It has turned out to be very hard to find a fundraiser who is both very good at talking to donors and deeply understands the problems MIRI works on. If such a person comes along, MIRI would potentially be open to hiring them.

How can you get involved on a lower commitment basis?

Although there are presently no official volunteer or intern positions, hires always go through a phase of lower-commitment involvement via one of the following.


MIRIx workshops

These are independently run workshops about MIRI’s research around the world. Check here if there’s one in your area. If not, you can run one yourself. Organizing a MIRIx workshop is as easy as organizing an EA or Lesswrong meetup. It’s fine to just meet at home and study MIRI’s problems in a group.

You organize the logistics and get in contact with MIRI beforehand via this form. If you organize a workshop, MIRI will cover the expenses that participants incur as a result of attending (e.g. snacks and drinks).

MIRIx workshops can take the form of a study group or a research group centered around MIRI (or related) problems. All you need to take care of is advertising it; it helps if you already have someone in your city who would be interested. The group will be listed on the MIRIx page, and you could advertise it to a relevant university department, on Lesswrong, or to your local EA chapter.

MIRIx workshops not only let you learn about MIRI’s problems but also potentially provide an opportunity to contribute to MIRI’s research (this varies between groups). If you’re doing well at a MIRIx workshop, it will be noticed.

MIRI workshops

These workshops in the Bay Area are an essential part of the MIRI application process, but you are encouraged to apply even if you don't plan to work at MIRI. MIRI pays for all expenses, including international flights. This could also be a great chance to visit an EA hub.

Research assistance

Want to write a thesis on some problem related to MIRI research? Researching an adjacent area in math? Get in contact here to apply for research assistance. If you have a research idea that’s not obviously within MIRI’s research focus but could be interesting, or you have an interest in type theory, do get in contact as well.

Research forum

Contribute to the research forum at agentfoundations.org by sharing a link to a post you made on Lesswrong, GitHub or your personal blog etc. If it gets at least two upvotes, your post will appear on the agentfoundations.org website. Read the How to Contribute page for more information.

How competitive are the positions?

MIRI is looking for top research talent and can only hire a few people, so do have a backup plan. Malo wants to encourage more people to have a go at the math problems, though. Learning more math can contribute to your backup plan as well, and you may be able to apply the knowledge to AI safety problems in other contexts. (From another source I heard that if you're among the best of your year in a quantitative subject at a top university, that's a good indicator that you should give it a shot.)

What's the application process like?

The application process is quite different from most organizations': it is less formalized and involves a period of progressively deeper engagement. Usually you start by working on MIRI-style problems via e.g. one of the channels named above and find that you develop an interest in (one of) them. By then you may have been in contact with someone at MIRI in some way.

Attending a MIRI workshop is usually an essential step. This will be a good opportunity for MIRI researchers to get to know you, and for you to get to know them. If it goes well, you could work remotely or on-site as a research contractor spending some share of your time on MIRI-research. Once again, if both sides are interested, this could potentially lead to a full-time engagement.

At what yearly donation would you prefer marginal hires to earn to give for you instead of directly working for you?

The people who get hired as researchers are hard to replace. Hundreds of thousands or millions of dollars would be appropriate depending on the person. It’s hard to imagine that someone who could join the research team and wants to work on AI safety could make a bigger impact by earning to give.

Anything else that would be useful to know?

In the long term, not everyone who wants to work on the mathematical side of AI safety should work for MIRI. The field is set to grow. Being familiar with MIRI’s problems may prove useful even if you don’t think there will be a good fit with MIRI in particular. All in all, if you’re interested in AI safety work, try to familiarize yourself with the problems, get in contact with people in the field (or the community) early and build flexible career capital at the same time.

Meetup : 'The Most Good Good You Can Do' (Effective Altruism meetup)

1 SoerenMind 14 May 2015 06:32PM

Discussion article for the meetup : 'The Most Good Good You Can Do' (Effective Altruism meetup)

WHEN: 31 May 2015 02:00:00PM (+0200)

WHERE: Utrecht

We'll discuss Peter Singer's new book about effective altruism. The book gives an overview of the EA movement, and some very nice personal stories of people who devote a significant part of their lives to doing good effectively.

You can buy the book on bol.com, amazon and probably in many other (web)shops.

New people are very welcome!

We'll make sure the discussion is interesting for people who haven't found the time to read the book.

Practical

Note that the location has been changed to La Place. We have a Google Doc where anyone can make suggestions, including for coming meetups. And feel free to join our Facebook group as well.

Location: La Place at central station, top floor

If you have trouble finding us you can reach Sören at 0031684140766 or Imma (0031610524989).

Discussion article for the meetup : 'The Most Good Good You Can Do' (Effective Altruism meetup)

Meetup : Utrecht- Brainstorm and ethics discussion at the Film Café

1 SoerenMind 19 May 2014 08:49PM

Discussion article for the meetup : Utrecht- Brainstorm and ethics discussion at the Film Café

WHEN: 31 May 2014 02:00:00PM (+0200)

WHERE: Slachtstraat 5, 3512 BC Utrecht

Brainstorm

We would like to make our meetups more useful for the attendees. Therefore the meetup will start with a brainstorm to collect ideas and decide how to proceed in future. We will talk about both the discussion meetups themselves and potential projects to work on. If you have any suggestions or ideas, this will be a great opportunity to have your say.

Ethics discussion

Should your driverless car kill you to save two other people? When is it ethical to hand our decisions over to machines? We will discuss the following article: http://aeon.co/magazine/world-views/can-we-design-systems-to-automate-ethics/ To get the most out of the discussion, you are encouraged to read the article in advance. It's a nice stepping stone to topics like moral psychology and AI as well.

Also very relevant, both to this topic and to rational ethics in general, is this presentation by Harvard philosopher Joshua Greene on learning to use our moral brains: https://www.youtube.com/watch?v=_-vleKVkMec When is our moral intuition correct, and when not? This one is an optional addition.

Looking forward to meeting you again! New people will be warmly welcomed!

Practical

Mind that the time has changed to 2pm!

We will meet at Film Café Oscar, which is just around the corner from De Winkel van Sinkel. I will be holding a sign that says 'LW' on it. If you have trouble finding us you can reach me at 0684140766 or Imma at 0610524989.

We have this Google Doc where anyone can make suggestions, also for the coming meetups.

Discussion article for the meetup : Utrecht- Brainstorm and ethics discussion at the Film Café

Meetup : Utrecht - Social discussion at the Film Café

1 SoerenMind 12 May 2014 01:10PM

Discussion article for the meetup : Utrecht - Social discussion at the Film Café

WHEN: 17 May 2014 05:00:00PM (+0200)

WHERE: Slachtstraat 5

Hey everyone, This meetup will be an informal social discussion. The topics could be anything concerning effective altruism and rationality. There will be plenty of time for your personal questions and topics.

Imma is going to share some thoughts and videos about GiveWell and effective giving. We have a Google Doc where anyone can make topic suggestions, also for the coming meetups.

This time we will meet at Film Café Oscar, which is just around the corner from De Winkel van Sinkel. The kitchen there is open until 21:00 and is a little bit cheaper. Looking forward to meeting you again! New people will be warmly welcomed!

I will be holding a sign that says 'LW' on it. If you have trouble finding us you can reach me at 0684140766. And feel free to join our facebook group and/or meetup.com group as well.

Discussion article for the meetup : Utrecht - Social discussion at the Film Café

Meetup : Utrecht

1 SoerenMind 20 April 2014 10:14AM

Discussion article for the meetup : Utrecht

WHEN: 03 May 2014 05:00:00PM (+0200)

WHERE: Utrecht, Catharijnensingel 49-A

A growing number of rationalists and effective altruists are joining us to share ideas and help each other be rational, improve ourselves, and make the world a better place as effectively as possible.

Agenda

Amongst other things we will talk about the charity evaluator GiveWell (http://www.givewell.org/). GiveWell is looking for outstanding giving opportunities: where to give in order to do the most good per dollar or euro spent. How could that be possible? How does GiveWell (try to) do that? If there is another topic you would like to present or discuss with the group, please add the topic here: https://docs.google.com/document/d/16bBtla1iVzkJjie-JK7Ozb9Ao8SbyJ9U924XyaEXTqY/edit .

There is room for your questions, personal discussions, smalltalk, etc.

Everyone is invited, and new people will be warmly welcomed!

We have a new location now. We're meeting at the headquarters of Seats2meet (Cyberdigma), which is in the Trindeborch building at Catharijnesingel 49-55 (7th floor). You can see the entrance on this map: https://goo.gl/maps/q027I

If you have trouble finding us, for this time you can reach Jurjen at +31650495866, since I will be abroad.

Discussion article for the meetup : Utrecht

Meetup : Utrecht: Behavioural economics, game theory...

2 SoerenMind 07 April 2014 01:54PM

Discussion article for the meetup : Utrecht: Behavioural economics, game theory...

WHEN: 19 April 2014 05:00:00PM (+0200)

WHERE: Cafe de Zaak, Korte Minrebroederstraat 9, Utrecht

In this meetup we will discuss more rationality-related topics, like behavioral economics and game theory. Sjoerd, who used to be a PhD student in these areas, will give a little intro. Marius can share some thoughts on goal factoring, which he recently learned at a CFAR workshop. (Notes on goal factoring from another LW meetup: tinyurl.com/ptsmlcj)

If you would like, you can add more discussion topics here: https://docs.google.com/document/d/16bBtla1iVzkJjie-JK7Ozb9Ao8SbyJ9U924XyaEXTqY/edit?usp=sharing

We're moving to Cafe de Zaak, which is quieter, and where we can bring our own food or order in from other places (last time we got kind of hungry). It's around the corner from De Winkel van Sinkel, where we were the last times; food there is a little pricey. In case de Zaak is too crowded we will just walk over. If you have trouble finding us you can call me at 0684140766.

Everyone is invited, and new people will be warmly welcomed! We are particularly interested in hearing the experiences of those who attended the European Community weekend in Berlin!

Discussion article for the meetup : Utrecht: Behavioural economics, game theory...

Meetup : Utrecht: More on effective altruism

1 SoerenMind 27 March 2014 12:40AM

Discussion article for the meetup : Utrecht: More on effective altruism

WHEN: 05 April 2014 05:00:00PM (+0200)

WHERE: Oudegracht 158, 3511 AZ Utrecht, The Netherlands

Our last meetup went well and we hope to keep the momentum growing. As of yet we have no specific agenda items, so if you have a topic in the Less Wrong / Effective Altruism / rationality scope, please add a comment below. Since this group is still rather young and informal, we'll use the opportunity to meet each other, share opinions on EA/LW, and share our hopes for what we want the group to become. There will also be plenty of semi-serious discussion of relevant issues.

We will meet in a café called De Winkel van Sinkel, which is 400m walking distance from Utrecht Centraal. The meetup will be held in English. I will be in front of the café with a sign that says "LW" on it at 17:00. If you have trouble finding us you can call me at 0684140766.

We have a meetup.com presence for Lesswrong/Effective Altruism Nederland where you can click attend if you want to join: http://www.meetup.com/Less-Wrong-Nederland/events/173124892/ Additionally, we also have a Facebook group: https://www.facebook.com/groups/262932060523750/

Discussion article for the meetup : Utrecht: More on effective altruism

Meetup : Utrecht: Famine, Affluence and Morality

0 SoerenMind 16 March 2014 07:56PM

Discussion article for the meetup : Utrecht: Famine, Affluence and Morality

WHEN: 22 March 2014 07:00:00PM (+0100)

WHERE: Oudegracht 158, Utrecht

At the last meetup we decided to have biweekly meetups on Saturday evenings; we will see if this turns out to be convenient. This time we want to discuss Peter Singer's essay "Famine, Affluence and Morality". Link: http://commonsenseatheism.com/wp-content/uploads/2010/04/Singer-Famine-Affluence-and-Morality.pdf Wiki link: en.wikipedia.org/wiki/Famine,_Affluence,_and_Morality The first half of the essay contains the most important information.

Other effective altruism and rationality topics are welcome too, of course. If you have any specific questions to discuss, don't hesitate to bring them up.

We now have a meetup.com presence for Lesswrong/Effective Altruism Nederland. Please click attend if you want to join: http://www.meetup.com/Less-Wrong-Nederland/events/170182532/

We will meet in a café called De Winkel van Sinkel, which is 400m walking distance from Utrecht Centraal. The meetup will be held in English. I will be holding a sign that says "LW" on it.

Discussion article for the meetup : Utrecht: Famine, Affluence and Morality

Meetup : Utrecht: Effective Altruism

3 SoerenMind 03 March 2014 07:55PM

Discussion article for the meetup : Utrecht: Effective Altruism

WHEN: 07 March 2014 07:00:00PM (+0100)

WHERE: Oudegracht 158, Utrecht, the Netherlands

In this meetup we will discuss topics related to effective altruism. This meetup is not directly related to the previous one in Utrecht. It is also not purely a LessWrong meetup. We have already created an event on Facebook, where 5 people are planning to attend (I can add you to the event if you comment here). Most of those are also active on LW and we would be more than happy to have more LWers on board.

Some topics we may discuss are altruistic career choice, selection of causes, and whether we can create an effective altruism community in the Netherlands. Getting to know each other is also an important part.

We will meet in a café called De Winkel van Sinkel, which is 400m walking distance from Utrecht Centraal. The meetup will be held in English, since we have at least one German participant.

I will be holding a sign that says 'LW' on it.

Discussion article for the meetup : Utrecht: Effective Altruism
