Comment author: turchin 05 March 2016 11:10:54PM *  4 points [-]

I agree with the main premises of this text.

Namely: the fact that I am in a special position should raise my estimate that I am in a simulation. And any AI would have, as an instrumental goal, the creation of millions of simulations to numerically solve the Fermi paradox by modeling different civilizations near the time of global risks, and to model different AI-goal systems near the time of AI creation.

But now I will try a different type of reasoning which may be used against such logic. Let's consider the following example: "Given that my name is Alex, what is the probability that my name is Alex?" Of course, 1.

Given that I am interested in AI, what is the probability that I know about the simulation argument? It's high, almost 1. And given that I know about the simulation argument, what is the probability that I think that I am in a simulation? Also high. So it is not surprising that I estimate it to be high, if I am already in this field.

The core of this objection is that not only are you special: everybody is special, each within their own belief system. Like "Given that I believe in the god Zeus, it makes Zeus more likely to be real." Because we have many people, and everybody thinks that their belief system is special, there is nothing special about any belief system.

I am not sure that this line of reasoning cancels our conclusion that we may be inside a simulation.

Comment author: SoerenMind 22 March 2016 10:12:04AM 0 points [-]

I guess an answer to "Given that my name is Alex, what is the probability that my name is Alex?" could be that the hypothesis is highly selected. When you're still the soul that'll be assigned to a body, looking at the world from above, this guy named Alex won't stick out because of his name. But the people who will influence the most consequential event in the history of that world will.


Comment author: SoerenMind 06 March 2016 05:31:02PM 0 points [-]

"The core of this objection is that not only you are special, but that everybody is special"

Is your point sort of the same thing I'm saying with this? "Everyone has some things in their life that are very exceptional by pure chance. I’m sure there’s some way to deal with this in statistics but I don’t know it."

Updating towards the simulation hypothesis because you think about AI

9 SoerenMind 05 March 2016 10:23PM

(This post is both written up in a rush and very speculative, so it is not as rigorous or as full of links as a good post on this site should be, but I'd rather get the idea out there than not get around to it.)


Here’s a simple argument that could make us update towards the hypothesis that we live in a simulation. This is the basic structure:


1) P(involved in AI* | ¬sim) = very low

2) P(involved in AI | sim) = high


Ergo, assuming that we fully accept the argument and its premises (ignoring e.g. model uncertainty), we should strongly update in favour of the simulation hypothesis.
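For illustration, the update these two premises imply can be written out with Bayes' rule. All the numbers below are my own placeholder assumptions (the post itself gives none); the point is only that a tiny likelihood under ¬sim combined with a moderate one under sim forces a strong update even from a sceptical prior:

```python
# Bayes update on the simulation hypothesis; all numbers are
# placeholder assumptions for illustration, not from the post.
# E = "finding yourself among the first ~1000 people involved in AI"
prior_sim = 0.01                  # sceptical prior on being simulated
p_e_given_sim = 0.5               # Premise 2: P(involved in AI | sim) = high
p_e_given_not_sim = 1000 / 100e9  # Premise 1: ~1000 out of 100 billion humans

posterior_sim = (p_e_given_sim * prior_sim) / (
    p_e_given_sim * prior_sim + p_e_given_not_sim * (1 - prior_sim)
)
print(round(posterior_sim, 6))  # 0.999998
```

Varying the assumed prior or likelihoods changes the exact figure, but as long as the two likelihoods differ by many orders of magnitude, the posterior lands very close to 1.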


Premise 1


Suppose you are a soul who will randomly awaken in one of at least 100 billion beings (the number of Homo sapiens who have lived so far), probably many more. What you know about the world of these beings is that at some point there will be a chain of events that leads to the creation of a superintelligent AI. This AI will then go on to colonize the whole universe, making its creation the most impactful event the world will see, by an extremely large margin.


Waking up, you see that you're in the body of one of the first 1,000 beings trying to affect this momentous event. Would you be surprised? Given that you were randomly assigned a body, you probably would be: the chance of that assignment is roughly 1,000 in 100 billion, i.e. about one in 100 million.


(To make the point even stronger and slightly more complicated: Bostrom suggests to use observer moments, e.g. an observer-second, rather than beings as the fundamental unit of anthropics. You should be even more surprised to find yourself as an observer-second thinking about or even working on AI since most of the observer seconds in people's lives don’t do so. You reading this sentence may be such a second.)


Therefore, P(involved in AI* | ¬sim) = very low.


Premise 2

 

Given that we’re in a simulation, we’re probably in a simulation created by a powerful AI which wants to investigate something.


Why would a superintelligent AI simulate the people (and even more so, the 'moments’) involved in its creation? I have an intuition that there would be many reasons to do so. If I gave it more thought I could probably name some concrete ones, but for now this part of the argument remains shaky.


Another and probably more important motive would be to learn about (potential) other AIs. It may be trying to find out who its enemies are or to figure out ways of acausal trade. An AI created with the 'Hail Mary’ approach would need information about other AIs very urgently. In any case, there are many possible reasons to want to know who else there is in the universe.


Since you can’t visit them, the best way to find out is by simulating how they may have come into being. And since this process is inherently uncertain you’ll want to run MANY simulations in a Monte Carlo way with slightly changing conditions. Crucially, to run these simulations efficiently, you’ll run observer-moments (read: computations in your brain) more often the more causally important they are for the final outcome.
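A minimal sketch of what such a Monte Carlo setup might look like. The toy `run_civilization` function and its parameters are entirely my invention, standing in for whatever enormously complex history-simulation the post has in mind; the structure (many runs under slightly varied conditions, then study the outcome distribution) is the Monte Carlo idea itself:

```python
import random

def run_civilization(seed: int) -> float:
    """Toy stand-in for one simulated history: returns the 'properties'
    of the AI that this particular run happens to produce (one number)."""
    rng = random.Random(seed)  # the seed plays the role of varied initial conditions
    return rng.gauss(0.0, 1.0)

# Run MANY histories under slightly varied initial conditions and study
# the distribution of outcomes, Monte Carlo style.
outcomes = [run_civilization(seed) for seed in range(10_000)]
mean_outcome = sum(outcomes) / len(outcomes)
spread = (sum((x - mean_outcome) ** 2 for x in outcomes) / len(outcomes)) ** 0.5
print(f"{len(outcomes)} runs: mean {mean_outcome:.3f}, spread {spread:.3f}")
```

The post's weighting of observer-moments by causal importance would correspond to spending more compute on re-running the histories (and the moments within them) that most change the final outcome, much as in importance sampling.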


Therefore, the thoughts of people who are more causally connected to the properties of the final AI will be run many times, especially the thoughts of those who got involved first, as they may cause path changes. AI capabilities researchers would not be as interesting to simulate because their work has less effect on the eventual properties of an AI.


If figuring out what other AIs are like is an important convergent instrumental goal for AIs, then a lot of minds created in simulations may be created for this purpose. Under SSA, the assumption that “all other things equal, an observer should reason as if they are randomly selected from the set of all actually existent observers [or observer moments] (past, present and future) in their reference class”, it would seem rather plausible that,

P(involved in AI | sim) = high
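A toy SSA-style head-count can illustrate why this comes out high. Every number here is an assumption chosen for illustration only: if each curious AI runs on the order of a million simulations, each containing on the order of a million observers (or observer-moments) near the AI-creation event, then the simulated AI-involved minds swamp the 100 billion real humans in the combined reference class:

```python
# Toy SSA head-count; every number is an illustrative assumption.
real_observers = 100e9        # humans who have ever lived (the post's figure)
num_simulations = 1e6         # "millions of simulations" per curious AI
ai_involved_per_sim = 1e6     # assumed observers near AI creation per run

simulated_ai_involved = num_simulations * ai_involved_per_sim
reference_class = real_observers + simulated_ai_involved

# Under SSA, a randomly sampled member of the reference class is then
# most likely one of the simulated, AI-involved minds.
p_simulated_and_involved = simulated_ai_involved / reference_class
print(round(p_simulated_and_involved, 3))  # 0.909
```

The exact fraction is hostage to the assumed counts, but the qualitative point survives any assumptions under which simulated AI-involved minds outnumber real observers.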


The closer an observer-moment sits to the causal chain determining the final AI's properties, the more often it gets simulated (with capabilities research and the like sitting further from that chain).


If you are reading this, you're probably one of those people who could have some influence over the eventual properties of a superintelligent AI, and as a result you should update towards living in a simulation that's meant to figure out the creation of an AI.


Why could this be wrong?


I could think of four general ways in which this argument could go wrong:


1) Our position in the history of the universe is not that unlikely

2) We would expect to see something else if we were in one of the aforementioned simulations.

3) There are other, more likely, situations we should expect to find ourselves in if we were in a simulation created by an AI

4) My anthropics are flawed


I’m most confused about the first one. Everyone has some things in their life that are very exceptional by pure chance. I’m sure there’s some way to deal with this in statistics but I don’t know it. In the interest of my own time I’m not going to elaborate further on these failure modes and will leave that to the commenters.


Conclusion

Is this argument flawed? Or has it been discussed elsewhere? Please point me to it. Does it make sense? Then what are the implications for those most intimately involved with the creation of superhuman AI?


Appendix


My friend Matiss Apinis (othercenterism) put the first premise like this:


“[…] it's impossible to grasp that in some corner of the Universe there could be this one tiny planet that just happens to spawn replicators that over billions of painful years of natural selection happen to create vast amounts of both increasingly intelligent and sentient beings, some of which happen to become just intelligent enough to soon have one shot at creating this final invention of god-like machines that could turn the whole Universe into either a likely hell or unlikely utopia. And here we are, a tiny fraction of those almost "just intelligent enough" beings, contemplating this thing that's likely to happen within our lifetimes and realizing that the chance of either scenario coming true may hinge on what we do. What are the odds?!"

Comment author: SoerenMind 03 March 2016 09:02:01PM *  1 point [-]

Seems like a bad meme to spread for precautionary reasons.

http://reducing-suffering.org/the-importance-of-insect-suffering/

http://reducing-suffering.org/why-i-dont-support-eating-insects/

Warning: Bringing this argument at a dinner party with trendsetting, ecologically conscious consumers might cost you major idiosyncrasy credits.

Comment author: Kaj_Sotala 12 December 2015 06:02:25PM 2 points [-]

Neat!

You only seem to mention self-reports. What about the part of the pre- and post-surveys where you had the workshop participants' friends rate them?

Comment author: SoerenMind 12 December 2015 09:50:47PM 3 points [-]

"We have been conducting a peer survey of CFAR workshop participants, which involves sending surveys about the participant to 2 of their friends, both before the workshop and again approximately one year later. We are in the final stages of data collection on those surveys, and expect to begin the data analysis later this month."

Comment author: SoerenMind 30 November 2015 11:39:22AM 1 point [-]

I'd also like to see the results on LW this year!

Comment author: SoerenMind 30 November 2015 11:38:48AM *  2 points [-]

How about promoting in Main? Was promoted last year IIRC. I think the overlap of the communities can justify this. Disclosure: I'm biased as an aspiring effective altruist.

Comment author: Elo 26 October 2015 09:17:45AM *  12 points [-]

this week on the slack discussions:

  • Art and media - HPMOR readership and considering spoilers. A few movie clips. microbiases, "Secret Habitat" game and abstract art analysis. Quality VS quantity, one hit wonder - and what effort it would take to make one (200-500 hours maybe). " short-circuiting happiness in the brain; by - instead of spending money on a bed (normal thing), spend money on icecream (happy thing), even when you neeeed a better bed in your life. In order to trick your brain to being happier than it is."
  • Bot test - We have a logging bot; and are building a prediction bot to help us keep track of predictions.
  • Business and startups - Comparing startup ideas, Thesaurus for words that don't exist - i.e. "logicalness" so you can find a real synonym to use instead. Healthy fast food opportunities (why isn't fast food already healthy - people probably don't care about healthiness when buying fast food, so healthy fast food is not as great an idea as it sounds.), living on a super-limited budget. Mealsquares.
  • effective altruism - provision of condoms to african nations to reduce the birth rate (and why that won't really work), looking to contact people within the EA movement who can explain what happened to the videos on the global conference, and maybe help us find the video in mountainview on x-risk. It's proving quite hard to do...
  • goals of lesswrong - determining success of lesswrong, (ways to show the world we are actually doing well - get us a few famous people made out of lesswrong growth). LW needs a symbol, like a logo but super cool. considering "¬□⊥", "how to function as an adult" as an article, website or guide in how to do that. because often enough people end up legal adults without knowing these things. learning how to teach, the nature of local lw groups.
  • human relationships - energy intake and exercise, alcohol and social lubrication, nootropics for improving social skills. establishing productive bandwidth with other intelligent humans. theories of confidence/prestige/charisma (different but similar things that we defined in order to talk about them), fluids of human sexuality (and how much we don't know), owning/responsibility towards others/Significant others VS freedom and miscommunication around that, and the burden to communicate that is placed on people by their significant others. Polyamory. "twist yourself into noodles" (not sure what this related to), Smartphone to the rescue (reminders to do things i.e. remember birthdays, talk to friends who want to talk regularly, etc.)
  • linguistics - methods of communicating understanding; or asking for clarification in speech. "Decision making is a process", consider the intention behind the words (not just the words) (caveat: be careful with this)
  • open - running lw local groups, r/askscience, is stockholm syndrome real?, changing people's beliefs (specifically making atheists take up spirituality maybe), Aubrey de Grey and beard extension, Sidekicks, Meta: our channels, knowing things in advance does not decrease reported hedonic payoff. Too many messages to keep up with on the slack, sensory language and communication. And more - being the open channel, reasoning with the consultation of your feelings.
  • parenting - guilty parents for not knowing things they should know... Chocolate chip game trials are going excellently, kid-proof dividers, sickness during pregnancy, dealing with kid's monsters, (putting it in the wardrobe VS patting it and feeding it to make it friendly - the kid's idea)
  • philosophy - occam, occam as a rule/proof (but also not), bayes. CBT - cognitive behavioural therapy, ACT - acceptance commitment therapy.
  • projects - https://tangoapp.herokuapp.com/ , some writings that are now lesswrong posts, how do I become a more interesting person, sleep post, AMA among friends on the slack.
  • real life - dieting, work hours, sleep, mealsquares, secular solstice, allergies, human superpowers, negotiating with other humans, bikeshedding, how to make complicated decisions, cheap food, advice should be specific, how other people perceive you, and how they act on those perceptions, and the differences. wearable BP monitors, and more...
  • resources and links - not much new here...
  • RSS feeds - we get a bunch of rss feeds from around the lesswrong sphere.
  • Science and technology - evolution, space moths, facebook algorithms, chess, privacy in america, grey goo, successful businesspeople, getting data about the internet, cool cap for sleep
  • scratchpad - a place to ramble, just in case any of the other places weren't the right place for it, or they were busy with other conversations at the time.
  • welcome - introductions and also a discussion on wellbeing.
Comment author: SoerenMind 05 November 2015 05:40:58PM 1 point [-]

The EA Global videos will be officially released soon. You can already watch them here, but I couldn't find the xrisk video among them. I'd suggest just asking the speakers for their slides. I remember two of them were Nate Soares and Owen Cotton-Barrat.

Comment author: LessWrong 02 November 2015 07:16:48AM 1 point [-]

I was very interested in reading about the 80,000 hours one but the link unfortunately links to a page that does not exist. Was it deleted?

Comment author: SoerenMind 02 November 2015 12:17:42PM 0 points [-]

Thanks for mentioning that. For some reason the link changed from effective-altruism.com to lesswrong.com when I copy-pasted the article. Fixed!

Working at MIRI: An interview with Malo Bourgon

8 SoerenMind 01 November 2015 12:54PM

[Cross-posted on the EA Forum]

This post is part of the Working At EA Organizations series on the EA Forum. The posts so far:


The Machine Intelligence Research Institute (MIRI) does “foundational mathematical research to ensure that smarter-than-human artificial intelligence has a positive impact”. AI alignment is a popular cause within the effective altruism community and MIRI has made the case for their cause and approach. The following are my notes from an interview with Malo Bourgon (program management analyst and generalist at MIRI) which he reviewed before publishing.

Current talent needs

Since the size of the research team has just doubled, MIRI is not actively looking for new researchers for the next 6 months. However, if you are interested in working at MIRI you should still express your interest.

Researchers

In the foreseeable future, MIRI is planning to further grow the research team. The main ingredients for a good fit are interest in the problems that MIRI works on and strong talent in math and other quantitative subjects. Being further on the academic career path will therefore naturally help by teaching you more math, but absolutely isn’t necessary.


One of the best ways to evaluate your fit is to take a look at MIRI’s research guide. Look at MIRI’s problem areas and study the one that looks most interesting to you. In tandem, grab one of the textbooks to get acquainted with the relevant math. Contrary to what some EAs think, it’s not necessary to understand all of the research guide in order to start engaging with MIRI’s research. It’s a good indicator if you can develop an understanding of a specific problem in the guide and even more so if you can start contributing new ideas or angles of attack on that problem.

 

Fundraiser

It has turned out to be very hard to find a fundraiser who is both very good at talking to donors and deeply understands the problems MIRI works on. If such a person comes along, MIRI would potentially be open to hiring them.

How can you get involved on a lower commitment basis?

Although there are presently no official volunteer or intern positions, hires always go through a phase of lower commitment involvement through one of the following.


MIRIx workshops

These are independently run workshops about MIRI’s research around the world. Check here if there’s one in your area. If not, you can run one yourself. Organizing a MIRIx workshop is as easy as organizing an EA or Lesswrong meetup. It’s fine to just meet at home and study MIRI’s problems in a group.

You organize the logistics and get in contact with MIRI beforehand via this form. If you organize the workshop, MIRI will pay for the expenses that the participants make as a result of attending (e.g. snacks and drinks).

MIRIx workshops can take the form of a study group or a research group centered around MIRI (or related) problems. All you need to take care of is advertising it - it’s good when you already have someone in your city who would be interested. The group will be listed on the MIRIx page and you could advertise it to a relevant university department, on Lesswrong or to your local EA chapter.

MIRIx workshops not only let you learn about MIRI’s problems but also potentially provide an opportunity to contribute to MIRI’s research (this varies between groups). If you’re doing well at a MIRIx workshop, it will be noticed.

MIRI workshops

These workshops in the Bay Area are an absolutely essential part of the MIRI application process, but even if you don’t plan to work at MIRI, your application is encouraged. MIRI pays for all expenses, including international flights. This could also be a great chance to visit an EA hub.

Research assistance

Want to write a thesis on some problem related to MIRI research? Researching an adjacent area in math? Get in contact here to apply for research assistance. If you have a research idea that’s not obviously within MIRI’s research focus but could be interesting, or you have an interest in type theory, do get in contact as well.

Research forum

Contribute to the research forum at agentfoundations.org by sharing a link to a post you made on Lesswrong, GitHub or your personal blog etc. If it gets at least two upvotes, your post will appear on the agentfoundations.org website. Read the How to Contribute page for more information.

How competitive are the positions?

MIRI are looking for top research talent and can only hire a few people. Do have a backup plan. Malo wants to encourage more people to have a go with the math problems, though. Learning more math can contribute to your backup plan as well and you may be able to employ the knowledge to research AI safety problems in other contexts. (From another source I heard that if you’re among the best of your year in a quantitative subject at a top university, that’s a good indicator that you should give it a shot.)

What's the application process like?

The application process is very different from most organizations. It is less formalized and includes a period of progressive engagement. Usually you start off by working on MIRI-style problems via e.g. one of the channels named above and notice that you develop an interest in (one of) them. By then you may have been in contact with someone at MIRI in some way.

Attending a MIRI workshop is usually an essential step. This will be a good opportunity for MIRI researchers to get to know you, and for you to get to know them. If it goes well, you could work remotely or on-site as a research contractor spending some share of your time on MIRI-research. Once again, if both sides are interested, this could potentially lead to a full-time engagement.

At what yearly donation would you prefer marginal hires to earn to give for you instead of directly working for you?

The people who get hired as researchers are hard to replace. Hundreds of thousands or millions of dollars would be appropriate depending on the person. It’s hard to imagine that someone who could join the research team and wants to work on AI safety could make a bigger impact by earning to give.

Anything else that would be useful to know?

In the long term, not everyone who wants to work on the mathematical side of AI safety should work for MIRI. The field is set to grow. Being familiar with MIRI’s problems may prove useful even if you don’t think there will be a good fit with MIRI in particular. All in all, if you’re interested in AI safety work, try to familiarize yourself with the problems, get in contact with people in the field (or the community) early and build flexible career capital at the same time.
