## Meetup : West LA - Big Numbers

## Discussion article for the meetup : West LA - Big Numbers

**How to Find Us**: Go into this Del Taco. We will be in the back room if possible.

**Parking** is free in the lot out front or on the street nearby.

**Discussion**: We will gain some respect for how big infinity is by learning about some things that are smaller than it. Then we will apply what we learned to two philosophical thought experiments.

**Recommended Reading**:

Pascal's Mugging: Tiny Probabilities of Vast Utilities

No prior exposure to Less Wrong is required; this will be generally accessible.


## Meetup : West LA Meetup: Lightning Talks

## Discussion article for the meetup : West LA Meetup: Lightning Talks

How to Find Us: Go into the Del Taco. There will be a Rubik's Cube. Parking is completely free. There is a sign that claims there is a 45-minute time limit, but it is not enforced.

Discussion: Everyone attending is encouraged to bring a 5-10 minute presentation (or lead a 5-10 minute discussion) on any rationality topic that they like. You are welcome to attend even if you do not want to bring a topic. If you already know what you will be talking about, leave a comment, so people can get excited about it.

Note: it starts at 7:00 PM. The listing that says it starts at 8:00 is wrong.


## Meetup : West LA Meetup: Lightning Talks

## Discussion article for the meetup : West LA Meetup: Lightning Talks

How to Find Us: Go into the Del Taco. There will be a Rubik's Cube. Parking is completely free. There is a sign that claims there is a 45-minute time limit, but it is not enforced.

Discussion: This week we will try a new type of discussion. Everyone attending is encouraged to bring a 5-10 minute presentation (or lead a 5-10 minute discussion) on any rationality topic that they like. You are welcome to attend even if you do not want to bring a topic. There will be a small rationality related prize for those who choose to participate!


## Meetup : The Prisoner's Dilemma

## Discussion article for the meetup : The Prisoner's Dilemma

How to Find Us: Go into this Del Taco. We will be in the back room.

Parking is free in the lot out front or on the street nearby.

Discussion: This week, we will talk all about the Prisoner's Dilemma. I will start with a lesson on matrix game theory in general and how to find a Nash equilibrium. We will then talk about the Prisoner's Dilemma and why its best outcome is not a Nash equilibrium. I will explain exactly what makes a matrix game a type of prisoner's dilemma, and what a true prisoner's dilemma might look like in real life. We will also discuss what it takes to cooperate in finitely and infinitely repeated prisoner's dilemmas, and the history of prisoner's dilemma tournaments. We will talk about methods for cooperating in the Prisoner's Dilemma when playing against someone who is either sufficiently similar to you or someone that you can simulate. Hopefully this will inspire everyone to come up with an entry for the upcoming program equilibrium prisoner's dilemma tournament.
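The core of the lesson can be sketched in a few lines of code. This is my own illustrative toy, not material from the talk: the payoff numbers and the helper `pure_nash_equilibria` are made up, but the structure is the standard Prisoner's Dilemma, where mutual defection is the unique pure-strategy Nash equilibrium even though mutual cooperation pays both players more.

```python
# Strategies: 0 = Cooperate, 1 = Defect.
# Payoffs are (row player, column player); the numbers are illustrative.
PAYOFFS = {
    (0, 0): (3, 3),  # both cooperate: good for both
    (0, 1): (0, 5),  # row cooperates, column defects
    (1, 0): (5, 0),  # row defects, column cooperates
    (1, 1): (1, 1),  # both defect: the equilibrium outcome
}

def pure_nash_equilibria(payoffs):
    """Return profiles where neither player gains by deviating alone."""
    equilibria = []
    for (r, c), (pr, pc) in payoffs.items():
        row_ok = all(payoffs[(r2, c)][0] <= pr for r2 in (0, 1))
        col_ok = all(payoffs[(r, c2)][1] <= pc for c2 in (0, 1))
        if row_ok and col_ok:
            equilibria.append((r, c))
    return equilibria

print(pure_nash_equilibria(PAYOFFS))  # [(1, 1)]: mutual defection
```

Note that (0, 0) gives both players a payoff of 3, strictly better than the equilibrium's 1, which is exactly the tension the discussion is about.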

No prior exposure to Less Wrong is required; this will be generally accessible.


## Maximize Worst Case Bayes Score

In this post, I propose an answer to the following question:

Given a consistent but incomplete theory, how should one choose a random model of that theory?

My proposal is rather simple. Just assign probabilities to sentences in such a way that if an adversary were to choose a model, your Worst Case Bayes Score is maximized. This assignment of probabilities represents a probability distribution on models, and we choose randomly from this distribution. However, it will take some work to show that what I just described even makes sense. We need to show that the Worst Case Bayes Score can be maximized, that such a maximum is unique, and that this assignment of probabilities to sentences represents an actual probability distribution. This post gives the necessary definitions and proves these three facts.

Finally, I will show that any given probability assignment is coherent if and only if it is impossible to change the probability assignment in a way that simultaneously improves the Bayes Score by an amount bounded away from 0 in all models. This is nice because it gives us a measure of how far a probability assignment is from being coherent. Namely, we can define the "incoherence" of a probability assignment to be the supremum of the amount by which you can simultaneously improve the Bayes Score in all models. This could be a useful notion, since we usually cannot compute a coherent probability assignment, so in practice we need to work with incoherent probability assignments which approach a coherent one.

I wrote up all the definitions and proofs on my blog, and I do not want to go through the work of translating all of the LaTeX code over here, so you will have to read the rest of the post there. Sorry. In case you do not care enough about this to read the formal definitions, let me just say that my definition of the "Bayes Score" of a probability assignment P with respect to a model M is the sum over all true sentences s of m(s) log(P(s)), plus the sum over all false sentences s of m(s) log(1 - P(s)), where m is some fixed nowhere-zero probability measure on all sentences (e.g., m(s) = 2^(-k), where k is the number of bits needed to encode s).
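For readers who prefer code to formulas, here is a minimal sketch of that scoring rule. The sentence names, weights, and probabilities below are invented for illustration; only the formula itself comes from the definition above.

```python
import math

def bayes_score(P, truth, m):
    """Bayes Score of probability assignment P against a model.

    P, truth, m: dicts keyed by sentence; truth[s] says whether s holds
    in the model, and m is a fixed nowhere-zero measure on sentences.
    """
    score = 0.0
    for s in P:
        # Credit log(P(s)) for true sentences, log(1 - P(s)) for false ones,
        # each weighted by m(s).
        p = P[s] if truth[s] else 1.0 - P[s]
        score += m[s] * math.log(p)
    return score

m = {"A": 0.5, "B": 0.25}          # illustrative weights, not summing rules
truth = {"A": True, "B": False}    # the adversary's chosen model
P_good = {"A": 0.9, "B": 0.1}      # close to the truth
P_bad = {"A": 0.1, "B": 0.9}       # far from the truth
print(bayes_score(P_good, truth, m) > bayes_score(P_bad, truth, m))  # True
```

Since the score is a sum of logarithms of probabilities, it is always negative, and assignments closer to the model's truth values score higher.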

I would be very grateful if anyone can come up with a proof that this probability distribution which maximizes Worst Case Bayes Score has the property that its Bayes Score is independent of the choice of what model we use to judge it. I believe it is true, but have not yet found a proof.
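As a sanity check on both the maximin proposal and this conjecture, here is a toy case I worked out myself (not from the post): a single undecided sentence s with weight m(s) = 1 and two candidate models, s true or s false. A grid search finds the worst-case-optimal assignment, and at that optimum the score is the same whichever model the adversary picks.

```python
import math

def worst_case_score(p):
    """Worst-case Bayes Score of assigning probability p to sentence s."""
    score_if_true = math.log(p)        # score if the adversary makes s true
    score_if_false = math.log(1 - p)   # score if the adversary makes s false
    return min(score_if_true, score_if_false)

# Grid search over candidate probabilities in (0, 1).
best_p = max((i / 1000 for i in range(1, 1000)), key=worst_case_score)
print(best_p)  # 0.5
```

In this one-sentence toy the conjecture holds trivially: at the maximin point p = 0.5 the two model scores are both log(1/2), so the Bayes Score does not depend on the adversary's choice of model.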

## Second MIRIxLosAngeles Meeting

The second MIRIxLosAngeles meeting will be held on Saturday, June 14, at 10:00 AM. The location will be the same as last time:

USC Institute for Creative Technologies

12015 Waterfront Drive

Playa Vista, CA 90094-2536.

If you would like to join us, please let me know, so I can know to expect you and give you the necessary contact information. Due to the primary interests of the organizers, and some recent results I have to share, I expect us to focus on the questions of "How should an agent assign probabilities to logical statements?" and "How should an agent assign probabilities to statements about its own probability function?" However, I expect that other people will come and direct the conversation in other useful directions, so it is hard to predict.

Experience in artificial intelligence will not be at all necessary, but experience in mathematics probably is. If you can follow the MIRI publications, you should be fine.

This event will be in the spirit of collaboration with MIRI, and will attempt to respect their guidelines on doing research that will decrease, rather than increase, existential risk. As such, practical implementation questions related to making an approximate Bayesian reasoner fast enough to operate in the real world will not be on-topic. Rather, the focus will be on the abstract mathematical design of a system capable of having reflexively consistent goals, performing naturalistic induction, et cetera.

Food and refreshments will be provided for this event, courtesy of MIRI.

If you are not local to Los Angeles, you may want to consider hosting your own MIRIx event. You can find more information about this here.


## Summary of the first SoCal FAI Workshop

Last Saturday, nine people met for the Southern California FAI Workshop. Unsurprisingly, we did not come up with any major results, but I know some people were curious about this experiment, so I am providing a summary anyway.

First, I would like to say that I consider this first meeting a success. The turnout was higher than I expected. We had 9 participants, and there were 2 other people who did not show up due to scheduling conflicts. We basically stayed on topic the entire 7 hours from 10:00 to 5:00, and then we had dinner at 5:00, generously provided by MIRI. We will be hosting these workshops again. In fact, we have decided to hold them monthly. We will probably follow a schedule of meeting the first Saturday of each month, starting in June. I will make another post announcing the second meetup once this date is finalized.

We talked about various ideas participants had about FAI, but most of our time was spent thinking about probability distributions on consistent theories. One thing we observed is that if you view the space of all probability assignments to logical sentences as living inside the vector space of all functions from sentences to the real numbers, then the collection of coherent probability assignments (those which correspond to probability distributions on consistent theories) is an affine subspace. This is exciting, because we can set up an inner product on this vector space and orthogonally project probability assignments onto the closest point on this subspace (i.e., find a coherent probability assignment near a given probability assignment). Further, while this projection is not computable, there exists a computable procedure which converges to this point. However, I am now convinced that this idea is a dead end, for the following reason: just because the point you start with has all coordinates between 0 and 1 does not mean that its projection onto the subspace of coherent assignments still has all coordinates between 0 and 1. (Imagine a 3D unit cube, and imagine that an assignment is coherent exactly when x + y + z = 1. If you project (1, 1, 0) onto this subspace, you get (2/3, 2/3, -1/3), which is not a valid probability assignment.)
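The cube example is easy to check numerically. A minimal sketch, assuming the standard inner product (which the writeup leaves unspecified):

```python
def project_to_plane(v):
    """Orthogonally project v onto the affine plane x + y + z = 1.

    The plane's normal direction is (1, 1, 1), so the projection just
    spreads the excess of sum(v) over 1 evenly across the coordinates.
    """
    excess = (sum(v) - 1.0) / len(v)
    return [x - excess for x in v]

print(project_to_plane([1, 1, 0]))  # (2/3, 2/3, -1/3): outside the unit cube
```

The projected point sums to 1 as required, but its last coordinate is negative, confirming that projection alone cannot be trusted to produce a valid probability assignment.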

However, we did get several good things out of the meeting. First, we introduced several new mathy people to the problems associated with FAI. Second, we set up an email list, so that we can bounce our ideas off of people that we know personally and who are interested in this stuff. Third, and most importantly, we have become excited about doing more. I personally spent most of the day after the workshop writing up lots of material related to what we observed above (this was before I discovered that it did not work), and I know I am not the only one to have this reaction.

Thanks to all of the participants, and please let me know if you would be interested in joining us next time!

## Southern California FAI Workshop

This Saturday, April 26th, we will be holding a one day FAI workshop in southern California, modeled after MIRI's FAI workshops. We are a group of individuals who, aside from attending some past MIRI workshops, are in no way affiliated with the MIRI organization. More specifically, we are a subset of the existing Los Angeles Less Wrong meetup group that has decided to start working on FAI research together.

The event will start at 10:00 AM, and the location will be:

USC Institute for Creative Technologies

12015 Waterfront Drive

Playa Vista, CA 90094-2536.

This first workshop will be open to anyone who would like to join us. If you are interested, please let us know in the comments or by private message. We plan to have more of these in the future, so if you are interested but unable to make this event, please also let us know. You are welcome to decide to join at the last minute. If you do, still comment here, so we can give you the necessary phone numbers.

Our hope is to produce results that will be helpful for MIRI, and so we are starting off by going through the MIRI workshop publications. If you will be joining us, it would be nice if you read the papers linked to here, here, here, here, and here before Saturday. Reading all of these papers is not necessary, but it would be nice if you take a look at one or two of them to get an idea of what we will be doing.

Experience in artificial intelligence will not be at all necessary, but experience in mathematics probably is. If you can follow the MIRI publications, you should be fine. Even if you are under-qualified, there is very little risk of holding anyone back or otherwise having a negative impact on the workshop. If you think you would enjoy the experience, go ahead and join us.

This event will be in the spirit of collaboration with MIRI, and will attempt to respect their guidelines on doing research that will decrease, rather than increase, existential risk. As such, practical implementation questions related to making an approximate Bayesian reasoner fast enough to operate in the real world will not be on-topic. Rather, the focus will be on the abstract mathematical design of a system capable of having reflexively consistent goals, performing naturalistic induction, et cetera.

Food and refreshments will be provided for this event, courtesy of MIRI.

## Open Thread: March 4 - 10

If it's worth saying, but not worth its own post (even in Discussion), then it goes here.
