This event will be in the spirit of collaboration with MIRI, and will attempt to respect their guidelines on doing research that will decrease, rather than increase, existential risk. As such, practical implementation questions related to making an approximate Bayesian reasoner fast enough to operate in the real world will not be on-topic.
Are these guidelines written down somewhere? In particular, is there a policy for what sorts of ideas are safe for publication? Assuming there is, is there a procedure for handling ideas that fall outside that category (e.g., submitting them to some sort of trusted mailing list)?
I don't believe so. The policy we will be following will attempt to line up with how it was explained at the workshop I attended. The official policy is to think hard about whether publication increases or decreases risk. The first approximation to this policy is to not publish information that is useful for AI generally, but to publish things that have specific application to FAI and don't help other approaches.
Hi Abram, thanks for commenting!
I'm not sure this rule is sufficient. When you have a cool idea about AGI, there is a strong emotional motivation to rationalize the notion that your idea decreases risk. "Think hard" might not be a sufficiently trustworthy procedure, since you're running on corrupted hardware.
This Saturday, April 26th, we will be holding a one-day FAI workshop in southern California, modeled after MIRI's FAI workshops. We are a group of individuals who, aside from attending some past MIRI workshops, are in no way affiliated with the MIRI organization. More specifically, we are a subset of the existing Los Angeles Less Wrong meetup group that has decided to start working on FAI research together.
The event will start at 10:00 AM, and the location will be:
USC Institute for Creative Technologies
12015 Waterfront Drive
Playa Vista, CA 90094-2536.
This first workshop will be open to anyone who would like to join us. If you are interested, please let us know in the comments or by private message. We plan to have more of these in the future, so if you are interested but unable to make this event, please also let us know. You are welcome to decide to join at the last minute. If you do, still comment here so we can give you the necessary phone numbers.
Our hope is to produce results that will be helpful for MIRI, and so we are starting off by going through the MIRI workshop publications. If you will be joining us, it would be nice if you read the papers linked to here, here, here, here, and here before Saturday. Reading all of these papers is not necessary, but please take a look at one or two of them to get an idea of what we will be doing.
Experience in artificial intelligence is not at all necessary, but some experience in mathematics probably is. If you can follow the MIRI publications, you should be fine. Even if you are under-qualified, there is very little risk of holding anyone back or otherwise having a negative impact on the workshop. If you think you would enjoy the experience, go ahead and join us.
This event will be in the spirit of collaboration with MIRI, and will attempt to respect their guidelines on doing research that will decrease, rather than increase, existential risk. As such, practical implementation questions related to making an approximate Bayesian reasoner fast enough to operate in the real world will not be on-topic. Rather, the focus will be on the abstract mathematical design of a system capable of having reflexively consistent goals, performing naturalistic induction, et cetera.
Food and refreshments will be provided for this event, courtesy of MIRI.