Great meetup; we had a good conversation about the probability of AI risk. Initially I thought the probability of AI disaster was close to 5%, but talking with Anna Salamon convinced me it was more like 60%.
There was also some discussion of which strategies to pursue for AI friendliness.
I think this may be of interest to you.
If you have an intelligence in a box, it will have access to its source code. If it modifies itself enough, you will not be able to predict what it wants.
I've read that.
Eliezer thinks he can write a self-modifying AI that will self-modify to want the same things its original self wanted. I'm proposing that he choose a different goal for the AI, one that is easier to specify in code, as an intermediate step toward building a truly friendly AI.
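To make the distinction concrete, here is a toy sketch in Python (purely my own illustration, not Eliezer's proposal; a real self-modifying AI is nothing like a pair of weight vectors, and the goal-preserving step simply assumes the invariance property rather than achieving it). It contrasts unrestricted self-modification, where the goal drifts arbitrarily far from the original, with self-modification that leaves the goal untouched:

```python
# Toy sketch of goal drift under self-modification. The "agent" is just
# a (goal, policy) pair of weight vectors; "self-modification" is a
# random perturbation. Illustrative only.
import random

def perturb(vec):
    """Stand-in for one self-modification: a random perturbation."""
    return [w + random.uniform(-1, 1) for w in vec]

def unrestricted_step(goal, policy):
    # Nothing stops the rewrite from touching the goal itself.
    return perturb(goal), perturb(policy)

def goal_preserving_step(goal, policy):
    # The property Eliezer is after: modify anything except what the
    # agent wants. Trivial here by fiat; very hard for a real AI.
    return goal, perturb(policy)

goal = [1.0, 0.0, 0.0]                 # what the original agent "wants"
free_goal, free_policy = goal, [0.0] * 3
safe_goal, safe_policy = goal, [0.0] * 3

for _ in range(1000):                  # many generations of rewrites
    free_goal, free_policy = unrestricted_step(free_goal, free_policy)
    safe_goal, safe_policy = goal_preserving_step(safe_goal, safe_policy)

dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
print("goal drift, unrestricted:    %.1f" % dist(free_goal, goal))
print("goal drift, goal-preserving:  %.1f" % dist(safe_goal, goal))
```

Running this, the unrestricted agent's goal wanders tens of units from where it started while the goal-preserving agent's stays at zero; the hard part, of course, is getting that invariance for real rather than assuming it.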
The November LW/OB meet-up will be this Saturday (two days from today), at the SIAI house in Santa Clara. Apologies for the late notice. We'll have fun, food, and attempts at rationality, as well as good general conversation. Details at the Bay Area OB/LW meet-up page.