Great meetup; we had a conversation about the probability of AI risk. Initially I put the probability of AI disaster at close to 5%, but speaking with Anna Salamon convinced me it was more like 60%.
There was also some discussion of strategies for AI friendliness, for example giving the AI the goal of not going outside its box.
It would be nice if you could tell an AI not to affect anything outside its box.
10 points will be awarded to the first person who spots why "don't affect anything outside your box" is problematic.
There's a difference between "don't affect anything outside your box" and "don't go outside your box." My point is that we don't necessarily have to build full FAI before anyone builds a self-improving AI: there are goal systems that, while not reflecting human values and goals, would still keep an AI from destroying humanity.
The November LW/OB meet-up will be this Saturday (two days from today) at the SIAI house in Santa Clara. Apologies for the late notice. We'll have fun, food, and attempts at rationality, as well as good general conversation. Details are on the Bay Area OB/LW meet-up page.