Great meetup; there was a lot of conversation about AI risk. Initially I thought the probability of AI disaster was close to 5%, but speaking to Anna Salamon convinced me it's more like 60%.
There was also some discussion of what strategies to follow for AI friendliness.
I was also interested in the discussion of AI risk reduction strategies. Although SIAI espouses friendly AI, there hasn't been much thought about risk mitigation for potentially unfriendly AIs. One example is the AI box: while certainly not 100% effective, it's better than nothing (assuming it doesn't encourage people to run more UFAIs). Another would be to program an unfriendly AI with goals that cause it to behave in a way that does not destroy the world, for example a goal of not going outside its box (sketched below).
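To make the goal-constraint idea concrete, here is a minimal sketch in Python. Everything in it is a hypothetical illustration I'm inventing for the example (the `affects_outside` predicate, the penalty value, the stand-in utility), not anything SIAI or anyone at the meetup actually proposed.

```python
# Toy sketch of a goal-constrained agent: its score for an action is its
# original utility, minus a large penalty whenever the action is judged
# to affect anything outside the box. The affects_outside predicate is a
# hypothetical placeholder for "this action has effects outside the box".

def constrained_score(action, base_utility, affects_outside, penalty=1e9):
    """Original utility, heavily penalized for out-of-box effects."""
    score = base_utility(action)
    if affects_outside(action):
        score -= penalty
    return score

def choose_action(actions, base_utility, affects_outside):
    """Pick the highest-scoring action under the box constraint."""
    return max(actions,
               key=lambda a: constrained_score(a, base_utility, affects_outside))

# Example usage with stand-in functions:
if __name__ == "__main__":
    actions = ["answer question", "send network packet"]
    base_utility = lambda a: len(a)             # stand-in utility
    affects_outside = lambda a: "network" in a  # stand-in predicate
    print(choose_action(actions, base_utility, affects_outside))
```

Note that the sketch quietly assumes someone can actually write `affects_outside`; all the difficulty lives in that predicate.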
While the problem of friendly AI is hard enough to make people give up on it, I think the problem of controlling an unfriendly AI is hard enough to make some of the pro-FAI people give up on that too.
It would be nice if you could tell an AI not to affect anything outside its box.
10 points will be awarded to the first person who spots why "don't affect anything outside your box" is problematic.
The November LW/OB meet-up will be this Saturday (two days from today) at the SIAI house in Santa Clara. Apologies for the late notice. We'll have fun, food, and attempts at rationality, as well as good general conversation. Details are on the Bay Area OB/LW meet-up page.