Great meetup; we had a good conversation about the probability of AI risk. Initially I thought the probability of AI disaster was close to 5%, but speaking to Anna Salamon convinced me that it was more like 60%.
Also some discussion about what strategies to follow for AI friendliness.
It might be cheaper in terms of resources to build an independent safeguard that is 50% reliable than to review your code once more to push it from 99.9% reliable to 99.95%.
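The arithmetic behind that comparison can be sketched as follows. This is my own illustration, assuming the safeguard fails independently of the base system: a safeguard that catches half of all failures halves the residual failure probability, which is exactly the same gain as improving base reliability from 99.9% to 99.95%.

```python
def residual_failure(base_reliability: float, safeguard_catch_rate: float) -> float:
    """Failure probability after an independent safeguard filters failures.

    Assumes the safeguard's success is independent of the base system's.
    """
    base_failure = 1.0 - base_reliability
    return base_failure * (1.0 - safeguard_catch_rate)

# Base system: 99.9% reliable, i.e. a 0.1% failure rate.
with_safeguard = residual_failure(0.999, 0.50)  # 50%-reliable safeguard
more_review = 1.0 - 0.9995                      # extra code review instead

print(f"with safeguard: {with_safeguard:.4%}")
print(f"more review:    {more_review:.4%}")
# Both routes leave roughly a 0.05% failure probability.
```

The independence assumption is doing real work here: if the safeguard tends to fail in the same situations as the base system, the gain is smaller.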
If AI research is anything like research in certain other fields, there's a significant speed benefit to be gained from being able to experiment and prototype.
Yes, you can learn much more from witnessing one failure than from dozens of thought experiments. But things change when a failure has a significant chance of exterminating the human race.
The November LW/OB meet-up will be this Saturday (two days from today), at the SIAI house in Santa Clara. Apologies for the late notice. We'll have fun, food, and attempts at rationality, as well as good general conversation. Details at the bay area OB/LW meet-up page.