Great meetup; we had a conversation about the probability of AI risk. Initially I thought that the probability of AI disaster was close to 5%, but speaking to Anna Salamon convinced me that it was more like 60%.
There was also some discussion of what strategies to follow for AI friendliness.
OK, so you're saying that FAI is hard not because you have to formalize human morality, but because you have to have a system for formalizing things in general?
I'm tempted to ask why you're so confident on this subject, but this debate probably isn't worth having because once you're at the point where you can formalize things, the relative difficulty of formalizing different utility functions will presumably be obvious.
This also seems to be the only way out. If human values are too complex to reimplement manually (which seems to be the case), you have to create a tool capable of doing that automatically. And once you have that tool, cutting corners on the content of human values would just be pointless: the tool will work on the whole thing. And you can't cut corners on the tool itself, just as you can't have a working computer with only a randomly sampled 50% of its circuitry.
The November LW/OB meet-up will be this Saturday (two days from today), at the SIAI house in Santa Clara. Apologies for the late notice. We'll have fun, food, and attempts at rationality, as well as good general conversation. Details are on the Bay Area OB/LW meet-up page.