Great meetup; we had a conversation about the probability of AI risk. I initially thought the probability of AI disaster was close to 5%, but talking to Anna Salamon convinced me it's more like 60%.
There was also some discussion of what strategies to follow for AI friendliness.
Do you disagree with her?
Nope. Specifying goal systems is FAI work, not AI work.
So then simpler utility functions will be easier to code and easier to prove correct.
Relative to ancient Greece, building a .45 caliber semiautomatic pistol isn't much harder than building a .22 caliber semiautomatic pistol. You might think the weaker weapon would be less work, but most of the problem doesn't scale all that much with the weapon strength.
OK, so you're saying that FAI is not hard because you have to formalize human morality, it's hard because you have to have a system for formalizing things in general?
I'm tempted to ask why you're so confident on this subject, but this debate probably isn't worth having: once you're at the point where you can formalize things at all, the relative difficulty of formalizing different utility functions will presumably be obvious.
The November LW/OB meet-up will be this Saturday (two days from today), at the SIAI house in Santa Clara. Apologies for the late notice. We'll have fun, food, and attempts at rationality, as well as good general conversation. Details at the bay area OB/LW meet-up page.