I'm pleased to announce friendly-artificial-intelligence, a Google Group intended for research-level discussion of problems in FAI and AGI, in particular for discussions that are highly technical and/or math-intensive.
Some examples of possible discussion topics: naturalized induction, decision theory, tiling agents / the Löbian obstacle, logical uncertainty, and so on.
I invite everyone who wants to take part in FAI research to participate in the group. This obviously includes people affiliated with MIRI, FHI, and CSER, people who attend MIRI workshops, and participants in the Southern California FAI workshop.
Please come in and share your discoveries, ideas, thoughts, questions, et cetera. See you there!
I cannot imagine the circumstances under which a stray hobbyist would be able to beat a massive government or corporate effort to the punch in AI. The imbalance of resources is simply too great. Concern yourself with what the goals and methodologies of those large corporate or government efforts should look like.
I get the concept, but I am totally unconvinced that anything MIRI is putting out could increase x-risk; in fact, I think it's wildly improbable that any research by any organization today could lower AI-related x-risk with decent odds. We're so far from real AI that it's as if Ernest Rutherford had tried to direct the eventual weaponization of his discoveries.
Also, if MIRI actually were sitting on something they'd researched because of supposed potential x-risk increase, I'd take it substantially less seriously as a research organization.
Really? It seems to me that they bring up the possibility more often than they would if it were a problem they'd never actually encountered. Then again, it's possible that they're playing one level higher than that, or just being typically precautionary (in which case I say kudos to them for taking the precaution).