Seriously?
Experience tells us to discount predictions of imminent AGI, to the point where only the strongest of reasons can overcome this. If AIXI represented a large enough increase in understanding of what we're even talking about, that could be part of a strong argument. But as I said in the great-grandparent, it doesn't.
Past predictive accuracy of expert opinion on the subject of AI superintelligence tells us nothing about what to infer from current predictions. If superintelligent AI were to actually arrive tomorrow, or 50 years from now, or 150 years from now, there would be no discernible difference in present expert opinion. On these sorts of subjects, expert opinion is totally uncorrelated with reality. So no, experience tells us nothing about predictions of imminent or non-imminent AGI. We can thank our own Stuart Armstrong for this contribution.
But hey, let's take ...
Today, the Machine Intelligence Research Institute is launching a new forum for research discussion: the Intelligent Agent Foundations Forum! It's already been seeded with a bunch of new work on MIRI topics from the last few months.
We've covered most of the (what, why, how) subjects on the forum's new welcome post and the How to Contribute page, but this post is an easy place to comment if you have further questions (or if, maths forbid, there are technical issues with the forum instead of on it).
But before asking questions here, go ahead and check it out!
(Major thanks to Benja Fallenstein, Alice Monday, and Elliott Jin for their work on the forum code, and to all the contributors so far!)
EDIT 3/22: Jessica Taylor, Benja Fallenstein, and I wrote forum digest posts summarizing and linking to recent work (on the IAFF and elsewhere) on reflective oracle machines, on corrigibility, utility indifference, and related control ideas, and on updateless decision theory and the logic of provability, respectively! These are pretty excellent resources for reading up on those topics, in my biased opinion.