hairyfigment comments on New forum for MIRI research: Intelligent Agent Foundations Forum - Less Wrong

36 Post author: orthonormal 20 March 2015 12:35AM




Comment author: hairyfigment 23 March 2015 05:47:31PM 0 points

As people have said at length, AIXI is not a solution even in principle. Hence MIRI's work on an actual theory of AI and FAI. Speaking of, I've said this before, but I'll state it now more starkly: your timeline seems as delusional to me as MIRI apparently seems to you.

Comment author: [deleted] 23 March 2015 06:47:51PM 3 points

Speaking of, I've said this before, but I'll state it now more starkly: your timeline seems as delusional to me as MIRI apparently seems to you.

That's great, I'd love to engage with you on that. What timeline would you give higher probability to, and why?

Comment author: hairyfigment 23 March 2015 08:13:03PM -2 points

I roughly agree with Luke - that would be the director of MIRI - in placing the median close to 2070.

Comment author: [deleted] 23 March 2015 08:23:20PM 0 points

What about the second half of the question, why?

Comment author: hairyfigment 23 March 2015 11:14:08PM -1 points

Seriously?

Experience tells us to discount predictions of imminent AGI, to the point where only the strongest of reasons can overcome this. If AIXI represented a large enough increase in understanding of what we're even talking about, that could be part of a strong argument. But as I said in the great-grandparent, it doesn't.

Comment author: [deleted] 25 March 2015 12:07:06AM * 3 points

Past predictive accuracy of expert opinion on the subject of AI superintelligence tells us nothing about what to infer from current predictions. If superintelligent AI were to actually arrive tomorrow, or 50 years from now, or 150 years from now, there would be no discernible difference in present expert opinion. On these sorts of subjects expert opinion is totally uncorrelated with reality. So no, experience tells us nothing about predictions of imminent or non-imminent AGI. We can thank our own Stuart Armstrong for this contribution.

But hey, let's take 2070 at face value. That'd be great news! We could completely forget about the existential threat due to unfriendly AI. After all, it'd be decades after even pessimistic estimates for whole-brain emulation[1] enabling the first uploaded human intelligences. And a decade or so further after atomically precise manufacturing[2] gives us the tools to do in-vivo[3] intelligence enhancement. By 2070 we'd already be in a world of human-derived superintelligences, so thankfully we needn't fret over our own biological limitations preventing us from keeping pace with superintelligent AI.

Or is that not the future you imagined?

  1. http://www.fhi.ox.ac.uk/brain-emulation-roadmap-report.pdf
  2. https://www.foresight.org/roadmaps/Nanotech_Roadmap_2007_main.pdf
  3. http://www.nanomedicine.com/