JoshuaZ comments on A Proposed Adjustment to the Astronomical Waste Argument - Less Wrong

19 points · Post author: Nick_Beckstead 27 May 2013 03:39AM




Comment author: JoshuaZ 27 May 2013 05:51:01PM *  5 points

Is there any timeline where, if it hasn't happened by that point, you'd start doubting whether it will occur?

Comment author: Eliezer_Yudkowsky 27 May 2013 07:18:47PM 8 points

My difficulty imagining a genuinely realistic mechanism of impossibility is such that I want to see the details of how it doesn't happen before I update. I could make up dumb stories, but they would be the wrong explanation if it actually happened, because I don't think those dumb stories are actually plausible.

Comment author: TheOtherDave 27 May 2013 06:12:21PM 8 points

Is there any timeline where, if it has happened by that point, you'd start doubting whether it will occur?

While I acknowledge that this sort of counterintuitive anti-inductivist position has precedent on this site, I suspect you mean "hasn't happened".

Comment author: JoshuaZ 27 May 2013 07:22:17PM 1 point

Yes, fixed, thank you.

Comment author: Benja 27 May 2013 07:01:55PM *  4 points

(1) I agree with the grandparent.

(2) Yes, of course. But I feel there's enough evidence to assign very low probability to AGI not being inventable if humanity survives, yet not enough evidence to assign very low probability to it being very hard and taking very long; eyeballing, it might well take thousands of years of no AGI before I'd even consider AGI-is-impossible seriously (assuming no other evidence crops up for why AGI is impossible, besides humanity having no clue how to do it; conditioning on AGI being impossible, I would expect such evidence to crop up earlier). Eliezer might put less weight on the tail of the time-to-AGI distribution, and so may have a correspondingly shorter time before considering impossible AGI seriously.

If we have had von Neumann-level AGI for a while and still have no idea how to make a more efficient AGI, my update towards "superintelligence is impossible" would be very much quicker than the update towards "AGI is impossible" in the above scenario, I think. [ETA: Of course I still expect you could run it faster than a biological human, but I can conceive of a scenario where it's within a few orders of magnitude of a von Neumann WBE, the remaining difference coming from the emulation overhead and from inefficiencies in the human brain that the AGI doesn't have but that don't lead to super-large improvements.]