Thomas comments on Risks from AI and Charitable Giving - Less Wrong

Post author: XiXiDu 13 March 2012 01:54PM

Comments (126)

Comment author: Thomas 14 March 2012 02:06:46PM 0 points

But that's hardly a "FOOM", and the Romans would have a hundred years to stop you,

Exactly. And here the parable breaks down. The upload just might have those centuries: virtual subjective time of thousands of years to devise a cunning plan, before we humans even register its advantage. Yudkowsky wrote a short story about this: http://lesswrong.com/lw/qk/that_alien_message/

Comment author: asr 14 March 2012 03:57:44PM * 2 points

Bugmaster's point was that it takes a century of action by external parties, not a century of subjective thinking time. The timetable doesn't get advanced all that much by super-intelligence. Real-world changes happen on real-world timetables. And yes, the rate of change might be exponential, but exponential curves grow slowly at first.

And meanwhile, other things are happening in that century that might upset the plans and that cannot be arbitrarily controlled even by super-intelligence.

Comment author: JohnWittle 14 March 2012 06:18:40PM 0 points

Err... minor quibble.

Exponential curves grow at the same rate all the time. That is, if you zoom in on the e^x graph at any point at any scale, it will look exactly the same as it did before you zoomed in.

Comment author: asr 14 March 2012 06:42:01PM 0 points

I think we are using "rate" in different ways. The absolute rate of change per unit time for an exponential is hardly constant: if you look at the segment of e^x near, say, x = 10, it's growing much faster than it is near x = -10.
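The distinction the two commenters are circling can be made concrete with a short numerical sketch (not part of the original thread): the absolute slope of e^x varies enormously with x, while the relative (percentage) rate of growth — the sense in which the curve "looks the same" everywhere — is constant.

```python
import math

def f(x):
    """The exponential curve under discussion."""
    return math.exp(x)

def absolute_rate(x, h=1e-6):
    """Absolute rate of change (numerical derivative) of f at x."""
    return (f(x + h) - f(x - h)) / (2 * h)

def relative_rate(x, h=1e-6):
    """Relative (percentage) growth rate of f at x: f'(x) / f(x)."""
    return absolute_rate(x, h) / f(x)

# Absolute slope differs by a factor of e^20 (~5e8) between the two points:
print(absolute_rate(-10))   # tiny: the curve is nearly flat here
print(absolute_rate(10))    # huge: the curve is very steep here

# Relative rate is ~1.0 at both points: constant proportional growth.
print(relative_rate(-10))
print(relative_rate(10))
```

This is the sense in which both comments are right: JohnWittle's self-similarity claim is about the constant relative rate, while asr's point about slow early growth is about the absolute rate.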

Comment author: Bugmaster 14 March 2012 04:33:43PM 0 points [-]

asr got my point exactly right.