Wei_Dai comments on Max Tegmark on our place in history: "We're Not Insignificant After All" - Less Wrong

Comment author: whpearson 04 January 2010 05:35:54PM 4 points

My current position is that I don't know what the correct action is to nudge the world the way I want. The world seems to be more or less working at this point, and any nudge may send it onto a path toward something that doesn't work (even sub-human AI might change the order of the world so much that it stops working).

So my strategy is to try to prepare a nudge that could be used in case of emergency. Since I am also trying to live a semi-normal life and cope with akrasia etc., it is not going quickly.

Comment author: Wei_Dai 04 January 2010 06:04:22PM 3 points

There are some actions that seem to be clear wins, like fighting against unFriendly AI. But I find it difficult to see what kind of nudge you could prepare that would be effective in an emergency. Can you say more about the kind of thing you had in mind?

Comment author: whpearson 04 January 2010 08:12:53PM 1 point

I think very fast UFAI is unlikely, so I tend to worry about the rest of the bottleneck. Slow AI* has its own dangers and is not a genie I would like to let out of the bottle unless I really need it. Even if the first Slow AI is Friendly, that doesn't guarantee the next 1000 will be, so it depends on the interaction between the AI and the society that makes it.

Not that I expect to code it all myself. I really should be thinking about setting up an institution to develop and hide the information in such a way that it is distributed but doesn't leak. The time to release the information/code would be when there had been a non-trivial depopulation of Earth and it was having trouble re-forming an industrial society (or some other time when industrial Earth was in danger). The reason not to release it straight away would be the hope of gaining a better understanding of the future trajectory of the Slow AIs.

There might be an argument for releasing the information if we could show that we would never get a better understanding of the future of the Slow AIs.

*By Slow AI I mean AI that has as much likelihood of Fooming as unenhanced humans do, due to sharing similar organization and limitations of intelligence.