MrMind comments on On saving the world - Less Wrong

Post author: So8res 30 January 2014 08:00PM 101 points




Comment author: shminux 31 January 2014 12:06:36AM 3 points

Write a bullet-point summary for each sequence and tell me that one would not be tempted to "dismiss them out of hand, even lacking a chain of arguments leading up to them", unless one is already familiar with the arguments.

Comment author: MrMind 31 January 2014 04:36:59PM 11 points

I'll try, just for fun, to summarize Eliezer's conclusions from the pre-fun-theory and pre-community-building parts of the Sequences:

  • artificial intelligence can self-improve;
  • with every improvement, the rate at which it can improve increases;
  • AGI will therefore experience exponential improvement (AI foom; a minimal sketch of this step follows the list);
  • even if there's a cap on this process, the resulting agent will be incomprehensibly powerful (singularity);
  • an agent's effectiveness does not constrain its utility function (orthogonality thesis);
  • humanity's utility function occupies a very tiny and fragmented fraction of the set of all possible utility functions (human values are fragile);
  • if we fail to encode the correct human utility function in a self-improving AGI, even tiny differences will result in a catastrophically unpleasant future (UFAI as x-risk);
  • AGI is likely to arrive pretty soon, so we had better hurry to figure out how to get the previous point right.
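
As a minimal sketch of the foom step above (my own illustration, not a formalism from the Sequences): if we assume the AI's capability C grows at a rate proportional to its current capability, the simplest model is

\[
\frac{dC}{dt} = k\,C, \quad k > 0 \quad\Longrightarrow\quad C(t) = C(0)\,e^{kt},
\]

i.e. exponential growth. The "cap" in the fourth bullet corresponds to letting the effective k fall off as C grows (e.g. a logistic correction), which saturates the curve but can still leave C(t) far above its starting point.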