wedrifid comments on Open Thread September, Part 3 - Less Wrong

Post author: LucasSloan, 28 September 2010 05:21AM


Comment author: wedrifid, 02 October 2010 01:31:03PM, 2 points

    Even an FAI has two goals (Friendliness and increasing its intelligence) which may come into conflict.

No, just Friendliness. Increasing intelligence has no weight whatsoever as a terminal goal. Of course, an AI that did not increase its intelligence to a level at which it could do anything practical to aid me (or whatever the AI is Friendly to) is trivially not Friendly a posteriori.
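
One way to make that distinction concrete: intelligence can carry zero terminal weight and still be worth pursuing, because it pays off only through the Friendliness term. The sketch below is a toy illustration under invented assumptions; the `World` fields, the `terminal_utility` function, and all numbers are mine, not anything from the comment.

```python
from dataclasses import dataclass

@dataclass
class World:
    friendliness_achieved: float  # how well the AI's charges are served (illustrative)
    ai_intelligence: float        # the AI's capability level (illustrative)

def terminal_utility(w: World) -> float:
    """Terminal goal: only Friendliness counts; intelligence gets zero weight."""
    return w.friendliness_achieved

# Intelligence still matters instrumentally: a smarter AI tends to reach
# worlds with higher friendliness_achieved, so self-improvement is favored
# for its downstream effect on the Friendliness term, never for itself.
smart = World(friendliness_achieved=0.9, ai_intelligence=100.0)
dumb = World(friendliness_achieved=0.2, ai_intelligence=1.0)
assert terminal_utility(smart) > terminal_utility(dumb)
```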

Comment author: NancyLebovitz, 02 October 2010 01:41:32PM, 0 points

That leads to an interesting question: how would an FAI decide how much intelligence is enough?

Comment author: wedrifid, 02 October 2010 01:51:12PM, 2 points

I don't know. It's supposed to be the smart one, not me. ;)

I'm hoping it goes something like:

  • Predict the expected outcome of choosing to self-improve some more.
  • Predict the expected outcome of choosing not to self-improve some more.
  • Do whichever yields the better probability distribution over outcomes.
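
A minimal sketch of that loop, under stated assumptions: the comment specifies no world-model, sampling scheme, or utility function, so `toy_model`, the sample counts, and all numbers below are stand-ins I've invented. The only fixed idea is the comparison itself: estimate each option's value under a (Friendliness-only) utility function, then pick the better one.

```python
import random

def predict_outcomes(model, action, n_samples=10_000):
    """Sample possible futures conditional on an action; 'model' is a
    stand-in for whatever predictive machinery the AI actually has."""
    return [model(action) for _ in range(n_samples)]

def expected_utility(outcomes, utility):
    return sum(utility(o) for o in outcomes) / len(outcomes)

def decide(model, utility):
    # 1. Predict the outcome distribution of self-improving some more.
    improve = predict_outcomes(model, "self_improve")
    # 2. Predict the outcome distribution of not self-improving.
    stay = predict_outcomes(model, "stay")
    # 3. Do whichever scores better. (Collapsing each distribution to its
    #    mean is a simplification; a real comparison might also weigh
    #    variance and tail risk.)
    if expected_utility(improve, utility) > expected_utility(stay, utility):
        return "self_improve"
    return "stay"

# Toy world-model: self-improvement raises the mean Friendliness payoff
# but adds variance. All numbers are arbitrary illustrations.
def toy_model(action):
    if action == "self_improve":
        return random.gauss(1.0, 0.5)
    return random.gauss(0.6, 0.1)

print(decide(toy_model, utility=lambda outcome: outcome))  # usually 'self_improve'
```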