Tim_Freeman comments on Against Discount Rates - Less Wrong

Post author: Eliezer_Yudkowsky 21 January 2008 10:00AM


Comment author: Tim_Freeman 29 April 2008 11:39:15AM 2 points

Three points in response to Eliezer's post and one of his replies:

*** A limited time horizon works better than he says. If an AI wants to put its world into a state desired by humans, and it knows that those humans don't want to live in a galaxy that will explode within a year, then an AI that closes its books after 1000 years will still make sure the galaxy doesn't explode in year 1001.
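A minimal sketch of why the argument goes through (my toy model, not from the comment; the numbers and names are illustrative): the humans inside the horizon already care about what is scheduled to happen just after it, so an explosion in year 1001 still costs the finite-horizon agent utility in year 1000.

```python
# Toy model: a finite-horizon agent serving humans whose preferences at each
# year include "the galaxy should not be about to explode".
# All names and numbers are illustrative.

HORIZON = 1000  # the AI "closes its books" after this many years

def human_utility(year, explosion_year):
    """Humans at `year` dislike living in a galaxy that will explode
    within the next year (or has already exploded)."""
    if explosion_year is not None and explosion_year <= year + 1:
        return 0.0
    return 1.0

def agent_score(explosion_year):
    """The finite-horizon agent only sums human utility up to HORIZON."""
    return sum(human_utility(y, explosion_year) for y in range(HORIZON + 1))

# An explosion scheduled for year 1001 lies outside the horizon, yet it still
# lowers the score, because the humans at year 1000 don't want it looming.
print(agent_score(explosion_year=None))   # 1001.0
print(agent_score(explosion_year=1001))   # 1000.0 -- worse, so the agent prevents it
```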

*** An unbounded utility function works worse than he says. Recall the ^^^^ operator, originally due to Knuth (see http://en.wikipedia.org/wiki/Knuth%27s_up-arrow_notation), which was used in the Pascal's Mugging article at http://lesswrong.com/lw/kd/pascals_mugging_tiny_probabilities_of_vast/.
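For readers unfamiliar with the notation, here is a minimal sketch of the standard up-arrow recursion (the function name is mine). Even 3^^^3 is already far too large to evaluate, which is the point.

```python
def up_arrow(a, n, b):
    """Knuth's up-arrow: a followed by n arrows followed by b.
    One arrow is ordinary exponentiation; each extra arrow iterates the last."""
    if n == 1:
        return a ** b
    if b == 0:
        return 1
    return up_arrow(a, n - 1, up_arrow(a, n, b - 1))

print(up_arrow(3, 1, 3))   # 3^3  = 27
print(up_arrow(3, 2, 3))   # 3^^3 = 3^27 = 7625597484987
# up_arrow(3, 3, 3) is 3^^^3, a power tower of 7625597484987 threes -- already
# unwritable, and the 3^^^^3 from the Pascal's Mugging post is vastly larger still.
```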

If one allows unbounded utilities, then one has allowed utilities on the order of 3^^^^3, almost all of which have no low-entropy representation: writing such a value down would take far more bits than there are particles to store them. In other words, there isn't enough matter to represent such a utility.

Human heads are of limited size and don't use higher mathematics to represent their desires, so bounding the utility function doesn't limit our ability to describe human desire.
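One way to see this (my illustration, not Freeman's): any utility assignment a finite head can actually hold takes only finitely many distinct values, and those can be squashed into [0, 1] without disturbing a single preference comparison.

```python
# Illustrative only: a finitely-representable utility takes finitely many
# values, so rescaling it into [0, 1] preserves every preference ordering.

def bound(utilities):
    """Map a finite dict of outcome -> utility into [0, 1], keeping the order."""
    lo, hi = min(utilities.values()), max(utilities.values())
    span = (hi - lo) or 1.0
    return {outcome: (u - lo) / span for outcome, u in utilities.items()}

raw = {"galaxy explodes": -1e9, "status quo": 0.0, "utopia": 1e6}
bounded = bound(raw)
# Same ranking of outcomes before and after bounding:
assert sorted(raw, key=raw.get) == sorted(bounded, key=bounded.get)
```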

*** Ad hominem is a fallacy. The merit of a proposed FAI solution is a function of the solution, not of who proposed it or how long it took them. An essential step toward overcoming bias is to train oneself not to commit well-known fallacies. There's a good list in "The Art of Controversy" by Schopenhauer; see http://www.gutenberg.org/etext/10731.

Of course, I'm bothering to say this because I have a proposed solution out. See http://www.fungible.com/respect/paper.html.