timtyler comments on Transparency and Accountability - Less Wrong

Post author: multifoliaterose 21 August 2010 01:01PM




Comment author: Perplexed 21 August 2010 09:26:12PM 3 points [-]
  1. Distance.
  2. Tradition in the field of economics.
  3. Mathematical well-behavedness may demand it, if the universe's expansion is not slowing down.
  4. Reciprocity. Future folks aren't concerned about my wishes, so why should I be concerned about theirs?
  5. What makes a life at one time worth the same as a life at a different time?

In a sense, these are flip answers, because I am not really a utilitarian to begin with. My rejection of utilitarianism starts by asking how it is possible to sum up the utilities of different people. It is like adding apples and oranges: there is no natural exchange rate. Utilities are like the subjective probabilities of different people - it might make sense to compute a weighted average, but how do you justify your weighting scheme?
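To make the weighting-scheme objection concrete, here is a toy sketch (all names, utilities, and weights are invented for illustration): the same two outcomes can be ranked in opposite orders depending on an arbitrary choice of interpersonal weights.

```python
# Toy illustration: interpersonal utility aggregation depends entirely
# on an arbitrary choice of weights. Two people, two outcomes.
utilities = {
    "alice": {"outcome_A": 10, "outcome_B": 4},
    "bob":   {"outcome_A": 2,  "outcome_B": 9},
}

def social_utility(outcome, weights):
    """Weighted sum of individual utilities -- the 'exchange rate' problem."""
    return sum(w * utilities[person][outcome] for person, w in weights.items())

# Scheme 1: weight Alice more heavily.
w1 = {"alice": 0.8, "bob": 0.2}
# Scheme 2: weight Bob more heavily.
w2 = {"alice": 0.2, "bob": 0.8}

# Under scheme 1, outcome A is "socially better"; under scheme 2, outcome B is.
print(social_utility("outcome_A", w1), social_utility("outcome_B", w1))
print(social_utility("outcome_A", w2), social_utility("outcome_B", w2))
```

Neither weighting is more "natural" than the other, which is the point: the social ranking is an artifact of the weights, not of the individual utilities.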

I suspect that discussing this topic carefully would take too much of my time from other responsibilities, but I hope this sketch has at least given you some things to think about.

Comment author: timtyler 21 August 2010 09:48:29PM 0 points [-]
Comment author: Perplexed 22 August 2010 01:32:13AM 1 point [-]

Considered. Not convinced. If that was intended as an argument, then EY was having a very bad day.

He is welcome to his opinion but he is not welcome to substitute his for mine.

The ending was particularly bizarre. It sounded like he was saying that treasury bills don't pay enough interest to make up for the risk that the US may not be here 300 years from now. But we should, for example, consider the projected enjoyment of people we imagine visiting our nature preserves 500 years from now, as if their enjoyment were as important as our own, not discounting at all for the risk that they may not even exist.

Comment author: Nick_Tarleton 22 August 2010 03:00:36AM *  2 points [-]

But we should, for example, consider the projected enjoyment of people we imagine visiting our nature preserves 500 years from now, as if their enjoyment were as important as our own, not discounting at all for the risk that they may not even exist.

Eliezer doesn't disagree: as he says more than once, he's talking about pure preferences, intrinsic values. Other risks do need to be incorporated, but it seems better to do so directly, rather than through a discounting heuristic. Larks seems to implicitly be doing this with his P(AGI) = 10^-9.
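A toy numerical sketch of the distinction being drawn here (the rates and utilities are invented): a discount rate and a survival probability can produce identical present values, yet they behave differently as heuristics, because the risk-weighted version updates when we learn more about the risk, while a rate of pure time preference does not.

```python
# Two ways to devalue a utility 500 years out (invented numbers):

def value_with_discount(utility, years, rate):
    """Pure time preference: exponential discounting of future utility."""
    return utility * (1 - rate) ** years

def value_with_risk(utility, years, annual_survival_prob):
    """No intrinsic time preference: weight by P(beneficiaries exist)."""
    return utility * annual_survival_prob ** years

u = 100.0
# A 1%/year discount rate and a 99%/year survival probability give the
# same number today...
print(value_with_discount(u, 500, 0.01))
print(value_with_risk(u, 500, 0.99))
# ...but if we learn the annual risk is actually lower, the risk-weighted
# value rises accordingly; a hardwired discount rate would stay put.
print(value_with_risk(u, 500, 0.999))
```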

Comment author: timtyler 22 August 2010 06:18:24AM *  1 point [-]

Time travel, the past "still existing" - and utilitarianism? I don't buy any of that either - but in the context of artificial intelligence, I do agree that building discounting functions into an agent's ultimate values looks like bad news.

Discounting functions arise because agents don't know about the future - and can't predict or control it very well. However, the extent to which they can't predict or control it is a function of the circumstances and of their own capabilities. If you wire temporal discounting into the ultimate preferences of a super-Deep Blue, it can never self-improve to push its prediction horizon further out as it gains more computing power - you are unnecessarily building limitations into it. Better to have no temporal discounting wired in, and let the machine itself figure out to what extent it can predict and control the future - and so figure out the relative value of the present.
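The argument above can be sketched with a toy model (all numbers invented): an agent whose *values* discount the future is stuck with that horizon forever, while an agent whose discounting merely reflects its prediction uncertainty naturally extends its horizon as its predictive skill improves.

```python
# Toy model: wired-in discounting vs. discounting derived from
# prediction uncertainty (invented parameters throughout).

def wired_in_value(utility, years, discount=0.05):
    """Discounting built into ultimate preferences: fixed forever,
    regardless of how capable the agent becomes."""
    return utility * (1 - discount) ** years

def instrumental_value(utility, years, prediction_skill):
    """Discounting derived from predictive uncertainty: improves as
    the agent self-improves. prediction_skill is in [0, 1)."""
    uncertainty_per_year = 1 - prediction_skill
    return utility * (1 - uncertainty_per_year) ** years

u = 100.0
print(wired_in_value(u, 50))            # permanently near-sighted
print(instrumental_value(u, 50, 0.95))  # weak predictor: similar value
print(instrumental_value(u, 50, 0.999)) # after self-improvement: far
                                        # more weight on distant events
```

The two agents may behave identically at first, but only the second one's effective horizon grows with its capabilities, which is the sense in which wiring discounting into ultimate values builds in an unnecessary limitation.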