ialdabaoth comments on A critique of effective altruism - Less Wrong

64 Post author: benkuhn 02 December 2013 04:53PM




Comment author: ialdabaoth 03 December 2013 01:35:25AM 0 points

Consider a hypothetical paperclip maximizer. It has some resources, it has to choose between using them to make paperclips or using them to develop more efficient ways of gathering resources. A basic positive feedback calculation means the latter will lead to more paperclips in the long run. But if it keeps using that logic, it will keep developing more and more efficient ways of gathering resources and never actually get around to making paperclips.

Can't this be solved through exponential discounting? If paperclips made later are discounted more heavily than paperclips made sooner, then we can settle on a stable strategy for when to optimize and when to execute, based on whether we estimate optimization returns at each stage to be exponential, super-exponential, or sub-exponential.
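To make the proposal concrete, here is a minimal sketch (the discount factor, the growth multipliers, and the greedy one-step comparison are all illustrative assumptions, not anything from the thread). Producing at rate r forever is worth r/(1−γ) in discounted clips; delaying one tick to multiply the rate by g is worth γ·g·r/(1−γ); so a myopic agent optimizes only while γ·g > 1, and sub-exponential returns give it a definite stopping point:

```python
GAMMA = 0.9  # per-tick discount factor (illustrative assumption)

def discounted_clips(rate, delay):
    """Discounted value of producing `rate` clips/tick forever, starting after `delay` ticks."""
    return (GAMMA ** delay) * rate / (1 - GAMMA)

def plan(rate, gains):
    """Spend another tick optimizing only while it beats producing right now.

    `gains[i]` is the rate multiplier from the i-th optimization tick.
    One tick's delay costs a factor of GAMMA, so optimizing wins
    exactly when GAMMA * g > 1.
    """
    steps = 0
    for g in gains:
        if GAMMA * g > 1:
            rate *= g
            steps += 1
        else:
            break
    return steps, rate

# Sub-exponential returns: each optimization tick helps less than the last.
steps, final_rate = plan(1.0, [2.0, 1.5, 1.2, 1.05, 1.01])
```

With these numbers the agent optimizes for three ticks (0.9·1.2 = 1.08 > 1), then stops (0.9·1.05 = 0.945 < 1) and produces at rate 3.6, rather than optimizing forever. With genuinely exponential or super-exponential returns (g constant or growing, with γ·g > 1) this rule never stops, so the stability of the strategy really does hinge on the returns being sub-exponential.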

Comment author: Gurkenglas 03 December 2013 07:39:47PM * 3 points

Finding a problem with the simple algorithm that usually gives you a good outcome doesn't mean you get to choose a new utility function.

Comment author: Gurkenglas 03 December 2013 08:00:56PM * 1 point

Clarifying anti-tldr edit time! If you got the above, no need to read on. (I wanted this to be an edit, but apparently I fail at clicking buttons)

The simple algorithm is the greedy decision-finding method "choose the action that leaves your one-time-tick-into-the-future self with the best possible range of outcomes available via further actions", which you think could handle this problem if only the utility function employed exponential discounting (whether it actually could is irrelevant, since I'm addressing a different point).

But your utility function is part of the territory, and the utility function that you use for calculating your actions is part of the map; it is rather suspicious that you want to tweak your map towards a version that is more convenient to your calculations.

Comment author: Eugine_Nier 03 December 2013 01:49:24AM 0 points

Yes, but Eliezer doesn't believe in discounting terminal values.

Comment author: ialdabaoth 03 December 2013 02:02:23AM 1 point

So, let's be clear: are we talking about what works, or about what we think Eliezer is dumb for believing?

Comment author: Eugine_Nier 05 December 2013 05:56:38AM * -2 points

Well, first, I'm not a consequentialist.

However, the linked post has a point: why should we value future lives less?

Comment author: owencb 03 December 2013 07:33:04PM 1 point

There are questions about why we should discount at all and, if we do, how to choose an appropriate rate.

But even setting those aside: this isn't any more of a solution than the version without discounting. They're similarly reliant on empirical facts about the world (the rate of resource growth); they just give differing answers about how fast that rate needs to be before you should wait rather than cash out.
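The point fits in one line (a sketch; the single-parameter model is my own assumption). Without discounting the rule is "keep optimizing iff growth g > 1"; exponential discounting only moves the threshold to g > 1/γ. Either way, the verdict is decided by the empirical growth rate, not by the decision to discount:

```python
def should_wait(growth, gamma=1.0):
    """Wait (keep optimizing) iff one tick's rate growth outweighs one tick's discount."""
    return growth * gamma > 1

# Undiscounted: any growth above 1 says "keep optimizing forever".
print(should_wait(1.01))              # True
# Discounting raises the bar to 1/gamma, but the verdict still
# hinges on the empirical rate:
print(should_wait(1.01, gamma=0.9))   # False (1.01 * 0.9 = 0.909)
print(should_wait(1.2, gamma=0.9))    # True  (1.2  * 0.9 = 1.08)
```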