
Academian comments on "Proposal: Use logical depth relative to human history as objective function for superintelligence"

Post author: sbenthall | 14 September 2014 08:00PM | 7 points


Comment author: Academian | 15 September 2014 12:07:42AM | 5 points

1) Logical depth seems super cool to me, and is perhaps the best way I've seen for quantifying "interestingness" without mistakenly equating it with "unlikeliness" or "incompressibility".
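(For concreteness, Bennett's usual formalization, stated from memory and so best treated as a sketch: with U a universal prefix machine, x* a shortest program printing x, and a significance parameter s,

    D_s(x) = min { time(p) : U(p) = x and |p| <= |x*| + s },

the running time of the fastest program within s bits of the minimal description. A random string is shallow because it is essentially its own fastest near-minimal description, and a trivially repetitive string is shallow too; so depth tracks neither incompressibility nor unlikeliness. The conditional version D(u/h) is the same definition with h supplied to U as a free input.)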

2) Despite this, Manfred's brain-encoding-halting-times example illustrates a way a D(u/h) / D(u)-optimized future could be terrible... do you think this future would not obtain because, despite being human-brain-based, it would not in fact make much use of being on a human brain? That is, it would have extremely high D(u) and therefore be penalized?

I think it would be easy to rationalize/over-fit here, convincing ourselves that this formula matches our intuitions about what a good future is. More realistically, I suspect that our favorite futures have relatively high D(u/h) / D(u), but not the highest value of D(u/h) / D(u).
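(To fix ideas, here is a toy, computable stand-in for the objective in Python. Real logical depth is uncomputable, so I'm substituting zlib decompression time as a crude depth proxy, and zlib's preset-dictionary feature as a crude way of conditioning on the history h; nothing here is the actual proposal, just an illustration of its shape.)

    import time
    import zlib

    def depth_proxy(u: bytes, trials: int = 1000) -> float:
        # Crude stand-in for D(u): time to re-run the best "description"
        # of u we can actually find, i.e. decompress its zlib-compressed form.
        compressed = zlib.compress(u, 9)
        start = time.perf_counter()
        for _ in range(trials):
            zlib.decompress(compressed)
        return time.perf_counter() - start

    def conditional_depth_proxy(u: bytes, h: bytes, trials: int = 1000) -> float:
        # Crude stand-in for D(u/h): same timing, but the compressor may
        # refer to the "history" h for free via a preset dictionary.
        comp = zlib.compressobj(level=9, zdict=h)
        compressed = comp.compress(u) + comp.flush()
        start = time.perf_counter()
        for _ in range(trials):
            zlib.decompressobj(zdict=h).decompress(compressed)
        return time.perf_counter() - start

    def objective(u: bytes, h: bytes) -> float:
        # The proposed score D(u/h) / D(u), under the proxies above.
        return conditional_depth_proxy(u, h) / depth_proxy(u)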

Comment author: sbenthall | 15 September 2014 02:52:46AM | 1 point

1) Thanks, that's encouraging feedback! I love logical depth as a complexity measure. I've been obsessed with it for years and it's nice to have company.

2) Yes, my claim is that Manfred's doomsday cases would have very high D(u) and would be penalized. That is the purpose of having that term in the formula.

I agree with your suspicion that our favorite futures have relatively high D(u/h) / D(u) but not the highest value of D(u/h) / D(u). I suppose I'd defend a weaker claim, that a D(u/h) / D(u) supercontroller would not be an existential threat. One reason for this is that D(u) is so difficult to compute (even estimating it means searching for near-minimal programs and timing them, which runs into the halting problem) that it would be pretty bogged down...

One reason for making a concrete proposal of an objective function is that if it is pretty good, it can serve as a starting point for further refinement.

Comment author: cousin_it | 18 September 2014 11:43:17AM | 2 points

> I agree with your suspicion that our favorite futures have relatively high D(u/h) / D(u) but not the highest value of D(u/h) / D(u).

Many utility functions have the same feature. For example, I could give the AI some flying robots with cameras, and teach it to count smiling people in the street by simple image recognition algorithms. That utility function would also assign a high score to our favorite future, but not the highest score. Of course the smile maximizer is one of LW's recurring nightmares, like the paperclip maximizer.
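(For concreteness, a sketch of that utility function using OpenCV's stock Haar cascades, which ship with opencv-python. The cascade files and detector calls are real; the thresholds are arbitrary, and of course this is an illustration of the nightmare, not a proposal.)

    import cv2  # pip install opencv-python

    face_cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    smile_cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_smile.xml")

    def smile_utility(frame) -> int:
        # Count the faces in one camera frame that contain a detected smile.
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        count = 0
        for (x, y, w, h) in face_cascade.detectMultiScale(
                gray, scaleFactor=1.3, minNeighbors=5):
            face = gray[y:y + h, x:x + w]
            smiles = smile_cascade.detectMultiScale(
                face, scaleFactor=1.7, minNeighbors=22)
            if len(smiles) > 0:
                count += 1
        return count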

> I suppose I'd defend a weaker claim, that a D(u/h) / D(u) supercontroller would not be an existential threat. One reason for this is that D(u) is so difficult to compute that it would be pretty bogged down...

Any function that's computationally hard to optimize would have the same feature.

What other nice features does your proposal have?