Well, certainly I've been coming from the viewpoint that it doesn't capture these intuitions :P Human values are complicated (I'm assuming you've read the relevant posts here (e.g.)), both in terms of their representation and in terms of how they are cashed out from their representation into preferences over world-states. Thus any solution that doesn't have very good evidence that it will satisfy human values will very likely not do so (small target in a big space).
In terms of the ratio of depth-relative-to-humanity to absolute depth, there are going to be hardly any gains there.
I'd say, gains relative to what? When considering an enslaved Turing Factory full of people versus a happy Montessori School full of people, I don't see why there should be any predictable difference in their logical depth relative to fundamental physics. The Turing Factory only shows up as "simple" on a higher level of abstraction.
If we restrict our search space to "a planet full of people doing something human-ish," and we cash out "logical depth relative to humanity" as operating on a high level of abstraction where this actually has a simple description, then the process seems dominated by whatever squeezes the most high-level-of-abstraction deep computation out of people without actually having a simple pattern in the lower level of abstraction of fundamental physics.
Idea for generally breaking this: If we cash out "logical depth relative to humanity" as being in terms of fundamental physics, and allow ourselves to use a complete human blueprint, then we can use this to encode patterns in a human-shaped object that are simple and high-depth relative to the blueprint but look like noise relative to physics. If both "logical depth relative to humanity" and "logical depth" are on a high, humanish level of abstraction, one encodes high-depth computational results in slight changes to human cultural artifacts that look like noise relative to a high-level-of-abstraction description that doesn't have the blueprints for those artifacts. Etc.
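A minimal sketch of the noise-masking move, to make it concrete. Everything here is illustrative: I'm using a SHA-256-derived one-time pad as a stand-in for "simple given the blueprint, incompressible noise without it," and the byte strings are placeholders, not actual blueprints or deep outputs.

```python
import hashlib

def keystream(blueprint, n):
    """Deterministic pad derived from the blueprint: trivially
    regenerable by anyone holding the blueprint, but (practically)
    indistinguishable from random noise for anyone without it."""
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(blueprint + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:n]

def mask(data, blueprint):
    """XOR data with the blueprint-derived pad. XOR is its own
    inverse, so calling mask() again with the same blueprint
    recovers the original data."""
    pad = keystream(blueprint, len(data))
    return bytes(a ^ b for a, b in zip(data, pad))
```

Relative to the blueprint, the masked string decodes in one cheap step back to the deep content; relative to a description lacking the blueprint, it carries no visible structure at all. That asymmetry is exactly the wedge the attack exploits.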
Maybe this will be more helpful:
If the universe computes things that are not computational continuations of the human condition (which might include resolution to our moral quandaries, if that is in the cards), then it is, with respect to optimizing function g, wasting the perfectly good computational depth achieved by humanity so far. So, driving computation that is not somehow reflective of where humanity was already going is undesirable. The computational work that is favored is work that makes the most of what humanity was up to anyway.
To the extent t...
I attended Nick Bostrom's talk at UC Berkeley last Friday and got intrigued by these problems again. I wanted to pitch an idea here, with the question: Have any of you seen work along these lines before? Can you recommend any papers or posts? Are you interested in collaborating on this angle in further depth?
The problem I'm thinking about (surely naively, relative to y'all) is: What would you want to program an omnipotent machine to optimize?
For the sake of avoiding some baggage, I'm not going to assume this machine is "superintelligent" or an AGI. Rather, I'm going to call it a supercontroller, just something omnipotently effective at optimizing some function of what it perceives in its environment.
As has been noted in other arguments, a supercontroller that optimizes the number of paperclips in the universe would be a disaster. Maybe any supercontroller that was insensitive to human values would be a disaster. What constitutes a disaster? An end of human history. If we're all killed and our memories wiped out to make more efficient paperclip-making machines, then it's as if we never existed. That is existential risk.
The challenge is: how can one formulate an abstract objective function that would preserve human history and its evolving continuity?
I'd like to propose an answer that depends on the notion of logical depth as proposed by C.H. Bennett and outlined in section 7.7 of Li and Vitanyi's An Introduction to Kolmogorov Complexity and Its Applications, which I'm sure many of you have handy. Logical depth is a super fascinating complexity measure that Li and Vitanyi summarize thusly:
The mathematics is fascinating and better read in the original Bennett paper than here. Suffice it presently to summarize some of its interesting properties, for the sake of intuition.