
Comment author: Thomas 21 March 2017 02:08:34PM 1 point [-]

I will present my (computer generated) solutions ASAP. Currently they are still evolving.

Comment author: Oscar_Cunningham 21 March 2017 03:56:20PM *  0 points [-]

I can improve my score to (620sqrt(3)-973)/191 = 0.528... using this arrangement.

EDIT: This arrangement does even better with a score of (19328sqrt(3)-30613)/6143 = 0.466... . Note that there is a tiny corner cut off each trapezium.
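(A quick numerical check of the two closed forms above; the expressions are copied verbatim from the comment, while the arrangements themselves were in linked images and aren't reproduced here.)

```python
from math import sqrt

# Evaluate the two quoted scores numerically.
first = (620 * sqrt(3) - 973) / 191
second = (19328 * sqrt(3) - 30613) / 6143
print(round(first, 3), round(second, 3))  # 0.528 0.466
```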

Comment author: Thomas 21 March 2017 02:08:34PM 1 point [-]

I will present my (computer generated) solutions ASAP. Currently they are still evolving.

Comment author: Oscar_Cunningham 21 March 2017 02:23:50PM 0 points [-]

Okay, sounds exciting!

Comment author: Thomas 15 March 2017 06:06:16AM 0 points [-]

I will publish my solution next Monday.

Comment author: Oscar_Cunningham 21 March 2017 11:59:28AM 0 points [-]

I'm interested to see your solution.

Comment author: turchin 20 March 2017 08:09:59PM 1 point [-]

I don't know where to put my stupid question: if we know examples where some DT is wrong, we probably have some meta-level DT which tells us that the given DT is wrong in this example. So why not try to articulate and use this meta-level DT?

Comment author: Oscar_Cunningham 20 March 2017 10:36:10PM 4 points [-]

This is pretty much how TDT and UDT were discovered.

Comment author: Lumifer 18 March 2017 03:13:35AM 0 points [-]

In other words the VNM theorem says that our AGI has to have a utility function

Still nope. The VNM theorem says that if our AGI sticks to VNM axioms then a utility function describing its preferences exists. Exists somewhere in the rather vast space of mathematical functions. The theorem doesn't say that the AGI "has" it -- neither that it knows it, nor that it can calculate it.
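(For reference, a compact statement of the theorem being paraphrased; the notation is the standard textbook one, not anything from this thread.)

```latex
% Von Neumann–Morgenstern representation theorem (standard statement).
% A preference relation $\preceq$ over lotteries $L = (p_1, x_1; \dots; p_n, x_n)$
% satisfying completeness, transitivity, continuity and independence admits a
% utility function $u : X \to \mathbb{R}$ such that
\[
  L \preceq M
  \quad\Longleftrightarrow\quad
  \sum_i p_i \, u(x_i) \;\le\; \sum_j q_j \, u(y_j),
\]
% where $M = (q_1, y_1; \dots; q_m, y_m)$, and $u$ is unique up to a positive
% affine transformation $u \mapsto a u + b$ with $a > 0$.
```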

Comment author: Oscar_Cunningham 18 March 2017 09:41:44AM 0 points [-]

That's what I meant.

Comment author: Lumifer 17 March 2017 07:53:19PM *  0 points [-]

Taking the VNM axioms as the definition of "coherent" then the VNM theorem proves precisely that "coherent" implies "has a utility function".

Sure, but that's an uninteresting tautology. If we define A as a set of conditions sufficient for B to happen then lo and behold! A implies B.

So in that context the VNM theorem raises the question "Exactly which of the axioms is it advantageous to violate?"

The VNM theorem posits that a utility function exists. It doesn't say anything about how to find it or how to evaluate it, never mind in real time.

It's like asking why humans don't do the Solomonoff induction all the time -- "there must be a reason, what is it?"

Comment author: Oscar_Cunningham 17 March 2017 09:17:22PM 1 point [-]

Sure, but that's an uninteresting tautology. If we define A as a set of conditions sufficient for B to happen then lo and behold! A implies B.

Come on, mathematics is sometimes interesting, right?

The VNM theorem posits that a utility function exists. It doesn't say anything about how to find it or how to evaluate it, never mind in real time.

It's like asking why humans don't do the Solomonoff induction all the time -- "there must be a reason, what is it?"

Yeah okay, I agree with this. In other words the VNM theorem says that our AGI has to have a utility function, but it doesn't say that we have to be thinking about utility functions when we build it or care about utility functions at all, just that we will have "by accident" created one.

I still think that actually using utility functions is a good idea, though I agree that this isn't implied by the VNM theorem.

Comment author: Lumifer 17 March 2017 02:25:03PM 1 point [-]

I'm quite familiar with the VNM utility, but here we are talking about real live meatbag humans, not about mathematical abstractions.

Comment author: Oscar_Cunningham 17 March 2017 07:45:31PM 1 point [-]

You asked

but to the extent that any agent makes coherent goal-driven decisions, it has a utility function

That is not obvious to me. Why is it so? (defining "utility function" might be helpful)

Taking the VNM axioms as the definition of "coherent" then the VNM theorem proves precisely that "coherent" implies "has a utility function".

Anyway, the context of the original post was that humans had an advantage through not having a utility function. So in that context the VNM theorem raises the question "Exactly which of the axioms is it advantageous to violate?".

Comment author: Lumifer 16 March 2017 08:16:09PM *  1 point [-]

but to the extent that any agent makes coherent goal-driven decisions, it has a utility function

That is not obvious to me. Why is it so? (defining "utility function" might be helpful)

Comment author: Oscar_Cunningham 17 March 2017 02:10:56PM 2 points [-]

I'm not sure how rhetorical your question is but you might want to look at the Von Neumann–Morgenstern utility theorem.

Comment author: Oscar_Cunningham 16 March 2017 08:49:13PM 2 points [-]

I think utility functions can produce more behaviours than you give them credit for.

  1. Humans don't have a utility function and make very incoherent decisions. Humans are also the most intelligent organisms on the planet. In fact, it seems to me that the less intelligent an organism is, the easier its behavior can be approximated with a model that has a utility function!

The less intelligent organisms are certainly more predictable. But I think that the less intelligent ones actually can't be described by utility functions and are instead predictable for other reasons. A classic example is the Sphex wasp.

Some Sphex wasps drop a paralyzed insect near the opening of the nest. Before taking provisions into the nest, the Sphex first inspects the nest, leaving the prey outside. During the inspection, an experimenter can move the prey a few inches away from the opening. When the Sphex emerges from the nest ready to drag in the prey, it finds the prey missing. The Sphex quickly locates the moved prey, but now its behavioral "program" has been reset. After dragging the prey back to the opening of the nest, once again the Sphex is compelled to inspect the nest, so the prey is again dropped and left outside during another stereotypical inspection of the nest. This iteration can be repeated several times without the Sphex changing its sequence; by some accounts, endlessly.

So it looks like the wasp has a utility function ("ensure the survival of its children") but in fact it's just following one of a number of fixed "programs". Humans, by contrast, are actually capable of considering several plans and choosing the one they prefer, which I think is much closer to having a utility function. Of course humans are less predictable, but one would always expect intelligent organisms to be unpredictable. To predict an agent's actions you essentially have to mimic its thought processes, which will take longer for more intelligent organisms whether they use a utility function or not.
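(A purely schematic sketch of that contrast; the plans and scores below are invented placeholders, not a model of real wasp or human cognition.)

```python
# "Considering several plans and choosing the preferred one" — the behaviour
# that looks like utility maximisation.  Plans and scores are made up.
candidate_plans = {
    "re-inspect the nest, then drag the prey in": 0.6,
    "drag the prey straight in":                  0.9,
    "search for new prey instead":                0.2,
}

def choose(plans):
    # Compare the options and pick the one with the highest score,
    # rather than running a single fixed, stereotyped program.
    return max(plans, key=plans.get)

print(choose(candidate_plans))  # "drag the prey straight in"
```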

  2. The randomness of human decisions seems essential to human success (on top of other essentials such as speech and cooking). Humans seem to have a knack for sacrificing precious lifetime for fool's errands that very occasionally create benefit for the entire species.

If trying actions at random produces useful results then a utility maximising AI will choose this course. Utility maximisers consider all plans and pick the one with the highest expected utility, and this can turn out to be one that doesn't look like it goes directly towards the goal. Eventually of course the AI will have to turn its attention towards its main goal. The question of when to do this is known as the exploration vs. exploitation tradeoff, and there are mathematical results showing that utility maximisers tend to begin by exploring their options and then turn to exploiting their discoveries once they've learnt enough.
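(A toy sketch of that explore-then-exploit behaviour on a two-armed bandit; the payoff numbers and the length of the exploration phase are assumptions made up for the example, not taken from any particular result.)

```python
import random

# Two-armed bandit: sample both arms uniformly for a while, then commit to
# whichever arm looked best.  All numbers here are invented for the sketch.
random.seed(0)
true_payoffs = [0.3, 0.7]                 # hidden from the agent
estimates, counts = [0.0, 0.0], [0, 0]

for t in range(1000):
    if t < 100:                           # exploration phase
        arm = t % 2
    else:                                 # exploitation phase: best estimate so far
        arm = estimates.index(max(estimates))
    reward = 1 if random.random() < true_payoffs[arm] else 0
    counts[arm] += 1
    estimates[arm] += (reward - estimates[arm]) / counts[arm]

print(estimates, counts)                  # the agent commits to the better arm (index 1)
```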

To define a utility function is to define a (direction towards a) goal. So a discussion of an AI with one, single, unchanging utility function is a discussion of an AI with one, single, unchanging goal. That isn't just unlike the intelligent organisms we know, it isn't even a failure mode of intelligent organisms we know. The nearest approximations we have are the least intelligent members of our species.

Again I think that this sort of behaviour (acting towards multiple goals) can be exhibited by utility maximisers. I'll give a simple example. Consider an agent who can buy any 10 fruits from a market, and suppose its utility function is sqrt(number of oranges) + sqrt(number of apples). Then it buys 5 oranges and 5 apples (rather than just buying 10 apples or 10 oranges). The important thing about the example is that the derivative of the utility function decreases as the number of oranges increases, so the more oranges it already has, the more it will prefer to buy apples instead. This creates a balance. This is just a simple example, but by analogy it would be totally possible to create a utility function that describes a multitude of complex values all simultaneously.
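(A minimal sketch of that calculation, just enumerating the eleven possible baskets.)

```python
from math import sqrt

# Score every way to split 10 fruits between oranges and apples with the
# diminishing-returns utility from the example above.
def utility(oranges, apples):
    return sqrt(oranges) + sqrt(apples)

best = max(range(11), key=lambda oranges: utility(oranges, 10 - oranges))
print(best, 10 - best, round(utility(best, 10 - best), 3))  # 5 5 4.472
```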

  3. Two agents with identical utility functions are arguably functionally identical to a single agent that exists in two instances. Two agents with utility functions that are not identical are at best irrelevant to each other and at worst implacable enemies.

Just like humans, two agents with different utility functions can cooperate through trade. The two agents calculate the outcome if they trade and the outcome if they don't trade, and they make the trade if the utility afterwards is higher for both of them. It's only if their utilities are diametrically opposed that they can't cooperate.
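(A toy sketch of that trade test; the goods, endowments and utility functions are invented purely for illustration.)

```python
from math import sqrt

# Two agents with different diminishing-returns utilities over two goods.
# A proposed trade is accepted only if it raises utility for both agents.
def u_alice(apples, oranges):
    return 2 * sqrt(apples) + sqrt(oranges)      # Alice values apples more

def u_bob(apples, oranges):
    return sqrt(apples) + 2 * sqrt(oranges)      # Bob values oranges more

alice = {"apples": 1, "oranges": 5}
bob   = {"apples": 5, "oranges": 1}

# Proposed trade: Bob gives Alice 2 apples, Alice gives Bob 2 oranges.
alice_after = {"apples": 3, "oranges": 3}
bob_after   = {"apples": 3, "oranges": 3}

gain_alice = u_alice(**alice_after) - u_alice(**alice)
gain_bob   = u_bob(**bob_after) - u_bob(**bob)
print(gain_alice > 0 and gain_bob > 0)  # True: both gain, so the trade happens
```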

Comment author: Good_Burning_Plastic 15 March 2017 06:19:17PM *  0 points [-]

I was about to say "Since you never specified that the shape must be a measurable set ..." and link to here, but since you mention the area of the shape, you do (implicitly) require it to have one.

Comment author: Oscar_Cunningham 15 March 2017 07:29:52PM 0 points [-]

Are all those pieces congruent though?
