
Comment author: TheAncientGeek 25 March 2017 11:38:06AM 0 points [-]

I'm not using a definition, I'm pointing out that standard arguments about UFs depend on ambiguities.

Your definition is abstract and doesn't capture anything that an actual AI could "have" -- for one thing, you can't compute the reals. It also fails to capture what UFs are "for".

Comment author: denimalpaca 25 March 2017 05:52:06PM 0 points [-]

Go read a textbook on AI. You clearly do not understand utility functions.

Comment author: TheAncientGeek 24 March 2017 12:35:09PM *  0 points [-]

Is there an article that presents multiple models of UF-driven humans and demonstrates that what you criticize as contrived actually shows there is no territory to correspond to the map?

Rather than trying to prove the negative, it is more a question of whether these models are known to be useful.

The idea of multiple or changing UFs suffers from a problem of falsifiability, as well. Whenever a human changes their apparent goals, is that a switch to another UF, or a change in UF? Reminiscent of Ptolemaic epicycles, as Ben Goertzel says.

"And you didn't answer my question: is there another way, besides UFs, to guide an agent towards a goal? It seems to me that the idea of moving toward a goal implies a utility function, be it hunger or human programmed."

Implies what kind of UF?

If you are arguing tautologously that having a UF just is having goal-directed behaviour, then you are not going to be able to draw interesting conclusions. If you are going to define "having a UF" broadly, then you are going to have similar problems, and in particular the problem that "the problem of making an AI safe simplifies to the problem of making its UF safe" only works for certain, relatively narrow, definitions of UF. In the context of a biological organism, or an artificial neural net or deep learning AI, the only thing "UF" could mean is some aspect of its functioning that is entangled with all the others. Neither a biological organism nor an artificial neural net or deep learning AI is going to have a UF that can be conveniently separated out and reprogrammed. That definition of UF only belongs in the context of GOFAI or symbolic programming.
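To make the contrast concrete, here is a rough sketch (all class and function names are invented for illustration) of an agent whose UF is an explicit, swappable module versus one where any "UF" is smeared across learned parameters:

```python
import random

class ExplicitUtilityAgent:
    """GOFAI-style: the utility function is a separable piece you could reprogram."""
    def __init__(self, utility_fn):
        self.utility_fn = utility_fn          # swap this out and behaviour changes

    def choose(self, options):
        return max(options, key=self.utility_fn)

class LearnedPolicyAgent:
    """Net-style: preferences emerge from all the weights together; nothing is labelled 'utility'."""
    def __init__(self, n_inputs, n_actions, seed=0):
        rng = random.Random(seed)
        self.w = [[rng.gauss(0, 1) for _ in range(n_inputs)] for _ in range(n_actions)]

    def choose(self, state):
        # editing "the UF" here would mean retraining, not swapping a single module
        scores = [sum(wi * si for wi, si in zip(row, state)) for row in self.w]
        return scores.index(max(scores))
```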

There is no point in defining a term broadly to make one claim come out true, if it is only an intermediate step towards some other claim which doesn't come out as true under the broad definition.

Comment author: denimalpaca 24 March 2017 08:10:02PM 0 points [-]

My definition of utility function is the one commonly used in AI: a mapping from states to real numbers, u: E -> R, where E is the set of all possible states and R is the one-dimensional reals.
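Concretely, a minimal sketch of that definition might look like the following (the grid-world states and goal are invented purely for illustration):

```python
from typing import Tuple

State = Tuple[int, int]        # a toy state: an agent's (x, y) position on a grid
GOAL: State = (3, 4)

def utility(state: State) -> float:
    """u: E -> R -- map each state in E to a real number (here, closeness to a goal cell)."""
    x, y = state
    gx, gy = GOAL
    return -((x - gx) ** 2 + (y - gy) ** 2)   # higher is better; 0.0 exactly at the goal

# An agent "driven by" this UF just moves to whichever candidate state scores highest.
print(max([(0, 0), (1, 4), (3, 3)], key=utility))   # -> (3, 3)
```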

What definition are you using? I don't think we can have a productive conversation until we both understand each other's definitions.

Comment author: gjm 24 March 2017 12:12:02AM 0 points [-]

I still don't understand. (Less tactfully, I think what you're saying is simply wrong; but I may be missing something.)

Suppose we have one simulated photon with 1000 units of energy and another with 2000 units of energy. Here is the binary representation of the number 1000: 1111101000. And here is the binary representation of the number 2000: 11111010000. The second number is longer -- by one bit -- and therefore may take a little more energy to do things with; but it's only 10% bigger than the first number.

Now, if we imagine that eventually each of those photons gets turned into lots of little blobs carrying one unit of energy each, or in some other way has a bunch of interactions whose number is proportional to its energy, then indeed you end up with an amount of simulation effort proportional to the energy. But it's not clear to me that that must be so. And if most interactions inside the simulation involve the exchange of a quantity of energy that's larger than the amount of energy required to simulate one interaction -- which seems kinda unlikely, which is one reason why I am sympathetic to your argument overall, but again I see no obvious way to rule it out -- then even if simulation effort is proportional to energy the relevant constant of proportionality could be smaller than 1.

Comment author: denimalpaca 24 March 2017 03:58:30PM 0 points [-]

I tried to see if anyone else had previously made my argument (but better); instead I found these arguments:

http://rationalwiki.org/wiki/Simulated_reality#Feasibility

I think the feasibility argument described here better encapsulates what I'm trying to get at, and I'll defer to this argument until I can better (more mathematically) state mine.

"Yet the number of interactions required to make such a "perfect" simulation are vast, and in some cases require an infinite number of functions operating on each other to describe. Perhaps the only way to solve this would be to assume "simulation" is an analogy for how the universe (operating under the laws of quantum mechanics) acts like a quantum computer - and therefore it can "calculate" itself. But then, that doesn't really say the same thing as "we exist in someone else's simulation"." (from the link).

This conclusion about the universe "simulating itself" is really what I'm trying to get at: it would take the same amount of energy to simulate the universe as there is energy in the universe, so the most likely conclusion is a "self-simulating universe", which is of course just a base universe.

Comment author: gjm 23 March 2017 12:06:35AM 0 points [-]

"To simulate the sun exactly as we know it would take MORE energy than the sun, because the entire energy of the sun must be simulated and we must account for the energy lost due to heat or other factors as an engineering concern."

I don't understand this argument. If it's appealing to a general principle that "simulating something with energy E requires energy at least E" then I don't see any reason why that should be true. Why should it take twice as much energy to simulate a blue photon as a red photon, for instance?

(I am sympathetic to the overall pattern of your argument; I also do not expect civilizations like ours to run a lot of ancestral simulations and have never understood why they should be expected to, and I suspect that one reason why not is that the resources to do it well would be very large and even if it were possible there ought to be more useful things to do with those resources.)

Comment author: denimalpaca 23 March 2017 10:17:23PM 0 points [-]

Let me be a little more clear. Let's assume that we're in a simulation, and that the parent universe hosting ours is the top level (for whatever reason, this is just to avoid turtles all the way down). We know that we can harness the energy of the sun, because not only do plants utilize that energy to metabolize, but we also can harness that energy and use it as electricity; energy can transfer.

Some machine that we're being simulated on must take these kinds of interactions into account and make them happen. The machine must represent the sun somehow, perhaps as 0s and 1s. This encoding takes energy, and if we were to simply encode all the energy of the sun, the potential energy of the sun must exist somewhere in that machine. Even if the sun's information is compressed, it would still have to be decompressed when used (or else we have a "lossy" sun, not good if you don't want your simulations to figure out they're in a simulation) - and compressing/decompressing takes energy.

We know that even in a perfect simulation, the sun must have the same amount of energy as outside the simulation, otherwise it is not a perfect simulation. So if a blue photon has twice as much energy as a red photon, then that fact is what causes twice as much energy to be encoded in a simulated blue photon. This energy encoding is necessary if/when the blue photon interacts with something.

Said another way: If, in our simulation, we encode the energy of physical things with the smallest number of bits possible to describe that thing, and blue photons have twice as much energy as red photons, then it should take X bits to describe the energy of the red photon and 2*X bits to describe the blue photon.
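For concreteness, the claim as stated holds for an encoding whose length is proportional to the energy (unary-style, one symbol per unit of energy); with an ordinary binary encoding the length grows only logarithmically, which is the distinction gjm presses above. A toy sketch (the numbers and function names are invented for illustration):

```python
def proportional_encoding(energy_units: int) -> str:
    """Unary-style: representation length scales linearly with the energy."""
    return "1" * energy_units

def binary_encoding(energy_units: int) -> str:
    """Positional: representation length scales with log2 of the energy."""
    return bin(energy_units)[2:]

red, blue = 1000, 2000   # toy units: the blue photon carries twice the energy of the red one
print(len(proportional_encoding(red)), len(proportional_encoding(blue)))   # 1000 2000
print(len(binary_encoding(red)), len(binary_encoding(blue)))               # 10 11
```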

As to extra energy, as a practical (engineering) matter alone it would take more energy to simulate a thing even after the encoding for the thing is done: in our universe, there are no perfect energy transfers, some is inevitably lost as heat, so it would take extra energy to overcome this loss. Secondly, if the simulation had any meta-data, that would take extra information and hence extra energy.

Comment author: g_pepper 22 March 2017 11:10:16PM 0 points [-]

"Case 2 seems far, far more likely than case 3, and without a much more specific definition of 'technological maturity', I can't make any statement on 1. Why does case 2 seem more likely than 3?"

"Technical maturity" as used in the first disjunct means "capable of running high-fidelity ancestor simulations". So, it sounds like you are arguing for the 1st disjunct (or something very close to it) rather than the second, since you are arguing that, due to energy constraints, a civilization like ours would be incapable of reaching technological maturity.

Comment author: denimalpaca 23 March 2017 09:58:55PM 0 points [-]

Yes, in that case I'm arguing that case 1 cannot happen. Although I find it a little tediously tautological (and even more so reductive) to define technological maturity as being solely the technology that makes this disjunction make sense...

Comment author: denimalpaca 22 March 2017 10:08:28PM 0 points [-]

"(1) civilizations like ours tend to self-destruct before reaching technological maturity, (2) civilizations like ours tend to reach technological maturity but refrain from running a large number of ancestral simulations, or (3) we are almost certainly in a simulation."

Case 2 seems far, far more likely than case 3, and without a much more specific definition of "technological maturity", I can't make any statement on 1. Why does case 2 seem more likely than 3?

Energy. If we are to run an ancestral simulation that even remotely wants to correctly simulate as complex phenomenon as weather, we would probably need the scale of the simulation to be quite large. We would definitely need to simulate the entire earth, moon, and sun, as the physical relationships between these three are very intertwined. Now, let's focus on the sun for a second, because it should provide us with all the evidence we need that a simulation would be implausible.

The sun has a lot of energy, and to simulate it would itself require a lot of energy. To simulate the sun exactly as we know it would take MORE energy than the sun, because the entire energy of the sun must be simulated and we must account for the energy lost due to heat or other factors as an engineering concern. So just to properly simulate the sun, we'd need to generate more energy than the sun has, which already seems very implausible on earth, given we can't build a reactor larger than the sun on the earth. If we extend this argument to simulating the entire universe, it seems impossible that humans would ever have the necessary energy to simulate all the energy in the universe, so we could only ever simulate a part of the universe or a smaller universe. This again follows from the fact that perfectly simulating something requires more energy than the thing being simulated.

Comment author: denimalpaca 22 March 2017 09:46:50PM 1 point [-]

You should look up the phrase "planned obsolescence". It's a concept taught in many engineering schools. Apple employs it in its products. The basic idea is similar to your thoughts under "Greater Global Wealth": the machine is designed to have a lifetime that is significantly shorter than what is possible, specifically to get users to keep buying a machine. This is essentially subscription-izing products; subscriptions are, especially today in the startup world, generally a better business model than selling one product one time (or even a couple of times).

With phones, this makes perfect sense, given the pace of advancements in the phones, generation after generation.

While you would think that a poor person would optimize for durability, often durability is more expensive, meaning that the poor person's only real choice is a lower-quality product that does not last as long.

"Better materials science: Globally, materials science has improved. Hence, at the local level, manufacturers can get away with making worse materials." This doesn't really follow to me. There are many reasons a manufacturer would use worse materials than the global "best materials", including lower costs. It seems to me that your idea of 'greater global implies worse local' can be equally explained as a phenomenon of capitalism, where the need to make an acceptable product as cheaply as possible does not often align with making the best product at whatever the cost.

Comment author: TheAncientGeek 22 March 2017 02:57:47PM 2 points [-]

The basic problem is the endemic confusion between the map (the UF as a way of modelling an entity) and the territory (the UF as an architectural feature that makes certain things happen).

The fact that there are multiple ways of modelling humans as UF-driven, and the fact that they are all a bit contrived, should be a hint that there may be no territory corresponding to the map.

Comment author: denimalpaca 22 March 2017 06:00:08PM 0 points [-]

Is there an article that presents multiple models of UF-driven humans and demonstrates that what you criticize as contrived actually shows there is no territory to correspond to the map? Right now your statement doesn't have enough detail for me to be convinced that UF-driven humans are a bad model.

And you didn't answer my question: is there another way, besides UFs, to guide an agent towards a goal? It seems to me that the idea of moving toward a goal implies a utility function, be it hunger or human programmed.

Comment author: TheAncientGeek 20 March 2017 10:13:34PM 0 points [-]

You could model humans as having varying UFs, or having multiple UFs...or you could give up on the whole idea.

Comment author: denimalpaca 21 March 2017 06:46:20PM 0 points [-]

Why would I give up the whole idea? I think you're correct in that you could model a human with multiple, varying UFs. Is there another way you know of to guide an intelligence toward a goal?

Comment author: denimalpaca 17 March 2017 05:51:57PM 0 points [-]

I think you're getting stuck on the idea of one utility function. I like to think humans have many, many utility functions. Some we outgrow, some we "restart" from time to time. For the former, think of a baby learning to walk. There is a utility function, or something very much like it, that gets the baby from sitting to crawling to walking. Once the baby learns how to walk, though, the utility function is no longer useful; the goal has been met. Now this action moves from being modeled by a utility function to a known action that can be used as input to other utility functions.
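As a very rough sketch of that picture (the structure and thresholds are invented purely for illustration), one of these "learning" utility functions might retire once its goal is met and become a primitive that other utility functions can build on:

```python
class SkillUF:
    """A toy utility function that guides learning a skill, then retires once the goal is met."""
    def __init__(self, name: str, goal_threshold: float):
        self.name = name
        self.goal_threshold = goal_threshold
        self.retired = False

    def utility(self, proficiency: float) -> float:
        return proficiency                     # while learning, more proficiency is simply better

    def update(self, proficiency: float) -> None:
        if proficiency >= self.goal_threshold:
            self.retired = True                # goal met: the skill becomes an available action

walking = SkillUF("walking", goal_threshold=0.9)
walking.update(0.95)
print(walking.retired)   # True -- "walking" is now an input to other UFs, not an active goal
```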

As best as I can tell, human general intelligence comes from many small intelligences acting in a cohesive way. The brain is structured like this, as a bunch of different sections that do very specific things. Machine models are moving in this direction; DeepMind's Go neural net playing a version of itself to get better is a good example.
