Comment author: Zaq 14 May 2013 05:21:33AM *  7 points [-]

Just thought of something:

How sure are we that P(there are N people) isn't already at least as small as 1/N for sufficiently large N, even without a leverage penalty? The OP seems to be arguing that the complexity penalty on the prior is insufficient to generate such a low probability, since it doesn't take much additional complexity to describe scenarios with arbitrarily more people. Yet it seems to me that beyond some sufficiently large number, P(there are N people) must drop faster than 1/N, because our prior must be normalized. That is:

Sum(all non-negative integers N) of P(there are N people) = 1.

If there were some integer M such that for all n > M, P(there are n people) >= 1/n, the above sum would not converge. If we are to have a normalized prior, there must be a faster-than-1/N falloff to the function P(there are N people).

In fact, if one demands that my priors yield a finite expected number of people in the universe/multiverse, then my priors must diminish faster than 1/N^2 (so that the sum of N*P(there are N people) converges).

TL;DR: If your priors assign P(there are N people) >= 1/N for all sufficiently large N (including N = 3^^^3), then you don't have a normalized distribution of priors. If they assign P(there are N people) >= 1/N^2 for all sufficiently large N, then your expected number of people in the multiverse is divergent/infinite.
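The convergence argument can be checked numerically. A rough sketch (the partial-sum cutoffs and helper name here are arbitrary choices for illustration):

```python
# Illustrate the normalization argument: a prior with P(N people) ~ 1/N
# cannot be normalized (the harmonic series diverges), while a 1/N^2
# falloff gives a convergent sum (to pi^2/6 ~ 1.6449).

def tail_sum(f, terms):
    return sum(f(n) for n in range(1, terms + 1))

# Partial sums of 1/N keep growing without bound (divergence):
s_10k  = tail_sum(lambda n: 1 / n, 10_000)
s_100k = tail_sum(lambda n: 1 / n, 100_000)
assert s_100k - s_10k > 2  # still adding ~ln(10) per factor-of-10 of terms

# Partial sums of 1/N^2 have essentially stopped moving (convergence):
t_10k  = tail_sum(lambda n: 1 / n**2, 10_000)
t_100k = tail_sum(lambda n: 1 / n**2, 100_000)
assert t_100k - t_10k < 1e-3
```

The same contrast drives the 1/N^2 claim: under P(n) proportional to 1/n^2, the expectation sum of n*P(n) is itself a harmonic series, which diverges.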

Comment author: Zaq 09 May 2013 06:15:04PM 3 points [-]

Just gonna jot down some thoughts here. First a layout of the problem.

  1. Expected utility is a product of two numbers: the probability of the event times the utility generated by the event.
  2. Traditionally speaking, when the event is claimed to affect 3^^^3 people, the utility generated is on the order of 3^^^3.
  3. Traditionally speaking, there's nothing about the 3^^^3 people that requires a super-exponentially large extension to the complexity of the system (the universe/multiverse/etc.). So the probability of the event does not scale like 1/(3^^^3).
  4. Thus the expected payoff becomes enormous, and you should pay the dude $5.
  5. If you actually follow this, you'll be mugged by random strangers offering to save 3^^^3 people or whatever super-exponential numbers they can come up with.

In order to avoid being mugged, your suggestion is to apply a scale penalty (leverage penalty) to the probability. You then notice that this has some very strange effects on your epistemology: you become incapable of ever believing the $5 will actually help, no matter how much evidence you're given, even though evidence can make the expected payoff large. You then respond to this problem with what appears to be an excuse to be illogical and/or non-Bayesian at times (due to finite computing power).

It seems to me that an alternative would be to rescale the utility value, instead of the probability. This way, you wouldn't run into any epistemic issues anywhere, because you aren't messing with the epistemics.

I'm not proposing we rescale Utility(save X people) by a factor 1/X, as that would make Utility(save X people) = Utility(save 1 person) all the time, which is obviously problematic. Rather, my idea is to make Utility a per capita quantity. That way, when the random hobo tells you he'll save 3^^^3 people, he's making a claim that requires there to be at least 3^^^3 people to save. If this does turn out to be true, keeping your Utility as a per capita quantity will require a rescaling on the order of 1/(3^^^3) to account for the now-much-larger population. This gives you a small expected payoff without requiring problematically small prior probabilities.

It seems we humans may already do a rescaling of this kind anyway. We tend to value rare things more than we would if they were common, tend to protect an endangered species more than we would if it weren't endangered, and so on. But I'll be honest and say that I haven't really thought the consequences of this utility re-scaling through very much. It just seems that if you need to rescale a product of two numbers and rescaling one of the numbers causes problems, we may as well try rescaling the other and see where it leads.
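A minimal sketch of the per-capita idea, with entirely hypothetical numbers and a made-up `expected_payoff` helper (this is just one way the rescaling could be formalized):

```python
# Sketch of the per-capita rescaling (hypothetical numbers throughout).
# If utility is measured per capita, a claim of saving X people is also a
# claim that the population is at least X, so the utility term is
# rescaled by ~1/X and the expected payoff stays bounded.

def expected_payoff(p_claim_true, people_saved, baseline_population):
    population_if_true = max(baseline_population, people_saved)
    per_capita_utility = people_saved / population_if_true  # always <= 1
    return p_claim_true * per_capita_utility

# A mundane charity claim: save 100 people out of ~8 billion.
mundane = expected_payoff(0.5, 100, 8_000_000_000)
assert 0 < mundane < 1e-7

# The mugger's claim: save 10**100 people. Even granting probability 0.01,
# the per-capita rescaling caps the utility term at 1.
mugging = expected_payoff(0.01, 10**100, 8_000_000_000)
assert mugging <= 0.01  # bounded, no matter how large the claim
```

The key feature is that the mugger's own claim inflates the denominator, so the expected payoff can never blow up the way the raw 3^^^3 calculation does.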

Any thoughts?

Comment author: [deleted] 07 January 2013 04:40:24AM *  0 points [-]

The reason saving lives is ~linear while watching the same movie is not, is about where you are on your utility curve.

Let's assume for a minute that utility over movies and lives are both a square root or something. Any increasing function with diminishing returns will do. The point is that we are going to get this result even if they are exactly the same utility curve.

Watching the movie once gives 1 utilon. Watching it 100 times gives 10 utilons. Easy peasy.

Saving lives is a bit different. We aren't literally talking about the difference between 0 people and n people; we are talking about the difference between a few billion and a few billion + n. Any increasing function with diminishing returns will be approximately linear over a change that small relative to the baseline, so for small games, shut up and multiply.
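The local-linearity claim can be made concrete. A sketch, with sqrt standing in for the utility curve and an arbitrary baseline population:

```python
# Sketch: any smooth diminishing-returns utility looks linear for small
# changes around a large baseline. With u(n) = sqrt(n) and a baseline of
# a few billion lives, the gain from saving n more is ~ n * u'(baseline).
import math

baseline = 4_000_000_000

def gain(n):
    return math.sqrt(baseline + n) - math.sqrt(baseline)

# Marginal gains are nearly constant: saving 200 lives is worth almost
# exactly twice saving 100, even though sqrt itself is far from linear.
ratio = gain(200) / gain(100)
assert abs(ratio - 2.0) < 1e-6
```

This is just a first-order Taylor expansion in disguise: for n much smaller than the baseline, gain(n) is approximately n / (2 * sqrt(baseline)), which is linear in n.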

By this same argument, the fact that lives are locally linear is not much evidence at all (LR ~ 1) that they are globally linear, because there aren't any coherent utility functions that aren't locally linear at this scale. (Unless you only care about how many lives you, individually, save, which isn't exactly coherent either, but for other reasons.)

(I think the morally proper thing to talk about is people dying, not people living, because we are talking about saving lives, not birthing babies. But the argument is analogous; you get the idea.)

I hope this helps you.

In response to comment by [deleted] on Boredom vs. Scope Insensitivity
Comment author: Zaq 16 January 2013 10:34:26PM 0 points [-]

Uh... what?

Sqrt(a few billion + n) is approximately Sqrt(a few billion). Increasing functions with diminishing returns don't approach linearity at large values; their growth becomes really small (way sub-linear, or nearly constant) at high values.

This may be an accurate description of what's going on (if, say, our value for re-watching movies falls off slower than our value for saving multiple lives), but it does not at all strike me as an argument for treating lives as linear. In fact, it strikes me as an argument for treating life-saving as more sub-linear than movie-watching.

In response to Nonperson Predicates
Comment author: Zaq 22 October 2012 07:21:45PM 0 points [-]

Food for thought:

  1. This whole post seems to assign moral values to actions, rather than states. If it is morally negative to end a simulated person's existence, does this mean something different than saying that the universe without that simulated person has a lower moral value than the universe with that person's existence? If not, doesn't that give us a moral obligation to create and maintain all the simulations we can, rather than avoiding their creation? The more I think about this post, the more it seems that the optimum response is to simulate as many super-happy people as possible, and to hell with the non-simulated world (assuming the simulated people would vastly outweigh the non-simulated people in terms of 'amount experienced').

  2. You are going to die, and there's nothing your parents can do to stop that. Was it morally wrong for them to bring about your existence in the first place?

  3. Suppose some people have crippling disabilities that cause large amounts of suffering in their lives (arguably, some people do). If we could detect the inevitable development of such disabilities at an early embryonic stage, would we be morally obligated to abort the fetuses?

  4. If an FAI is going to run a large number of simulations, is there some Law of Large Numbers result that tells us that the simulations experiencing great amounts of pleasure match or overwhelm the simulations experiencing great amounts of pain (or could we construct the algorithms in such a way as to produce this result)? If so, we may be morally obligated to not solve this problem.

  5. Assuming you support people's "right to die," what if we simply ensured that all simulated agents ask to be deleted at the end of their run? (I am here reminded of a vegetarian friend of mine who decided the meat industry would be even more horrible if we managed to engineer cows that asked to be eaten).

Comment author: rkyeun 30 July 2012 01:55:42AM 0 points [-]

"Perform the experiment a hundred times, and—" Okay, let's talk about the ten trillionth digit of pi, then. Single-shot problem, no "long run" you can measure.

And there goes my belief in any kind of probability as a phenomenon. I don't know what the ten trillionth digit of pi is, but I know the algorithm which generates it, and it never involves a die roll or coin flip of any kind. And if the universe is to be lawful, it doesn't roll dice either. There is no probability. To say there was is to say the ten trillionth digit of pi might somehow have come out differently. And that would be unlawful.

Comment author: Zaq 09 August 2012 04:25:51PM 1 point [-]

This is silly. To say that there is some probability in the universe is not to say that everything has randomness to it. People arguing that there is intrinsic probability in physics don't argue that this intrinsic probability finds its way into the trillionth digit of pi.

Many Physicists: If I fire a single electron at two slits, with a detector placed immediately after one of the slits, then I detect the electron half the time. Furthermore, leading physics indicates that no amount of information will ever allow me to accurately predict which trials will result in a detected electron; I can determine a 50/50 chance for detection/non-detection, and that's the limit of predictability. Thus it's safe to say that the 50/50 is a property of the experimental set-up, and not a property of how much I know about the set-up.

Pretty Much Zero Physicists: The above indicates that the trillionth digit of pi is in a superposition until we calculate it, at which point it collapses to a single value.
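The "Many Physicists" position above is just the Born rule applied to an equal superposition over the two paths. A toy sketch (the state and labels here are the standard textbook idealization, not anything from the original comment):

```python
# Sketch of where the 50/50 comes from: with a detector just behind one
# slit, the electron's state is an equal superposition of the two paths,
# and the Born rule gives |amplitude|^2 = 1/2 for detection. The point
# is that this probability is a property of the setup, not of ignorance.
import math

# Equal superposition over the two slits: (|left> + |right>) / sqrt(2)
amp_left = 1 / math.sqrt(2)
amp_right = 1 / math.sqrt(2)

p_detect = abs(amp_left) ** 2     # detector behind the left slit fires
p_no_detect = abs(amp_right) ** 2

assert abs(p_detect - 0.5) < 1e-12
assert abs(p_detect + p_no_detect - 1.0) < 1e-12
```

Nothing in this calculation says anything about digits of pi; the probability attaches to the measurement setup, which is the comment's point.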

In response to comment by Zaq on SotW: Be Specific
Comment author: Alicorn 14 April 2012 06:36:19AM -1 points [-]

Can you recite that whole list in under two seconds?

In response to comment by Alicorn on SotW: Be Specific
Comment author: Zaq 20 April 2012 05:59:43PM 0 points [-]

Replace "the next two seconds" with "the two seconds subsequent to my finishing this wish description"

In response to SotW: Be Specific
Comment author: thescoundrel 04 April 2012 03:26:00PM 1 point [-]

Genie's Folly

A near-omnipotent being is offering you a single wish. It is known that the Genie will attempt to implement the wish in a way that results in a net decrease of utility for the wisher, but is bound by any constraints explicitly written into the wish. Write your wish in such a way that the Genie can only implement it so that you have a net increase in utility. Bonus points if you wish for something related to a current problem you are solving; e.g., I wish I ran a successful startup with x following properties, which avoids y pitfalls in z ways.

Comment author: Zaq 14 April 2012 06:01:57AM 0 points [-]

Constraint: Within the next two seconds, you must perform only the tasks listed, which you must perform in the specified order.

Task 1. Exchange your definition of decrease with your definition of increase.
Task 2. --insert wish here--
Task 3. Self-terminate.

This is of course assuming that I don't particularly care for the genie's life.

In response to comment by Zaq on Timeless Physics
Comment author: Oscar_Cunningham 30 March 2012 11:05:29PM *  0 points [-]

"Does it make sense to say that the global rate of motion could slow down, or speed up, over the whole universe at once—so that all the particles arrive at the same final configuration, in twice as much time, or half as much time? You couldn't measure it with any clock, because the ticking of the clock would slow down too."

"This one doesn't make as much sense to me. This is not just a translation but is actually a re-scaling. If you rescale time separately from space then you will have problems because you will qualitatively change the metric (special relativity under t -> 2t no longer uses a Minkowski metric). This in turn changes the geometric structure of spacetime. If you rescale both time and space then you have a conformal transformation, but this transformation is not a Lorentz transformation. I'm not so sure physics is invariant under such transformations."

If you change the value of c as you scale time then physics will stay the same.
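This claim can be sketched with the flat-spacetime line element (a sketch only, taking the t -> 2t relabeling at face value):

```latex
% Flat-spacetime (Minkowski) line element:
\[
  ds^2 = -c^2\,dt^2 + dx^2 + dy^2 + dz^2 .
\]
% Relabel time as t' = 2t (everything "takes twice as long" in the
% new coordinate), so dt = dt'/2:
\[
  ds^2 = -\frac{c^2}{4}\,dt'^2 + dx^2 + dy^2 + dz^2
       = -c'^2\,dt'^2 + dx^2 + dy^2 + dz^2 ,
  \qquad c' = \frac{c}{2} .
\]
```

The geometry is Minkowski again, but only if c is rescaled along with t; the speed of light in the new coordinate is c/2, which is what "change the value of c as you scale time" amounts to.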

Comment author: Zaq 06 April 2012 08:19:01PM 0 points [-]

Uh... what?

c is the speed of light. It's an observable. If I change c, I've made an observable change in the universe, so the universe no longer looks the same?

Or are you saying that we'll change t and c both, but the measured speed of light will become some function of c and t that works out to remain the same? As in, c is no longer the measured speed of light (in a vacuum)? Then can't I just identify the difference between this universe and the t -> 2t universe by seeing whether or not c is the speed of light?

I also think you're stuck on restricting yourself only to E&M using Special Relativity. If you take t -> 2t you change the metric from Minkowski space to some other space, and that means that you'll have gravitational effects where there previously weren't gravitational effects. You might be able to salvage that in some way, but it's going to be a lot more complicated than just changing the value for c. The only thing I can think of is to re-define the 4-vector dot product and the transformation laws for objects with Lorentz indices, and even that might not end up being consistent.

In response to Timeless Physics
Comment author: Zaq 30 March 2012 10:34:14PM 1 point [-]

A couple of things:

  1. You begin by describing time translation invariance, even relating it to space translation invariance. This is all well and good, except that you then ask:

"Does it make sense to say that the global rate of motion could slow down, or speed up, over the whole universe at once—so that all the particles arrive at the same final configuration, in twice as much time, or half as much time? You couldn't measure it with any clock, because the ticking of the clock would slow down too."

This one doesn't make as much sense to me. This is not just a translation but is actually a re-scaling. If you rescale time separately from space then you will have problems because you will qualitatively change the metric (special relativity under t -> 2t no longer uses a Minkowski metric). This in turn changes the geometric structure of spacetime. If you rescale both time and space then you have a conformal transformation, but this transformation is not a Lorentz transformation. I'm not so sure physics is invariant under such transformations.

  2. The electroweak force has been observed to violate both charge conjugation symmetry and parity symmetry. However, any Lorentz-invariant physics must be symmetric under CPT (charge conjugation + parity + time reversal). Thus if our universe is Lorentz invariant, it is not time-reversal invariant. So you will at least need to keep the direction of time, even if you are able to otherwise eliminate t.

"@Stirling: If you took one world and extrapolated backward, you'd get many pasts. If you take the many worlds and extrapolate backward, all but one of the resulting pasts will cancel out! Quantum mechanics is time-symmetric."

Um... no. As I explained above, lorentz invariance plus CP violation in electroweak experiments indicate that the universe is not invariant under time-reversal. http://en.wikipedia.org/wiki/CP_violation

Eh... correction. Quantum Mechanics may be time-symmetric, but quantum field theories including weak interactions are not.

Comment author: Eliezer_Yudkowsky 27 August 2007 03:58:08AM 5 points [-]

Creeping into his soul, he felt the first faint tinges of despair.

After all these posts on how the strength of an idea is what it excludes, forbids, prohibits, people are still citing positive examples as proof of the power of emergence? Tell me what it isn't!

Comment author: Zaq 21 November 2010 10:07:44PM 1 point [-]

The even/odd attribute of a collection of marbles is not an emergent phenomenon. This is because as I gradually (one by one) remove marbles from the collection, the collection has a meaningful even/odd attribute all the way down, no matter how few marbles remain. If an attribute remains meaningful at all scales, then that attribute is not emergent.

If the accuracy of fluid mechanics were nearly 100% for 500+ water molecules and then suddenly dropped to something like 10% at 499 water molecules, then I would not count fluid mechanics as an emergent phenomenon. I guess I would word this as "no jump discontinuities in the accuracy vs. scale graph."
