Comments
jacobt

Arrow's Theorem doesn't say anything about strategic voting. The only reasonable strategyproof voting system I know of is random ballot (pick a voter uniformly at random; their favorite wins). I'm currently trying to work out a voting system based on finding the Nash equilibrium (which may be mixed) of approval voting, and this system might also be strategyproof.
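
A minimal sketch of random ballot, assuming each ballot just names the voter's favorite (the representation and names are illustrative):

```python
import random

def random_ballot(ballots):
    """Random ballot: elect the favorite of one uniformly random voter.

    Truthfulness is a dominant strategy here, since naming anyone other
    than your true favorite can only hurt you in the event your ballot
    is the one drawn.
    """
    return random.choice(ballots)

# Example: three voters, each ballot naming a favorite candidate
print(random_ballot(["Alice", "Bob", "Alice"]))
```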

When I said a linear combination of utility functions, I meant that you fix the scaling factors at the start and don't change them. You could make them all 1, for example. Your voting system (described in the last paragraph) is a combination of range voting and IRV. If everyone range-votes so that their favorite gets 1 and everyone else gets -1, it's identical to IRV, and it shares the same problems, such as non-monotonicity. I suspect you will also get non-monotonicity when the votes aren't of the "favorite gets 1 and everyone else gets -1" form.

EDIT: I should clarify: it's not exactly 1 for your favorite and -1 for everyone else. It's 1 for your favorite and close to -1 for everyone else, chosen so that when your favorite is eliminated and the ballot is rescaled, it becomes 1 for your next favorite and close to -1 for everyone else.
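
Here is a sketch of the limiting case just described, under my reading of the scheme: each round the lowest range-scorer is eliminated, and every voter's rescaled ballot gives 1 to their current favorite and (in the limit) -1 to everyone else. The function name and example electorate are mine:

```python
def iterated_range_vote(rankings, candidates):
    """Range voting with repeated elimination of the lowest scorer.

    Each round, every voter gives 1 to their favorite among the
    remaining candidates and -1 to the rest (the limiting ballots from
    the EDIT above).  Each candidate's total is then 2*(plurality
    count) - (number of voters), so eliminations match IRV exactly.
    """
    remaining = set(candidates)
    while len(remaining) > 1:
        scores = {c: 0.0 for c in remaining}
        for ranking in rankings:
            favorite = next(c for c in ranking if c in remaining)
            for c in remaining:
                scores[c] += 1.0 if c == favorite else -1.0
        remaining.remove(min(remaining, key=scores.get))
    return remaining.pop()

# Classic IRV electorate: A leads on first preferences but B wins
rankings = [["A", "B", "C"]] * 4 + [["B", "C", "A"]] * 3 + [["C", "B", "A"]] * 2
print(iterated_range_vote(rankings, ["A", "B", "C"]))  # prints B
```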

jacobt

> It is bad to create a small population of creatures with humane values (that has positive welfare) and a large population of animals that are in pain. For instance, it is bad to create a population of animals with -75 total welfare, even if doing so allows you to create a population of humans with 50 total welfare.

Why do you believe this? I don't. Given the scale of wild animal suffering, this proposition implies that it would have been better if no life had ever appeared on Earth, assuming that average human and animal welfare and the human-to-animal ratio don't change dramatically in the future.

jacobt

I couldn't access the "Aggregation Procedure for Cardinal Preferences" article. In any case, why isn't an aggregate utility function that is a linear combination of everyone's utility functions, with a fixed arbitrary weight for each person, a way to satisfy Arrow's criteria?
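
Concretely, the aggregate I have in mind, with the weights chosen up front and never revised (notation mine):

```latex
\[
W(x) \;=\; \sum_{i=1}^{n} w_i \, u_i(x), \qquad w_i > 0 \text{ fixed in advance}
\]
```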

It should also be noted that Arrow's impossibility theorem doesn't hold for non-deterministic decision procedures. I would also caution against calling this an "existential risk": while decision procedures that violate Arrow's criteria might be considered imperfect in some sense, they don't necessarily cause an existential catastrophe. Worldwide range voting would not be the best way to decide everything, but it most likely wouldn't be an existential risk.

jacobt

Ok, I agree with this interpretation of "being exposed to ordered sensory data will rapidly promote the hypothesis that induction works".

jacobt

You could single out one alternative hypothesis which says the sun won't rise on some day in the future. The ratio between P(sun rises until day X) and P(sun rises every day) will not change with any evidence observed before day X. So if you initially believed a 99% chance of "the sun rises every day until day X" and a 1% chance of Solomonoff induction's prior, you would end up assigning more than a 99% probability to "the sun rises every day until day X".
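
In odds form, writing H_X for "the sun rises every day until day X", S for Solomonoff induction's prior, and E for any record of sunrises ending before day X, so that P(E | H_X) = 1 and P(E | S) <= 1:

```latex
\[
\frac{P(H_X \mid E)}{P(S \mid E)}
  \;=\; \frac{P(E \mid H_X)}{P(E \mid S)} \cdot \frac{P(H_X)}{P(S)}
  \;\ge\; 1 \cdot \frac{0.99}{0.01}
\]
```

The posterior odds can only move further in favor of H_X, so H_X keeps at least its 99% share.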

Solomonoff induction itself gives significant probability mass to "induction works until day X" statements. The Kolmogorov complexity of "the sun rises until day X" is about the Kolmogorov complexity of "the sun rises every day" plus the Kolmogorov complexity of X (approximately log2(X) + 2*log2(log2(X)) bits). Therefore, even according to Solomonoff induction, the "sun rises until day X" hypothesis has probability roughly proportional to P(sun rises every day) / (X * log2(X)^2). This decreases subexponentially in X, and even more slowly if you sum this probability over all Y >= X.
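
The step from code length to probability, spelled out: applying the universal prior's 2^-length weighting to the stated code length for X gives

```latex
\[
2^{-K(X)} \;\approx\; 2^{-\left(\log_2 X \,+\, 2\log_2\log_2 X\right)}
         \;=\; \frac{1}{X \,(\log_2 X)^2}
\]
```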

To get an exponential change in the odds, you would need repeatable independent observations that distinguish Solomonoff induction from the other hypothesis. You can't get that with "sun rises every day until day X" hypotheses.
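
For contrast, here is the case where the odds do move exponentially: n independent observations, each with the same likelihood ratio r between hypotheses H1 and H2, give

```latex
\[
\frac{P(H_1 \mid E_1,\dots,E_n)}{P(H_2 \mid E_1,\dots,E_n)}
  \;=\; \frac{P(H_1)}{P(H_2)} \prod_{k=1}^{n} \frac{P(E_k \mid H_1)}{P(E_k \mid H_2)}
  \;=\; \frac{P(H_1)}{P(H_2)} \; r^{\,n}
\]
```

With sunrise data before day X, the per-observation ratio is exactly 1, so the product never moves.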

jacobt

You're arguing that Solomonoff induction would select "the sun rises every day" over "the sun rises every day until day X". I agree, assuming a reasonable prior over programs. However, if your prior is 99% "the sun rises every day until day X" and 1% "Solomonoff induction's prior" (which itself might assign, say, 10% probability to the sun rising every day), then you will end up believing that the sun rises every day until day X. Eliezer asserted that in a situation where you assign only a small probability to Solomonoff induction, it will quickly dominate the posterior. That is false.
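
A quick numeric check of that claim; the 10% figure comes from the parenthetical above and is used as a lower bound on the likelihood Solomonoff induction assigns to the pre-day-X sunrise record. The calculation itself is just Bayes:

```python
# Prior: 99% on "sun rises every day until day X", 1% on Solomonoff's prior.
p_hx, p_sol = 0.99, 0.01
lik_hx = 1.0    # H_X predicts every sunrise before day X with certainty
lik_sol = 0.1   # lower bound: Solomonoff gives >= 10% to "rises forever"

evidence = p_hx * lik_hx + p_sol * lik_sol
print(p_hx * lik_hx / evidence)    # ~0.999: H_X still dominates
print(p_sol * lik_sol / evidence)  # ~0.001: Solomonoff never takes over
```

Even at the other extreme (likelihood 1 for Solomonoff induction), H_X retains 99%; nothing observed before day X lets the 1% hypothesis "quickly dominate".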

> most of the evidence given by sunrise accrues to "the sun rises every day", and the rest gets evenly divided over all non-falsified "Day X"

I'm not sure exactly what this means, but the ratio between the probabilities of "the sun rises every day" and "the sun rises every day until day X" will not be affected by any evidence observed before day X.

jacobt

> Because being exposed to ordered sensory data will rapidly promote the hypothesis that induction works

Not if the alternative hypothesis assigns about the same probability to the data observed so far. For example, an alternative to the standard "the sun rises every day" is "the sun rises every day, until March 22, 2015", and that alternative assigns exactly the same probability to everything observed up to the present as the standard hypothesis does.

You also have to trust your memory and your ability to compute Solomonoff induction, both of which are demonstrably imperfect.

jacobt

For every n, there exists a program that solves the halting problem for programs up to length n, but the size of that program must grow with n. I don't see any practical way for a human to write such a program other than generating an extremely large number and then testing whether each program up to length n halts within that bound, in which case you've already pretty much solved the original problem. If you instead use some proof system to prove that programs halt, and take the maximum running time over only those programs, then you might as well use a formalism like the calculus of constructions.
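
A toy rendering of that strategy, with programs modeled as Python generators (one yield per computation step); everything here is illustrative, and the point is that a `bound` valid for all programs up to length n is a busy-beaver-sized number:

```python
def halts_within(make_prog, bound):
    """Run a program for at most `bound` steps and report whether it
    halted.  This decides halting correctly only when `bound` exceeds
    the longest finite running time of any program under test, and
    writing such a bound down is as hard as the original problem."""
    steps = make_prog()
    for _ in range(bound):
        try:
            next(steps)        # advance one step
        except StopIteration:
            return True        # halted within the bound
    return False               # treated as non-halting

def halting_prog():            # halts after 10 steps
    for _ in range(10):
        yield

def looping_prog():            # never halts
    while True:
        yield

print(halts_within(halting_prog, 1000))  # True
print(halts_within(looping_prog, 1000))  # False
```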

jacobt

Game1 has been done in real life (without the murder): http://djm.cc/bignum-results.txt

Also:

> Write a program that generates all programs shorter than length n, and finds the one with the largest output.

You can't do that unless you already know the programs will halt. The winner of the actual contest used a similar strategy, writing programs in the calculus of constructions so that they are guaranteed to halt.

For Game2, if your opponent's program (say there are only 2 players) says to return your program's output plus 1, then you can't win: if your program ever halts, they win; if it doesn't halt, you both lose.
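
A sketch of that mirror strategy; `run_rival` and the source-string convention are hypothetical stand-ins for whatever interpreter the game uses, not anything from the actual contest:

```python
def run_rival(source):
    """Hypothetical interpreter for the rival's submission: exec the
    source and call its main().  Never returns if the rival loops."""
    scope = {}
    exec(source, scope)
    return scope["main"]()

def mirror_entry(rival_source):
    # The strategy from the comment: simulate the rival to completion
    # and outbid it by one.  If the rival never halts, neither does
    # this, and both players lose.
    return run_rival(rival_source) + 1

print(mirror_entry("def main():\n    return 9 ** 9"))  # rival's output + 1
```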

jacobt

> But if the choices only have the same expectation of v2, then you won't be optimizing for v1.

Ok, this is correct. I hadn't understood the preconditions well enough. It seems the important question now is whether the things people intuitively think of as different values (my happiness, total happiness, average happiness) satisfy this condition.
