All of aspera's Comments + Replies

aspera

It's true: if you're optimizing for altruism, cooperation is clearly better.

I guess it's not really a "dilemma" as such, since the optimal solution doesn't depend at all on what anyone else does. If you're trying to maximize EV, defect. If you're trying to maximize other people's EV, cooperate.

aspera

My confidence bounds were 75% and 98% for defect, so my estimate was diametrically opposed to yours. If the admittedly low sample size of these comments is any indication, we were both way off.

Why do you think most would cooperate? I would expect this demographic to do a consequentialist calculation, and find that an isolated cooperation has almost no effect on expected value, whereas an isolated defection almost quadruples expected value.

EGI
I expected most of the LessWrong community to cooperate for two reasons: 1. I model them as altruistic, as in Kurros's comment. 2. I model them as one-boxing in Newcomb's problem. One consideration I did not factor into my prediction is that, judging from the comments, many people refuse to cooperate in transferring money from CFAR/Yvain to a random community member.
Kurros
You don't think people here have a term for their survey-completing comrades in their cost function? Since I probably won't win either way, this term dominated my own cost function, so I cooperated. An isolated defection can help only me, whereas an isolated cooperation helps everyone else and so gets a large numerical boost for that reason.
aspera

Nice job on the survey. I loved the cooperate/defect problem, with calibration questions.

I defected, since a quick expected value calculation makes it the overwhelmingly obvious choice (assuming no communication between players, which I am explicitly violating right now). Judging from comments, it looks like my calibration lower bound is going to be way off.

aspera

I agree that the statement is not crystal clear. It makes it possible to confuse the (change in the average) with the (average of the change).

Mathematically speaking, we represent our beliefs as a probability distribution on the possible outcomes, and change it upon seeing the result of a test (possibly for every outcome). The statement is that “if we average the possible posterior probability distributions weighted by how likely they are, we will end up with our original probability distribution.”

If that were not the case, it would imply that we were fail...
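
The averaging claim itself is easy to check numerically. A minimal Mathematica sketch, with a made-up two-hypothesis prior and made-up likelihoods:

prior = {0.3, 0.7};                      (* P(h) for two hypotheses *)
like = {{0.8, 0.2}, {0.4, 0.6}};         (* like[[h, e]] = P(e | h) for two test outcomes *)
pe = Table[Sum[prior[[h]] like[[h, e]], {h, 2}], {e, 2}];      (* P(e) *)
post = Table[prior[[h]] like[[h, e]]/pe[[e]], {h, 2}, {e, 2}]; (* P(h | e) *)
Sum[pe[[e]] post[[All, e]], {e, 2}]      (* returns {0.3, 0.7}: the prior, as claimed *)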

aspera

For the moment, I'm going to strike the comment from the post. I don't want to ascribe a viewpoint to VincentYu that he doesn't actually hold.

aspera

I added a section called "Deciding how to decide" that (hopefully) deals with this issue appropriately. I also amended the conclusion, and added you as an acknowledgement.

VincentYu
I suggest using the phrase "additive utilitarianism" rather than "aggregative utilitarianism". It was entirely my fault for saying aggregative utilitarianism in my comment, which was a misnomer; I got it mixed up with aggregative consequentialism. (All flavors of utilitarianism are by definition aggregative because they take into account the utilities from some collection of beings, but not all flavors are additive.)

Unfortunately, I think that ascribes too much power to VNM utility functions (that term itself is a LessWrongism; elsewhere, they would be called cardinal utility functions or just utility functions). If we had our hands on a VNM utility function, we would be okay (we simply ask it which option it prefers!), but the VNM theorem simply asserts the existence of a utility function given certain basic axioms, and it doesn't give us the utility function! So, unfortunately, VNM utility also falls flat on its face unless we already know what we prefer. (An important point is that VNM utility functions cannot work with the "utility" described in the problem. It's an unfortunate historical accident that the word "utility" is overloaded, because VNM utility requires careful handling.)

Great, I think these are good clarifications!
aspera

I'm not sure why it got moved: maybe not central to the thesis of LW, or maybe not high enough quality. I'm going to add some discussion of counter-arguments to the limit method. Maybe that will make a difference.

I noticed that the discussion picked up when it got moved, and I learned some useful stuff from it, so I'm not complaining.

aspera

Ok, I think I've got it. I'm not familiar with VNM utility, and I'll make sure to educate myself.

I'm going to edit the post to reflect this issue, but it may take me some time. It is clear (now that you point it out) that we can think of the ill-posedness as coming from our insistence that the solution conform to aggregative utilitarianism, and it may be possible to sidestep the paradox if we choose another paradigm of decision theory. Still, I think it's worth working through as an example, because, as you say, AU is a good general standard, and many readers will be familiar with it. At the minimum, this would be an interesting finite AU decision problem.

Thanks for all the time you've put into this.

aspera

I would like to include this issue in the post, but I want to make sure I understand it first. Tell me if this is right:

It is possible mathematically to represent a countably infinite number of immortal people, as well as the process of moving them between spheres. Further, we should not expect a priori that a problem involving such infinities would have a solution equivalent to those solutions reached by taking infinite limits of an analogous finite problem. Some confusion arises when we introduce the concept of “utility” to determine which of the two c

...
VincentYu
That's an accurate interpretation of my comment. I do think that confusion arises in this context from the concept of "utility", but not because "utility only serves as a basis on which to make decisions for finite problems." The "utility" in the problem is clearly not that of VNM-utility (of which I previously gave a brief explanation) because we are not assigning utility to actions, decisions, or choices (a VNM-utility function U would generally have no problem responding to an infinite set of choices, as it simply says: do argmax_{choice}(U(choice))). This severely undermines what we can do with the "utility" in the problem because we are left with the various flavors of aggregative utilitarianism, which suffer from intractable problems even in finite situations! Attempting to extend them to the situation at hand is problematic (and, as Kaj_Sotala remarked, dealing with infinities in aggregative consequentialism is the topic of one of Bostrom's papers).

I think that the appearance of the paradox is a consequence of unfamiliarity with infinite sets, and that it is not too surprising that our intuition appears to contradict itself in this context (by presenting each option as better than the other). The contradictory intuitions don't correspond to a logical contradiction, so the apparent paradox needs no resolution. The actual problem (choosing between the two options) is a matter of preference, just as the choice between strawberry and chocolate is a matter of preference.

Absolutely. I think aggregative utilitarianism (as a moral theory) is screwed even in finite scenarios, much less infinite scenarios. (But I also think aggregative utilitarianism is a good but ill-defined general standard for comparing consequences in real life.)
aspera

The final section has been edited to reflect the concerns of some of the commenters.

aspera

Thanks to whoever moved this to Discussion. From the FAQ, I wasn't sure where to put it. This is better, in retrospect.

D_Alex
I'd prefer to see this in Main; it is interesting and important.
aspera

Thanks! Do you guys want to copy edit my journal papers? ;)

aspera

You're completely right! As stated, the problem is ill-posed, i.e. it has no unique solution, so we didn't solve it.

Instead, we solved a similar problem by introducing a new parameter, \alpha. It was useful because we gained a mathematical description that works for very large n and s, and which matches our intuition about the problem.

It is important to recognize, as you point out, that taking limits does not solve the problem. It just elucidates why we can't solve it as stated.
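
To see why a parameter like \alpha is needed, here is a minimal Mathematica sketch with an illustrative function (not necessarily the one from the post): along the path s = a n, the limit depends entirely on a, so "the limit as n and s grow" is simply not defined until a path is chosen.

f[n_, s_] := n s - s^2;    (* illustrative utility difference *)
Limit[f[n, a n], n -> Infinity, Assumptions -> 0 < a < 1]   (* Infinity *)
Limit[f[n, a n], n -> Infinity, Assumptions -> a > 1]       (* -Infinity *)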

aspera

I agree that it's a lot to cover, but I wanted to work a full example. We talk a lot on LW about decision analysis and paradoxes in the abstract, but I'm coming from a math/physics background, and it's much more helpful for me to see concrete examples. I assume some other people feel the same way.

Self-referential problems would be an interesting area to study, but I'm not familiar with the techniques. I suspect you're right, though.

aspera

Fixed. Thanks for reading so closely. It's amazing how many little mistakes can survive after 10 read-throughs.

VincentYu
Another small typo: Under "The Finite Problem", n_{A,s} and n_{B,s} should be n_{A,t} and n_{B,t} instead.
aspera

By the way, are you talking about this meme, or is there another problem with monkeys and bananas?

Decius
I was talking about the math that spawned that meme, yes.
aspera

Great problem, thanks for mentioning it!

I think "how many balls did you put in the vase as T->\infty?" and "how many balls have been destroyed as T->\infty?" both have well-defined answers. It's just a fallacy to assume that the "total number of balls in the vase as T->\infty" is equal to the difference between these quantities in their limits.
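
A minimal Mathematica sketch, assuming the usual statement of the problem (at step k, balls 10k-9 through 10k are added and ball k is removed):

remaining[T_] := Complement[Range[10 T], Range[T]];
Length[remaining[#]] & /@ {1, 10, 100}   (* {9, 90, 900}: the count grows without bound *)
(* yet ball k is removed at step k, so every individual ball is eventually
   gone; the "count at infinity" cannot be read off as a difference of limits *)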

aspera

My parents stopped me from skipping a grade, and apart from a few math tricks, we didn't work on additional material at home. I fell into a trap of "minimum effort for maximum grade," and got really good at guessing the teacher's password. The story didn't change until graduate school, when I was unable to meet the minimum requirements without working, and that eventually led me to seek out fun challenges on my own.

I now have a young son of my own, and will not make the same mistake. I'm going to make sure he expects to fail sometimes, and that I praise his efforts to go beyond what's required. No idea if it will work.

aspera

The plots were done in Mathematica 9, and then I added the annotations in PowerPoint, including the dashed lines. I had to combine two color functions for the density plot, since I wanted to highlight the fact that the line s=n represented indifference. Here's the code:

r = 1; ua = 1; ub = -1;
f1[n_, s_] := (n s - s^2 r) (ua - ub);  (* zero along s = n, the indifference line *)
Show[
 DensityPlot[-f1[n, s], {n, 0, 20}, {s, 0, 20}, ColorFunction -> "CherryTones", Frame -> False, PlotRange -> {-1000, 0}],
 DensityPlot[f1[n, s], {n, 0, 20}, {s, 0, 20}, ColorFunction -> "BeachColors", Frame -> False, PlotRange -> {-1000, 0}]]

aspera

No, I mean a function whose limit doesn't equal its defined value at infinity. As a trivial example, I could define a utility function to be 1 for all real numbers in [-inf,+inf) and 0 for +inf. The function could never actually be evaluated at infinity, so I'm not sure what it would mean, but I couldn't claim that the limit was giving me the "correct" answer.

Decius
Um... -inf and +inf are not real numbers. (Noting that your function as described is undefined at -inf.) In addition, the definition of continuous restricts it to points which exist on an open interval; if the limit from below and limit from above are equal to the value at X, then the function is continuous on an open interval containing X. How do you determine the limit as X approaches +inf from above?
MrMind
If you accept the Axiom of Infinity, there's no problem in evaluating a function at infinity. The problem is rather that omega is a regular limit cardinal, so there's no way to define the value at infinity from the value at the successor, unless you include in the definition an explicit step for limit cardinals. You can very well define a function that has 1 as its value on 0 and on every successor cardinal, but 0 on every limit cardinal. The function will indeed be discontinuous, but its value at omega will be perfectly defined (I just did).
aspera

Thanks for clearing up the countability. It's clear that there are some cases where taking limits will fail (like when the utility is discontinuous at infinity), but I don't have an intuition about how that issue is related to countability.

Decius
You said 'discontinuous at infinity'. Did you mean 'the infinite limit diverges or otherwise does not exist'?
aspera

In the above example, the number of people and the number of days they live were uncountable, if I'm not mistaken. The take-home message is that you do not get an answer if you just evaluate the problem for sets like that, but you might if you take a limit.

Conclusions that involve infinity don't map uniquely onto finite solutions because they don't supply enough information. Above, "infinite immortal people" refers to a concept that encapsulates three different answers. We had to invent a new parameter, alpha, which was not supplied in the origi...

Decius
Provided you can assign a unique rational number to each day each person lives, they are countable. I will note that the expected time for a given person to remain in the sphere in which they started is infinite, provided they don't know in what order they will be removed. The summation for each day becomes (total of an infinite number of people) + (total of a finite number of people); if we assume that a person-day in bliss is positive and a person-day in agony is negative, then the answer is trivial. An infinite summation of terms of positive infinity is greater than an infinite sum of terms of negative infinity; the cardinalities are irrelevant.
aspera

Here is some clarification from Zinsser himself (ibid.):

"Who am I writing for? It's a fundamental question, and it has a fundamental answer: You're writing for yourself. Don't try to visualize the great mass audience. There is no such audience - every reader is a different person.

This may seem to be a paradox. Earlier I warned that the reader is... impatient.... Now I'm saying you must write for yourself and not be gnawed by worry over whether the reader is tagging along. I'm talking about two different issues. One is craft, the other is attitude.

...
PrometheanFaun
That's not helpful. Say I've got an audience who wouldn't like me if they knew me as my inner circle does, who definitely wouldn't be convinced if I wrote as though I were writing for myself. What would Zinsser do? Give up? Write something else? I know that communicating effectively when you don't personally feel what you're saying tends to fail. Well, yes, it's hard, but that's precisely what I've got to do!
aspera

On Writing Well, by William Zinsser

Every word should do useful work. Avoid cliché. Edit extensively. Don’t worry about people liking it. There is more to write about than you think.

wedrifid
"Don’t worry about people liking it"? This sounds dangerous.
aspera

It makes no sense to call something “true” without specifying prior information. That would imply that we could never update on evidence, which we know not to be the case for statements like “2 + 3 = 5.” Much of the confusion comes from different people meaning different things by the proposition “2 + 3 = 5,” which we can resolve as usual by tabooing the symbols.

Consider the propositions:

A = “The next time I put two sheep and three sheep in a pen, I will end up with five sheep in the pen.”
B = “The universe works as if in all cases, combining two of s...

aspera

As usual, I'm late to the discussion.

The probability that a counterfactual is true should be handled with the same probabilistic machinery we always use. Once the set of prior information is defined, it can be computed as usual with Bayes. The confusing point seems to be that the prior information is contrary to what actually occurred, but there's no reason this should be different than any other case with limited prior information.

For example, suppose I drop a glass above a marble floor. Define:

sh = “my glass shattered”

f = “the glass fell to the floor und...
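
The comment is cut off, but the calculation it sets up is ordinary Bayes. A minimal Mathematica sketch with hypothetical numbers (every probability below is made up for illustration):

pF = 0.9;               (* P(f): the glass falls to the floor unimpeded *)
pShGivenF = 0.95;       (* P(sh | f): shatters if it hits the marble *)
pShGivenNotF = 0.05;    (* P(sh | ~f): shatters some other way *)
pShGivenF pF + pShGivenNotF (1 - pF)    (* P(sh) = 0.86 under this prior information *)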

aspera

I'm confused about why this problem is different from other decision problems.

Given the problem statement, this is not an acausal situation. No physics is being disobeyed - Kramers-Kronig still works, relativity still works. It's completely reasonable that my choice could be predicted from my source code. Why isn't this just another example of prior information being appropriately applied to a decision?

Am I dodging the question? Does EY's new decision theory account for truly acausal situations? If I based my decision on the result of, say, a radioactive decay experiment performed after Omega left, could I still optimize?

aspera

Ha - thanks. Fixed. But I guess if other people want to Skype in from around the world, they're welcome to.

aspera

Yes, we are running on corrupted hardware at about 100 Hz, and I agree that defining broad categories to make first-cut decisions is necessary.

But if we were designing a morality program for a super-intelligent AI, we would want to be as mathematically consistent as possible. As shminux implies, we can construct pathological situations that exploit the particular choice of discontinuities to yield unwanted or inconsistent results.

aspera

I think it would be possible to have an anti-Occam prior if the total complexity of the universe is bounded.

Suppose we list integers according to an unknown rule, and we favor rules with high complexity. Given the problem statement, we should take an anti-Occam prior to determine the rule given the list of integers. It doesn't diverge because the list has finite length, so the complexity is bounded.

Scaling up, the universe presumably has a finite number of possible configurations given any prior information. If we additionally had information that led us to take an anti-Occam prior, it would not diverge.
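
A minimal sketch of the normalization point in Mathematica, assuming (hypothetically) that complexity is measured in bits and the prior weight grows as 2^k up to a bound kmax:

kmax = 20;
w = Table[2.^k, {k, 1, kmax}];   (* weight increases with complexity: anti-Occam *)
prior = w/Total[w];
Total[prior]                     (* 1.: normalizable because k is bounded *)
(* with kmax -> Infinity the weights diverge and no such prior exists *)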

aspera

I'm also looking for a discussion of the symmetry related to conservation of probability through Noether's theorem. A quick Google search only finds quantum mechanics discussions, which relate it to spatial invariances, etc.

If there's no symmetry, it's not a conservation law. Surely someone has derived it carefully. Does anyone know where?

aspera

The idea that the utility should be continuous is mathematically equivalent to the idea that an infinitesimal change on the discomfort/pain scale should give an infinitesimal change in utility. If you don't use that axiom to derive your utility function, you can have sharp jumps at arbitrary pain thresholds. That's perfectly OK - but then you have to choose where the jumps are.
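
A minimal Mathematica sketch of the contrast; the threshold at x = 5 and the jump size of 100 are arbitrary choices, which is exactly the point:

uCont[x_] := -x;                                  (* continuous in pain level x *)
uJump[x_] := Piecewise[{{-x, x < 5}}, -x - 100];  (* sharp jump at the threshold *)
{uJump[4.99], uJump[5.01]}                        (* {-4.99, -105.01} *)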

mantis
I think that's probably more practical than trying to make it continuous, considering that our nervous systems are incapable of perceiving infinitesimal changes.
Shmi
It could be worse than that: there might not be a way to choose the jumps consistently, say, to include different kinds of discomfort, some related to physical pain and others not (tickling? itching? anguish? ennui?)
aspera

I think that in physics we would deal with this as a mapping problem. John's and Mary's beliefs about the planet live in different spaces, and we need to pick a basis on which to project them in order to compare them. We use language as the basis. But then when we try to map between concepts, we find that the problem is ill-posed: it doesn't have a unique solution because the maps are not all 1:1.

aspera

Nice job writing the survey - fun times. I kind of want to hand it out to my non-LW friends, but I don't want to corrupt the data.

aspera

Thanks, I'll check it out.

aspera

Bravo, Eliezer. Anyone who says the answer to this is obvious is either WAY smarter than I am, or isn't thinking through the implications.

Suppose we want to define Utility as a function of pain/discomfort on the continuum of [dust speck, torture] and including the number of people afflicted. We can choose whatever desiderata we want (e.g. positive real valued, monotonic, commutative under addition).

But what if we choose as one desideratum, "There is no number n large enough such that Utility(n dust specks) > Utility(50 yrs torture)." What doe...

aspera

Also, I suggest you read Torture vs Dust Specks. I found it to be very troubling, and would love to talk about it at the meeting.

aspera

Is this the same as Jaynes' method for construction of a prior using transformation invariance on acquisition of new evidence?

Does conservation of expected evidence always uniquely determine a probability distribution? If so, it should eliminate a bunch of extraneous methods of construction of priors. For example, you would immediately know if an application of MaxEnt was justified.

aspera
10

That thought occurred to me too, and then I decided that EY was using "entropy" as "the state to which everything naturally tends." But after all, I think it's possible to usefully extend the metaphor.

There is a higher number of possible cultish microstates than non-cultish microstates, because there are fewer logically consistent explanations for a phenomenon than logically inconsistent ones. In each non-cultish group, rational argument and counter-argument should naturally push the group toward an explanation that describes observed reality. By contrast, cultish groups can fill up the rest of concept-space.

aspera

You can't remember whether or not bleggs exist in real life.

aspera

Maybe this is covered in another post, but I'm having trouble cramming this into my brain, and I want to make sure I get this straight:

Consider a thingspace. We can divide the thingspace into any number of partially-overlapping sets that don’t necessarily span the space. Each set is assigned a word, and the words are not unique.

Our job is to compress mental concepts in a lossy way into short messages to send between people, and we do so by referring to the words. Inferences drawn from the message have associated uncertainties that depend on the characteri...

aspera

That's very helpful, thanks. I'm trying to shove everything I read here into my current understanding of probability and estimation. Maybe I should just read more first.

aspera

There are a couple things I still don't understand about this.

Suppose I have a bent coin, and I believe that P(heads) = 0.6. Does that belief pay rent? Is it a "floating belief?" It is not, in principle, falsifiable. It's not a question of measurement accuracy in this case (unless you're a frequentist, I guess). But I can gather some evidence for or against it, so it's not uninformative either. It is useful to have something between grounded and floating beliefs to describe this belief.

Second, when LWers talk about beliefs, or "the map,"...
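
On the bent coin: the belief does pay rent quantitatively, because flip data shift the odds between P(heads) = 0.6 and rival hypotheses even though no finite sample falsifies either one. A minimal Mathematica sketch (the flip counts are made up):

lr[h_, n_] := (0.6^h 0.4^(n - h))/0.5^n;   (* likelihood ratio: p = 0.6 vs p = 0.5 *)
lr[60, 100]   (* about 7.5: 60 heads in 100 flips favors the bent-coin belief *)
lr[50, 100]   (* about 0.13: 50 heads in 100 favors the fair coin *)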

TimS
Strictly speaking, no proposition is proven false (i.e. probability zero). A proposition simply becomes much less likely than competing, inconsistent explanations. To speak that strictly, falsifiability requires the ability to say in advance what observations would be inconsistent (or less consistent) with the theory.

Your belief that the coin is bent does pay rent - you would be more surprised by 100 straight tails than if you thought the coin was fair. But both P=.6 and P=.5 are not particularly consistent with the new observations.

Map & Territory is a slightly different issue. Consider the toy example of the colored balls in the opaque bag. Map & Territory is a metaphor to remind you that your belief in the proportion of red and blue balls is distinct from the actual proportion. Changes in your beliefs cannot change the actual proportions.

When examining a belief, ask "What observations would make this belief less likely?" If your answer is "No such observations exist" then you should have grave concerns about the belief. Note the distinction between:

* Observations that would make the proposition less likely
* Observations I expect

I don't expect to see a duck have sex with an otter and give birth to a platypus, but if I did, I'd start having serious reservations about the theory of evolution.
aspera

I think this is the kind of causal loop he has in mind. But a key feature of the hypothesis is that you can't predict what's meant to happen. In that case, he's equally good at predicting any outcome, so it's a perfectly uninformative hypothesis.

MugaSofer
That was exactly my point. If he could make such a prediction, he would be correct. Since he can't...
aspera

Is this what CFAR is trying to do?

I would be interested to hear what other members of the community think about this. I accidentally found Bayes after being trained as a physicist, which is not entirely unlike traditional rationality. But I want to teach my brother, who doesn't have any science or rationality background. Has anyone had success with starting at Bayes and going from there?

aspera

I jest, but the sense of the question is serious. I really do want to teach the people I'm close to how to get started on rationality, and I recognize that I'm not perfect at it either. Is there a serious conversation somewhere on LW about being an aspiring rationalist living in an irrational world? Best practices, coping mechanisms, which battles to pick, etc?

aspera

My mother's husband professes to believe that our actions have no influence over the way in which we die, but that "if you're meant to die in a plane crash and avoid flying, then a plane will end up crashing into you!" for example.

After explaining how I would expect that belief to constrain experience (like how it would affect plane crash statistics), as well as showing that he himself was demonstrating his unbelief every time he went to see a doctor, he told me that you "just can't apply numbers to this," and "Well, you shouldn't tempt fate."

My question to the LW community is this: How do you avoid kicking people in the nuts all of the time?

Strange7
Simply consider how likely it is that kicking them in the nuts will actually improve the situation.

Think of them as 3-year-olds who won't grow up until after the Singularity. Would you kick a 3-year-old who made a mistake?

MugaSofer
Strictly speaking, if you somehow knew in advance (time travel?) that you would "die in a plane crash", then avoiding flying would indeed, presumably, result in a plane crash occurring as you walk down the street. If you know your attempt will fail in advance, you don't need to try very hard. If you don't, then it is reasonable to avoid dangerous situations.
A1987dM
I often say stuff like that, but I don't mean it literally. When someone says “What if you do X and Y happens?” and I think Y is ridiculously unlikely (P(Y|X) < 1e-6), I sarcastically reply “What if I don't do X, but Z happens?” where Z is obviously even more ridiculous (P(Z|~X) < 1e-12, e.g. “a meteorite falls onto my head and kills me”).
TheOtherDave
(grin) Mostly, by remembering that there are lots of decent people in the world who don't think very clearly.
Shmi
Pick your battles. Most people happily hold contradictory beliefs. More accurately, their professed beliefs don't always match their aliefs. You are probably just as affected as the rest of us, so start by noticing this in yourself.
aspera

I agree with you, a year and a half late. In fact, the idea can be extended to EY's concept of "floating beliefs," webs of code words that are only defined with respect to one another, and not with respect to evidence. It should be noted that if, at any time, a member of the web is correlated in some way with evidence, then so is the entire web.

In that sense, it doesn't seem like wasted effort to maintain webs of "passwords," as long as we're responsible about updating our best guesses about reality based on only those beliefs that are evidence-related. In the long term, given enough memory capacity, it should speed our understanding.

aspera

Unless I misunderstand, this story is a parable. EY is communicating with a handwaving example that the effectiveness of a code doesn't depend on the alphabet used. In the code used to describe the plate phenomenon, “magic” and “heat conduction” are interchangeable symbols which formally carry zero information, since the coder doesn't use them to discriminate among cases.

I’m sincerely confused as to why comments center on the motivations of the students and the professor. Isn't that irrelevant? Or did EY mean for the discussion to go this way? Does it matter?

JonMcGuire
People focus on the motivations of the students and the professor because the professor's behavior is unorthodox. The students paid good money to learn about physics. As others have mentioned, you can't be too hard on them; they arrive at class expecting a physics lesson, not sleight-of-hand. Consequently, my initial response to the article was that I understood what EY meant to convey, but I thought there were probably other ways to illustrate it that didn't involve the unnecessary "trickery" demonstrated by the professor.

However, upon further reflection, the professor's trickery itself could be characterized as relevant to EY's point. If we completely ignore the proffered "magic explanations" from the students, one might consider the professor's trick a lesson that all the physics education in the world may be inadequate to explain a puzzling observation. In other words, I found it helpful to assume that the professor was also trying to make a point similar to that which EY was making, instead of assuming that the professor just felt like being a jerk that day.

As a bonus, by focusing on the conditions of the scenario instead of just the answers, a student who is smart enough to recognize that their education may be inadequate could still answer "I don't have enough information to explain this," which implies he still believes there is an explanation, which might be a better answer than just "I don't know," which sounds a lot like just giving up.
wedrifid
EY can mean whatever he wants. He gets to choose what is in the post, everyone else gets to choose what they would like to talk about.
chaosmosis
You'll quickly find that LessWrongers love tangents and digressions.