Comment author: Alicorn 17 July 2010 01:58:54AM *  11 points

What I want to know is why no one sells half-bras. There's a market: most women are at least somewhat asymmetrical, many by enough to warrant different cup sizes. It wouldn't be revolutionary bra technology: it would just have to fasten in both the front and the back, and be packaged individually. And it wouldn't take up much extra store space to stock the same range of sizes. I looked once, and there's a patent on it, but no one seems to actually manufacture the things.

Comment author: cupholder 17 July 2010 06:53:42AM 2 points

There's an even more compelling market: women who have had a single mastectomy. I'd be surprised if there weren't medical half-bras out there already for them.

Comment author: Kaj_Sotala 12 July 2010 12:05:31AM 5 points

Interesting tidbit from the article:

One avenue may involve self-esteem. Nyhan worked on one study in which he showed that people who were given a self-affirmation exercise were more likely to consider new information than people who had not. In other words, if you feel good about yourself, you’ll listen — and if you feel insecure or threatened, you won’t.

I have long thought that the openly aggressive approach some people take in promoting atheism / political ideas / whatever seems counterproductive, and more likely to make other people not listen than to make them listen. These results seem to support that, though there have also been contradictory reports from people saying that the very aggressiveness was what made them actually think.

Comment author: cupholder 12 July 2010 11:52:11PM 3 points

These results seem to support that, though there have also been contradictory reports from people saying that the very aggressiveness was what made them actually think.

Presumably there's heterogeneity in people's reactions to aggressiveness and to soft approaches. Most likely a minority of people react better to aggressive approaches and most people react better to being fed opposing arguments in a sandwich with self-affirmation bread.

Comment author: Peter_de_Blanc 21 April 2009 12:40:32PM 5 points

Different people will have different ideas of where on the 4chan-colloquium continuum a discussion should be, so here's a feature suggestion: post authors should be able to set a karma requirement for commenting on the post. Beginner-level posts would draw questions about the basics, and other posts could have a karma requirement high enough to filter them out.

There could even be a karma requirement to see certain posts, for hiding Beisutsukai secrets from the general public.

Comment author: cupholder 11 July 2010 11:42:30PM *  2 points

The negotiation of where LW threads should be on the 4chan-colloquium continuum is something I would let users handle by interacting with each other in discussions, instead of trying to force it to fit the framework of the karma system. I especially think letting people hide their posts from lurkers and other subsets of the Less Wrong userbase could set a bad precedent.

Comment author: zero_call 07 July 2010 06:10:28AM *  0 points

This kind of rebuttal absolutely fails, because it simply doesn't address the point. You're taking the OP completely out of context. The OP is arguing against cryonics evidence in the context of having to dish out substantial money. The pro-cryonics LW community asserts that you must pay money if you believe in cryonics, since it's the only rational decision, or some such logic. In response, critics (such as the OP) contend that cryonics evidence isn't sufficient to justify paying money. This is totally different from asserting that you don't believe in cryonics or the possibility of cryonics out of context.

In your examples, you don't have to pay out of your own wallet if you believe that 1) practical fusion power, 2) a human mission to Mars, or 3) substantial life extension is feasible. These examples are misleading.

Comment author: cupholder 07 July 2010 08:45:17AM 1 point

So would it be right to say your objection is based on the expected utility of working cryonics instead of its probability?

Comment author: ciphergoth 07 July 2010 07:30:08AM 2 points

According to Mike Darwin one cryonics facility (don't remember which, sorry) has already been shot at from the street.

Comment author: cupholder 07 July 2010 08:41:33AM 1 point

For being a cryonics facility? Is there enough evidence to determine if it could've been just a random drive-by?

Comment author: cousin_it 06 July 2010 10:24:36AM *  1 point

I agree that #1 is part of how a perfect Bayesian thinks, if by 'a correct prior...before you see any evidence' you have the maximum entropy prior in mind.

Allow me to introduce to you the Brandeis dice problem. We have a six-sided die, sides marked 1 to 6, possibly unfair. We throw it many times (say, a billion) and obtain an average value of 3.5. Using that information alone, what's your probability distribution for the next throw of the die? A naive application of the maxent approach says we should pick the distribution over {1,2,3,4,5,6} with mean 3.5 and maximum entropy, which is the uniform distribution; that is, the die is fair. But if we start with a prior over all possible six-sided dice and do Bayesian updating, we get a different answer that diverges from fairness more and more as the number of throws goes to infinity! The reason: a die that's biased towards 3 and 4 makes a mean value of 3.5 even more likely than a fair die.
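To see this concretely, here's a rough numerical sketch (my illustration, using a normal approximation to the distribution of the sample mean, so take it as a sketch rather than an exact calculation): both dice below have mean 3.5, but the one biased towards 3 and 4 has a much smaller single-throw variance, so an observed average of 3.5 over many throws is more likely under it, and Bayesian updating favors it over the fair die.

```python
import math

def variance(probs):
    # Variance of one throw for a die with face probabilities probs.
    faces = range(1, 7)
    mean = sum(f * p for f, p in zip(faces, probs))
    return sum(p * (f - mean) ** 2 for f, p in zip(faces, probs))

fair = [1 / 6] * 6                    # the maxent answer: a fair die
biased = [0, 0, 0.5, 0.5, 0, 0]       # all mass on 3 and 4; mean is still 3.5

n = 10_000  # number of throws

def likelihood_of_mean_3_5(probs, n):
    # By the CLT, the sample mean is approximately Normal(3.5, var/n);
    # its density at 3.5 is 1/sqrt(2*pi*var/n), so lower variance means
    # a higher likelihood of observing an average of exactly ~3.5.
    return 1.0 / math.sqrt(2 * math.pi * variance(probs) / n)

print(likelihood_of_mean_3_5(fair, n))    # smaller
print(likelihood_of_mean_3_5(biased, n))  # larger: the biased die wins the update
```

The gap grows with n, which is why the posterior diverges from fairness as the number of throws goes to infinity.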

Does that mean you should give up your belief in maxent, your belief in Bayes, your belief in the existence of "perfect" priors for all problems, or something else? You decide.

Comment author: cupholder 07 July 2010 07:20:15AM 1 point

But if we start with a prior over all possible six-sided dice and do Bayesian updating, we get a different answer that diverges from fairness more and more as the number of throws goes to infinity!

In this example, what information are we Bayesian updating on?

Comment author: Matt_Simpson 06 July 2010 04:22:27PM *  1 point

OK, this is interesting: I think our ideas of perfect Bayesians might be quite different.

They most certainly are. But it's semantics.

I agree that #1 is part of how a perfect Bayesian thinks, if by 'a correct prior...before you see any evidence' you have the maximum entropy prior in mind.

Frankly, I'm not informed enough about priors to commit to maxent, Kolmogorov complexity, or anything else.

I'm less sure what 'correct posterior' means in #2. Am I right to interpret it as saying that given a prior and a particular set of evidence for some empirical question, all perfect Bayesians should get the same posterior probability distribution after updating the prior with the evidence?

yes

There has to be a model because the model is what we use to calculate likelihoods.

aaahhh.... I changed the language of that sentence at least three times before settling on what you saw. Here's what I probably should have posted (and what I was going to post until the last minute):

There's no model checking because there is only one model - the correct model.

That is probably intuitively easier to grasp, but I think a bit inconsistent with my language in the rest of the post. The language is somewhat difficult here because our uncertainty is simultaneously a map and a territory.

The catch here (if I'm interpreting Gelman and Shalizi correctly) is that building a sub-model of our uncertainty into our model isn't good enough if that sub-model gets blindsided with unmodeled uncertainty that can't be accounted for just by juggling probability density around in our parameter space.*

For the record, I thought this sentence was perfectly clear. But I am a statistics grad student, so don't consider me representative.

Are you asserting that this is a catch for my position? Or for the "never look back" approach to priors? What you are saying seems to support my argument.

Comment author: cupholder 07 July 2010 07:10:19AM 0 points

yes

OK. I agree with that insofar as agents having the same prior entails them having the same model.

aaahhh.... I changed the language of that sentence at least three times before settling on what you saw. Here's what I probably should have posted (and what I was going to post until the last minute):

There's no model checking because there is only one model - the correct model.

That is probably intuitively easier to grasp, but I think a bit inconsistent with my language in the rest of the post. The language is somewhat difficult here because our uncertainty is simultaneously a map and a territory.

Ah, I think I get you; a PB (perfect Bayesian) doesn't see a need to test their model because whatever specific proposition they're investigating implies a particular correct model.

For the record, I thought this sentence was perfectly clear. But I am a statistics grad student, so don't consider me representative.

Yeah, I figured you wouldn't have trouble with it since you talked about taking classes in this stuff - that footnote was intended for any lurkers who might be reading this. (I expected quite a few lurkers to be reading this given how often the Gelman and Shalizi paper's been linked here.)

Are you asserting that this is a catch for my position? Or for the "never look back" approach to priors? What you are saying seems to support my argument.

It's a catch for the latter, the PB. In reality, most scientists don't have a wholly unambiguous proposition worked out that they're testing - or the proposition they are testing is actually not a good representation of the real situation.

Comment author: Matt_Simpson 06 July 2010 07:38:31AM 2 points

My implicit definition of perfect Bayesian is characterized by these propositions:

  1. There is a correct prior probability (as in, before you see any evidence, e.g. Occam priors) for every proposition
  2. Given a particular set of evidence, there is a correct posterior probability for any proposition

If we knew exactly what our priors were and how to exactly calculate our posteriors, then your steps 1-6 are exactly how we should operate. There's no model checking because there is no model. The problem is, we don't know these things. In practice we can't exactly calculate our posteriors or precisely articulate our priors. So to approximate the correct posterior probability, we model our uncertainty about the proposition(s) in question. This includes every part of the model - the prior and the sampling model in the simplest case.

The rationale for model checking should be pretty clear at this point. How do we know if we have a good model of our uncertainty (or a good map of our map, to say it a different way)? One method is model checking. To forbid model checking when we know that we are modeling our uncertainty seems to be restricting the methods we can use to approximate our posteriors for no good reason.
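As a concrete illustration of what such a check can look like (a rough posterior-predictive-style sketch with made-up data, not anything from my class): fit a simple model, simulate replicated data from the fitted model, and ask whether the replicates reproduce a salient feature of the real data.

```python
# Sketch of a posterior-predictive-style model check (illustrative only).
# We fit a Normal model to genuinely heavy-tailed data, then ask whether
# data simulated from the fitted model shows the same tail behavior.
import random
import statistics

random.seed(1)
# "Real" data: cubed standard normals, so much heavier-tailed than a Normal.
data = [random.gauss(0, 1) ** 3 for _ in range(1000)]

# Fit the (wrong) Normal model of our uncertainty.
mu = statistics.mean(data)
sigma = statistics.stdev(data)

def tail_count(xs, m, s):
    # Test statistic: how many points fall beyond 3 standard deviations.
    return sum(1 for x in xs if abs(x - m) > 3 * s)

observed = tail_count(data, mu, sigma)

# Simulate replicated datasets from the fitted Normal model.
replicates = [tail_count([random.gauss(mu, sigma) for _ in range(len(data))],
                         mu, sigma)
              for _ in range(200)]

# How often does the fitted model reproduce tails as extreme as the data's?
p = sum(r >= observed for r in replicates) / len(replicates)
print(observed, p)  # p is typically ~0 here: the Normal model fails the check
```

If the model were a good map of our uncertainty, the replicates would look like the data and p would be unremarkable; a p near 0 is the check telling us to revise the model itself, not just its parameters.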

Now I don't necessarily think that Cox, Jaynes, Yudkowsky, or any other famous Bayesian agrees with me here. But when we got to model checking in my Bayes class, I spent a few days wondering how it squared with the Bayesian philosophy of induction, and then what I took to be the obvious answer came to me (while discussing it with my professor actually): we're modeling our uncertainty. Just like we check our models of physics to see if they correspond to what we are trying to describe (reality), we should check our models of our uncertainty to see if they correspond to what we are trying to describe.

I would be interested to hear EY's position on this issue though.

Comment author: cupholder 06 July 2010 09:40:07AM 0 points

My implicit definition of perfect Bayesian is characterized by these propositions:

  1. There is a correct prior probability (as in, before you see any evidence, e.g. Occam priors) for every proposition
  2. Given a particular set of evidence, there is a correct posterior probability for any proposition

OK, this is interesting: I think our ideas of perfect Bayesians might be quite different. I agree that #1 is part of how a perfect Bayesian thinks, if by 'a correct prior...before you see any evidence' you have the maximum entropy prior in mind.

I'm less sure what 'correct posterior' means in #2. Am I right to interpret it as saying that given a prior and a particular set of evidence for some empirical question, all perfect Bayesians should get the same posterior probability distribution after updating the prior with the evidence?

If we knew exactly what our priors were and how to exactly calculate our posteriors, then your steps 1-6 are exactly how we should operate. There's no model checking because there is no model.

There has to be a model because the model is what we use to calculate likelihoods.

The rationale for model checking should be pretty clear ...

Agree with this whole paragraph. I am in favor of model checking; my beef is with (what I understand to be) Perfect Bayesianism, which doesn't seem to include a step for stepping outside the current model and checking that the model itself - and not just the parameter values - makes sense in light of new data.

I spent a few days wondering how it squared with the Bayesian philosophy of induction, and then what I took to be the obvious answer came to me (while discussing it with my professor actually): we're modeling our uncertainty.

The catch here (if I'm interpreting Gelman and Shalizi correctly) is that building a sub-model of our uncertainty into our model isn't good enough if that sub-model gets blindsided with unmodeled uncertainty that can't be accounted for just by juggling probability density around in our parameter space.* From page 8 of their preprint:

If nothing else, our own experience suggests that however many different specifications we think of, there are always others which had not occurred to us, but cannot be immediately dismissed a priori, if only because they can be seen as alternative approximations to the ones we made. Yet the Bayesian agent is required to start with a prior distribution whose support covers all alternatives that could be considered.

* This must be one of the most dense/opaque sentences I've posted on Less Wrong. If anyone cares enough about this comment to want me to try and break down what it means with an example, I can give that a shot.

Comment author: Blueberry 06 July 2010 01:42:52AM 3 points

one can reasonably calculate that the expected utility (to myself, to the potential child and to others) of having the child is higher than the expected utility of not having a child.

I'm not sure about this. It's most likely that anything your kid does in life will get done by someone else instead. There is also some evidence that having children decreases your happiness (though there may be other reasons to have kids).

But even if this is true, it's still not enough for antinatalism. Increasing total utility is not enough justification to create a life. The act of creation makes you responsible for the utility of the individual created, and you have a duty not to create an entity you have reason to think may have negative personal utility. (Strict utilitarians will disagree.)

Comment author: cupholder 06 July 2010 08:39:29AM 0 points

I'm not sure about this. It's most likely that anything your kid does in life will get done by someone else instead.

True - we might call the expected utility strangers get a wash because of this substitution effect. If we say the expected value most people get from my having a child is nil, it doesn't contribute to the net expected value, but neither does it make it less positive.

There is also some evidence that having children decreases your happiness (though there may be other reasons to have kids).

It sounds as though that data's based on samples of all types of parents, so it may not have much bearing on the subset of parents who (a) have stable (thanks NL!) high living standards, (b) are good at being parents, and (c) wanted their children. (Of course this just means the evidence is weak, not completely irrelevant.)

But even if this is true, it's still not enough for antinatalism. Increasing total utility is not enough justification to create a life.

That's a good point; I know of nothing in utilitarianism that says whose utility I should care about.

The act of creation makes you responsible for the utility of the individual created, and you have a duty not to create an entity you have reason to think may have negative personal utility. (Strict utilitarians will disagree.)

Whether or not someone agrees with this is going to depend on how much they care about risk aversion in addition to expected utility. (Prediction: antinatalists are more risk averse.) I think my personal level of risk aversion is too low for me to agree that I shouldn't make any entity that has a chance of suffering negative personal utility.

Comment author: Blueberry 05 July 2010 08:26:28PM 4 points

But if I were living it up, knew I could be a good parent, and wanted a kid, what would be so awful about having one?

That your child might experience a great deal of pain which you could prevent by not having it.

That your child might regret being born and wish you had made the other decision.

That you can be a good parent, raise a kid, and improve someone's life without having a kid (adopt).

That the world is already overpopulated and our natural resources are not infinite.

Comment author: cupholder 05 July 2010 08:54:22PM 1 point

Points taken.

Let me restate what I mean more formally. Conditional on high living standards, high-quality parenting, and desire to raise a child, one can reasonably calculate that the expected utility (to myself, to the potential child and to others) of having the child is higher than the expected utility of not having a child. In which case I wouldn't think the antinatalism position has legs.
