
Comment author: Bound_up 12 August 2017 06:15:58PM 0 points [-]

I think you've hit upon one of the side effects of this approach.

All the smart people will interpret your words differently and see that they're straightforwardly false. You can always adjust your speech to the abilities of the intelligent and interested, and they'll applaud you for it, but you do so at the cost of reaching everybody else.

Comment author: Bobertron 13 August 2017 08:00:44AM 0 points [-]

I understand your post to be about difficult truths related to politics, but you don't actually give examples (except "what Trump has said is 'emotionally true'") and the same idea applies to simplifications of complex material in science etc. I just happened upon an example from a site teaching drawing in perspective (source):

Now you may have heard of terms such as one point, two point or three point perspective. These are all simplifications. Since you can have an infinite number of different sets of parallel lines, there are technically an infinite number of potential vanishing points. The reason we can simplify this whole idea to three, two, or a single vanishing point is because of boxes.

[...] Because of this, people like to teach those who are new to perspective that the world can be summarized with a maximum of 3 vanishing points.

Honestly, this confused me for years.

The author was lied to about the possible number of vanishing points in a drawing, but instead of recognizing the falsehood, he was just confused.
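
To make the quoted point concrete: under a pinhole (perspective) projection, every direction in space that isn't parallel to the picture plane gets its own vanishing point, and a box only ever contributes three edge directions. Here is a minimal sketch of that calculation (my own illustration, not from the linked site; the focal length and image center are arbitrary assumptions):

    # Minimal sketch: the vanishing point of a family of parallel 3D lines under
    # a pinhole camera looking down +z. Focal length f and image center (cx, cy)
    # are arbitrary assumptions, not values from the linked tutorial.

    def vanishing_point(direction, f=1.0, cx=0.0, cy=0.0):
        """A point (x, y, z) projects to (f*x/z + cx, f*y/z + cy). On a line
        p(t) = p0 + t*d the projection tends to (f*dx/dz + cx, f*dy/dz + cy)
        as t grows, so every direction with dz != 0 has its own vanishing point."""
        dx, dy, dz = direction
        if dz == 0:
            return None  # parallel to the picture plane: vanishing point at infinity
        return (f * dx / dz + cx, f * dy / dz + cy)

    # Three mutually perpendicular edge directions of a tilted box give the
    # familiar "three-point perspective" trio of vanishing points...
    for edge in [(1, 0, 1), (-1, 2, 1), (-2, -2, 2)]:
        print(edge, vanishing_point(edge))

    # ...but any other set of parallel lines adds yet another vanishing point.
    print((2, 5, 3), vanishing_point((2, 5, 3)))

Running it prints four distinct vanishing points; the box's three are special only because boxes are what we usually draw.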

Comment author: Bobertron 12 August 2017 09:37:40AM 0 points [-]

Suppose X is the case. When you say "X", your listener will believe Y, which is wrong. So even though "X" is the truth, you should not say it.

Your new idea, as I understand it: Suppose saying "Z" will lead your listener to believe X. So even though saying "Z" is, technically, lying, you should say "Z" because the listener will come to hold a true belief.

(I'm sorry if I misunderstood you or you think I'm being uncharitable. But even if I misunderstood, I think others might misunderstand in a similar way, so I feel justified in responding to the above concept.)

First, I dislike that approach because it makes things harder for people who could understand, if only others would stop lying to them, or who would prefer to be told the truth along the lines of "study macroeconomics for two years and you will understand".

Second, that seems to me to be a form of the-end-justifies-the-means that, even though I think of myself as a consequentialist, I'm not 100% comfortable with. I'm open to the idea that sometimes it's okay, and even proper, to say something that's technically untrue, if it results in your audience coming to have a truer world-view. But if this "sometimes" isn't explained or restricted in any way, that's just throwing out the idea that you shouldn't lie.

Some ideas on that:

  • Make sure you don't harm your audience by underestimating them. If you simplify or modify what you say, to the point that it can't be considered true any more, because you think your audience lacks the capacity to understand the correct argument, make sure you don't make the truth harder to grasp for those who can understand it. That includes the people you underestimated, people you didn't intend to address but who heard you all the same, and people who won't understand now but will later. (Children grow up, and people who don't care enough to follow complex arguments now might come to care.)
  • It's not enough that your audience comes to believe something true. It needs to be justified true belief. Or alternatively, your audience should not only believe X but know it. For a discussion of what is meant by "know", see most of the field of epistemology, I guess. Like, if you tell people that voting for candidate X will give them cancer and they believe you, they might come to the correct belief that voting for candidate X is bad for them. But saying that is still unethical.
  • I guess if you could give people justified true belief, it wouldn't be lying at all, and the whole idea is that you need to lie because some people are incapable of justified true belief on matter X. But then it should at least be "justified in some sense". In particular, your argument shouldn't work just as well if "X" were false.
Comment author: Bobertron 29 July 2017 04:25:21PM 0 points [-]

When playing around in the sandbox, simpleton always beat copycat (using default values and a population of only simpleton and copycat). I don't understand why.
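
For anyone else puzzled by the same thing: the effect can be reproduced outside the game. Below is a minimal sketch of a noisy iterated prisoner's dilemma between copycat (tit-for-tat) and simpleton (win-stay-lose-shift). The payoff values, the 5% mistake rate, and the 10 rounds per match are my guesses at the sandbox's defaults, not confirmed values. The gist: after a single mistake, two copycats get stuck echoing each other's defections, while two simpletons re-synchronize within a couple of rounds, so in a population containing only these two strategies, simpleton's average score comes out slightly ahead.

    # Minimal sketch of a noisy iterated prisoner's dilemma between copycat
    # (tit-for-tat) and simpleton (win-stay-lose-shift). The payoff matrix, the
    # 5% mistake rate and the 10 rounds per match are guesses at the sandbox's
    # defaults, not confirmed values.
    import random

    PAYOFF = {('C', 'C'): (2, 2), ('C', 'D'): (-1, 3),
              ('D', 'C'): (3, -1), ('D', 'D'): (0, 0)}

    def copycat(my_last, opp_last):
        return 'C' if opp_last is None else opp_last  # copy the opponent's last move

    def simpleton(my_last, opp_last):
        if opp_last is None:
            return 'C'
        # win-stay-lose-shift: repeat my last move if the opponent cooperated,
        # otherwise do the opposite of my last move
        return my_last if opp_last == 'C' else ('D' if my_last == 'C' else 'C')

    def match(strat_a, strat_b, rounds=10, mistake=0.05):
        a_last = b_last = None
        score_a = score_b = 0
        for _ in range(rounds):
            a = strat_a(a_last, b_last)
            b = strat_b(b_last, a_last)
            if random.random() < mistake:  # each player's hand slips sometimes
                a = 'D' if a == 'C' else 'C'
            if random.random() < mistake:
                b = 'D' if b == 'C' else 'C'
            pa, pb = PAYOFF[(a, b)]
            score_a, score_b = score_a + pa, score_b + pb
            a_last, b_last = a, b
        return score_a, score_b

    def average_scores(strat_a, strat_b, trials=20000):
        total_a = total_b = 0
        for _ in range(trials):
            sa, sb = match(strat_a, strat_b)
            total_a += sa
            total_b += sb
        return total_a / trials, total_b / trials

    cc, _ = average_scores(copycat, copycat)
    ss, _ = average_scores(simpleton, simpleton)
    s_vs_c, c_vs_s = average_scores(simpleton, copycat)

    # In a 50/50 population each strategy meets both opponents equally often.
    print('copycat   average per match:', (cc + c_vs_s) / 2)
    print('simpleton average per match:', (ss + s_vs_c) / 2)

With the mistake rate set to zero the two strategies tie (every match is pure cooperation), so whatever edge shows up here comes entirely from how each strategy recovers from noise.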

Comment author: Vaniver 22 December 2016 06:47:56PM 1 point [-]

the kid is just being stupid

"Just being stupid" and "just doing the wrong thing" are rarely helpful views, because those errors are produced by specific bugs. Those bugs have pointers to how to fix them, whereas "just being stupid" doesn't.

I think you should allow yourself in some situations to both believe "I should not smoke because it is bad for my health" and to continue smoking, because then you'll flinch less.

I think this misses the point, and damages your "should" center. You want to get into a state where if you think "I should X," then you do X. The set of beliefs that allows this is "Smoking is bad for my health," "On net I think smoking is worth it," and "I should do things that I think are on net worth doing." (You can see how updating the first one from "Smoking isn't that bad for my health" to its current state could flip the second belief, but that is determined by a trusted process instead of health getting an undeserved veto.)

Comment author: Bobertron 23 December 2016 06:01:29PM *  1 point [-]

"Just being stupid" and "just doing the wrong thing" are rarely helpful views

I agree. What I meant was something like: If the OP describes a skill, then the first problem (the kid that wants to be a writer) is so easy to solve that I feel I'm not learning much about how that skill works. The second problem (Carol) seems too hard for me. I doubt it's actually solvable using the described skill.

I think this misses the point, and damages your "should" center

Potentially, yes. I'm deliberately proposing something that might be a little dangerous. I feel my "should" center is already broken and/or doing me more harm than good.

"Smoking is bad for my health," "On net I think smoking is worth it," and "I should do things that I think are on net worth doing."

That's definitely not good enough for me. I never smoked in my life. I don't think smoking is worth it. And if I were smoking, I don't think I would stop just because I think it's a net harm. And I do think that, because I don't want to think about the harm of smoking or the difficulty of quitting, I'd avoid learning about either of those two.

ADDED: The first meaning of "I should-1 do X" is "a rational agent would do X". The second meaning (idiosyncratic to me) of "I should-2 do X" is that "do X" is the advice I need to hear. should-2 is based on my (mis)understanding of Consequentialist-Recommendation Consequentialism. The problem with should-1 is that I interpret "I should-1 do X" to mean that I should feel guilty if I don't do X, which is definitely not helpful.

Comment author: Bobertron 20 December 2016 11:05:47PM 2 points [-]

Interesting article. Here is the problem I have: In the first example, "spelling ocean correctly" and "I'll be a successful writer" clearly have nothing to do with each other, so they shouldn't be in a bucket together and the kid is just being stupid. At least on first glance, that's totally different from Carol's situation. I'm tempted to say that "I should not try full force on the startup" and "there is a fatal flaw in the startup" should be in a bucket, because I believe "if there is a fatal flaw in the startup, I should not try it". As long as I believe that, how can I separate these two and not flinch?

Do you think one should allow oneself to be less consistent in order to become more accurate? Suppose you are a smoker and you don't want to look into the health risks of smoking, because you don't want to quit. I think you should allow yourself in some situations to both believe "I should not smoke because it is bad for my health" and to continue smoking, because then you'll flinch less. But I'm fuzzy on when. If you completely give up on having your actions be determined by your beliefs about what you should do, that seems obviously crazy, and there won't be any reason to look into the health risks of smoking anyway.

Maybe you should model yourself as two people. One person is rationality. It's responsible for determining what to believe and what to do. The other person is the one that queries rationality and acts on its recommendations. Since rationality is a consequentialist with integrity, it might not recommend quitting smoking, because then the other person would stop acting on its advice and stop giving it queries.

Comment author: Bobertron 18 December 2016 10:56:26AM 2 points [-]

Here are some things that I, as an infrequent reader, find annoying about the LW interface.

  • The split between main and discussion doesn't make any sense to me. I always browse /r/all. I think there shouldn't be such a distinction.
  • My feed is filled with notices about meetups in faraway places that are pretty much guaranteed to be irrelevant to me.
  • I find the most recent open thread pretty difficult to find in the sidebar. For a minute I thought it just wasn't there. I'd like it if the recent open thread and rationality quotes were stickied at the top of r/discussion.
Comment author: ThisSpaceAvailable 22 November 2016 06:08:30AM 0 points [-]

This may seem pedantic, but given that this post is on the importance of precision:

"Some likely died."

Should be

"Likely, some died".

Also, I think you should more clearly distinguish between the two means, such as saying "sample average" rather than "your average". Or use x bar and mu.

The whole concept of confidence intervals is rather problematic: on the one hand it's one of the most common statistical measures presented to the public, but on the other hand it's one of the most difficult concepts to understand.

What makes the concept of a CI so hard to explain is that pretty much every time the public is presented with it, they are presented with one particular confidence interval and then given the 95%, but the 95% is not a property of that particular confidence interval; it's a property of the process that generated it. The public understands "95% confidence interval" as an interval that has a 95% chance of containing the true mean, but actually a 95% confidence interval is an interval generated by a process, where the process has a 95% chance of generating a confidence interval that contains the true mean.
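
One way to make that distinction vivid is to simulate the process. The sketch below is only an illustration: the normal population, its parameters, the sample size, and the use of a z-interval with known sigma are all arbitrary assumptions. It draws many samples, builds a textbook 95% interval from each, and counts how often the interval covers the true mean; the long-run fraction is about 0.95, even though any single printed interval either contains the mean or it doesn't.

    # Minimal sketch: "95%" describes the interval-generating process, not any one
    # interval. The population (normal, mu=10, sigma=3), the sample size and the
    # z-interval with known sigma are arbitrary assumptions for illustration.
    import random
    import statistics

    TRUE_MU, SIGMA, N = 10.0, 3.0, 50
    Z95 = 1.96  # two-sided 95% critical value of the standard normal

    def one_interval(rng):
        sample = [rng.gauss(TRUE_MU, SIGMA) for _ in range(N)]
        x_bar = statistics.fmean(sample)
        half_width = Z95 * SIGMA / N ** 0.5
        return x_bar - half_width, x_bar + half_width

    rng = random.Random(0)
    trials = 10000
    hits = sum(lo <= TRUE_MU <= hi for lo, hi in (one_interval(rng) for _ in range(trials)))
    print(hits / trials)  # roughly 0.95: a property of the procedure, not of one interval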

Comment author: Bobertron 18 December 2016 10:41:17AM 0 points [-]

I don't get this (and I don't get Benquo's OP either; I don't really know any statistics, only some basic probability theory).

"the process has a 95% chance of generating a confidence interval that contains the true mean". I understand this to mean that if I run the process 100 times, 95 times the resulting CI contains the true mean. Therefore, if I look at random CI amongst those 100 there is a 95% chance that the CI contains the true mean.

Comment author: Bobertron 26 November 2016 08:59:27PM 7 points [-]

"Effective self-care" or "effective well-being".

Okay. The "effective"-part in Effective Altruism" refers to the tool (rationality). "Altruism" refers to the values. The cool thing about "Effective Altruism", compared to rationality (like in LW or CFAR), is that it's specific enough that it allows a community to work on relatively concrete problems. EA is mostly about the global poor, animal welfare, existential risk and a few others.

What I'd imagine "Effective self-care" would be about is such things as health, fitness, happiness, positive psychology, life-extension, etc. It wouldn't be about "everything that isn't covered by effective altruism", as that's too broad to be useful. Things like truth and beauty wouldn't be valued (aside from their instrumental value) by either altruism or self-care.

"Effective Egoism" sounds like the opposite of Effective Altruism. Like they are enemies. "Effective self-care" sounds like it complements Effective Altruism. You could argue that effective altruists should be interested in spreading effective self-care both amongst others since altruism is about making others better off, and amongst themselves because if you take good care for yourself you are in a better position to help others, and if you are efficient about it you have more resources to help others.

On the negative side, both terms might sound too medical. And self-care might sound too limited compared to what you might have in mind. For example, one might be under the impression that "self-care" is concerned with bringing happiness levels to "normal" or "average", instead of super duper high.

Comment author: moridinamael 11 November 2016 02:40:56PM *  12 points [-]

One flaw in this argument could be the assumption that "Clinton will maintain the Level B status quo" implicitly means "everything is fine now and therefore will continue to be fine for much the same reasons".

Eliezer views a Trump election as accepting a higher risk of annihilation for essentially no reason. What if it's not no reason? What if all the Level B players are just wrong, irrationally buying into a status quo where we need to be engaging in brinksmanship with Russia and China and fighting ground battles in the Middle East in order to defend ourselves? You have to admit it's possible, right? "Smart people can converge en masse on a stupid conclusion" is practically a tenet of our community.

Hillary's campaign strategy has already shown this in principle. The obviously intelligent party elite all converged on a losing strategy, and repeatedly doubled down on it. It is reminiscent of our foreign policy.

Saying "we haven't had a nuclear exchange with Russia yet, therefor our foreign policy and diplomatic strategy is good" is an obvious fallacy. Maybe we've just been lucky. Shit, maybe we've been unlucky and we're having this conversation due to anthropic miracles.

The last countless elections have seen candidates running on a "more humble foreign policy" and then changing their stance once in office. There's a semi-joke that the new president is taken into a smoke-filled room and told what is really going on. Maybe so, but in that case, we're putting a lot of unexamined faith in the assessments of the people in that smoke-filled room.

None of this is so much my strongly held belief as my attempt to find flaws in the "nuclear blackmail" argument.

Comment author: Bobertron 11 November 2016 08:45:27PM 3 points [-]

None of this is a much my strongly held beliefs as my attempt to find flaw with the "nuclear blackmail" argument.

I don't understand. Could you correct the grammar mistakes or rephrase that?

The way I understand the argument, it isn't that the status quo in the level B game is perfect, or that Trump is a bad choice because his level B strategy takes too much risk. I understand the argument as saying: "Trump doesn't even realize that there is a level B game going on, and even when he finds out, he will be unfit to play in that game".

Comment author: WalterL 11 November 2016 02:49:10PM *  13 points [-]

"People who voted for Trump are unrealistically optimists,"

I don't think that's really a fair charge.

Like, reading through Yudkowsky's stuff, his LW writings and HPMOR, there is the persistent sense that he is 2 guys.

One guy is like "Here are all of these things you need to think about to make sure that you are effective at getting your values implemented". I love that guy. Read his stuff. Big fan.

Other guy is like "Here are my values!" That guy...eh, not a fan. Reading him you get the idea that the whole "I am a superhero and I am killing God" stuff is not sarcastic.

It is the second guy who writes his facebook posts.

So when he is accusing us of not paying sufficient attention to the consequences of a Trump victory, I'm more inclined to say that we paid attention, but we don't value those consequences the way he does.

To spell it out: I don't share (and I don't think my side shares), Yudkowsky's fetish for saving every life. When he talks about malaria nets as the most effective way to save lives, I am nodding, but I am nodding along to the idea of finding the most effective way to get what you want done, done. Not at the idea that I've got a duty to preserve every pulse.

That belief, the idea that any beating heart means we have a responsibility to keep it that way, leads to the insane situations where the bad guys can basically take themselves hostage. It is silly.

The whole "most variations from the equilibria are disasters", only really works if you share my guy's mania about valuing the other team's welfare. In terms of America's interests, Trump is a much safer choice than Hillary. Given our invincible military, the only danger to us is a nuclear war (meaning Russia). Hillary -> Putin is a chilly, fraught relationship, with potential flashpoints in Crimea / Syria. Trump -> Putin is less likely to involve conflict. Putin will thug around his neighbors, Trump will (probably not) build a wall between us and Mexico.

I didn't reply to Yudkowsky's facebook post. I don't know him, and it wouldn't be my place. But he is making a typical leftist mistake, which is dismissing the right as a defective left.

You've seen it everywhere. The left can't grok the idea that the right values different things, and just can't stop proving that the left's means lead to the left's ends way better than the right's means lead to the left's ends. "What's the Matter With Kansas", if you want a perfect example. The Home School wars if you want it rubbed in your face.

Yes, electing Hillary Clinton would have been a better way to ensure world prosperity than electing Donald Trump would. That is not what we are trying to do. We want to ensure American prosperity. We'd like to replace our interventionist foreign policy with an isolationist one.

LW isn't a place to argue about politics, so I'm not going to go into why we have the values that we have here. I just want to point out that Yudkowsky is making the factual mistake of modeling us as being shitty at achieving his goals, when in truth we are canny at achieving our own.

Comment author: Bobertron 11 November 2016 08:37:40PM 1 point [-]

As I understand it, you are criticizing Yudkowsky's ideology. But MrMind wants to hear our opinion on whether or not Scott and Yudkowsky's reasoning was sound, given their ideologies.
