
Comment author: Vaniver 22 December 2016 06:47:56PM 1 point [-]

the kid is just being stupid

"Just being stupid" and "just doing the wrong thing" are rarely helpful views, because those errors are produced by specific bugs. Those bugs have pointers to how to fix them, whereas "just being stupid" doesn't.

I think you should allow yourself in some situations to both believe "I should not smoke because it is bad for my health" and to continue smoking, because then you'll flinch less.

I think this misses the point, and damages your "should" center. You want to get into a state where if you think "I should X," then you do X. The set of beliefs that allows this is "Smoking is bad for my health," "On net I think smoking is worth it," and "I should do things that I think are on net worth doing." (You can see how updating the first one from "Smoking isn't that bad for my health" to its current state could flip the second belief, but that is determined by a trusted process instead of health getting an undeserved veto.)

Comment author: Bobertron 23 December 2016 06:01:29PM *  1 point [-]

"Just being stupid" and "just doing the wrong thing" are rarely helpful views

I agree. What I meant was something like: if the OP describes a skill, then the first problem (the kid who wants to be a writer) is so easy to solve that I feel I'm not learning much about how that skill works. The second problem (Carol) seems too hard for me. I doubt it's actually solvable using the described skill.

I think this misses the point, and damages your "should" center

Potentially, yes. I'm deliberately proposing something that might be a little dangerous. I feel my "should" center is already broken and/or doing me more harm than good.

"Smoking is bad for my health," "On net I think smoking is worth it," and "I should do things that I think are on net worth doing."

That's definitely not good enough for me. I've never smoked in my life. I don't think smoking is worth it. And if I were smoking, I don't think I would stop just because I think it's a net harm. And I do think that, because I don't want to think about the harm of smoking or the difficulty of quitting, I'd avoid learning about either of those two.

ADDED: The first meaning of "I should-1 do X" is "a rational agent would do X". The second meaning (idiosyncratic to me) of "I should-2 do X" is that "do X" is the advice I need to hear. Should-2 is based on my (mis-)understanding of Consequentialist-Recommendation Consequentialism. The problem with should-1 is that I interpret "I should-1 do X" to mean that I should feel guilty if I don't do X, which is definitely not helpful.

Comment author: Bobertron 20 December 2016 11:05:47PM 2 points [-]

Interesting article. Here is the problem I have: In the first example, "spelling ocean correctly" and "I'll be a successful writer" clearly have nothing to do with each other, so they shouldn't be in a bucket together and the kid is just being stupid. At least on first glance, that's totally different from Carol's situation. I'm tempted to say that "I should not try full force on the startup" and "there is a fatal flaw in the startup" should be in a bucket, because I believe "if there is a fatal flaw in the startup, I should not try it". As long as I believe that, how can I separate these two and not flinch?

Do you think one should allow oneself to be less consistent in order to become more accurate? Suppose you are a smoker and you don't want to look into the health risks of smoking, because you don't want to quit. I think you should allow yourself in some situations to both believe "I should not smoke because it is bad for my health" and to continue smoking, because then you'll flinch less. But I'm fuzzy on when. If you completely give up on having your actions be determined by your beliefs about what you should do, that seems obviously crazy, and there won't be any reason to look into the health risks of smoking anyway.

Maybe you should model yourself as two people. One person is rationality. It's responsible for determining what to believe and what to do. The other person is the one that queries rationality and acts on its recommendations. Since rationality is a consequentialist with integrity, it might not recommend quitting smoking, because then the other person would stop acting on its advice and stop giving it queries.

Comment author: Bobertron 18 December 2016 10:56:26AM 2 points [-]

Here are some things that I, as an infrequent reader, find annoying about the LW interface.

  • The split between main and discussion doesn't make any sense to me. I always browse /r/all. I think there shouldn't be such a distinction.
  • My feed is filled with notices about meetups in faraway places that are pretty much guaranteed to be irrelevant to me.
  • I find the most recent open thread pretty difficult to find in the sidebar. For a minute I thought it just wasn't there. I'd like it if the recent open thread and rationality quotes were stickied at the top of r/discussion.

Comment author: ThisSpaceAvailable 22 November 2016 06:08:30AM 0 points [-]

This may seem pedantic, but given that this post is on the importance of precision:

"Some likely died."

Should be

"Likely, some died".

Also, I think you should more clearly distinguish between the two means, such as saying "sample average" rather than "your average". Or use x bar and mu.

The whole concept of confidence intervals is rather problematic: on the one hand, it's one of the most common statistical measures presented to the public, but on the other hand, it's one of the most difficult concepts to understand.

What makes the concept of a CI so hard to explain is that pretty much every time the public is presented with it, they are presented with one particular confidence interval and then given the 95%, but the 95% is not a property of that particular confidence interval; it's a property of the process that generated it. The public understands "95% confidence interval" as an interval that has a 95% chance of containing the true mean, but actually a 95% confidence interval is an interval generated by a process, where the process has a 95% chance of generating an interval that contains the true mean.
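
Here is a minimal simulation sketch of that process-level reading (my own illustration, not from the post; it assumes normally distributed data and uses the normal approximation for the interval): repeat an experiment many times, build a "95%" interval from each sample, and count how often the intervals cover the true mean.

    # Sketch: the "95%" describes the interval-generating process,
    # not any single interval. Assumes normal data and z = 1.96.
    import random
    import statistics

    TRUE_MEAN = 10.0
    TRUE_SD = 2.0
    N = 30           # sample size per experiment
    TRIALS = 10_000  # number of repeated experiments
    Z = 1.96         # two-sided 95% critical value (normal approximation)

    covered = 0
    for _ in range(TRIALS):
        sample = [random.gauss(TRUE_MEAN, TRUE_SD) for _ in range(N)]
        x_bar = statistics.mean(sample)
        se = statistics.stdev(sample) / N ** 0.5
        lo, hi = x_bar - Z * se, x_bar + Z * se
        if lo <= TRUE_MEAN <= hi:
            covered += 1

    # Roughly 95% of the generated intervals contain the true mean;
    # any particular interval either contains it or it doesn't.
    print(f"coverage: {covered / TRIALS:.3f}")

The printed coverage should hover near 0.95: the 95% is a success rate of the procedure over many repetitions, not a probability attached to the one interval you happen to be looking at.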

Comment author: Bobertron 18 December 2016 10:41:17AM 0 points [-]

I don't get this (and I don't get Benquo's OP either. I don't really know any statistics. Only some basic probability theory.).

"the process has a 95% chance of generating a confidence interval that contains the true mean". I understand this to mean that if I run the process 100 times, 95 times the resulting CI contains the true mean. Therefore, if I look at random CI amongst those 100 there is a 95% chance that the CI contains the true mean.

Comment author: Bobertron 26 November 2016 08:59:27PM 7 points [-]

"Effective self-care" or "effective well-being".

Okay. The "effective"-part in Effective Altruism" refers to the tool (rationality). "Altruism" refers to the values. The cool thing about "Effective Altruism", compared to rationality (like in LW or CFAR), is that it's specific enough that it allows a community to work on relatively concrete problems. EA is mostly about the global poor, animal welfare, existential risk and a few others.

What I'd imagine "Effective self-care" would be about is such things as health, fitness, happiness, positive psychology, life-extension, etc. It wouldn't be about "everything that isn't covered by effective altruism", as that's too broad to be useful. Things like truth and beauty wouldn't be valued (aside from their instrumental value) by either altruism nor self-care.

"Effective Egoism" sounds like the opposite of Effective Altruism. Like they are enemies. "Effective self-care" sounds like it complements Effective Altruism. You could argue that effective altruists should be interested in spreading effective self-care both amongst others since altruism is about making others better off, and amongst themselves because if you take good care for yourself you are in a better position to help others, and if you are efficient about it you have more resources to help others.

On the negative side, both terms might sound too medical. And self-care might sound too limited compared to what you might have in mind. For example, one might be under the impression that "self-care" is concerned with bringing happiness levels up to "normal" or "average", instead of super duper high.

Comment author: moridinamael 11 November 2016 02:40:56PM *  12 points [-]

One flaw in this argument could be the assumption that "Clinton will maintain the Level B status quo" implicitly means "everything is fine now and therefore will continue to be fine for much the same reasons".

Eliezer views a Trump election as accepting a higher risk of annihilation for essentially no reason. What if it's not no reason? What if all the Level B players are just wrong, irrationally buying into a status quo where we need to be engaging in brinksmanship with Russia and China and fighting ground battles in the Middle East in order to defend ourselves? You have to admit it's possible, right? "Smart people can converge en masse on a stupid conclusion" is practically a tenet of our community.

Hillary's campaign strategy has already shown this in principle. The obviously intelligent party elite all converged on a losing strategy, and repeatedly doubled down on it. It is reminiscent of our foreign policy.

Saying "we haven't had a nuclear exchange with Russia yet, therefor our foreign policy and diplomatic strategy is good" is an obvious fallacy. Maybe we've just been lucky. Shit, maybe we've been unlucky and we're having this conversation due to anthropic miracles.

Countless past elections have seen candidates running on a "more humble foreign policy" and then changing their stance once in office. There's a semi-joke that the new president is taken into a smoke-filled room and told what is really going on. Maybe so, but in that case we're putting a lot of unexamined faith in the assessments of the people in that smoke-filled room.

None of this is so much my strongly held beliefs as my attempt to find flaw with the "nuclear blackmail" argument.

Comment author: Bobertron 11 November 2016 08:45:27PM 3 points [-]

None of this is a much my strongly held beliefs as my attempt to find flaw with the "nuclear blackmail" argument.

I don't understand. Could you correct the grammar mistakes or rephrase that?

The way I understand the argument, it isn't that the status quo in the level B game is perfect, or that Trump is a bad choice because his level B strategy takes on too much risk. I understand the argument as saying: "Trump doesn't even realize that there is a level B game going on, and even when he finds out, he will be unfit to play in that game".

Comment author: WalterL 11 November 2016 02:49:10PM *  13 points [-]

"People who voted for Trump are unrealistically optimists,"

I don't think that's really a fair charge.

Like, reading through Yudkowsky's stuff, his LW writings and HPMOR, there is the persistent sense that he is 2 guys.

One guy is like "Here are all of these things you need to think about to make sure that you are effective at getting your values implemented". I love that guy. Read his stuff. Big fan.

Other guy is like "Here are my values!" That guy...eh, not a fan. Reading him you get the idea that the whole "I am a superhero and I am killing God" stuff is not sarcastic.

It is the second guy who writes his facebook posts.

So when he is accusing us of not paying sufficient attention to the consequences of a Trump victory, I'm more inclined to say that we paid attention, but we don't value those consequences the way he does.

To spell it out: I don't share (and I don't think my side shares), Yudkowsky's fetish for saving every life. When he talks about malaria nets as the most effective way to save lives, I am nodding, but I am nodding along to the idea of finding the most effective way to get what you want done, done. Not at the idea that I've got a duty to preserve every pulse.

That belief, the idea that any beating heart means we have a responsibility to keep it that way, leads to the insane situations where the bad guys can basically take themselves hostage. It is silly.

The whole "most variations from the equilibria are disasters", only really works if you share my guy's mania about valuing the other team's welfare. In terms of America's interests, Trump is a much safer choice than Hillary. Given our invincible military, the only danger to us is a nuclear war (meaning Russia). Hillary -> Putin is a chilly, fraught relationship, with potential flashpoints in Crimea / Syria. Trump -> Putin is less likely to involve conflict. Putin will thug around his neighbors, Trump will (probably not) build a wall between us and Mexico.

I didn't reply to Yudkowsky's facebook post. I don't know him, and it wouldn't be my place. But he is making a typical leftist mistake, which is dismissing the right as a defective left.

You've seen it everywhere. The left can't grok the idea that the right values different things, and just can't stop proving that the left's means lead to the left's ends way better than the right's means lead to the left's ends. "What's the Matter With Kansas", if you want a perfect example. The Home School wars if you want it rubbed in your face.

Yes, electing Hillary Clinton would have been a better way to ensure world prosperity than electing Donald Trump would. That is not what we are trying to do. We want to ensure American prosperity. We'd like to replace our interventionist foreign policy with an isolationist one.

LW isn't a place to argue about politics, so I'm not going to go into why we have the values that we have here. I just want to point out that Yudkowsky is making the factual mistake of modeling us as being shitty at achieving his goals, when in truth we are canny at achieving our own.

Comment author: Bobertron 11 November 2016 08:37:40PM 1 point [-]

As I understand it, you are criticizing Yudkowsky's ideology. But MrMind wants to hear our opinion on whether or not Scott's and Yudkowsky's reasoning was sound, given their ideologies.

Comment author: Viliam 16 August 2016 08:39:40AM 0 points [-]

It also depends on how fast you read. And whether you only want information for yourself, or possibly to educate other people (because telling other people to read something in Kahneman will seem high-status, while telling them to read the Sequences may feel cultish to them).

By the way, have you read Stanovich before or after LW? Was that worth your time?

Comment author: Bobertron 16 August 2016 06:02:13PM 0 points [-]

I read those two books after LW. Assuming you have read the Sequences: it wasn't a total waste, but from memory I would recommend What Intelligence Tests Miss only if you have an interest specifically in psychology, IQ, or the heuristics-and-biases field. I would not recommend it simply because you have a casual interest in rationality and philosophy ("LW-type stuff") or if you've read other books about heuristics and biases. The Robot's Rebellion is a little more speculative and therefore more interesting; The Robot's Rebellion and What Intelligence Tests Miss also have a significant overlap in covered material.

Comment author: Bobertron 14 August 2016 06:30:37PM 0 points [-]

I haven't read "Good and Real" or "Thinking, Fast and Slow" yet, because I think that I won't learn something new as a long term Less Wrong reader. In the case of "Good and Real" part seems to be about physics and I don't think I have the physics background to profit from hat (I feel a refresher on high school physics would be more appropirate for me). In the case of "Thinking, Fast and Slow" I have already read books by Keith Stanovich (What Intelligence Tests Miss and The Robot's Rebellion) and some chapters of academic books edited by Kahneman.

Does anyone think those two books are still worth my time?

Comment author: Viliam 06 May 2016 12:15:53PM 5 points [-]

Probably saying the obvious, but anyway:

What is the advantage of nice communication in a rationalist forum? Isn't the content of the message the only important thing?

Imagine a situation where many people, even highly intelligent, make the same mistake talking about some topic, because... well, I guess I shouldn't have to explain on this website what "cognitive bias" means... everyone here has read the Sequences, right? ;)

But one person happens to be a domain expert in an unusual domain, or happened to talk with a domain expert, or happened to read a book by a domain expert... and something clicked and they realized the mistake.

I think that at this moment the communication style on the website has a big impact on whether the person will come and share their insight with the rest of the website, because it predicts the response they will get. On a forum with a "snarky" debating culture, the predictable reaction is everyone making fun and not even considering the issue seriously, because that's simply how the debate is done there. Of course, predicting this reaction, the person is more likely to just avoid the whole topic and discuss something else.

Of course -- yes, I can already predict the reactions this comment will inevitably get -- this has to be balanced against people saying stupid things, etc. Of course. I know already, okay? Thanks.

Comment author: Bobertron 07 May 2016 08:57:08PM 3 points [-]
