Comments

I understand your post to be about difficult truths related to politics, but you don't actually give examples (except "what Trump has said is 'emotionally true'"), and the same idea applies to simplifications of complex material in science, etc. I just happened upon an example from a site teaching drawing in perspective (source):

Now you may have heard of terms such as one point, two point or three point perspective. These are all simplifications. Since you can have an infinite number of different sets of parallel lines, there are technically an infinite number of potential vanishing points. The reason we can simplify this whole idea to three, two, or a single vanishing point is because of boxes.

[...] Because of this, people like to teach those who are new to perspective that the world can be summarized with a maximum of 3 vanishing points.

Honestly, this confused me for years

The author was lied to about the possible number of vanishing points in a drawing, but instead of realizing the falsehood, he was confused.
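For what it's worth, the fuller picture is easy to state: every 3D direction that isn't parallel to the image plane has exactly one vanishing point, namely the perspective projection of the direction itself, so there are as many vanishing points as there are directions, and a box only ever contributes three of them. Here is a minimal sketch of that claim, assuming a pinhole camera at the origin looking down the +z axis with focal length f (the setup and names are mine, not taken from the linked site):

```python
def vanishing_point(direction, f=1.0):
    """Vanishing point shared by all 3D lines parallel to `direction`,
    for a pinhole camera at the origin looking down the +z axis."""
    dx, dy, dz = direction
    if abs(dz) < 1e-12:
        return None  # parallel to the image plane: the lines never converge
    # Points far along such a line project arbitrarily close to this point.
    return (f * dx / dz, f * dy / dz)

# The three edge directions of a (tilted) box give the familiar
# "three point perspective"...
for d in [(1, 2, 2), (2, 1, -2), (2, -2, 1)]:
    print(d, vanishing_point(d))

# ...but any other direction has its own vanishing point as well.
print((3, 2, 1), vanishing_point((3, 2, 1)))
```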

Suppose X is the case. When you say "X", your interlocutor will believe Y, which is wrong. So even though "X" is the truth, you should not say it.

Your new idea, as I understand it: saying "Z" will lead your interlocutor to believe X. So even though saying "Z" is technically lying, you should say "Z", because the listener will come to have a true belief.

(I'm sorry if I misunderstood you or if you think I'm being uncharitable. But even if I misunderstood, I think others might misunderstand in a similar way, so I feel justified in responding to the above concept.)

First, I dislike that approach because it makes things harder for people who could understand, if only others would stop lying to them, or who would prefer to be told the truth along the lines of "study macroeconomics for two years and you will understand".

Second, that seems to me to be a form of the-ends-justify-the-means reasoning that I'm not 100% comfortable with, even though I think of myself as a consequentialist. I'm open to the idea that it's sometimes okay, and even proper, to say something that's technically untrue, if it results in your audience coming to have a truer world-view. But if this "sometimes" isn't explained or restricted in any way, that's just throwing out the idea that you shouldn't lie.

Some ideas on that:

  • Make sure you don't harm your audience by underestimating them. If you simplify or modify what you say, to the point that it can't be considered true any more, because you think your audience is limited in their capacity to understand the correct argument, make sure you don't make the truth harder to grasp for those who can understand it. That includes the people you underestimated, people you didn't intend to address but who heard you all the same, and people who really won't understand now but will later. (Children grow up, and people who don't care enough to follow complex arguments now might come to care.)
  • It's not enough that your audience comes to believe something true. It needs to be justified true belief. Or alternatively, your audience should not only believe X but know it. (For a discussion of what is meant by "know", see most of the field of epistemology, I guess.) For example, if you tell people that voting for candidate X will give them cancer and they believe you, they might come to the correct belief that voting for candidate X is bad for them. But saying that is still unethical.
  • I guess if you could give people justified true belief, it wouldn't be lying at all, and the whole idea is that you need to lie because some people are incapable of justified true belief on matter X. But then the belief should at least be "justified in some sense". In particular, your argument shouldn't work just as well if X were false.

When playing around in the sandbox, Simpleton always beat Copycat (using the default values but a population of only Simpleton and Copycat). I don't understand why.
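I don't know the sandbox's internals, but here is a minimal sketch of what I think is going on, assuming Copycat is tit-for-tat, Simpleton is win-stay/lose-shift, cooperating costs 1 coin and gives the other player 3, and the defaults include ten rounds per match plus a 5% chance of a move being flipped by mistake (the strategy names match the game, but the payoffs and parameters are my guesses):

```python
import random

def copycat(my_moves, their_moves):
    # Tit-for-tat: cooperate first, then copy the opponent's last move.
    return their_moves[-1] if their_moves else True

def simpleton(my_moves, their_moves):
    # Win-stay / lose-shift: cooperate first; repeat your own last move if
    # the opponent cooperated, switch it if they cheated.
    if not my_moves:
        return True
    return my_moves[-1] if their_moves[-1] else not my_moves[-1]

def match(strat_a, strat_b, rounds=10, mistake=0.05):
    """One match: cooperating costs 1 coin and gives the other player 3."""
    a_hist, b_hist, a_score, b_score = [], [], 0, 0
    for _ in range(rounds):
        a = strat_a(a_hist, b_hist)
        b = strat_b(b_hist, a_hist)
        # A mistake flips the intended move ("miscommunication" in the game).
        if random.random() < mistake:
            a = not a
        if random.random() < mistake:
            b = not b
        a_hist.append(a)
        b_hist.append(b)
        a_score += 3 * b - a   # True/False act as 1/0 here
        b_score += 3 * a - b
    return a_score, b_score

# Round-robin tournaments in a population of 10 Simpletons and 10 Copycats.
random.seed(0)
population = [simpleton] * 10 + [copycat] * 10
totals = {simpleton: 0, copycat: 0}
tournaments = 100
for _ in range(tournaments):
    for i in range(len(population)):
        for j in range(i + 1, len(population)):
            sa, sb = match(population[i], population[j])
            totals[population[i]] += sa
            totals[population[j]] += sb

print("avg Simpleton score per tournament:", totals[simpleton] / (10 * tournaments))
print("avg Copycat score per tournament:  ", totals[copycat] / (10 * tournaments))
```

If those guesses about the rules are right, the points leak mostly in the Copycat-vs-Copycat matches: a single mistaken cheat sends two Copycats into an endless retaliation echo, while two Simpletons recover within a couple of rounds, which would explain Simpleton coming out ahead in a population of only those two.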

"Just being stupid" and "just doing the wrong thing" are rarely helpful views

I agree. What I meant was something like: if the OP describes a skill, then the first problem (the kid who wants to be a writer) is so easy to solve that I feel I'm not learning much about how that skill works. The second problem (Carol) seems too hard for me. I doubt it's actually solvable using the described skill.

I think this misses the point, and damages your "should" center

Potentially, yes. I'm deliberately proposing something that might be a little dangerous. But I feel my "should" center is already broken and/or doing more harm to me than I would do to it.

"Smoking is bad for my health," "On net I think smoking is worth it," and "I should do things that I think are on net worth doing."

That's definitely not good enough for me. I never smoked in my life, and I don't think smoking is worth it. But if I were smoking, I don't think I would stop just because I think it's a net harm. And I do think that, because I wouldn't want to think about the harm of smoking or the difficulty of quitting, I'd avoid learning about either of those two.

ADDED: The first meaning of "I should-1 do X" is "a rational agent would do X". The second meaning (idiosyncratic to me) of "I should-2 do X" is that "do X" is the advice I need to hear. should-2 is based on my (mis)understanding of Consequentialist-Recommendation Consequentialism. The problem with should-1 is that I interpret "I should-1 do X" to mean that I should feel guilty if I don't do X, which is definitely not helpful.

Interesting article. Here is the problem I have: In the first example, "spelling ocean correctly" and "I'll be a successful writer" clearly have nothing to do with each other, so they shouldn't be in a bucket together and the kid is just being stupid. At least on first glance, that's totally different from Carol's situation. I'm tempted to say that "I should not try full force on the startup" and "there is a fatal flaw in the startup" should be in a bucket, because I believe "if there is a fatal flaw in the startup, I should not try it". As long as I believe that, how can I separate these two and not flinch?

Do you think one should allow oneself to be less consistent in order to become more accurate? Suppose you are a smoker and you don't want to look into the health risks of smoking, because you don't want to quit. I think you should allow yourself, in some situations, to both believe "I should not smoke because it is bad for my health" and to continue smoking, because then you'll flinch less. But I'm fuzzy on when. If you completely give up on having your actions be determined by your beliefs about what you should do, that seems obviously crazy, and there won't be any reason to look into the health risks of smoking anyway.

Maybe you should model yourself as two people. One person is rationality. It's responsible for determining what to believe and what to do. The other person is the one who queries rationality and acts on its recommendations. Since rationality is a consequentialist with integrity, it might not recommend quitting smoking, because then the other person would stop acting on its advice and stop giving it queries.

Here are some things that I, as an infrequent reader, find annoying about the LW interface.

  • The split between main and discussion doesn't make any sense to me. I always browse /r/all. I think there shouldn't be such a distinction.
  • My feed is filled with notices about meetups in faraway places that are pretty much guaranteed to be irrelevant to me.
  • I find the most recent open thread pretty difficult to find in the sidebar. For a minute I thought it just wasn't there. I'd like it if the recent open thread and rationality quotes were stickied at the top of /r/discussion.

I don't get this (and I don't get Benquo's OP either; I don't really know any statistics, only some basic probability theory).

"the process has a 95% chance of generating a confidence interval that contains the true mean". I understand this to mean that if I run the process 100 times, 95 times the resulting CI contains the true mean. Therefore, if I look at random CI amongst those 100 there is a 95% chance that the CI contains the true mean.

"Effective self-care" or "effective well-being".

Okay. The "effective"-part in Effective Altruism" refers to the tool (rationality). "Altruism" refers to the values. The cool thing about "Effective Altruism", compared to rationality (like in LW or CFAR), is that it's specific enough that it allows a community to work on relatively concrete problems. EA is mostly about the global poor, animal welfare, existential risk and a few others.

What I'd imagine "effective self-care" to be about is things such as health, fitness, happiness, positive psychology, life extension, etc. It wouldn't be about "everything that isn't covered by effective altruism", as that's too broad to be useful. Things like truth and beauty wouldn't be valued (aside from their instrumental value) by either altruism or self-care.

"Effective Egoism" sounds like the opposite of Effective Altruism. Like they are enemies. "Effective self-care" sounds like it complements Effective Altruism. You could argue that effective altruists should be interested in spreading effective self-care both amongst others since altruism is about making others better off, and amongst themselves because if you take good care for yourself you are in a better position to help others, and if you are efficient about it you have more resources to help others.

On the negative side, both terms might sound too medical. And "self-care" might sound too limited compared to what you have in mind. For example, one might be under the impression that "self-care" is concerned with bringing happiness levels up to "normal" or "average", instead of super duper high.

None of this is so much my strongly held belief as my attempt to find a flaw in the "nuclear blackmail" argument.

I don't understand. Could you correct the grammar mistakes or rephrase that?

The way I understand the argument, it isn't that the status quo in the level B game is perfect, or that Trump is a bad choice because his level B strategy takes too much risk. I understand the argument as saying: "Trump doesn't even realize that there is a level B game going on, and even when he finds out, he will be unfit to play in that game."

As I understand it, you are criticizing Yudkowsky's ideology. But MrMind wants to hear our opinion on whether or not Scott's and Yudkowsky's reasoning was sound, given their ideologies.
