It's more a relative thing---"not quite as extremely biased towards academia as the average group of this level of intellectual orientation can be expected to be".
If so, then we're actually more rational, right? Because we're not biased against academia the way most people are, and we aren't biased toward academia the way most academics are.
Well, you want some negative selection: Choose dating partners from among the set who are unlikely to steal your money, assault you, or otherwise ruin your life.
This is especially true for women, for whom the risk of being raped is considerably higher and obviously worth selecting against.
I don't think it's quite true that "fail once, fail forever", but the general point is valid: our selection process is too much about weeding out rather than choosing the best. Also, academia doesn't seem to be very good at the kind of negative selection that would make sense, e.g. excluding people who are likely to commit fraud or who hold fundamentally anti-scientific values. (Otherwise, how can you explain how Duane Gish made it through Berkeley?)
I'm saying that the truth is not so horrifying that it will cause you to go into depression.
This is what I hope and desire to be true. But what I'm asking for here is evidence that this is the case, to counteract the evidence from depressive realism that would seem to say that no, actually the world is so terrible that depression is the only rational response.
What reason do we have to think that the world doesn't suck?
The mutilation of male genitals in question is ridiculous in itself but hardly equivalent to the kind of mutilation done to female genitals.
Granted. Female mutilation is often far more severe.
But I think it's interesting that when the American Academy of Pediatrics proposed allowing a form of female circumcision that really was just circumcision, i.e. cutting of the clitoral hood, people were still outraged. And so we see that even when the situation is made symmetrical, there persists what we can only call female privilege in this circumstance.
I know with 99% probability that the item on top of your computer monitor is not Jupiter or the Statue of Liberty. And a major piece of information that leads me to that conclusion is... you guessed it, the circumference of Jupiter and the height of the Statue of Liberty. So there you go, this "irrelevant" information actually does narrow my probability estimates just a little bit.
Not a lot. But we didn't say it was good evidence, just that it was, in fact, evidence.
(Pedantic: You could have a model of Jupiter or Liberty on top of your computer, but that's not the same thing as having the actual thing.)
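For concreteness, here is a toy Bayesian update in Python; the hypotheses, priors, and likelihoods are all made up for illustration, but they show how the size facts enter the calculation:

```python
# Toy Bayesian update: how knowing the sizes of Jupiter and the Statue of
# Liberty makes "it's on top of a monitor" crushing evidence against both.
# All priors and likelihoods here are invented for illustration.

priors = {
    "coffee mug": 0.50,
    "webcam": 0.499,
    "Jupiter": 0.0005,
    "Statue of Liberty": 0.0005,
}

# P(object fits on top of a monitor | hypothesis). Jupiter's ~439,000 km
# circumference and the statue's ~93 m height are exactly what push the
# last two likelihoods to (effectively) zero.
likelihoods = {
    "coffee mug": 0.9,
    "webcam": 0.9,
    "Jupiter": 1e-30,
    "Statue of Liberty": 1e-12,
}

unnormalized = {h: priors[h] * likelihoods[h] for h in priors}
total = sum(unnormalized.values())
for h, u in unnormalized.items():
    print(f"P({h} | fits on a monitor) = {u / total:.3g}")
```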
It's a subtle matter, but... you clearly don't really mean determinism here, because you've said a hundred times before how the universe is ultimately deterministic even at the quantum level.
Maybe predictability is the word we want. Or maybe it's something else, like fairness or "moral non-neutrality"; it doesn't seem fair that Hitler could have that large an impact by himself, even though there's nothing remotely non-deterministic about that assertion.
Macroscopic determinism, i.e., the belief that an outcome was not sensitive to small thermal (never mind quantum) fluctuations. If I'm hungry and somebody offers me a tasty hamburger, it's macroscopically determined that I'll say yes in almost all Everett branches; if Zimbabwe starts printing more money, it's macroscopically determined that their inflation rates will rise further.
Not critical to your point, but I can't stand this habitual exchange:
But there's a lot of small habits in everything we do, that we don't really notice. Necessary habits. When someone asks you how you are, the habitual answer is 'Fine, thank you,' or something similar. It's what people expect. The entire greeting ritual is habitualness, to the point that if you disrupt the greeting, it throws people off.
When people ask how I am, I want to give them information. I want to tell them, "Actually I've had a bad headache all day; and I'm underemployed r...
It's about ten times easier to become vegetarian than it is to reduce your consumption of meat. Becoming vegetarian means refusing meat every time no matter what, and you can pretty much manage that from day one. Reducing your meat consumption means somehow judging how much meat you're eating and coming up with an idea of how low you want it to go, and pretty soon you're just fudging all the figures and eating as much as you were anyway.
Likewise, I tried for a long time to "reduce my soda drinking" and could not achieve this. Now I have switched to "sucralose-based sodas only" and I've been able to do it remarkably well.
For the most part I agree with this post, but I am not convinced that this is true:
Anyone can develop any “character trait.” The requirement is simply enough years of thoughts becoming words becoming actions becoming habit.
A lot of measured traits are extremely stable over the lifespan (IQ, conscientiousness, etc.) and seem very difficult, if not impossible, to train. So the idea that someone can simply get smarter through practice does not appear to be supported by the evidence.
The answer should be obvious: Expected utility.
In practical terms, this means weighting according to severity, because the quantity of people affected is very close to equal. So we focus on the worst forms of oppression first, and then work our way up towards milder forms.
This in turn means that we should be focusing on genital mutilation and voting rights. (And things like Elevatorgate, for those of you who follow the atheist blogosphere, should obviously be on a far back burner.)
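As a rough formalization of that weighting (the notation is mine, not from the original comment): if issue $i$ affects $n_i$ people with severity $s_i$, then

\[
\text{priority}(i) \;\propto\; n_i \, s_i ,
\]

and when the $n_i$ are all roughly equal, the ranking reduces to ranking by severity $s_i$ alone.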
Because female circumcision is rare and illegal in developed nations?
There's obviously a female advantage here, at least in the Western world. Mutilating female genitals draws the appropriate outrage, while mutilating male genitals is ignored or even condoned. (I've seen people accused of "anti-Semitism" just for pointing out that male circumcision has virtually no actual medical benefits.)
Upvoted because it's a well-sourced and coherent argument.
Which is not to say that I agree with the conclusion. Okay, so there may be this effect of women being identified with their bodies.
But here's the thing: WE ARE OUR BODIES. We should be identifying with them, and if we're not, that's actually a very serious defect in our thinking (probably the defect that leads to such nonsense as dualism and religion).
Now, I guess you could say that maybe women are taught to care too much about physical appearance or something like that (they should care about othe...
I'm not sure I would call it "oppression", but it's clearly true that heterosexual men are by far the MOST controlled by restrictive gender norms. It is straight men who are most intensely shoehorned into this concept of "masculinity" that may or may not suit them, and their status is severely downgraded if they deviate in any way.
If you doubt this, imagine a straight man wearing eye shadow and a mini-skirt. Compare to a straight woman wearing a tuxedo.
See the difference?
Everyone getting an A isn't reinforcement. Reinforcement has to be conditional on something. If you give everyone who writes a long paper an A, that's reinforcing writing long papers. If you give everyone who writes a well-written paper an A, that's reinforcing well-written papers (and probably more what you want to do).
But if you just give everyone an A, that may be positive, but it simply isn't reinforcement.
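A toy simulation may make the contingency point concrete. This is entirely my construction (made-up behaviors, a standard reward-minus-baseline update), not anything from the original comment:

```python
import random

def learn(gets_A, trials=5000, lr=0.1):
    """Crude learner: preferences move by (reward - baseline), so a reward
    that arrives regardless of behavior ends up teaching nothing."""
    pref = {"long paper": 0.0, "well-written paper": 0.0}
    baseline = 0.5  # running estimate of reward irrespective of behavior
    for _ in range(trials):
        behavior = random.choice(list(pref))
        reward = 1.0 if gets_A(behavior) else 0.0
        pref[behavior] += lr * (reward - baseline)
        baseline += lr * (reward - baseline)
    return pref

# A contingent on quality: a clear preference split emerges.
print(learn(lambda paper: paper == "well-written paper"))
# Everyone gets an A: the baseline rises to meet the reward and no preference forms.
print(learn(lambda paper: True))
```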
Well, maybe. Depending on how much it costs to do that experimental treatment, compared to other things we could do with those resources.
(Actually, a large part of the problem of rising medical costs in the developed world right now is precisely the heavier use of extraordinary experimental treatments.)
I don't think you're just rationalizing. I think this is, in fact, exactly what the philosophy of mathematics needs.
If we really understood the foundations of mathematics, Gödel's theorems would seem to us, if not irrelevant, then perfectly reasonable, perhaps even trivially obvious (or at least trivially obvious in hindsight, which is of course not the same thing), the way a lot of very well-understood results eventually do.
In my mind I've gotten fairly close to this point, so maybe this will help: By being inside the system, you're always going to get...
It looks like there's still some serious controversy on the issue.
But suppose for a moment that it's true: Suppose that depressed people really do have more accurate beliefs, and that this really is related to their depression.
What does this mean for rationality? Is it more rational to be delusional and happy or to be accurate and sad? Or can we show that even in light of this data there is a third option, to actually be accurate and happy?
Depressive realism is an incredibly, well, depressing fact about the world.
Is there something we're missing about it though? Is the world actually such that understanding it better makes you sad, or is it rather that for whatever reason sad people happen to be better at understanding the world?
And if it is in fact that understanding makes you sad... what does this mean for rationality?
Actually, realizing this parallel causes me to be even more dubious of the efficient market hypothesis.
As compelling as it may sound when you say it, this line of reasoning plainly doesn't work for scientific truth... so why should it work in finance?
Behavioral finance gives us plenty of reasons to think that whole markets can remain radically inefficient for long periods of time. What this means for the individual investor, I'm not sure. But what it means for the efficient market hypothesis? Death.
I think majoritarianism is ultimately opposed to tsuyoku naritai, because it prevents us from ever advancing beyond what the majority believes. We rely on others to do our knowledge innovation for us, waiting for the whole of society to, say, accept evolution or understand calculus before we will do so ourselves.
Actually I think I tend to do the opposite. I undervalue subgoals and then become unmotivated when I can't reach the ultimate goal directly.
E.g., I'm trying to get published. Book written: check. Query letters written: check. Queries sent to agents: check. All of these are valuable subgoals. But they don't feel like progress, because I can't check off the box that says "book published".
I largely agree with you, but I think that there's something we as rationalists can realize about these disagreements, which helps us avoid many of the most mind-killing pitfalls.
You want to be right, not be perceived as right. What really matters, when the policies are made and people live and die, is who was actually right, not who people think is right. So the pressure to be right can be a good thing, if you leverage it properly into actually trying to get the truth. If you use it to dismiss and suppress everything that suggests you are wrong, that's no...
There is another way: Look really really hard with tools that would be expected to work. If you find something? Yay, your hypothesis is confirmed. If you don't? You'd better start doubting your hypothesis.
You already do this in many situations I'm sure. If someone said, "You have a million dollars!" and you looked in your pockets, your bank accounts, your stock accounts (if any), etc. and didn't find a million dollars in them (or collectively in all of them put together), you would be pretty well convinced that the million dollars you allegedly have doesn't exist. (In fact, depending on your current economic status you might have a very low prior in the first place; I know I would.)
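In Bayesian terms, a minimal sketch with made-up numbers (the point being that "tools that would be expected to work" is what makes the failed search informative):

```python
# How a thorough search that comes up empty becomes strong evidence of absence.
# All numbers are invented for illustration.
prior = 0.01              # P(I have a million dollars) before looking
p_find_if_there = 0.99    # a search of pockets, bank, and brokerage accounts
                          # would almost certainly find it if it existed
p_find_if_absent = 0.0    # you can't find money that isn't there

# Bayes' rule on the observation "searched and found nothing":
posterior = prior * (1 - p_find_if_there) / (
    prior * (1 - p_find_if_there) + (1 - prior) * (1 - p_find_if_absent)
)
print(posterior)  # ~0.0001: the failed search cut the probability ~100-fold
```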
This question is broader than just AI. Economic growth is closely tied to technological advancement, and technological advancement in general carries great risks and great benefits.
Consider nuclear weapons, for instance: Was humanity ready for them? They are now something that could destroy us at any time. But on the other hand, they might be the solution to an oncoming asteroid, the kind of threat that could have destroyed us at any point in the last several million years.
Likewise, nanotechnology could create a grey goo event that kills us all; or it could lead to a world without poverty, without...