Here are a few examples of the sort of thing I have in mind; if you think they're badly unrepresentative, could you explain why?
Representative of the current culture war clashes? Sort of, I guess. But it's weird to me that you're reading e.g. Jordan Peterson asking people to please not attack him or compel him to say words he doesn't want to say as "evil conservatives attack trans people for no reason." Is your model of Peterson that he is only pretending to feel threatened, or that he just feels threatened by trans people in general? If so, that seems amazing...
I'm still not certain if I managed to get what I think is the issue across. To clarify, here's an example of the failure mode I often encounter:
Philosopher: Morality is subjective, because it depends on individual preferences.
Sophronius: Sure, but it's objective in the sense that those preferences are material facts of the world which can be analyzed objectively like any other part of the universe.
Philosopher: But that does not get us a universal system of morality, because preferences still differ.
Sophronius: But if someone in Cambodia gets acid thrown in...
That makes no sense to me.
I am making a distinction here between subjectivity as you define it, and subjectivity as it is commonly used, i.e. "just a matter of opinion". I think (though I could be mistaken) that the test described subjectivism as morality being just a matter of opinion, which I would not agree with: Morality depends on individual preferences, but only in the sense that healthcare depends on an individual's health. It does not preclude a science of morality.
...However, as far as I know, he never gave an actual argument for why such a t
Everything you say is correct, except that I'm not sure subjectivism is the right term to describe the meta-ethical philosophy Eliezer lays out. The Wikipedia definition, which is the one I've always heard used, says that subjectivism holds that morality is merely subjective opinion while realism states the opposite. If I take that literally, then moral realism would be the correct answer, as everything regarding morality concerns empirical fact (as the article you link to tried to explain).
All this is disregarding the empirical question of to what extent our ...
I had a similar issue: None of the options seems right to me. Subjectivism seems to imply that one person's judgment is no better than another's (which is false), but constructivism seems to imply that ethics are purely a matter of convenience (also false). I voted for the latter in the end, but am curious how others see this.
RE: The survey: I have taken it.
I assume the salary question was meant to be filled in as gross, not net. However, that could result in some big differences depending on the country's tax code...
Btw, I liked the professional format of the test itself. Looked very neat.
No, it's total accuracy on factual questions, not the bias part...
More importantly, don't be a jerk for no reason.
Cool! I've been desperate to see a rationality test that would make improvements in rationality measurable (I think the Less Wrong movement really, really needs this), so it's fantastic to see people working on this. I haven't checked the methodology yet, but the basic principle of measuring bias seems sound.
Hm, a fair point, I did not take the context into account.
My objection there is based on my belief that Less Wrong over-emphasizes cleverness, as opposed to what Yudkowsky calls 'winning'. I see too many people come up with clever ways to justify their existing beliefs, or being contrarian purely to sound clever, and I think it's terribly harmful.
My point was that you're not supposed to stop thinking after finding a plausible explanation, and most certainly not after having found the single most convenient possible explanation. "Worst of all possible worlds" and all that.
If you feel this doesn't apply to you, then please do not feel as though I'm addressing you specifically. It's supposed to be advice for Less Wrong as a whole.
That is a perfectly valid interpretation, but it doesn't explain why several people independently felt the need to explain this to me specifically, especially since it was worded in general terms and at the time I was just stating facts. This implied that there was something about me specifically that was bothering them.
Hence the lesson: Translate by finding out what made them give that advice in the first place, and only then rephrase it as good advice.
The point is that you don't ignore countless people saying the same thing just because you can think of a reason to dismiss them. Even if you are right and that's all it is, you'll still have sinned for not considering it.
Otherwise clever people would always find excuses to justify their existing beliefs, and then where would we be?
Oh, I've thought of another example:
Less Wrongers and other rationalists frequently get told that "rationality is nice but emotion is important too". Less Wrongers typically react to this by:
1) Mocking it as a fallacy because "rationality is defined as winning so it is not opposed to emotion", before eagerly taking it up as a strawman and posting the erroneous argument all over the place to show everyone how poor the enemies of reason are at reasoning.
Instead of:
2) Actually considering for five minutes whether or not there might be a c...
Hm, okay, let me try to make it more concrete.
My main example is one where people (more than once, in fact) told me that "I might have my own truth, but other people have their truth as well". This was incredibly easy to dismiss as people being unable to tell map from territory, but after the third time I started to wonder why people were telling me this. So I asked them what made them bring it up in the first place, and they replied that they felt uncomfortable when I was stating facts with the confidence they warranted. I was reminded of someth...
This surprised me as well when I first heard it, but it's apparently a really common problem for shy people. I tend to hang back and do my own thing, and apparently some people took that as meaning I felt I was too good to talk to them.
Now that I've trained myself to be more arrogant, it's become much less of an issue.
This is an extremely important lesson and I am grateful that you are trying to teach it.
In my experience it is almost impossible to actually succeed in teaching it, because you are fighting against human nature, but I appreciate it nonetheless.
(A few objections based on personal taste: Too flowery, does not get to the point fast enough, and the last paragraph teaches a false lesson about cleverness)
Btw, I am curious as to whether a post like this one could be put in Main. I put it in Discussion for now because I wrote it down hastily, but I think the lesson taught is important enough for Main. Could someone tell me what I would need to change to make this Main-worthy?
Hey, where are you guys? I am terrible at finding people and I see no number I can call.
My own personal experience in the Netherlands did not show one specific bias, but rather multiple groups within the same university with different convictions. There was a group of people/professors who insisted that people were rational and markets efficient, and then there was the 'people are crazy and the world is mad' crowd. I actually really liked that people held these discussions; it made things much more interesting and reduced bias overall, I think.
In terms of social issues, I never noticed much discussion about this. People were usually pretty open and t...
Ooh, debiasing techniques, sounds cool. My brother and I will be attending this one. Is there any pre-reading we should do?
Interesting. However, I still don't see why the filter would work similarly to a chemical reaction. Unless it's a general law of statistics that any event is always far more likely to have a single primary cause, it seems like a strange assumption since they are such dissimilar things.
Maybe not explicitly, but I keep seeing people refer to "the great filter" as if it was a single thing. But maybe you're right and I'm reading too much into this.
Can somebody explain to me why people generally assume that the great filter has a single cause? My gut says it's most likely a dozen one-in-a-million chances that all have to turn out just right for intelligent life to colonize the universe. So the total chance would be 1/1000000^12. Yet everyone talks of a single 'great filter' and I don't get why.
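To spell that arithmetic out (the one-in-a-million figure is just my gut number from above, not an estimate of anything):

$$\left(\frac{1}{10^{6}}\right)^{12} = 10^{-72}$$

A dozen merely-hard steps compound into the same overall improbability as one enormous filter, which is why I don't see why a single dominant cause should be the default assumption.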
1) As far as I understand it, atoms don't have a specific 'location'; there are only probabilities for where that atom might be at any given time. Given that, it is silly to speak of individual atoms. Even if I misunderstood that part, it is still the case that two entities which have no discernible difference in principle are the same, as a matter of simple logic.
2) Asking "which body do you wake up in" is a wrong question. It is meaningless because there is no testable difference depending on your answer, it is not falsifiable even in principle...
That's irrelevant when you're considering whether or not to use the horcrux at all and the alternative is being dead.
That's not an issue when it comes to acquiring immortality though. I mean, if you lost all knowledge of algebra, would you say that means you "died"?
Yes, that ideology is precisely what bothers me. Eliezer has a bone to pick with death so he declares death to be the ultimate enemy. Dementors now represent death instead of depression, patronus now uses life magic, and a spell that is based on hate is now based on emptiness. It's all twisted to make it fit the theme, and it feels forced. Especially when there's a riddle and the answer is 'Eliezer's password'.
Yeah, the concept of burden of proof can be a useful social convention, but that's all it is. The thing is that taking a sceptical position and waiting for someone to prove you wrong is the opposite of what a sceptic should do. If you ever see two 'sceptics' taking turns posting 'you have the burden of proof', 'no, you have the burden of proof!'... you'll see what I mean. Actual rationality isn't supposed to be easy.
The appropriateness of that probably depends on what kind of question it is...
I guess it is slightly more acceptable if it's a binary question. But even so it's terrible epistemology, since you are giving undue attention to a hypothesis just because it's the first one you came up with.
An equally awful method of doing things: Reading through someone's post and trying to find anything wrong with it. If you find anything --> post criticism, if you don't find anything --> accept conclusion. It's SOP even on Less Wrong, and it's not totally stupid but...
This needs to be on posters and T-shirts if it isn't already. Is it a well-known principle?
Sadly not. I keep meaning to post an article about this, but it's really hard to write an article about a complex subject in such a way that people really get it (especially if the reader has little patience/charity), so I keep putting it off until I have the time to make it perfect. I have some time this weekend though, so maybe...
I think the Fundamental Optimization Problem is the biggest problem humanity has right now and it explains everything that's wrong wit...
The primary thing I seem to do is to remind myself to care about the right things. I am irrelevant. My emotions are irrelevant. Truth is not influenced by what I want to be true. I am frequently amazed by the degree to which my emotions are influenced by subconscious beliefs. For example, I notice that the people who make me most angry when they're irrational are the ones I respect the most. People who get offended usually believe at some level that they are entitled to being offended. People who are bad at getting to the truth of a matter usually care mo...
Heheh, fair point. I guess a better way of putting it is that people fail to even bother to try this in the first place, or heck even acknowledge that this is important to begin with.
I cannot count the number of times I see someone try to answer a question by coming up with an explanation and then defending it, utterly failing to grasp that that's not how you answer a question. (In fact, I may be misremembering, but I think you do this a lot, Lumifer.)
Your attitude makes me happy, thank you. :)
It's the most basic rationalist skill there is, in my opinion, but for some reason it's not much talked about here. I call it "thinking like the universe" as opposed to "thinking like a human". It means you remove yourself from the picture, you forget all about your favourite views and you stop caring about the implications of your answer since those should not impact the truth of the matter, and describe the situation in purely factual terms. You don't follow any specific chain of logic toward...
Hm, I didn't think I was reacting that strongly... If I was, it's probably because I am frustrated in general by people's inability to just take a step back and look at an issue for what it actually is, instead of superimposing their own favourite views on top of reality. I remember I recently got frustrated by some of the most rational people I know claiming that sunburn was caused by literal heat from the sun instead of UV light. Once they had formed the hypothesis, they could only look at the issue through the 'eyes' of that view. And I see the same mistak...
What do you mean by the term "scientifically" in that sentence? If I put identity into Google Scholar, I'm fairly sure I will find a bunch of papers in respectable scientific journals that use the term.
I mean that if you have two carbon atoms floating around in the universe, and the next instant you swap their locations but keep everything else the same, there is no scientific way in which you could say that anything has changed.
Combine this with humans being just a collection of atoms, and you have no meaningful way to say that an identica...
It is a wrong question, because reality is never that simple and clear cut and no rationalist should expect it to be. And as with all wrong questions, the thing you should do to resolve the confusion is to take a step back and ask yourself what is actually happening in factual terms:
A more accurate way to describe emotion, much like personality, is in terms of multiple dimensions. One dimension is intensity of emotion. Another dimension is the type of experience it offers. Love and hate both have strong intensity and in that sense they are similar, but th...
Yeah, I was quite surprised to find that Quirrell believes in continuity of consciousness as being a fundamental problem, since it really is just an illusion to begin with (though you could argue the illusion itself is worthwhile). Surely you could just kill yourself the moment your horcrux does its job if you're worried about your other self living on? But maybe he doesn't know that scientifically there's no such thing as identity. Or maybe he's lying. Personally, I would be MUCH more concerned about the fact that the horcrux implants memories, but does no...
Again no, a computer being conscious does not necessitate it acting differently. You could add a 'consciousness routine' without any of the output changing, as far as I can tell. But if you were to ask the computer to act in some way that requires consciousness, say by improving its own code, then I imagine you could tell the difference.
Well no, of course merely being connected to a conscious system is not going to do anything, it's not magic. The conscious system would have to interact with the laptop in a way that's directly or indirectly related to its being conscious to get an observable difference.
For comparison, think of those scenarios where you're perfectly aware of what's going on, but you can't seem to control your body. In this case you are conscious, but your being conscious is not affecting your actions. Consciousness performs a meaningful role, but its mere existence isn't going to do anything.
Sorry if this still doesn't answer your question.
The role of system A is to modify system B. It's meta-level thinking.
An animal can think: "I will beat my rival and have sex with his mate, rawr!"
but it takes a more human mind to follow that up with: "No wait, I've got to handle this carefully. If I'm not strong enough to beat my rival, what will happen? I'd better go see if I can find an ally for this fight."
Of course, consciousness is not binary. It's the amount of meta-level thinking you can do, both in terms of CPU (amount of meta/second?) and in terms of abstraction level (it's meta ...
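As a toy sketch of that two-level structure (the function names and all the numbers, like the 0.8 'ally bonus' and the 0.5 threshold, are invented purely for illustration, not a claim about how real minds are wired): system B produces the impulse, and system A simulates the outcome and overrides the impulse when the prediction looks bad.

    # Toy sketch of the object-level/meta-level split described above.
    # System B proposes an impulse; system A simulates outcomes and
    # overrides the impulse when the prediction looks bad.

    def system_b_impulse() -> str:
        """Object level: the animal's raw impulse, no lookahead."""
        return "fight rival"

    def predict_success(action: str, me: float, rival: float) -> float:
        """System A's crude world-model: estimated chance the action works."""
        if action == "fight rival":
            return me / (me + rival)
        if action == "recruit ally, then fight":
            return (me + 0.8) / (me + 0.8 + rival)  # hypothetical ally bonus
        return 0.5

    def system_a_decision(me: float, rival: float) -> str:
        """Meta level: reflect on B's impulse and adjust if it looks bad."""
        impulse = system_b_impulse()
        if predict_success(impulse, me, rival) < 0.5:
            alternative = "recruit ally, then fight"
            if predict_success(alternative, me, rival) > predict_success(impulse, me, rival):
                return alternative
        return impulse

    print(system_a_decision(me=1.0, rival=1.5))
    # -> "recruit ally, then fight": the meta level overrode the raw impulse

The 'amount' of consciousness on this picture would then be how much modelling capacity the A-layer has, and how many levels of A-on-A you can stack.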
Based on this knowledge, can you make any meaningful predictions about the differences in behavior between the two systems?
I'm going to go ahead and say yes. Consciousness means a brain/cpu that is able to reflect on what it is doing, thereby allowing it to make adjustments to what it is doing, so it ends up acting differently. Of course with a computer it is possible to prevent the conscious part from interacting with the part that acts, but then you effectively end up with two separate systems. You might as well say that my being conscious of your actions does not affect your actions: True but irrelevant.
To clarify what I mean, take the following imaginary conversation:
Less Wronger: Hey! You seem smart. You should consider joining the Less Wrong community and learn to become more rational like us!
Normal: (using definition: Rationality means using cold logic and abstract reasoning to solve problems) I don't know, rationality seems overrated to me. I mean, all the people I know who are best at using cold logic and abstract reasoning to solve problems tend to be nerdy guys who never accomplish much in life.
Less Wronger: Actually, we've defined rationality to ...
I don't really understand your objection. When I say that everything is objectively true or false, I mean that any particular thing is either part of the universe/reality at a given point in time/space or it isn't. I don't see any other possibility*. Perhaps you are confusing the map and the territory? It is perfectly possible to answer questions with "I don't know" or "mu" but that doesn't mean that the universe itself is in principle unknowable. The fact that consciousness is not properly understood yet does not mean that it occupies ...
To be fair, Less Wrong's definition of rationality is specifically designed so that no reasonable person could ever disagree that more rationality is always good, thereby making the definition almost meaningless. And then all the connotations of the word still slip in, of course. It's a cheap tactic also used in the social justice movement, which Yvain recently criticized on his blog (motte and bailey, I think it was called).
In my experience, the problem is not with disagreeing, but rather that most people won't even consider the LW definition of rationality. They will use the nearest cliche instead, explain why the cliche is problematic, and that's the end of rationality discourse.
So, for me the main message of LW is this: A better definition of rationality is possible.
Your criticism of rationality for not guaranteeing correctness is unfair, because nothing can do that. Your criticism that rationality still requires action is equivalent to saying that a driver's license does not replace driving, though many Less Wrongers do overvalue rationality, so I guess I agree with that bit. You do, however, seem to make a big mistake in buying into the whole fact-value dichotomy, which is a fallacy, since at the fundamental level only objective reality exists. Everything is objectively true or false, and the fact that rationality canno...
Uhm, no. I mean, this is an exaggeration; there is no physical violence here. Worst case: poisoning of minds.
Yes of course it's an exaggeration, but it's the same meta-type of error: Seeing X used for evil and therefore declaring that all X is evil and anyone who says X isn't always evil is either evil or stupid themselves. It's the same mistake as the one Neoreactionaries always complain about: "Perceived differences based on race or sex have been used to excuse evil, therefore anyone who says there are differences between races or sexes i...
I don't think the first sense and the second are mutually exclusive.
A dog has half as much processing power as a human = a dog can think the same thoughts but only at half the speed.
A dog has half as much consciousness as a human = a dog is only half as aware as a human of what's going on.
And yes, I definitely think that this is how it works. For example, when I get up in the morning I am much less aware of what's going on than when I am fully awake. Sometimes I pause and go "wait what am I doing right now?" And of course there's those funny time...
Thank you for taking the time to write all that, it helps me see where you are coming from. You clearly have a large framework which you are basing your views on, but the thing you have to keep in mind is that I do, too. I have several partially-written posts about this which I hope to post on Less Wrong one day, but I'm very worried they'll be misconstrued because it's such a difficult subject. The last thing I want to do is defend the practices of oppressive regimes, believe me. I'm worried that people just read my posts thinking "oh he is defending cens...
Yeah, I'm willing to entertain the idea that there's a trade-off to be made between the short term and the long term, or something like that... but to be honest, I don't think the people who push these ideas are even thinking along those lines. I think a rational discussion would just be a net plus for everyone involved, but people are unwilling to do that, either because it's not in their interest to do so (lobby groups, media agencies) or because they don't understand why they should.
Don't get me wrong, I do think there are some left-wing groups who have had...