All of Sophronius's Comments + Replies

Yeah, I'm willing to entertain the idea that there's a trade-off to be made between the short term and the long term or something like that... but to be honest, I don't think the people who push these ideas are even thinking along those lines. I think a rational discussion would just be a net plus for everyone involved, but people are unwilling to do that, either because it's not in their interest to do so (lobby groups, media agencies) or because they don't understand why they should.

Don't get me wrong, I do think there are some left-wing groups who have had... (read more)

Here are a few examples of the sort of thing I have in mind; if you think they're badly unrepresentative, could you explain why?

Representative of the current culture war clashes? Sort of, I guess. But it's weird to me that you're reading e.g. Jordan Peterson asking people to please not attack him or make him say words that he doesn't want to as "evil conservatives attack trans people for no reason." Is your model of Peterson that he is only pretending to feel threatened, or that he just feels threatened by trans people in general? If so, that seems amazing... (read more)

5Viliam
So, conservatives should not be judged by their politicians, but progressives should be judged by their Twitter users? In my opinion, Twitter users are much worse than politicians... :D Have you ever met someone who used "terfs" and "cis scum" in real life? I can't even imagine that.
3gjm
One other remark that's separate from the object-level arguments elsewhere in this thread: I see that your comments here have been quite heavily downvoted, and would like to remark that I have not downvoted any of them.
5gjm
I wish you wouldn't attribute things to me in freakin' quotation marks that are not things I have actually said, or thought, or meant. That would certainly be weird, if there were any truth to it. But literally nothing in those quotation marks is my opinion. I don't think Jordan Peterson is evil, I don't think that refusing to refer to trans people using the pronouns they find appropriate is "attacking" them (though I do think it's bloody rude), and I don't think he was doing it "for no reason" (though I don't find his reasons at all convincing). More to the point, regardless of what I think, I didn't say any of that or anything like any of that. Here, by way of reminder, is everything I said about Jordan Peterson. That's it. And it's a matter of readily verifiable fact that Jordan Peterson did express opposition to C-16. So how do you get from there to "evil conservatives attack trans people for no reason"? I guess it's because I said that "progressives are trying to make life easier for trans people, and conservatives are opposing them at every turn" and I cited Jordan Peterson's opposition to C-16 as one example. But, again, I never said "evil", or "attack trans people", or "for no reason", and if you take those bits out you get something like "conservatives oppose progressives' attempts to make life easier for trans people" ... which, so far as I can see, is simply a plain statement of fact. Again, if you disagree, show me how the examples I gave aren't representative. I think you may have slightly misunderstood the Red Skull thing (but it's also very possible that I have), but in any case I don't know what it has to do with anything here. If Marvel Comics did something stupid or evil, is that supposed to have any implications at all for whether social conservatism is wise or foolish? Might well do (though TERF seems to me to be pretty much flatly descriptive; if it's a slur, it's only because many people don't like the actual thing that it refers to), but
4Viliam
Possible explanation: People who are happy with status quo are more likely to end up defending it; people who are unhappy with status quo are more likely to end up trying to change it. If your argument is that politics causes people to be happy/unhappy, that would require evidence beyond correlation, which itself is easier to explain by causation in the opposite direction. (I think it is possible that you are right, but the correlation itself it not good evidence in your favor.)
9gjm
"Are conservatives really going around making trans people's lives harder?" Yup. Or, more precisely, progressives are going around trying to make trans people's lives easier, and conservatives are opposing them at every turn. Whether that's because they prefer trans people's lives to be harder (perhaps because they think this will make there be fewer trans people and trans people's lives are bad whatever they do), or because they oppose change as such, or just to stick it to the libs, I don't know. I'm honestly not sure how this could be in doubt, which suggests that at least one of us is badly mistaken about something. Here are a few examples of the sort of thing I have in mind; if you think they're badly unrepresentative, could you explain why? * "Bathroom bills", explicitly either allowing or forbidding trans people to use public toilet facilities matching the gender they now identify as, appear to break down cleanly as follows: ones proposed and sponsored by conservatives forbid, ones proposed and sponsored by progressives permit. Here are the first few specific cases I found in the Wikipedia article on the topic. Ones in parentheses are ones less directly relevant, for one reason or another. There are many more and all the ones I looked at seem to fit the pattern. Many are closely modelled on something put out by the "Alliance Defending Freedom", which is of course a conservative group. * (Anchorage, Alaska: couldn't quickly find information about party affiliation of the various people involved, but of course Alaska Family Action, which proposed a forbidding BB, is a conservative group.) * (Alabama: this bill wasn't exactly either of the "allowing" or the "forbidding" type, but does seem like maaaaybe it was designed to stoke fears about Scary Trans Impostors. It was proposed by a Republican.) * Arizona: forbidding BB, proposed by a Republican but later withdrawn. * (California: school-related allowing BB, proposed by Tom Ammiano; a quick look d
9Dale Udall
Much of the activism I hear about on the news falls into both the legibility trap and the movement trap. While allies are trying to simplify the issues to build steam for building an institution to work on creating an expert class who can manage the organizations that will obtain the workers who will provide integrated solutions to the impacted people, the impacted people are living the problem and finding their own grassroots solutions. For Italians in early New York City, the grassroots solution to racism was the Mafia and political machines, and Italian-Americans are now considered white by pretty much everyone in America.

I'm still not certain if I managed to get what I think is the issue across. To clarify, here's an example of the failure mode I often encounter:

Philosopher: Morality is subjective, because it depends on individual preferences.
Sophronius: Sure, but it's objective in the sense that those preferences are material facts of the world which can be analyzed objectively like any other part of the universe.
Philosopher: But that does not get us a universal system of morality, because preferences still differ.
Sophronius: But if someone in Cambodia gets acid thrown in... (read more)

4DanArmak
I understand now what you're referring to. I believe this is formally called normative moral relativism, which holds that: That is a minority opinion, though, and all of (non-normative) moral relativism shouldn't be implicated. Here's what I would reply in the place of your philosopher: Sophronius: But if someone in cambodia gets acid thrown in her face by her husband, that's wrong, right? Philosopher: It's considered wrong by many people, like the two of us. And it's considered right by some other people (or they wouldn't regularly do it in some countries). So while we should act to stop it, it's incorrect to call it simply wrong (because nothing is). But because most people don't make such precise distinctions of speech, they might misunderstand us to mean that "it's not really wrong", a political/social disagreement; and since we don't want that, we should probably use other, more technical terms instead of abusing the bare word "wrong". ---------------------------------------- Recognizing that there is no objective moral good is instrumentally important. It's akin to internalizing the orthogonality thesis (and rejecting, as I do, the premise of CEV). It's good to remember that people, in general, don't share most of your values and morals, and that a big reason much of the world does share them is because they were imposed on it by forceful colonialism. Which does not imply we should abandon these values ourselves. ---------------------------------------- Here's my attempt to steelman the normative moral relativist position: We should recognize our values genuinely differ from those of many other people. From a historical (and potential future) perspective, our values - like all values - are in a minority. All of our own greatest moral values - equality, liberty, fraternity - come with a historical story of overthrowing different past values, of which we are proud. Our "western" values today are widespread across the world in large degree because they w

That makes no sense to me.

I am making a distinction here between subjectivity as you define it, and subjectivity as it is commonly used, i.e. "just a matter of opinion". I think (though I could be mistaken) that the test described subjectivism as morality just being a matter of opinion, which I would not agree with: Morality depends on individual preferences, but only in the sense that healthcare depends on an individual's health. It does not preclude a science of morality.

However, as far as I know, he never gave an actual argument for why such a t

... (read more)
2DanArmak
I don't think these two are really different. An "opinion", a "belief", and a "preference" are fundamentally similar; the word used indicates how attached the person is to that state, and how malleable it appears to be. There exist different underlying mechanisms, but these words don't clearly differentiate between them, they don't cut reality at its joints. How is that different from beliefs or normative statements about the world, which depend on what opinions an individual holds? "Holding an opinion" seems to cash out in either believing something, or having a preference for something, or advocating some action, or making a statement of group allegiance ("my sports team is the best, but that's just my opinion"). Maybe you use the phrase "just an opinion" to signal something people don't actually care about, or don't really believe in, just say but never act on, change far too easily, etc.. That's true of a lot of opinions that people hold. But it's also true of a lot of morals. You can always make a science of other people's subjective attributes. You can make a science of people's "just an" opinions, and it's been done - about as well as making a science of morality.

Everything you say is correct, except that I'm not sure Subjectivism is the right term to describe the meta-ethical philosophy Eliezer lays out. The wikipedia definition, which is the one I've always heard used, says that subjectivism holds that morality is merely subjective opinion, while realism states the opposite. If I take that literally, then moral realism would hold the correct answer, as everything regarding morality concerns empirical fact (as the article you link to tried to explain).

All this is disregarding the empirical question of to what extent our ... (read more)

2DanArmak
That makes no sense to me. How is it different from saying nothing at all is subjective? This seems to just ignore the definition of "subjective", which is "an attribute of a person, such that you don't know that attribute's value without knowing who the person is". Or, more simply, a "subjective X" is a function from a person to X. I believe that's where the whole CEV story comes into play. That is, Eliezer believes or believed that while today the shared preferences of all humans form a tiny, mostly useless set - we can't even agree on which of us should be killed! - something useful and coherent could be "extrapolated" from them. However, as far as I know, he never gave an actual argument for why such a thing could be extrapolated, or why all humans could agree on an extrapolation procedure, and I don't believe it myself.

I had a similar issue: None of the options seems right to me. Subjectivism seems to imply that one person's judgment is no better than another's (which is false), but constructivism seems to imply that ethics are purely a matter of convenience (also false). I voted the latter in the end, but am curious how others see this.

4DanArmak
Subjectivism implies that morals are two-place concepts, just like preferences. Murder isn't moral or immoral, it can only be Sophronius!moral or Sophronius!immoral. This means Sophronius is probably best equipped to judge what is Sophronius!moral, so other people's judgements clearly aren't as good in that sense. But if you and I disagree about what's moral, we may be just confused about words because you're thinking of Sophronius!moral and I'm thinking of DanArmak!moral and these are similar but different things. Is that what you meant?

RE: The survey: I have taken it.

I assume the salary question was meant to be filled in as gross, not net. However, that could result in some big differences depending on the country's tax code...

Btw, I liked the professional format of the test itself. Looked very neat.

No, it's total accuracy on factual questions, not the bias part...

More importantly, don't be a jerk for no reason.

1VoiceOfRa
See my comment here on just how factual the "factual" questions are.

Cool! I've been desperate to see a rationality test and so make improvements in rationality measurable (I think the Less Wrong movement really really needs this) so it's fantastic to see people working on this. I haven't checked the methodology yet but the basic principle of measuring bias seems sound.

-1VoiceOfRa
So, your biases are similar to Stefan's, congratulations.

Hm, a fair point, I did not take the context into account.

My objection there is based on my belief that Less Wrong over-emphasizes cleverness, as opposed to what Yudkowsky calls 'winning'. I see too many people come up with clever ways to justify their existing beliefs, or being contrarian purely to sound clever, and I think it's terribly harmful.

My point was that you're not supposed to stop thinking after finding a plausible explanation, and most certainly not after having found the singularly most convenient possible explanation. "Worst of all possible worlds" and all that.

If you feel this doesn't apply to you, then please do not feel as though I'm addressing you specifically. It's supposed to be advice for Less Wrong as a whole.

That is a perfectly valid interpretation, but it doesn't explain why several people independently felt the need to explain this to me specifically, especially since it was worded in general terms and at the time I was just stating facts. This implied that there was something about me specifically that was bothering them.

Hence the lesson: Translate by finding out what made them give that advice in the first place, and only then rephrase it as good advice.

The point is that you don't ignore countless people saying the same thing just because you can think of a reason to dismiss them. Even if you are right and that's all it is, you'll still have sinned for not considering it.

Otherwise clever people would always find excuses to justify their existing beliefs, and then where would we be?

3Jiro
Doesn't the very fact that I have a reason imply that I must have considered it? And at any rate, how is "They got their ideas about rationality from popular fiction" a failure to consider? Things are not always said by countless people because they have merit. And in this case, there's a very well known, fairly obvious, reason why countless people would say such a thing. You may as well ask why countless people think that crashed cars explode.

Oh, I've thought of another example:

Less Wrongers and other rationalists frequently get told that "rationality is nice but emotion is important too". Less Wrongers typically react to this by:

1) Mocking it as a fallacy because "rationality is defined as winning so it is not opposed to emotion", before eagerly taking it up as a strawman and posting the erroneous argument all over the place to show everyone how poor the enemies of reason are at reasoning.

Instead of:

2) Actually considering for five minutes whether or not there might be a c... (read more)

1Jiro
"Observations" are not always caused by people observing things. The most well-known example of rationality associated with emotional control is Spock from Star Trek. And Spock is fictional. And fiction affects how people think about reality.

Hm, okay, let me try to make it more concrete.

My main example is one where people (more than once, in fact) told me that "I might have my own truth, but other people have their truth as well". This was incredibly easy to dismiss as people being unable to tell map from territory, but after the third time I started to wonder why people were telling me this. So I asked them what made them bring it up in the first place, and they replied that they felt uncomfortable when I was stating facts with the confidence they warranted. I was reminded of someth... (read more)

-2Lumifer
LOL :-)
2[anonymous]
I translate "I might have my own truth, but other people have their truth as well" as "You might have your perspective, but other people have their own perspectives. No one has the complete truth (territory), so don't state your mere perspective as if it's the complete truth." Another translation: "You may be certain you're right, but the people you're arguing with are just as certain that they are right."

This surprised me as well when I first heard it, but it's apparently a really common problem for shy people. I tend to shy back and do my own thing, and apparently some people took that as meaning I felt like I was too good to talk to them.

Now that I've trained myself to be more arrogant, it's become much less of an issue.

5Sabiola
Yes, way back when I was in school, people interpreted my shyness as arrogance too. I was very surprised when I learned that, as I'd always thought people were reading me like an open book.

This is an extremely important lesson and I am grateful that you are trying to teach it.

In my experience it is almost impossible to actually succeed in teaching it, because you are fighting against human nature, but I appreciate it nonetheless.

(A few objections based on personal taste: Too flowery, does not get to the point fast enough, last paragraph teaches false lesson on cleverness)

1pure-awesome
What exactly do you believe the false lesson to be and why do you think it's false? I interpreted it as meaning one should take into account your prior for whether someone with a gambling machine is telling the truth about how the machine works.

Btw, I am curious as to whether a post like this one could be put in Main. I put it in discussion right now because I wrote it down hastily, but I think the lesson taught is important enough for main. Could someone tell me what I would need to change to make this main-worthy?

8[anonymous]
Ugh, I am new here, but probably sprinkling it with links? Main posts tend to be heavy with linkage, preferably to studies or other articles, on- or offsite, that reference studies. I have the impression that the community prefers heavily linked articles, either because the links point to things that can be interpreted as evidence, such as studies, but perhaps even more because they show the whole thing is not just something happening in the author's head, but is connected with reality. Even if the links simply reference other people's opinions, at least they demonstrate it is something happening in a lot of heads, and as such at least part of the reality of human psychology... they also demonstrate the diligence of trying to research the topic.

Hey, where are you guys? I am terrible at finding people and I see no number I can call.

My own personal experience in the Netherlands did not show one specific bias, but rather multiple groups within the same university with different convictions. There was a group of people/professors who insisted that people were rational and markets efficient, and then there was the 'people are crazy and the world is mad' crowd. I actually really liked that people held these discussions, made it much more interesting and reduced bias overall I think.

In terms of social issues, I never noticed much discussion about this. People were usually pretty open and t... (read more)

0HalMorris
Thanks. I appreciate the additional point of view and observations.

Ooh, debiasing techniques, sounds cool. My brother and I will be attending this one. Is there any pre-reading we should do?

Interesting. However, I still don't see why the filter would work similarly to a chemical reaction. Unless it's a general law of statistics that any event is always far more likely to have a single primary cause, it seems like a strange assumption since they are such dissimilar things.

0Galap
Sorry for the delayed response; I don't come on here particularly often. The assumptions I'm making are that evolution is a stochastic process in which elements are in fluxional states and there is some measure of 'difficulty' in transitioning from one state to another, an energetic or entropic barrier of sorts; that to go from A to B (for example, from an organism with asexual reproduction to an organism with sexual reproduction) some confluence of factors must occur; and that occurrence has a certain likelihood that's dependent on the conditions of the whole system (ecosystem). I think that this, combined with the large numbers of physical elements interacting (organisms), is enough to say that evolution is governed by something pretty similar to statistical thermodynamics. So, from the Arrhenius equation, k = A·e^(−E_a/RT), where k is the rate of reaction, A is the pre-exponential factor (roughly, how often the components that must come together actually do so), E_a is the activation energy, or energy barrier, and RT is the gas constant multiplied by temperature. The equation is mostly applied to chemistry, but it has also found uses in other sectors, like predicting the geographic progression of the blooming of Sakura trees (http://en.wikipedia.org/wiki/Cherry_blossom_front). It really applies to any system that has certain kinetic properties. So, ignoring all the chemistry-specific factors (like temperature), the relation in its most general form becomes k = A·e^(−B·E). This says essentially that the rate is proportional to a negative exponential of the barrier to the transformation, and small changes in the value of the barrier correspond to large changes in the value of the rate. Thus, it's unlikely that two rates are similar: I don't see why two unrelated things would be likely to have a similar barrier, and given this, they're even less likely to have a similar rate.
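The exponential sensitivity described above is easy to sketch numerically. In the sketch below, A and B are arbitrary placeholder constants (not fitted to anything); the point is only the shape of the dependence: a modest difference in barrier height produces a large ratio in rates.

```python
import math

# Illustrative sketch of k = A * exp(-B * E): small changes in the barrier E
# produce order-of-magnitude changes in the rate k. A and B are arbitrary
# placeholder constants, chosen only to show the shape of the dependence.
A, B = 1.0, 1.0

def rate(E):
    return A * math.exp(-B * E)

for E in (10, 12, 14, 16):
    print(f"barrier {E:>2} -> rate {rate(E):.3e}")

# Two steps whose barriers differ by only 20% already differ in rate by
# a factor of e^2, i.e. about 7.4x:
print(rate(10) / rate(12))
```

This is why, under these assumptions, two independent steps landing at nearly the same rate would be a coincidence.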

Maybe not explicitly, but I keep seeing people refer to "the great filter" as if it was a single thing. But maybe you're right and I'm reading too much into this.

4Luke_A_Somers
Normal filters have multiple layers too - for example first you can have the screen that keeps the large particles out. Then you have the activated charcoal, and then the paper to keep the charcoal out, and for high-end filters you finish off with a porous membrane. And yet, it's all one filter. SO... we're speculating here on where the bulk of the work is being done. The strength could be evenly distributed from beginning to end, but it could also be lumpy.

Can somebody explain to me why people generally assume that the great filter has a single cause? My gut says it's most likely a dozen one-in-a-million chances that all have to turn out just right for intelligent life to colonize the universe, so the total chance would be (1/1,000,000)^12. Yet everyone talks of a single 'great filter' and I don't get why.
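The arithmetic in the comment above is easy to check; the step count and the one-in-a-million per-step odds are the commenter's hypothetical numbers, not estimates of anything real.

```python
# A quick arithmetic sketch of the "dozen small filters" intuition:
# twelve independent one-in-a-million steps multiply to the same overall
# improbability as one enormous single filter. The numbers are the
# hypothetical ones from the comment, not real estimates.
p_single_step = 1 / 1_000_000
n_steps = 12

p_total = p_single_step ** n_steps
print(p_total)  # ~1e-72: from the outside, indistinguishable from one huge filter
```

Observationally, a 10^-72 chance built from twelve mild filters and a 10^-72 chance built from one brutal filter look identical, which may be part of why the singular phrasing persists.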

1amcknight
I don't have an answer but here's a guess: For any given pre-civilizational state, I imagine there are many filters. If we model these filters as having a kill rate then my (unreliable stats) intuition tells me that a prior on the kill rate distribution should be log-normal. I think this suggests that most of the killing happens on the left-most outlier but someone better at stats should check my assumptions.
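amcknight's log-normal intuition can be poked at with a quick simulation. Everything below is an illustrative assumption: severity stands in for something like the negative log of a step's survival probability, and the number of steps and the spread parameter are made up.

```python
import random

random.seed(0)

# Rough simulation of the intuition above: if each candidate filter's
# "kill severity" is log-normally distributed, how much of the total
# filtering does the single harshest step account for? All parameters
# here are illustrative, not estimates of real filters.
def harshest_share(n_steps=12, sigma=2.0):
    severities = [random.lognormvariate(0.0, sigma) for _ in range(n_steps)]
    return max(severities) / sum(severities)

shares = [harshest_share() for _ in range(10_000)]
avg = sum(shares) / len(shares)
print(f"harshest of 12 steps supplies ~{avg:.0%} of total filtering on average")
```

With a heavy-tailed severity distribution like this, a single step tends to dominate the total, which is at least consistent with the "one big filter" framing; whether real filters are log-normal is of course the open question.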
1Galap
I'd liken it to a chemical reaction. Many of them are multistep, and as a general statement chemical processes take place over an extremely wide range of orders of magnitude of rate (ranging from less than a billionth of a second to years). So, in an overall reaction, there are usually several steps, and the slowest one is usually orders of magnitude slower than any of the others. That one's called the rate-determining step, for obvious reasons: it's so much slower than the others that speeding up or slowing down the others, even by a couple of orders of magnitude, is negligible to the overall rate of reaction. It's pretty rare that more than one of them happen to be at nearly the same rate, since the range of orders of magnitude is so large. I think that the evolution of intelligence is a stochastic process that's pretty similar to molecular kinetics in a lot of ways, particularly in that all of the above applies to it as well; thus, it's more likely that there's one rate-determining step, one Great Filter, for the same reasons. However (and I made another post about this here too), I do think that the filters are interdependent (there are multiple pathways and it's not a linear process, but progress along a multidimensional surface). That's not really all that different from molecular kinetics either, though.
1ChristianKl
Pareto principle. There's also the Fermi formula.
4[anonymous]
I don't think anyone really assumes that.

1) As far as I understand it, atoms don't have a specific 'location'; there are only probabilities for where that atom might be at any given time. Given that, it is silly to speak of individual atoms. Even if I misunderstood that part, it is still the case that two entities which have no discernible difference in principle are the same, as a matter of simple logic.

2) Asking "which body do you wake up in" is a wrong question. It is meaningless because there is no testable difference depending on your answer, it is not falsifiable even in principle... (read more)

That's irrelevant when you're considering whether or not to use the horcrux at all and the alternative is being dead.

0drethelin
If you're on your deathbed, sure. But Horcruxing is not costless. If you have a significant projected lifespan left, and you want ACTUAL immortality, your odds are probably better NOT doing a risky dark ritual that also encourages people to come and kill you.

That's not an issue when it comes to acquiring immortality though. I mean, if you lost all knowledge of algebra, would you say that means you "died"?

3drethelin
Did you not read that section at all? If you lose all knowledge of powerful spellcasting, a) you lose your ability to continue to be immortal after this iteration, b) you lose your ability to defend yourself against enemies who haven't lost their ability to cast interdicted spells. The second one is really important when the process for immortality is one that inherently makes a lot of enemies! He specifically mentioned that dark wizards that tried to use that technique to come back were easily defeated afterward.

Yes, that ideology is precisely what bothers me. Eliezer has a bone to pick with death so he declares death to be the ultimate enemy. Dementors now represent death instead of depression, patronus now uses life magic, and a spell that is based on hate is now based on emptiness. It's all twisted to make it fit the theme, and it feels forced. Especially when there's a riddle and the answer is 'Eliezer's password'.

-1hairyfigment
I don't know if MoR influenced the movies, but Deathly Hallows 1 or 2 showed an image of Death looking like the movie's image of Dementors. It seems to me like a natural inference.

Yeah, the concept of burden of proof can be a useful social convention, but that's all it is. The thing is that taking a sceptical position and waiting for someone to prove you wrong is the opposite of what a sceptic should do. If you ever see two 'sceptics' taking turns posting 'you have the burden of proof', 'no, you have the burden of proof!'... you'll see what I mean. Actual rationality isn't supposed to be easy.

The appropriateness of that probably depends on what kind of question it is...

I guess it is slightly more acceptable if it's a binary question. But even so it's terrible epistemology, since you are giving undue attention to a hypothesis just because it's the first one you came up with.

An equally awful method of doing things: Reading through someone's post and trying to find anything wrong with it. If you find anything --> post criticism, if you don't find anything --> accept conclusion. It's SOP even on Less Wrong, and it's not totally stupid but... (read more)

0gjm
In principle I agree with you. In practice I think "X has the burden of proof" generally means something similar to "The position X is advancing has a rather low prior probability, so substantial evidence would be needed to make it credible, and in particular if X wants us to believe it then s/he would be well advised to offer substantial evidence." Which, yes, involves confusion between an idea and the people who hold it, and might encourage an argument-as-conflict view of things that can work out really badly -- but it's still a convenient short phrase, reasonably well understood by many people, that (fuzzily) denotes something it's often useful to say. So, yeah, issuing such challenges in such terms is a sign of imperfect enlightenment and certainly doesn't make the one who does it a rationalist in any useful sense. But I don't see it as such a bad sign as I think you do.
0Lumifer
No, that's not what I had in mind. For example, there are questions which explicitly ask for an explanation and answering them with an explanation is fine. Or, say, there are questions which are wrong (as a question) so you answer them with an explanation of why they don't make sense. I don't think you can. Or, rather, I think you can see things from multiple specific point of views, but you cannot see them without any point of view. Yes, I understand you talk about looking at things "from the perspective of the universe" but this expression is meaningless to me. That may or may not be a reasonable position to take. Let me illustrate how it can be reasonable: people often talk in shortcuts. The sentence quoted could be a shortcut expression for "I have evaluated the evidence for and against X and have come to the conclusion Y. You are claiming that Y is wrong, but your claim by itself is not evidence. Please provide me with actual evidence and then I will update my beliefs". But humans do and I'm talking to humans, not to the universe. A more general point -- you said in another post This is true when you are evaluating the physical reality. But it is NOT true when you are evaluating the social reality -- it IS influenced by emotions and what people want to be true. I don't quite understand you here.

This needs to be on posters and T-shirts if it isn't already. Is it a well-known principle?

Sadly not. I keep meaning to post an article about this, but it's really hard to write an article about a complex subject in such a way that people really get it (especially if the reader has little patience/charity), so I keep putting it off until I have the time to make it perfect. I have some time this weekend though, so maybe...

I think the Fundamental Optimization Problem is the biggest problem humanity has right now and it explains everything that's wrong wit... (read more)

The primary thing I seem to do is to remind myself to care about the right things. I am irrelevant. My emotions are irrelevant. Truth is not influenced by what I want to be true. I am frequently amazed by the degree to which my emotions are influenced by subconscious beliefs. For example, I notice that the people who make me most angry when they're irrational are the ones I respect the most. People who get offended usually believe at some level that they are entitled to being offended. People who are bad at getting to the truth of a matter usually care mo... (read more)

1Velorien
This needs to be on posters and T-shirts if it isn't already. Is it a well-known principle? Thank you for the explanation. This overall idea (of the relationship between belief and reality, and the fact that it only goes one way) is in itself not new to me, but your perspective on it is, and I hope it will help me develop my ability to think objectively. Also thanks for the music video. Shame I can't upvote you multiple times.

Heheh, fair point. I guess a better way of putting it is that people fail to even bother to try this in the first place, or, heck, even acknowledge that this is important to begin with.

I cannot count the number of times I see someone try to answer a question by coming up with an explanation and then defending it, utterly failing to grasp that that's not how you answer a question. (In fact, I may be misremembering, but I think you do this a lot, Lumifer.)

0Lumifer
The appropriateness of that probably depends on what kind of question it is... I think my hackles got raised by the claim that your perception is "what it actually is" -- and that's a remarkably strong claim. It probably works better phrased like something along the lines of "trying to take your ego and preconceived notions out of the picture". Any links to egregious examples? :-)

Your attitude makes me happy, thank you. :)

It's the most basic rationalist skill there is, in my opinion, but for some reason it's not much talked about here. I call it "thinking like the universe" as opposed to "thinking like a human". It means you remove yourself from the picture, you forget all about your favourite views and you stop caring about the implications of your answer since those should not impact the truth of the matter, and describe the situation in purely factual terms. You don't follow any specific chain of logic toward... (read more)

1Velorien
I think I understand now, thank you. Do you follow any specific practices in order to internalise this approach, or do you simply endeavour to apply it whenever you remember?

Hm, I didn't think I was reacting that strongly... If I was, it's probably because I am frustrated in general by people's inability to just take a step back and look at an issue for what it actually is, instead of superimposing their own favourite views on top of reality. I remember I recently got frustrated by some of the most rational people I know claiming that sunburn was caused by literal heat from the sun instead of UV light. Once they formed the hypothesis, they could only look at the issue through the 'eyes' of that view. And I see the same mistak... (read more)

1Lumifer
I think that people who fully possess such a skill are usually described as "have achieved enlightenment" and, um, are rare :-) The skill doesn't look "elementary" to me.
2Velorien
Could you describe this skill in more detail please? If it is one I do not possess, I would like to learn.

What do you mean by the term "scientifically" in that sentence? If I put identity into Google Scholar, I'm fairly sure I will find a bunch of papers in respectable scientific journals that use the term.

I mean that if you have two carbon atoms floating around in the universe, and the next instant you swap their locations but keep everything else the same, there is no scientific way in which you could say that anything has changed.

Combine this with humans being just a collection of atoms, and you have no meaningful way to say that an identica... (read more)
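(Not part of the original comment, but the atom-swap argument can be made concrete with a toy model. This is a sketch, not physics: an "atom" is reduced to its observable properties, and a physical state to an unordered collection of placements. All names are invented for illustration.)

```python
# Toy model: an "atom" reduced to its observable properties, with no
# hidden "identity" field. Swapping two identical atoms then yields a
# state that is literally equal to the original one.
from dataclasses import dataclass

@dataclass(frozen=True)
class Atom:
    element: str  # the only observable property in this toy model

def state(*placements):
    # A physical state as an unordered set of (position, atom) pairs.
    return frozenset(placements)

carbon_a = Atom("C")
carbon_b = Atom("C")

before = state(((0, 0), carbon_a), ((1, 0), carbon_b))
after = state(((0, 0), carbon_b), ((1, 0), carbon_a))  # locations swapped

print(before == after)  # True: no measurement distinguishes the two states
```

Of course, as the reply below this comment notes, indistinguishability in the model does not by itself settle whether the atoms "really" have separate identities; the sketch only shows what "no scientific way to tell" means.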

0[anonymous]
Yes, you are missing a few things. 1) Saying you can't tell after the fact whether something occurred is not the same as saying it never occurred. The fact that we can't experimentally determine whether two carbon atoms have distinct identity is not, repeat not, the same as saying that they don't have separate identity. Maybe they do. You just can't tell. 2) That has nothing to do with continuity of consciousness. Assume the existence of a perfect matter replicator. What do you expect to happen when you make a copy of yourself? Do you expect to suddenly find yourself inside of the copy? Let's say that regardless of what you expect at that point, you end up in your same body as before, the old one, not the new one. What do you expect to experience then, if you killed yourself? This has nothing, nothing to do with statements about quantum identity and equivalence of configuration spaces. It is about separating the concept of a representation of me from an instance of that representation which is me. I expect to experience only what the instance of the representation which is currently typing these words will experience as it evolves into the future. If an exact copy of me was made at any time, that'd be pretty awesome. It'd be like having a truly identical twin. But it wouldn't be me, and if this instance died, I wouldn't expect to live on experiencing what the copy of me experiences. 3) Sleeping is a total non sequitur. Do you expect that your brain is 100% shut off and disarticulated into individual neurons when you are in a sleeping state? No? That's right -- just because you don't have memories doesn't mean you didn't exist while asleep. You just didn't form memories at the time.
0hairyfigment
Or not. Memories are genuinely lost, if someone makes a Horcrux and then dies some years later. Moreover, according to the Defense Professor in snake form, the maker's personality could also change due to influence from the (two) victim(s). The result need not act like the maker at time of casting would act if placed in a new environment. See also major's point.
4Nornagest
I parsed it as follows: the Killing Curse isn't powered by death in the same way that the Patronus draws power from life, but it does require the caster not to value the life of an opponent. Hatred enables this, but it's limited: it has to be intense, sustained hatred, and probably only hatred of a certain kind, since it takes some doing for neurologically typical humans to hate someone enough to literally want them dead. Indifference to life works just as well and lacks the limitations, but that's probably an option generally available only to, shall we say, a certain unusual personality type. Ideology might interact with this in interesting ways, though. I don't know whether Death Eaters would count as being motivated by hate or indifference by the standards of the spell; my model of J.K. Rowling says "hate", while my model of Eliezer says "indifference".
2major
"Don't think about it either way" does not necessarily mean indifference; it means reverting to default behaviour. Humans are (mostly) pro-social animals with empathy and would not crush another human who just happens to be in their way -- in that they differ from a falling rock. In fact, that's the point of hate: it overrides the built-in safeguards to allow for harmful action. According to this view, to genuinely not give a damn about someone's life is a step further. Obviously. The thing about built-in default behaviour given by evolution is that it will not trigger in some cases. Rationality and the English Language or HPMoR Ch.48 or HPMoR Ch.87 My point with that is, it's completely in line with what Eliezer usually talks about, so you know it's a perspective he holds, not just a rationalization. For completeness' sake, it still feels off. Oh, wait, I know! Maybe Harry is being Stupid here. Or Eliezer is being a Bad Writer. Again.

It is a wrong question, because reality is never that simple and clear cut and no rationalist should expect it to be. And as with all wrong questions, the thing you should do to resolve the confusion is to take a step back and ask yourself what is actually happening in factual terms:

A more accurate way to describe emotion, much like personality, is in terms of multiple dimensions. One dimension is intensity of emotion. Another dimension is the type of experience it offers. Love and hate both have strong intensity and in that sense they are similar, but th... (read more)

3Velorien
I'm surprised how strongly you're reacting to this, given that you seem to be aware that the whole "emotions having opposites" system is really just a word game anyway. Why is it important that you prioritise the "effect on preferences" axis and Eliezer prioritises the "intensity" axis, except insofar as it is a bit embarrassing to see an intelligent person presenting one of these as wisdom? Perhaps Eliezer simply considers apathy to be a more dangerous affliction than hatred, and is thus trying to shift his readers' priorities accordingly. Insofar as there are far more people in the world moved to inaction through apathy than there are people moved to wrong action through hatred, perhaps there's something to that.

Yeah, I was quite surprised to find that Quirrell believes in continuity of consciousness as being a fundamental problem, since it really is just an illusion to begin with (though you could argue the illusion itself is worthwhile). Surely you could just kill yourself the moment your horcrux does its job if you're worried about your other self living on? But maybe he doesn't know that scientifically there's no such thing as identity. Or maybe he's lying. Personally, I would be MUCH more concerned about the fact that the horcrux implants memories, but does no... (read more)

0DanArmak
What would be the point? The goal of the horcrux isn't to transfer into another body you like better than your current one, it's to be a backup against accidentally dying.
0[anonymous]
It's not at all obvious that continuity of consciousness is an illusion. If you have a real proof of that I'd love to hear it.
0drethelin
The continuity of consciousness is one thing, but the horcrux doesn't even give continuity of KNOWLEDGE, thanks to Merlin.
4Viliam_Bur
Perhaps the word "opposite" is not the best one, but I think it's about this: in some metric, loving people and hating people is closer to each other than either of them is to the paperclip maximizer's attitude towards humans. In HPMOR universe, a magical paperclip maximizer could shoot AK like a machine gun. Instead of replacing one emotion with another emotion, it's replacing one emotion with an absence of an emotion. Instinctively, people sometimes prefer to be hated than to be ignored. For example, children trying to draw attention to themselves by behaving badly. There is some "recognition" in hate, that indifference lacks.
4ChristianKl
What do you mean by the term "scientifically" in that sentence? If I put identity into Google Scholar, I'm fairly sure I will find a bunch of papers in respectable scientific journals that use the term. "Obviously" is a fairly strong word. It makes some sense to label the negation of any emotion an emotionless state. An unfriendly AI doesn't hate humans but is indifferent.
0Velorien
Insofar as it is at all meaningful to consider feelings to have opposites, what would you present as the correct alternative?

Again no, a computer being conscious does not necessitate it acting differently. You could add a 'consciousness routine' without any of the output changing, as far as I can tell. But if you were to ask the computer to act in some way that requires consciousness, say by improving its own code, then I imagine you could tell the difference.

4Bugmaster
Ok, so your prediction is that the Dell cluster will be able to improve its own code, whereas the Lenovo will not. But I'm not sure if that's true. After all, I am conscious, and yet if you asked me to improve my own code, I couldn't do it.

Well no, of course merely being connected to a conscious system is not going to do anything, it's not magic. The conscious system would have to interact with the laptop in a way that's directly or indirectly related to its being conscious to get an observable difference.

For comparison, think of those scenarios where you're perfectly aware of what's going on, but you can't seem to control your body. In this case you are conscious, but your being conscious is not affecting your actions. Consciousness performs a meaningful role, but its mere existence isn't going to do anything.

Sorry if this still doesn't answer your question.
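(A toy illustration of the point above, not from the original comment: an "awareness" routine bolted onto a system changes nothing observable unless it is wired into the output path. Everything here is invented for the sake of the sketch.)

```python
# An inert observer: it watches every input go by, but never feeds
# anything back into the computation, so the outputs are identical.

def base_system(x):
    return x * 2

log = []

def with_observer(x):
    log.append(f"processing {x}")  # "aware" of the input, but disconnected
    return x * 2                   # output path untouched

outputs_plain = [base_system(i) for i in range(5)]
outputs_observed = [with_observer(i) for i in range(5)]

print(outputs_plain == outputs_observed)  # True: observation alone is invisible
print(len(log))                           # 5: the observer did run, it just never mattered
```

This is the sense in which merely being connected to a conscious system makes no observable difference; only interaction that feeds back into behaviour could.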

1Bugmaster
That does not, in fact, answer my question :-( In each case, you can think of the supercomputing cluster as an entity that is talking to you through the laptop. For example, I am an entity who is talking to you through your computer, right now; and I am conscious (or so I claim, anyway). Google Maps is another such entity, and it is not conscious (as far as anyone knows). So, the entity talking to you through the Dell laptop is conscious. The one talking through the Lenovo is not; but it has been designed to mimic consciousness as closely as possible (unlike, say, Google Maps). Given this knowledge, can you predict any specific differences in behavior between the two entities?

The role of system A is to modify system B. It's meta-level thinking.

An animal can think: "I will beat my rival and have sex with his mate, rawr!"
but it takes a more human mind to follow that up with: "No wait, I got to handle this carefully. If I'm not strong enough to beat my rival, what will happen? I'd better go see if I can find an ally for this fight."

Of course, consciousness is not binary. It's the amount of meta-level thinking you can do, both in terms of CPU (amount of meta/second?) and in terms of abstraction level (it's meta ... (read more)
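(The System A / System B idea above can be sketched in code. This is a toy model under invented assumptions -- it is not a claim about how real minds work, just an illustration of a meta level modifying an object level.)

```python
# Object level (System B): the animal's default impulse.
# Meta level (System A): reflects on B's policy and replaces it.

class SystemB:
    def __init__(self):
        # Default policy: fight, regardless of circumstances. "Rawr!"
        self.policy = lambda rival_strength: "fight"

    def act(self, rival_strength):
        return self.policy(rival_strength)

class SystemA:
    def __init__(self, own_strength):
        self.own_strength = own_strength

    def reflect(self, b):
        # "If I'm not strong enough to beat my rival, what will happen?"
        # Install a policy that accounts for that prediction.
        def considered(rival_strength):
            if self.own_strength >= rival_strength:
                return "fight"
            return "seek ally"
        b.policy = considered

b = SystemB()
print(b.act(rival_strength=2.0))  # fight: the unreflective impulse

SystemA(own_strength=1.0).reflect(b)
print(b.act(rival_strength=2.0))  # seek ally: after meta-level adjustment
```

The "amount of meta" then corresponds, in this toy picture, to how often and how deeply System A gets to inspect and rewrite System B.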

4Bugmaster
Sorry, I think you misinterpreted my scenario; let me clarify. I am going to give you two laptops: a Dell, and a Lenovo. I tell you that the Dell is running a software client that is connected to a vast supercomputing cluster; this cluster is conscious. The Lenovo is connected to a similar cluster, only that cluster is not conscious. The software clients on both laptops are pretty similar; they can access the microphone, the camera, and the speakers; or, if you prefer, there is a textual chat window as well. So, knowing that the Dell is connected to a conscious system, whereas the Lenovo is not, can you predict any specific differences in behavior between the two of them?

Based on this knowledge, can you make any meaningful predictions about the differences in behavior between the two systems?

I'm going to go ahead and say yes. Consciousness means a brain/CPU that is able to reflect on what it is doing and adjust accordingly, so it ends up acting differently. Of course, with a computer it is possible to prevent the conscious part from interacting with the part that acts, but then you effectively end up with two separate systems. You might as well say that my being conscious of your actions does not affect your actions: true but irrelevant.

1Bugmaster
Ok, sounds good. So, specifically, is there anything that you'd expect system A to do that system B would be unable to do (or vice versa)?


I don't really understand your objection. When I say that everything is objectively true or false, I mean that any particular thing is either part of the universe/reality at a given point in time/space or it isn't. I don't see any other possibility*. Perhaps you are confusing the map and the territory? It is perfectly possible to answer questions with "I don't know" or "mu" but that doesn't mean that the universe itself is in principle unknowable. The fact that consciousness is not properly understood yet does not mean that it occupies ... (read more)

2Algernoq
I think we are in agreement that rational decision-making is usually valuable, and that some people sometimes cite rationality in order to give false weight to their opinions. To continue your analogy, I'm saying that studying the rules of the road ceases to be a good use of time for most people once a basic driver's license is earned, even if it can slightly reduce accident risk. The possibility of upvotes while having this discussion is making me reconsider. The universe could be fundamentally unknowable, though this possibility doesn't seem very useful.

To be fair Less Wrong's definition of rationality is specifically designed so that no reasonable person could ever disagree that more rationality is always good, thereby making the definition almost meaningless. And then all the connotations of the word still slip in of course. It's a cheap tactic also used in the social justice movement which Yvain recently criticized on his blog (motte and bailey I think it was called)

To clarify what I mean, take the following imaginary conversation:

Less Wronger: Hey! You seem smart. You should consider joining the Less Wrong community and learn to become more rational like us!
Normal: (using definition: Rationality means using cold logic and abstract reasoning to solve problems) I don't know, rationality seems overrated to me. I mean, all the people I know who are best at using cold logic and abstract reasoning to solve problems tend to be nerdy guys who never accomplish much in life.
Less Wronger: Actually, we've defined rationality to ... (read more)

8ArisKatsaris
Yvain criticized switching definitions depending on whether you want to defend an easily defensible position or have others accept an untenable position. With Less Wrong's definition of rationality (epistemic rationality: the ability to arrive at true beliefs; instrumental rationality: the ability to know how to achieve your goals), how is that happening?
6Luke_A_Somers
So what's the bailey here? You make it seem like having obviously true premises is a bad thing. Note: a progressive series of less firmly held claims is NOT a motte and bailey if you aren't vacillating on what each one means.

To be fair Less Wrong's definition of rationality is specifically designed so that no reasonable person could ever disagree that more rationality is always good, thereby making the definition almost meaningless.

In my experience, the problem is not with disagreeing, but rather that most people won't even consider the LW definition of rationality. They will use the nearest cliche instead, explain why the cliche is problematic, and that's the end of rationality discourse.

So, for me the main message of LW is this: A better definition of rationality is possible.

3Dustin
What do you mean exactly by "specifically designed"? Anyway, I don't disagree with you exactly. My original point was not that the LW definition of rationality was a good or bad definition, but that the definition Algernoq was asserting as the LW consensus definition of rationality was probably not actually true. ETA: I'm also not sure that I agree with you about the definition being useless, as I think the LW definition seems designed specifically to counter the kind of thinking that leads to someone spending 25% of their time for a car trip planning to save 5%. By explicitly stating that rationality is about winning, it helps to not get bogged down in the details and to remember what the point is. Whether or not the definition that has arisen was explicitly designed with that in mind, I can't say.

Your criticism of rationality for not guaranteeing correctness is unfair, because nothing can do that. Your criticism that rationality still requires action is equivalent to saying that a driver's license does not replace driving, though many Less Wrongers do overvalue rationality, so I guess I agree with that bit. You do however seem to make a big mistake in buying into the whole fact-value dichotomy, which is a fallacy since at the fundamental level only objective reality exists. Everything is objectively true or false, and the fact that rationality canno... (read more)

-1Algernoq
I agree. My concern is that LW claims to be "less wrong" than it is. A third possibility is "undecidable" (as in Godel incompleteness). There's something weird going on with consciousness that may resolve this question once understood.

Uhm, no. I mean, this is exaggerating; we are not having any physical violence here. Worst case: poisoning of minds.

Yes of course it's an exaggeration, but it's the same meta-type of error: Seeing X used for evil and therefore declaring that all X is evil and anyone who says X isn't always evil is either evil or stupid themselves. It's the same mistake as the one Neoreactionaries always complain about: "Perceived differences based on race or sex have been used to excuse evil, therefore anyone who says there are differences between races or sexes i... (read more)

7Viliam_Bur
Didn't say "equally". Seems to me that so far we have had two significant attempts at suppressing opinions on LW. 1) Eugine's one-person guerrilla war of mass downvoting. Had some success for a few months; resulted in a ban. 2) Repeated suggestions that we should remove politically incorrect speech, because allegedly women don't like it. Multiple proponents, no success yet. I'm not sure which one of these is more dangerous; I could find arguments for either side. Eugine actually did censor the site for a while. However, he was finally banned, and if someone tries to do the same thing, they will probably get banned too (hopefully much sooner). Also, his actions didn't have popular support. On the other hand, censorship of politically incorrect ideas is proposed repeatedly, by multiple people, openly in public. They demand that their norms become the official norms of the website, enforced by moderators. Then I believe most people here want to have a debate without any political group dominating the website. About half of them don't want to see any politics here at all, and I guess the other half would be okay with an occasional, as-rational-as-possible, polite debate about political topics.

I don't think the first sense and the second are mutually exclusive.

A dog has half as much processing power as a human = a dog can think the same thoughts but only at half the speed.
A dog has half as much consciousness as a human = a dog is only half as aware as a human of what's going on.

And yes, I definitely think that this is how it works. For example, when I get up in the morning I am much less aware of what's going on than when I am fully awake. Sometimes I pause and go "wait what am I doing right now?" And of course there's those funny time... (read more)

Thank you for taking the time to write all that, it helps me see where you are coming from. You clearly have a large framework which you are basing your views on, but the thing you have to keep in mind is that I do, too. I have several partially-written posts about this which I hope to post on Less Wrong one day, but I’m very worried they’ll be misconstrued because it’s such a difficult subject. The last thing I want to do is defend the practices of oppressive regimes, believe me. I’m worried that people just read my posts thinking “oh he is defending cens... (read more)

7Viliam_Bur
Uhm, no. I mean, this is exaggerating; we are not having any physical violence here. Worst case: poisoning of minds. (I believe Yvain handled the case of neo-reactionaries sufficiently, if that's what we are talking about here.) What if there are two competing religions; each one of them evil in a different way. And one missionary approaches you with an offer that if you help him establish the holy inquisition, he will rid you of those evil heretics from the other side. Is it a good idea to give him the power?
4Username
If the religion is so obviously harmful why is it catching on? To paraphrase Kaj, why is it the place of individual people to decide that this religion needs censorship?