Occasionally, concerns have been expressed from within Less Wrong that the community is too homogeneous. The observation of homogeneity is certainly true to the extent that the community shares common views that are minority views in the general population.

Maintaining a High Signal to Noise Ratio

The Less Wrong community shares an ideology that it calls 'rationality' (despite some attempts to rename it, this is what it is). A burgeoning ideology needs a lot of faithful support in order to develop true to itself. By this I mean that the ideology needs a chance to define itself as it would define itself, without a lot of competing influences watering it down, adding impure elements, or distorting it. In other words, you want to cultivate a high signal-to-noise ratio.

For the most part, Less Wrong is remarkably successful at cultivating this high signal-to-noise ratio. A common ideology attracts people to Less Wrong, and karma is then used to maintain fidelity. Karma protects Less Wrong from the influence of outsiders who just don't "get it". It is also used to guide and teach people who are reasonably near the ideology but need some training in rationality. Thus, karma is awarded for views that align especially well with the ideology, that align reasonably well, or that align with one of the directions in which the ideology is reasonably evolving.

Rationality is not a religion – Or is it?

Therefore, on Less Wrong, a person earns karma by expressing views from within the ideology. Wayward comments are discouraged with down-votes. Sometimes, even, an ideological toe is stepped on, and the disapproval is more explicit. I’ve been told, here and there, one way or another, that expressing extremely dissenting views is: stomping on flowers, showing disrespect, not playing along, being inconsiderate.

So it turns out: the conditions necessary for the faithful support of an ideology are not that different from the conditions sufficient for developing a cult.

But Less Wrong isn't a religion or a cult. It wants to identify and uproot illusion, not create a safe place to cultivate it. Somewhere, Less Wrong must be able to challenge its basic assumptions and see how they hold up against new evidence, and against all the evidence. You have to allow brave dissent.

  • Outsiders who insist on hanging around can help by pointing to assumptions that are thought to be self-evident by those who "get it", but that aren't obviously true, and which may be wrong.

  • It's not necessarily the case that someone challenging a significant assumption doesn't get it and doesn't belong here. Occasionally, someone with a dissenting view may be representing the ideology better than the status quo does.

Shouldn't there be a place where people who think they are more rational (or better than rational) can say, "hey, this is wrong!"?

A Solution

I am creating this top-level post for people to express dissenting views that are simply too far from the main ideology to be expressed in other posts. If successful, it would serve two purposes. First, it would move extreme dissent away from the other posts, thus maintaining fidelity there. People who want to play at the "rationality" ideology can play without other, irrelevant points of view spoiling the fun. Second, it would allow dissent for those in the community who are interested in not being a cult, challenging first assumptions and suggesting ideas for improving Less Wrong, without being traitorous. (By the way, karma must still work the same, or the discussion loses its value relative to the rest of Less Wrong. Be prepared to lose karma.)

Thus I encourage anyone (outsiders and insiders) to use this post “Dissenting Views” to answer the question: Where do you think Less Wrong is most wrong?

Dissenting Views

Byrnema, you talk extensively in this post about the LW community having a (dominant) ideology, without ever really explicitly stating what you think this ideology consists of.

I'd be interested to know what, from your perspective, are the key aspects of this ideology. I think this would have two benefits:

  1. the assumptions underlying our own ideologies aren't always clear to us, and having them pointed out could be a useful learning experience; and
  2. the assumptions underlying others' ideology aren't always clear to us, and making your impressions explicit would allow others the chance to clarify if necessary, and make sure we're all on the same page.

(More generally, I think this is a great idea.)

5byrnema
Long overdue: In May when I composed this post, I saw the LW community as having a dominant ideology, which I have since learned to label 'physical materialism'. I refrained from publicly defining this ideology because of some kind of reluctance. I didn't expect the community to change over time, but it seems to me there has been drift in the type of discussions that occur on Less Wrong away from epistemological foundations. So I feel more comfortable now outlining the tenets of average LW epistemology, as I perceived it, as a 'historical' observation.

The first and fundamental tenet of this epistemology is that there is a real, objective reality X that we observe and interact with. In contrast, persons with a metaphysical bent are less definitive about the permanent existence of an objective reality, and believe that reality alters depending on your thoughts and interactions with it. On the other extreme are skeptics who believe it is meaningless to consider any objective reality, because we cannot consider it objectively. (There are only models of reality, etc.)

For formalism and precision, I will here introduce some definitions. Define objective reality as a universe X = the set of everything that we could ever potentially observe or interact with physically. (This is what we consider "real".) We cannot know if X is a subset of a larger universe X-prime. Suppose that it is: the component of X-prime that is outside X (X-complement) may 'exist' in some sense but is not real to us.

The second tenet is that anything we observe or interact with is a subset of X, the real physical world. While this trivially follows from the definition of X, what is being argued with physical materialism is not the tautology itself but the value of seeing things from this point of view. Trivially, there is nothing metaphysical in X; we either interact with something or we don't. In contrast, the metaphysical view is to consider reality = X-prime, and consider that everything w
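A minimal set-notation restatement of the definitions above (the notation is an addition for reference; X, X-prime, and X-complement are exactly as the comment defines them):

$$X \;=\; \{\, e \mid e \text{ is something we could ever potentially observe or interact with physically} \,\}$$

$$X \subseteq X', \qquad X^{c} \;=\; X' \setminus X \quad \text{(may ``exist'' in some sense, but is not real to us)}$$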
1AllanCrossman
But is there any reason to favour this more complex hypothesis?
0byrnema
I feel at home with physical materialism and I like the way it's simultaneously simple, self-consistent and powerful as a theory for generating explanation (immediately: all of science). Yet there are some interesting issues that come up when I think about the justification of this world view.

The more complex hypothesis that there is 'more' than X would be favored by any evidence whatsoever that X is not completely self-contained. So then it becomes an argument about what counts as evidence, and "real" experience. The catch-22 is that any evidence that would argue for the metaphysical would either be rejected within X as NOT REAL or, if it was actually real -- in other words, observable, reproducible, explainable -- then it would just be incorporated as part of X. So it is impossible to refute the completeness of X from within X. (For example, even while QM observations are challenging causality, locality, counterfactual definiteness, etc., physicists are looking to understand X better, and modify X as needed, not rejecting the possibility of a coherent theory of X. But at what point are we going to recover the world that the metaphysicists meant all along?)

So the irrefutability of physical materialism is alarming, and the obstinate stance for 'something else' from the majority of my species leaves me interested in the question. I have nothing to lose from a refutation of either hypothesis; I'm just curious. Also despairing to some extent -- I believe such questions are actually outside definitive epistemology.
1DanArmak
This is completely backwards. It's non-materialism that is irrefutable, pretty much by definition.

Suppose we allow non-materialistic, non-evidence-based theories. There is an infinite number of theories that describe X plus some non-evidential Y, for all different imaginable Ys. By construction, we can never tell which of these theories is more likely to be wrong than another. So we can never say anything about the other-than-X stuff that may be out there. Not "a benevolent god". Not "Y is pretty big". Not "Y exists". Not "I feel transcendental and mystical and believe in a future life of the soul". Not "if counterfactually the universe was that way instead of this way, we would observe Y and then we would see a teacup." Nothing at all can be said about Y, because every X+Y theory that can be stated is equally valid, forever.

Whatever description you give of Y, with your completely untestable religious-mental-psychic-magical-quantum powers of the mind that must not be questioned, I can give the precise opposite description. What reason could you have for preferring your description to mine? If your reason is in X, it can't give us information about Y. And if your reason is in Y, I can claim an opposite-reason for my opposite-theory which is also in Y, and we'll degenerate to a competition of divinely inspired religions that must not be questioned.

Bottom line: if the majority of the species believes in "something else", that is a fact about the majority of the species, not about what's out there. If I develop the technology for making almost all humans stop believing in "something else", could that possibly satisfy your private wonderings?
1byrnema
Non-materialism is irrefutable within its own framework, agreed. So then we are left with two irrefutable theories, but one is epistemologically useful within X and one is not. Materialism wins. Nevertheless, just to echo your argument across the canyon: reality doesn't care what theories we "allow"; it is what it is. We might deduce that such-and-such theory is the best theory for various epistemological reasons, but that wouldn't make the nature of the universe accessible if it isn't in the first place. Just a reminder that ascetic materialism doesn't allow conviction about materialism.
0DanArmak
It is what X is. That's the definition of X. Whatever is outside X is outside Reality. Materialists don't think that "something outside reality" is a meaningful description, but that is what you claim when you talk about things being beyond X.

No. We deduce that it's the best theory because it's the only uniquely identifiable theory, as I said before. If you're going to pick any one theory, the only theory you can pick is a materialistic one. If you allow non-materialistic theories, you have to have every possible theory all at once.
3Psy-Kosh
Well, dunno. To be fair, for the sake of argument, I guess one could maybe propose Idealistic theories. That is, that all that exists is made up of a "basic physics of consciousness", and everything else that we see is just an emergent phenomenon of that. One would still keep reductionism, simply that one might have the ultimate reduction be to some sort of "elementary qualia" plus simple rules (as strict and precise and simple as any basic physics theory) for how those behave.

(Note, I'm not advocating this position at this time, I'm just saying that potentially one could have a non-materialist reductionism. If I ever actually saw a reduction like that that could successfully really predict/model/explain stuff we observe, I'd be kinda shocked and impressed.)
1byrnema
For the sake of argument, thank you. Yet I would guess that the theory you propose is still isomorphic to physical materialism, because physical materialism doesn't say anything about the nature of the elementary material of the universe. Calling it an elementary particle or calling it elementary qualia is just a difference in syllables, since we have no restrictions on what either might be like.

Yet you remind me that we can arrive at other unique theories, within different epistemological frameworks. What I thought you were going to say is that a metaphysicist might propose a universe X-prime that is the idealization of X. As in, if we consider X to be an incomplete, imperfect structure, X-prime is the completion of X that makes it ideal and perfect. Then people can speculate about what is ideal and perfect, and we get all the different religions. But it is unique in theory.

By the way, the epistemology used there would seem backwards to us. While we use logic to deduce the nature of the universe from what we observe, in this theory, what they observe is measured against what they predict should logically be. That is, IF they believe that "ideal and perfect" logically follows. (This 'epistemology' clearly fails in X, which is why I personally would reject it, but of course, based on a theory that ordinates X above all, even logic.)
1DanArmak
I don't see how that contradicts what I said. Suppose you believe a theory such as you described. Then I propose a new theory, with different elementary qualia that have different properties and behaviors, but otherwise obey the meta-rules of your theory - like proposing a different value for physical constants, or a new particle. If the two theories can be distinguished in any kind of test, if we can follow any conceivable process to decide which theory to believe, then this is materialism, just done with needlessly complicated theories. On the other hand, if we can't distinguish these theories, then you have to believe an infinite number of different theories equally, as I said.
1AllanCrossman
I'm perfectly happy with the idea that there could be stuff that we can't know about simply because it's too "distant" in some sense for us to experience it; it sends no signals or information our way. I'm not sure anyone here would deny this possibility. But if that stuff interacts with our stuff then we certainly can know about it.
0Jack
Edit: My comment was way too long, but not sure if this justifies a full post.
0[anonymous]
Now (finally) to the comparison. If a particular ontological commitment gives us a better understanding of something, then it is no longer in the X-complement. We are officially observing/interacting with it. Neptune, for example, before it was observed by telescope, was merely a theoretical entity needed for explaining perturbations in the orbit of Uranus. There was a mysterious feature of the solar system and we explained it by positing an astronomical entity. There was nothing unscientific about this.

See, if there are interactions between X and X-Complement then there are interactions between us and X-Complement. But X and X-Complement, by definition, cannot be causally related. The question then is whether physical entities and physical causes are sufficient for accounting for all our experiences. If they weren't, we would have a reason to favor a Spiritual or X-Skeptical view. But, in fact, we've been really good about explaining and predicting experiences using just physical and scientific-theoretical entities.

To conclude: I see three distinctions where you see two. There is the Scientific-physicalism of Less Wrong, the Spiritual view which holds that there are things that are not physical and that we can (only or chiefly) observe and interact with those things through means other than science, and finally the Extreme Skeptic view which considers all our experiences as being structured by our brain or mind rather than as the effects of entities that are not part of our mind/brain.

Moreover, the possibility you see, of our inability to make sense of the physical universe we have access to because of interactions between that universe and one we do not have access to, does not exist. This is because the boundaries of what we have access to are the universe's boundaries of interaction. Anything that influences the reality we have access to we can include in our model of reality. And it turns out that a scientific-physicalist view is more or less successful at explaining and pre

that we seem more interested in esoteric situations than in the obvious improvements that would have the biggest impact if adopted on a wide scale.

6patrissimo
I concur. We seem more interested in phenomena which are interesting psychologically than which are useful. This should not be surprising - interesting phenomena are fun to read about. Implementing a new cognitive habit takes hard work and repetition. Perhaps it is like divorcing warm fuzzies from utilons - we should differentiate between "biases that are fun to read/think about" and "practices which will help you become less wrong."

As a metaphor, consider flashy spinning kicks vs. pushups in martial arts. The former are much more fun to watch and think about, but boring exercises to build strength and coordination are much more basic and important.
3loqi
This is pretty vague for a heresy. Can you link to a comment or post that explains what you're referring to, or why we should condition on wide-scale adoption?
0nazgulnarsil
aren't we supposed to be pulling sideways on issues that aren't in popular contention?
[-]knb80

Overall I think my views are pretty orthodox for LW/OB. But (and this is just my own impression) it seems like the LW/OB community generally considers utilitarian values to be fundamentally rational. My own view is that our goal values are truly subjective, so there isn't a set of objectively rational goal values, although I prefer utilitarianism myself.

0pwno
There probably is for each individual, but none that are universal.
0knb
True, there are rational goals for each individual, but those depend on their own personal values. My point was there doesn't seem to be one set of objective goal values that every mind can agree on.
1Vladimir_Nesov
Not all minds can have common goals, but every human, and the minds we choose to give life to, can. Values aren't objective, but can well be said to be subjectively objective.
8timtyler
Um, the referenced The Psychological Unity of Humankind article isn't right. Humans vary considerably - from total vegetables up to Einstein. There are many ways for the human brain to malfunction as a result of developmental problems or pathologies.

Similarly, humans have many different goals - from catholic priests to suicide bombers. That is partly as a result of the influence of memetic brain infections. Humans may share similar genes, but their memes vary considerably - and both contribute a lot to the adult phenotype.

That brings me to a LessWrong problem. Sure, this is Eliezer's blog - but there seems to be much more uncritical parroting of his views among the commentators than is healthy.
3AdeleneDawner
And also many ways for human brains to develop differently, says the autistic woman who seems to be doing about as well at handling life as most people do. Didn't we even have a post about this recently? Really, once you get past "maintain homeostasis", I'm pretty sure there's not a lot that can be said to be universal among all humans, if we each did what we personally most wanted to do. It just looks like there's more agreement than there is because of societal pressure on a large scale, and selection bias on an individual scale.
1thomblake
AdeleneDawner, I'm being off-topic for this thread, but have you posted on the intro thread?
0AdeleneDawner
I have now...
2Vladimir_Nesov
You don't take into account that people can be wrong about their own values, with randomness in their activities not reflecting the unity of their real values.
7timtyler
Are you suggesting that you still think that the cited material is correct?!? The supporting genetic argument is wrong as well. I explain in more detail here: http://alife.co.uk/essays/species_unity/

As far as I can tell, it is based on a whole bunch of wishful thinking intended to make the idea of Extrapolated Volition seem more plausible, by minimising claims that there will be goal conflicts between living humans. With a healthy dose of "everyone's equal" political-correctness mixed in for the associated warm fuzzy feelings. All fun stuff - but marketing, not science.
3Mike Bishop
I recommend making this a top level post, but expand a little more on the implications of your view versus Eliezer's and C&T's. This could be done in a follow-up post.
-3Vladimir_Nesov
Simply stating your opinion is of little value, only a good argument turns it into useful knowledge (making authority cease to matter in the same movement). You are not making your case, Tim. You've been here for a long time, but persist in not understanding certain ideas, at the same time arguing unconvincingly for your own views. You should either work on better presentation of your views, if you are convinced they have some merit, or on trying to understand the standard position, but repeating your position indignantly, over and over, is not a constructive behavior. It's called trolling.
7timtyler
I cited a detailed argument explaining one of the problems. You offer no counter-argument, and instead just rubbish my position, saying I am trolling. You then advise me to clean up my presentation. Such unsolicited advice simply seems patronising and insulting. I recommend either making proper counter-arguments - or remaining silent.
-2Vladimir_Nesov
Remaining silent if you don't have an argument that's likely to convince, educate or at least interest your opponent is generally a good policy. I'm not arguing with you, because I don't think I'll be able to change your mind (without extraordinary effort that I'm not inclined to make). Trolling consists in writing text that falls deaf on the ears of the intended audience. Professing advanced calculus on a cooking forum or to 6-year olds is trolling, even though you are not wrong. When people don't want to hear you, or are incapable of understanding you, or can't stand the way you present your material, that's trolling on your part.
4timtyler
OK, then. Regarding trolling, see: http://en.wikipedia.org/wiki/Internet_troll

It does not say that trolling consists in writing text that falls deaf on the ears of the intended audience. What it says is that trolls have the primary intent of provoking other users into an emotional response or to generally disrupt normal on-topic discussion.

This is a whole thread where we are supposed to be expressing "dissenting views". I do have some dissenting views - what better place for them than here? I deny trolling activities. I am here to learn, to debate, to make friends, to help others, to get feedback - and so on - my motives are probably not terribly different from those of most other participants.

One thing that I am is critical. However, critics are an amazingly valuable and under-appreciated section of the population! About the only people I have met who seem to understand that are cryptographers.
3Z_M_Davis
Yes, but why expect unity? Clearly there is psychological variation amongst humans, and I should think it a vastly improbable coincidence that none of it has anything to do with real values.
1Vladimir_Nesov
Well, of course I don't mean literal unity, but the examples that immediately jump to mind of different things about which people care (what Tim said) are not representative of their real values. As for the thesis above, its motivation can be stated thusly: If you can't be wrong, you can never get better.
4Z_M_Davis
How do you know what their real values are? Even after everyone's professed values get destroyed by the truth, it's not at all clear to me that we end up in roughly the same place. Intellectuals like you or I might aspire to growing up to be a superintelligence, while others seem to care more about pleasure. By what standard are we right and they wrong?

Configuration space is vast: however much humans might agree with each other on questions of value compared to an arbitrary mind (clustered as we are into a tiny dot of the space of all possible minds), we still disagree widely on all sorts of narrower questions (if you zoom in on the tiny dot, it becomes a vast globe, throughout which we are widely dispersed). And this applies on multiple scales: I might agree with you or Eliezer far more than I would with an arbitrary human (clustered as we are into a tiny dot of the space of human beliefs and values), but ask a still yet narrower question, and you'll see disagreement again. I just don't see how the granting of veridical knowledge is going to wipe away all this difference into triviality.

Some might argue that while we can want all sorts of different things for ourselves, we might be able to agree on some meta-level principles on what we want to do: we could agree to have a diverse society. But this doesn't seem likely to me either; that kind of type distinction doesn't seem to be built into human values. What could possibly force that kind of convergence?

Okay, I'm writing this one down.
4steven0461
Your conclusion may be right, but the HedWeb isn't strong evidence -- as far as I recall David Pearce holds a philosophically flawed belief called "psychological hedonism" that says all that humans are motivated by is pleasure and pain and therefore nothing else matters, or some such. So I would say that his moral system has not yet had to withstand a razing attempt from all the truth hordes that are out there roaming the Steppes of Fact.
0Nick_Tarleton
If "the thesis above" is the unity of values, this is not an argument. (I agree with ZM.)
1Vladimir_Nesov
It's an argument for its being possible that behavior isn't representative of the actual values. That actual values are more united than the behaviors is a separate issue.
0Nick_Tarleton
It seems to me that it's an appeal to the good consequences of believing that you can be wrong.
0Vladimir_Nesov
Well, obviously. So I'm now curious: what do you read in the discussion, such that you see this remark as worth making?
0Nick_Tarleton
That the discussion was originally about whether the unity of values is true; that you moved from this to whether we should believe in it without clearly marking the change; that this is very surprising to me, since you seem elsewhere to favor epistemic over instrumental rationality.
1Vladimir_Nesov
I'm uncertain as to how to parse this, a little redundancy please! My best guess is that you are saying that I moved the discussion from the question of the fact of ethical unity of humankind, to the question of whether we should adopt a belief in the ethical unity of humankind.

Let's review the structure of the argument. First, there is psychological unity of humankind, in-born similarity of preferences. Second, there is behavioral diversity, with people apparently caring about very different things. I state that the ethical diversity is less than the currently observed behavioral diversity. Next, I anticipate the common belief of people not trusting in the possibility of being morally wrong; simplifying: To this I reply with "If you can't be wrong, you can never get better." This is not an endorsement to self-deceivingly "believe" that you can be wrong, but an argument for it being a mistake to believe that you can never be morally wrong, if it's possible to get better.
2Nick_Tarleton
Correct. I agree, and agree that the argument form you paraphrase is fallacious. Are you saying you were using modus tollens – you can get better (presumed to be accepted by all involved), therefore you can be wrong? This wasn't clear, especially since you agreed that it's an appeal to consequences.
0Vladimir_Nesov
Right. Since I consider epistemic rationality, like any other tool, an arrangement that brings about what I prefer, in itself or instrumentally, I didn't see "appeal to consequences" of a belief as sufficiently distinct from the desire to ensure the truth of the belief.
0timtyler
Human values are frequently in conflict with each other - which is the main explanation for all the fighting and wars in human history. The explanation for this is pretty obvious: humans are close relatives of animals whose main role in life has typically been ensuring the survival and reproduction of their genes.

Unfortunately, everyone behaves as though they want to maximise the representation of their own genome - and such values conflict with the values of practically every other human on the planet, except perhaps for a few close relatives - which explains cooperation within families. This doesn't seem particularly complicated to me. What exactly is the problem?
0Mike Bishop
It would be great if you could expand on this.
0Mike Bishop
You may be right. If so, fixing it requires greater specificity. If you have time to write top-level posts that would be great. Regardless, I value the contributions you make in the comments.
0Mike Bishop
Some people tend to value things that people happen to have in common; others are more likely to value things that people have less in common.
-2AndrewKemendo
I contend otherwise. The utilitarian model comes down to a subjective utility calculation which is impossible (I use the word impossible realizing the extremity of the word) to do currently. This can be further explicated somewhere else, but without an unbiased consciousness - one which does not fall prey to random changes of desires and mis-interpretations or mis-calculations (in other words the AI we wish to build) - there cannot be a reasonable calculation of utility such that it would accurately model a basket of preferences. As a result it is not a reasonable nor reliable method for determining outcomes or understanding individual goals.

True, there may be instances in which a crude utilitarian metric can be devised which accurately represents reality at one point in time; however, the consequentialist argument seems to divine that the accumulated outcome of any specific action taken through consequential thought will align reasonably if not perfectly with the predicted outcome. This is how utilitarianism epistemologically fails - the outcomes are impossible to predict. Exogeny anyone?

In fact, what seems to hold truest to form in terms of long term goal and short term action setting is the virtue ethics which Aristotle so eloquently explicated. This is how, in my view, people come to their correct conclusions while falsely attributing their positive outcomes to other forms such as utilitarianism. E.g. someone thinking "I think that the outcomes of this particular decision will be to my net benefit in the long run because from this will lead to this etc.."

To be sure it is possible that a utilitarian calculation could be in agreement with the virtue of the decision if the known variables are finite and the exogenous variables are by and large irrelevant; however, it would seem to me that when the variables are complicated past current available calculations, understanding the virtue behind an action, behavior or those which are indigenous to the actor will yiel
5pengvado
Whether a given process is computationally feasible or not has no bearing on whether it's morally right. If you can't do the right thing (whether due to computational constraints or any other reason), that's no excuse to go pursue a completely different goal instead. Rather, you just have to find the closest approximation of right that you can.

If it turns out that e.g. virtue ethics produces consistently better consequences than direct attempts at expected utility maximization, then that very fact is a consequentialist reason to use virtue ethics for your object-level decisions. But a consequentialist would do so knowing that it's just an approximation, and be willing to switch if a superior heuristic ever shows up. See Two-Tier Rationalism for more discussion, and Ethical Injunctions for why you might want to do a little of this even if you can directly compute expected utility.

Just because Aristotle founded formal logic doesn't mean he was right about ethics too, any more than about physics.
0AndrewKemendo
This assumes that we know on which track the right thing to do is. You cannot approximate if you do not even know what it is you are trying to approximate. You can infer, or state that maximizing happiness is what you are trying to approximate; however, that may not indeed be the right thing.

I am familiar with two tier rationalism and all other consequentialist philosophies. All must boil down eventually to a utility calculation or an appeal to virtue - as the second tier does. One problem with the Two Tier solution as it is presented is that its solutions to the consequentialist problems are based on vague terms: Ok, WHICH moral principles, and based on what? How are we to know the right action in any particular situation? Or on virtue: I do take issue with Alicorn's definition of virtue-busting, as it relegates virtue to simply patterns of behavior.

Therefore in order to be a consequentialist you must first answer "What consequence is right/correct/just?" The answer then is the correct philosophy, not simply how you got to it. Consequentialism then may be the best guide to virtue but it cannot stand on its own without an ideal. That ideal in my mind is best represented as virtue. Virtue ethics then are the values to which there may be many routes - and consequentialism may be the best.

Edit: Seriously people, if you are going to down vote my reply then explain why.

I have two proposals (which happen to be somewhat contradictory) so I will make them in separate posts.

The second is that many participants here seem to see LW as being about more than helping each other eliminate errors in our thinking. Rather, they see a material probability that LW could become the core of a world-changing rationalist movement. This then motivates a higher degree of participation than would be justified without the prospect of such influence.

To the extent that this (perhaps false) hope may be underlying the motivations of community members, it would be good if we discussed it openly and tried to realistically assess its probability.

Where do you think Less Wrong is most wrong?

That it's not aimed at being "more right" -- which is not at all the same as being less wrong.

To be more right often requires you to first be more wrong. Whether you try something new or try to formulate a model or hypothesis, you must at minimum be prepared for the result to be more wrong at first.

In contrast, you can be "less wrong" just by doing nothing, or by being a critic of those who do something. But in the real world (and even in science), you can never win BIG -- and it's often hard to win at all -- if you never place any bets.

This is perhaps a useful distinction:

When it comes to knowledge of the world you want to be more right.

But when it comes to reasoning I do think it is more about being less wrong... there are so many traps you can fall into, and learning how to avoid them is so much of being able to reason effectively.

0Eliezer Yudkowsky
Well said.
9timtyler
The group title is attempting to be modest - which is cool.
4Peter_de_Blanc
Disagree. You don't have to believe your new model or hypothesis.
0steven0461
Indeed. It seems that PJEby is using a definition of "wrong" according to which I am wrong if I act in a way that is implied by certain belief in some false proposition, and that is not implied by certain disbelief in that proposition. He's right that we should be prepared to sometimes be wrong in that sense. But I'm not convinced anyone else is interpreting "less wrong" in that way.
3pjeby
No, I mean that a major part of LW culture appears to be an irrational terror of believing things that aren't "true", no matter how useful it may be to believe them, either for practical purposes or as a stepping stone to finding something better. (A la deBono's notion of "proto-truth" - i.e., a truth you accept as provisional, rather than absolute.)

(DeBono's notion of lateral thinking, by the way, is another great example of how, to find something more right, you may start by doing something that's knowingly more wrong. His "provocative operator" (later renamed "green hat thinking") is deliberately stating an idea that may be quite thoroughly insane, as a way of backing up and coming at it from a different angle.)
8Eliezer Yudkowsky
Irrational? If you refuse to accept false beliefs that present themselves as useful, and refuse to tolerate any knowing self-deception in yourself, and you pursue this path as far as you can push it, and you have the intelligence and background knowledge to push it far enough, then uncompromising truth-seeking pays a dividend. If you decide that some false beliefs are useful, you don't get to take even the first steps, and you never find out what would have happened if you had pursued truth without compromise. Perhaps you find that a false belief on this subject is more convenient, though...? (I need to write up a canonical article on "No, we are not interested in convenient self-deceptions that promise short-term round-one instrumental benefits, we are interested in discovering the dividend of pushing epistemic truthseeking as far as we can take it", since it's a cached thought deep wisdom Dark Side Epistemology thingy that a lot of newcomers seem to regurgitate.)

For a while I tutored middle school students in algebra. Very frequently, I heard things like this from my students:

"I'm terrible at math."

"I hate math class."

"I'm just dumb."

That attitude had to go. All of my students successfully learned algebra; not one of them learned algebra before she came to believe herself good at math. One strategy I used to convince them otherwise was giving out easy homework assignments--very small inferential gaps, no "trick questions".

Now, the "I'm terrible at math" attitude was, in some sense, correct. You could look at their grades and their standardized test scores and see that they were in the lowest quartile of their class. But when my students started seeing A's on their homework papers--when they started to believe that maybe they were good at math, after all--the difference in their confidence and effort was night and day. It was the false belief that enabled them to "take the first steps."

7Daniel_Burfoot
I think this phenomenon illustrates a very widespread misunderstanding of what math is and how one becomes good at it. Consider the following two anecdotes:

1) Sammy walks into advanced Greek class on the first day of school, eager and ready to learn. He is crushed when, about 15 minutes after the class begins, he realizes he has no idea what the teacher is talking about. Despairing, he concludes that he is "terrible at Greek" and "just dumb".

2) Sammy walks into advanced algebra on the first day of school, eager and ready to learn. He is crushed when, about 15 minutes after the class begins, he realizes that he has no idea what the teacher is talking about. Despairing, he concludes that he is "terrible at math" and "just dumb".

Anecdote 1) just seems ridiculous. Of course if you walk into a language class that's out of your depth, you're going to be lost; everyone knows that. Every normal person can learn every natural language; there's no such thing as someone who's intrinsically "terrible at Greek". The solution is just to swallow your pride and go back to an earlier class.

But it seems like anecdote 2) is not only plausible but probably happens rather often. There is some irrational belief that skill at mathematics is some kind of unrefinable Gift: some people can do it and others just can't. This idea seems absurd to me: there is no "math gene"; there are no other examples of skills that some people can get and others not.
9Apprentice
It's actually anecdote 1 that seems plausible to me and anecdote 2 that does not. I happen to have been a language teacher, teaching adult hobbyists. It seemed to me that lots of my students had very unrealistic ideas about how easy it would be for them to learn a foreign language. They really did come to class expecting one thing and 15 minutes later finding out quite another thing. They typically brushed off their past bad experience with learning, say, Spanish back in school, on the theory that they'd never really been motivated to learn Spanish but they were really truly motivated to learn language X which I was teaching. Then they realized that learning language X involved a lot of the same boring grammar talk and memorization which they'd found so hard/boring when learning Spanish. (Of course it's also possible that my classes just sucked.)

By contrast, no-one walks into advanced algebra classes having no idea what math is about. People who think they're terrible at math usually infer this from having spent 10 years in a school system where they consistently had trouble with math assignments, performed poorly on math tests and had trouble understanding what math teachers were talking about. Most people who think they're bad at math probably are actually bad at math.

Sure, in some other universe they might have become good at math if some early stimulus had swung another way - maybe another teaching style would have helped, or a role model or whatever. But it also seems very reasonable to think that some people are born with more aptitude for math than others. General intelligence certainly has a large heritable component and I'm sure that holds for the special case of mathematical aptitude.

I spent many years operating under the assumption that everyone was about equally smart and constructing elaborate explanations for why the reality I was confronted with seemed to be in so much conflict with that theory. Sure, if you add enough epicycles you can do it - but t
1Sideways
Unlike most other subjects, math is cumulative: students are taught one technique, they practice it for a while, and then they're taught a second technique that builds on the previous. So there are two skills required:

  • The discipline to study and practice a technique until you understand it and can apply it easily.

  • The ability to close the inferential gap between one technique and the next.

The second is the source of trouble. I can (and have) sat in on a single day's instruction of a language class and learned something about that language. But if a student misses just one jump in math class, the rest of the year will be incomprehensible. No wonder people become convinced they're "terrible at math" after an experience like that!
0[anonymous]
How is that unlike other subjects? Seems pretty universal.
3Vladimir_Nesov
An example of dark arts used for a good cause. The problem is that the children weren't strong enough to understand the concept of being potentially better at math, of it being true that enthusiasm will improve their results. They can't feel the truth of the complicated fact of [improving in the future if they work towards it], and so you deceive them into thinking that they are [good already], a simpler alternative.
5Sideways
Vladimir, the problem has nothing to do with strength--some of these students did very well in other classes. Nor is it about effort--some students had already given up and weren't bothering, others were trying futilely for hours a night. Even closing the initial inferential gap that caused them to fall behind (see my reply to Daniel_Burfoot above) didn't solve the problem. The problem was simply that they believed "math" was impossible for them.

The best way to get rid of that belief--maybe the only effective way--was to give them the experience of succeeding at math. A pep talk or verbal explanation of their problems wouldn't suffice.

If your definition of "the dark arts" is so general that it includes giving an easy homework assignment, especially when it's the best solution to a problem, I think you've diluted the term beyond usefulness.
2pjeby
Ah, and where's your peer-reviewed scientific evidence for that, or is it merely an article of faith on your part? I'm not clear if you're being sarcastic here, or just unable to see that this is precisely the same as my argument for trying out things you know are "wrong".

Meanwhile, I think that you're also still assuming that "believe" and "think true" are the same thing. I can make use of information from a book whose author believes in channeled spirits, without having to believe that channeled spirits actually exist. In the instrumental sense, belief is merely acting as if something is true -- which is not the same thing as thinking it's actually true.

The thing that I think people here get wrong is simply that they assume that if they know something is not actually true, this means it's permissible to discount any empirical evidence that the procedure associated with the untrue belief is actually useful. (Hypnosis is useful, for example, despite the fact that Anton Mesmer thought it had something to do with magnetism.)
0Eliezer Yudkowsky
Intermediate level: Rational evidence. I've learned the hard way that uncompromising epistemic perfectionism is not so much a grand triumph of virtue as, rather, the bare minimum required to not instantly completely epically fail when thinking using a human brain. You think you have error margin? You, a Homo sapiens? I wish I lived in the world you think you live in.
3pjeby
Me too. Which is why I find it astounding that you appear to be arguing against testing things. The difference in my "bare minimum" versus yours is that I've learned not to consider mental techniques as being tested unless I have personally tested them using a "shut up and do the impossible" attitude. A statistically-validated study is too LOW of a bar for me, since I have no way to find out what statistic I will represent until I try the thing for myself.

If a human should be able to reinvent all of physical science by themselves, they should be able to do the same with the mental sciences. In other words, they should be able to test themselves, in a way that allows them to detect their own biases... particularly the biases that lead them to avoid testing things in the first place.
5Eliezer Yudkowsky
Okay... first, "shut up and do the impossible" may sound like it has a nice ring to you, but there's something specific I mean by it - a specific place in the hierarchy of enthusiasm, tsuyoku naritai, isshokenmei, make an extraordinary effort, and shut up and do the impossible. You're talking enthusiasm or tsuyoku naritai. "Shut up and do the impossible" is for "reduce qualia to atoms" or "build a Friendly AI based on rigorous decision theory before anyone manages to throw the first non-rigorous one together". It is not for testing P. J. Eby's theories of willpower. That would come under isshokenmei at the highest and sounds more like ordinary enthusiasm to me.

Second, there are, literally, more than ten million people giving advice about akrasia on the Internet. I have no reason to pay attention to your advice in particular at its present level of rigor; if I'm interested in making another try at these things, I'll go looking at such papers as have been written in the field. You, I'm sure, have lots of clients and these clients are selected to be enthusiastic about you; keeping a sense of perspective in the face of that large selection effect would be an advanced rationalist sort of discipline wherein knowledge of an abstract statistical fact overcame large social sensory inputs, and you arrived very late in the OBLW sequence and haven't caught up on your reading.

I can understand why you don't understand why people are paying little attention to you here, when all the feedback on your blog suggests that you are a tremendously intelligent person whose techniques work great. But to me it just sounds like standard self-help with no deeper understanding. "Just try my things!" you say, but there are a thousand others to whom I would rather allocate my effort than you. You are not the only person in the universe ever to write about productivity, and I have other people to whom I would turn for advice well before you, if I was going to make another effort. It is your f
5pjeby
Necessary for determining true theories, yes. Necessary for one individual to improve their own condition, no. If a mechanic uses the controlled experiment in place of his or her own observation and testing, that is a major fail.

I've been saying to try something. Anything. Just test something. Yes, I've suggested some ways for testing things, and some things to test. But most of them are not MY things, as I've said over and over and over. At this point I've pretty much come to the conclusion that it's impossible for me to discuss anything related to this topic on LW without this pervasive frame that I am trying to convince people to "try my things"... when in fact I've bent over backwards to point as much as possible to other people's things. Believe it or not, I didn't come here to promote my work or business. I don't care if you test my things. They're not "my" things anyway.

I'm annoyed that you think I don't understand science, because it shows you're rounding to the nearest cliche. I actually advocate using a much higher standard of empirical testing of change techniques than is normally used in measuring psychological processes: observation of somatic markers (see Wikipedia re: the "somatic marker hypothesis", if you haven't previously). Unlike self-reporting via questionnaire, many somatic markers can be treated as objective measures of results, because they are externally visible (facial expressions, posture change, etc.) and thus can be observed and measured by third parties. We can all agree whether someone flinches or grimaces or hangs their head in response to a statement -- we are not dependent on the person themselves to tell us their internal reaction, nor do we have to sort through their conscious attempts to make their initial reaction look better.

True, I do not have a quantified scale for these markers, but it is nonetheless quantifiable -- and it's a direct outgrowth of a promising current neuroscience hypothesis. We can certainly observe
1conchis
Maybe I should wait for the canonical article, but is your argument that false beliefs are not part of a first-best approach to rationality, even though they might be part of a second-best approach? Or is it something stronger than that? I, for one, am interested whether there are convenient self-deceptions that promise instrumental benefits, short-term or otherwise. If nothing else, this will help me adequately assess the potential costs of rationality, rather than taking its benefits as a matter of faith.
1timtyler
Believing things that aren't true can be instrumentally rational for humans - because their belief systems are "leaky" - lying convincingly is difficult - and thus beliefs can come to do double duty by serving signalling purposes.
-1Eliezer Yudkowsky
Yes, this is indeed the sort of argument that I'm not at all interested in, and naming this site "Less Wrong" instead of "More Wrong" reflects this. I'm going to find where the truth takes me; let me know how that lies thing works out - though I reserve the right not to believe you, of course.

Hypothetical (and I may expand on this in another post):

You've been shot. Fortunately, there's a well-equipped doctor on hand who can remove the bullet and stitch you up. Unfortunately, he's got everything he needs except any kind of pain killer. The only effect of the painkiller is going to be on your (subjective) experience of pain.

A. He can say: Look, I don't have any painkiller, but I'm going to have to operate anyhow.

B. He can take some opaque, saline (or otherwise totally inert) IV, tell you it's morphine, and administer it to you.

Which do you prefer he does? Knowing what I know about the placebo effect, I'd have to admit I'd rather be deceived. Is this unwise? Why?

Admittedly, I haven't attained a false conclusion via my epistemology. It's probably wise to generally trust doctors when they tell you what they're administering. So it seems possible to want to have false belief, even while wanting to maintain efficient epistemology. This might not generalize to Pjeby's various theories, but it seems that we can think of at least one case where we would desire having a false belief. Admittedly, this might not be a decision we could make, i.e. "Lie to me about what's in that IV!" might not help. (Though there is some evidence of placebos working even when people were made fully aware they were placebos.)

On the other hand, I'm not sure I can think of an example of where we desire to have a belief that we know to be false, which may be the real issue.

2Eliezer Yudkowsky
The doctor should say "This is the best painkiller I have" and administer it. If the patient confronts the question, it's already too late.
6Liron
Are you implying that the doctor should act to trigger a placebo effect, while still making a true statement? Because in the least convenient version of the dilemma, you would have to choose one or the other.
4Psychohistorian
Erased my previous comment. It missed the real point.

If you think the doctor should say, "This is the best painkiller I have," that suggests you want to believe you are getting a potent painkiller of some kind. You want to believe that it is a potent painkiller, which is false, as opposed to believing that it is the most potent of the zero painkillers he has, which is true. The fact that the doctor is not technically lying does not change the fact that you want to believe something that is false.

If the IV contains a saline solution, the Way may want me to believe the IV contains a saline solution, but I sure as Hell want to think it contains a potent painkiller. (Yes, I realize the irony in using the expression "sure as Hell.")
0Vladimir_Nesov
"Pain will go away" is a true belief for this situation.
3pjeby
The doctor can do a heck of a lot better than that, even without lying. Ericksonian hypnosis, for example, involves a lot of artfully-vague statements like, "you may notice some sensation happening now", and amplifying them to lead a person to believe more specific suggestions (such as pain-relief suggestions) that follow. A lot of it can also be done covertly, such that the patient is never consciously aware that a hypnotic procedure is under way. (Of course, statistics say that relatively few people are able to undergo major surgery with hypnoanesthesia. But if that's the only painkiller you have, it'd be silly not to use it.)
0JamesAndrix
Omega asks you to silently guess the color of a bead in a jar. Omega then inflicts some amount of pain on you. If Omega believes that you believe the bead to be red (it is in fact blue) then he will administer subjectively less pain. The win here is for Omega to believe that you believe the bead is red.

In the surgery situation, we only have to trick part of our brain. I suspect that with practice, this would be easier if one were actually attempting it, rather than concluding mid-surgery that morphine does not work on you.
2timtyler
I was trying to explain why it can be instrumentally rational for humans to believe things that aren't true. For example, if it is the middle ages and you are surrounded by righteous Christian types, it is probably better (in terms of avoiding being burned at the stake) to believe in god than to be an atheist and pretend to be a believer. Lying is often dangerous for humans - because the other humans have built-in lie detectors. I was advocating truthfully expressing your own false beliefs under those circumstances. I was not advocating believing the truth (as an epistemic rationalist and an atheist) and then lying about it to save your skin - or indeed, freely expressing your opinions - thereby getting ostracised, excommunicated - or whatever. Believing the truth is not my main goal - nor is it a particularly biologically realistic goal. Organisms that prioritise believing the truth over survival and reproduction can be expected to do poorly. So: it is reasonable to expect that most organisms you actually observe do not value truth that highly. What about organisms that claim to be pure truth seekers? My first reaction is that they are probably deceiving me about their motives - probably for signalling purposes. Not necessarily lying - they might actually believe themselves to be truth-seekers - but rather acting inconsistently with their stated motives. Another possibility is that their brains have been infected with deleterious memes - rather like what happens to priests. In the first case, they are behaving hypocritically - and I would prefer it if they stopped deceiving me about their motives. In the second case, I am inclined to offer therapy - though there's a fair chance that this will be rejected.
7pjeby
Having been in this circumstance in the past -- i.e., for most of my life believing myself to be such a truth-seeker -- I have a simpler explanation of how it works. Signaling is not a conscious part of it, even though the mechanism in question is clearly evolved for signaling purposes. It's what Robert Fritz calls in his books an "ideal-belief-reality conflict" -- a situation where one creates an ideal that is the opposite of a fear. If you fear lies, or being wrong, then you create ideals of Truth and Right, and you promote these ideas to others as well as striving to live up to them yourself. Of course, you can have such a conflict about tons of things, but pretty much, with anybody who has an Ideal In Capital Letters -- something that they defend with zeal -- you know this mechanism is at work. The key distinction between merely thinking that truth or right or fairness are just Really Good Ideas and being an actual zealot, though, is how a person responds to their absence or the threat of their absence. The person who has aversive feelings for their absence, or expresses social disapproval out of personal emotion, is operating under the influence of an IBRC. Rational thought does not link the absence of a good to aversive emotion, nor equate the absence of a good to an active evil. The brain machinery that IBRCs run on is pretty clearly the mechanism that evolved to motivate signaling of social norms, and people can have many IBRCs. I've squashed dozens in myself, including several relating to truth and rightness and fairness and such. They're also a major driving force in chronic procrastination, at least in my clients. They're not consciously deceiving anyone; they're sincere in their belief, despite the fact that this sincerity is a deception mechanism. Sadly, the IBRC is the primary mechanism by which we irrationally separate our beliefs from their application. The zealotry with which we profess and support our "Ideal" is the excuse for our failure to ...
0Mike Bishop
I agree that people can take "really good ideas" too far, but I'm not satisfied by the distinction you draw: "The person who has aversive feelings for their absence, or expresses social disapproval out of personal emotion, is operating under the influence of an IBRC. Rational thought does not link the absence of a good to aversive emotion, nor equate the absence of a good to an active evil." ISTM that the good can be arbitrarily defined as the absence of a bad, and vice versa.
1pjeby
Only if you're speaking in an abstract way that's divorced from the physical reality of the hardware you run on. Physically, neurologically, and chemically, we have different systems for acquisition and aversion, and good/bad are only opposites at the extremes. At the hardware level, labeling something "bad" is different from labeling it "not very good", in terms of both experiential and behavioral consequences. (By which I mean that those two things also produce different biases in your thinking.) Some people have a hard time grokking this, because intellectually, it's easier to think of a 1-dimensional good/bad spectrum. My personal hypothesis is that we evolved a simple mechanism for predicting others' attitudes using the 1-dimensional model, as a 1-dimensional predictive model is more than good enough for figuring out that predators want to attack you and prey wants to escape you. However, if you want to understand your own behavior, or really accurately model the behavior of others (or even just be aware of the truth of how the platform works!), then you've got to abandon the built-in 1D model and move up to at least a 2D one, where the goodness and badness of things can vary semi-independently.
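(A toy illustration of the 1D-versus-2D point above, with invented numbers rather than anything from the comment: two stimuli can land on the same spot of a one-dimensional good/bad scale while having very different approach and avoidance components, which is exactly the information the 1D collapse throws away.)

```python
# Toy sketch (invented numbers): collapsing separate approach/avoidance components
# into a single good-minus-bad score hides real differences between stimuli.
stimuli = {
    "mild snack":               {"approach": 0.3, "avoid": 0.1},
    "thrilling-but-scary ride": {"approach": 0.9, "avoid": 0.7},
}

for name, affect in stimuli.items():
    net = affect["approach"] - affect["avoid"]  # the 1D "good/bad" summary
    print(f"{name}: approach={affect['approach']}, avoid={affect['avoid']}, net={net:+.1f}")
# Both items score net +0.2 on the 1D scale, yet the 2D representation predicts
# very different behavior (casual approach vs. strong simultaneous pull and push).
```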
0timtyler
Thanks for sharing. It all makes me think of the beauty queens - and their wishes for world peace.
0aluchko
One of the concepts I've been playing with is the idea that the advantage of knowing our innate biases is not so much in overcoming them as in identifying and circumventing them. Your common scenarios regarding risk assessment and perceptions of loss vs. gain generally assume a basis in evolutionary psychology. If these biases are in fact built into our brains, it strikes me that trying to overcome them directly is a skill we can never fully master, and trying to do so tempts akrasia. Consider a scenario where you can spend $1000 to have a 50% shot of winning $2500. The expected value is clearly positive, but turning over the $1000 is tough because of how we weigh loss (if I recall, loss is weighted roughly twice as heavily as gain). On the other hand, you can just tell yourself (rationalize?) that when you hand over the $1000 you're getting back $1250 for sure. It's an incorrect belief, but one I'd probably use, as I wouldn't have to expend willpower overcoming my faulty loss-prevention circuits. Which approach would you use?
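(A minimal sketch of the arithmetic in the comment above. The 2x loss weighting is the ballpark the commenter mentions, and treating the outcomes as a net gain of $1500 versus a net loss of $1000 is an assumption about how the bet is framed; none of this code is from the original comment.)

```python
# Illustrative sketch: a positive-expected-value bet can still "feel" negative
# when losses are weighted roughly twice as heavily as gains (assumed 2x factor).
P_WIN = 0.5
STAKE = 1000        # amount handed over
PRIZE = 2500        # gross payout on a win
LOSS_WEIGHT = 2.0   # assumed loss-aversion multiplier

net_win = PRIZE - STAKE   # +1500 if you win
net_loss = -STAKE         # -1000 if you lose

expected_value = P_WIN * net_win + (1 - P_WIN) * net_loss
loss_averse_value = P_WIN * net_win + (1 - P_WIN) * LOSS_WEIGHT * net_loss

print(f"Expected value:        {expected_value:+.0f}")     # +250: worth taking
print(f"Loss-averse valuation: {loss_averse_value:+.0f}")  # -250: feels like a bad deal
```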
0Nick_Tarleton
Not true; $2500 is not necessarily 2.5 times as useful as $1000. http://en.wikipedia.org/wiki/Marginal_utility#Diminishing_marginal_utility People overcome innate but undesired drives all the time, like committing violence out of anger. Your former approach actually doesn't sound very hard to me, although it might be hard for someone unusually loss-averse. Also, the latter approach sounds like it might not be self-deception in every sense, since there's no single thing in the mind that is a "belief" (q.v. Instrumental vs. Epistemic – A Bardic Perspective); it seems like this point is being consistently ignored throughout this discussion.
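(A companion sketch of the marginal-utility point, using logarithmic utility as one standard stand-in for diminishing marginal utility; the starting-wealth figures are invented for illustration and are not from the comment.)

```python
# Illustrative sketch: with diminishing marginal utility (log utility here),
# $2500 is not 2.5x as useful as $1000, and whether the 50% bet is worth taking
# can depend on how much wealth you start with.
import math

def expected_log_utility(wealth, stake=1000, prize=2500, p_win=0.5):
    win = math.log(wealth - stake + prize)  # pay the stake, collect the prize
    lose = math.log(wealth - stake)         # pay the stake, win nothing
    return p_win * win + (1 - p_win) * lose

for wealth in (2000, 20000):  # hypothetical starting-wealth levels
    take = expected_log_utility(wealth)
    decline = math.log(wealth)
    verdict = "take the bet" if take > decline else "decline"
    print(f"wealth ${wealth}: take={take:.4f}, decline={decline:.4f} -> {verdict}")
```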
0jimmy
Well, what you want to do (just about by definition) is be rational in the instrumental sense. I put significant terminal utility in believing true things, and believe that epistemic rationality is very important for instrumental rationality. Furthermore, it is the right decision to choose not to self-deceive in general, because you can't even know what you're missing and there is reason to suspect that it is a lot. For all real-world issues, I expect to side with you in that we should just get the truth, but in the Least Convenient World (can we just abbreviate this to LCW?) where getting FAI right was dependent on you believing for a moment that a box that contained a blue ball contained a red one.... Maybe you just meant "I'm not interested in that kind of argument because it is so clearly wrong that it's not worth my time", but it seems to come across as "I don't care even if it's true", and that's probably where the downvote came from.
4pjeby
This is a confusion based on multiple meanings of "belief", along the lines of the "does the tree make a sound?" debate. Depending on your definition of belief, the above is either trivial or impossible. For instrumental purposes, it is possible to act and think as if the box contained a red ball, simply by refraining from thinking anything else. The fact that you were paying attention to it being blue before, or that you will remember it's really blue afterward, has nothing to do with your "believing" in that moment. "Believe" is a verb -- something that you DO, not something that you have. In common parlance, we think that belief is unified and static -- which is why some people here continually make the error of assuming that beliefs have some sort of global update facility. Even if you ignore the separation of propositional and procedural memory, it's still a mistake to think that one belief relates to another, outside of an active moment of conscious comparison. In other words, there is a difference between the act of believing something in a particular moment, and what we tend to automatically believe without thinking about it. When we say someone "believes" they're not good at math, we are simply saying that this thought occurs to them in certain contexts, and they do not question it. Notice that these two parts are separate: there is a thought that occurs, and then it is believed... i.e., passively accepted, without dispute. Thus, there is really no such thing as "belief" - only priming-by-memory. The person remembers their previous assessment of not being good at math, and their behavior is then primed. This is functionally identical to unconscious priming, in that it's the absence of conscious dispute that makes it work. CBT trains people to dispute the thoughts when they come up, and I mostly teach people to reconsolidate the memories behind a particular thought so that it stops coming up in the first place.
4loqi
You might well be right that there are loads of "useful falsehoods", you might even know them personally, but you're wrong to claim fear of knowingly internalizing such falsehoods is irrational just because you know them to work. Simply taking your word for it would in fact be less rational. This sounds like a good creativity hack, but I don't see what it has to do with accepting false beliefs.
6pjeby
It's an illustration of the principle that to proceed from the known to the unknown, one must travel by way of the very-likely-wrong, including that which from your current perspective may appear "more wrong" than where you started from. [boggle] Why do you think this has anything to do with me? Placebos are useful falsehoods, and there's tons of research on them. Go look at Dweck and Seligman on the growth mindset and optimism, respectively. Hell, go study pickup or hypnosis or even acting, for crying out loud. Direct marketing, even. ANY practical art that involves influencing the beliefs of one's self or others, that's tied to reasonably timely feedback. To the extent that you find the teachings of these arts to be less than "true", and yet are unable to replicate the results of their masters, it is as irrational to insist that only the true can ever be useful, as it would be to assert that the useful must therefore be true. However, for all that the teachers of practical arts are often deluded in that latter way, they at least have the comfort of systematized winning. And the truth is not a substitute for that, however much blind faith you put into it. The pursuit of truth for its own sake is an irrational passion, unless knowing truth is your ONLY form of winning. In all other matters, knowing what to do is immensely more important than knowing why... and the why is only useful if it helps you to believe in something enough to make you actually DO something.
1loqi
You've really hedged your language here. Are we talking about beliefs, or "perspectives"? The two seem very different to me. Does anyone ever acquire a skill without trying new perspectives, unproven variations on existing "known-good" techniques? This is just exploration vs exploitation, which seems quite distinct from belief. I don't change betting strategies just because I'm in the middle of an experiment. Because it seems that you've had more experience with LW'ers rejecting your useful falsehoods than useful falsehoods in general, and I guessed this as the motive behind your original complaint. I could be mistaken. If I am, I'm curious as to which "terror" you're referring to. It seems fairly widely accepted here that a certain amount of self-deception is useful in the pickup domain, for example. Really? All self-described teachers of practical arts have the comfort of systematized winning? There are no snake-oil charlatans for whom things just "went well" and are now out to capitalize on it? How can we tell the difference? I think the above exemplifies the mismatch between your philosophy and mine. Yes, it's incorrect to claim that only true beliefs are useful. But the stuff of true beliefs (reason, empiricism) are the only tools we have when trying to figure out what wins and what doesn't. To adopt a given useful-but-otherwise-arbitrary belief U, I first need a true belief T that U is useful. Your position seems to be that U trumps T because of its intrinsic usefulness. My position is that T trumps U because U is inaccessible without T. I don't see any other way to reliably arrive at U instead of ~U or V. I am reminded of the Library of Babel.
5pjeby
I said "for all that" is not "for all of". Very different meaning. "For all that" means something like "despite the fact that", or "Although". I.e., "although the teachers of practical arts are often deluded, they at least have the comfort of systematized winning." What's more, it's you who said "self-described" -- I referred only to people who have some systematized winning. See, that's the sort of connotation I find interesting. How is "snake oil charlatan" connected to having things go well and wanting to capitalize on it? Would you want to be taught by someone who didn't have things go well for them? And if they didn't want to capitalize on it in some fashion, why would they be teaching it? (Even if the only capitalization taking place is that they enjoy teaching!) If you break down what you've just said, it should be easy to see why I think this sort of "thinking" is just irrationally-motivated reaction - the firing off "boo" lights in response to certain buttons being pushed. No - I'm saying that the simplest way to assess belief U is to try acting as if it were true. In fact, the ONLY way to assess the usefulness of U is to have one or more persons act as if it were true. Because without that, you aren't really testing U, you're testing U+X, where X is whatever else it is you believe about U, like, "I'm going to see if this works", or "I think this is stupid". Good epistemic hygiene in testing the usefulness of a belief requires that you not contaminate your test chamber with other beliefs. Now, that may sound like a defense of psychic phenomena. But it isn't. You don't need an absence of skepticism from the overall proceedings, only a temporary absence of skepticism in the performer. And the measurement of the performer's results can be as objective and skeptical as you like. (Although, for processes whose intent is also subjective -- i.e., to make you feel better about life or be more motivated -- then only the subjective experiencer can measure that
-2loqi
I did assume you held the position that these people are somehow identifiable. If your point was merely "there exist some people out there who are systematic winners"... then I'm not sure I get your point. Because "I figured out the key to success, I succeeded, and now I want to share my secrets with you" is the story that sells, regardless of actual prior circumstance or method. I don't think you understand why I bring up charlatans. This is a signaling problem. You're right... I would demand some kind of evidence of success from a teacher. But if these prerequisites are at all easier to come by than the real thing, there's going to be a lot of faking going on. My, you are confident in your theories of human motivation. You said (minus subsequent disclaimers, because this is what I was responding to), "teachers of the practical arts [...] have the comfort of systematized winning". It seems to me that this "comfort" is claimed far out of proportion to its actual incidence, which bears very directly on the whole issue of distinguishing "useful" signal from noise. If you do have legitimate insights, you're certainly not making yourself any more accessible by pointing to others in the field. If your point was merely "some deluded people win"... then I'm not sure I get your point. This response isn't really addressing my point of contention, with the result that I mostly agree with the rest of your comment (sans last paragraph). So I'll try to explain what I mean by "T". You say "skepticism is useful before you do something", and it's precisely this sort of skepticism that T represents. You leapt straight into explaining how I've just got to embrace U in order to make it work, but that doesn't address why I'm even considering U in the first place. Hence "I first need a true belief T that U is useful". Pardon me for a moment while I look into how useful it is to believe I'm a goat. Again, I think you're overstating this fear, but now that you mention theism, I can't ...
2pjeby
Well, in the case of at least marketing and pickup, you can generally observe the teacher's own results, as long as you're being taught directly. For acting, you could observe the ability of the teacher's students. Copywriting teachers (people who teach the writing of direct marketing ads) can generally give sales statistics comparisons of their improvements over established "controls". (Btw, in the direct marketing industry, the "control" is just whatever ad you're currently using; it's not a control condition where you don't advertise or run a placebo ad!) IOW, the practical arts of persuasion and belief do involve at least some empirical basis. One might quibble about what great or excellent acting or pickup might be, but anybody can tell bad acting or failed pickup. And marketing is measurable in dollars spent and actions taken. Marketers don't always understand math or how to use it, but they're motivated to use statistical tools for split-testing. The ancient Greeks thought fire was an element, but that didn't stop them from using fire. Developing a practical model and a "true" theory are quite often independent things. My point is that you don't need a true theory to build useful models, or to learn and use them. And in most practical arts related to belief or persuasion, you will need to "act as if" certain beliefs are true, whether or not they are, because those beliefs nonetheless represent a model for reproducing behaviors that produce results under some set of circumstances. For example, Seth Roberts' theory of calorie-flavor association is probably not entirely true -- but acting as if it were true produces results for some people under some circumstances. This represents progress, not failure. Right -- and my process for that, with respect to self-help techniques, is mainly to look at the claims for a technique, and sort for ones that can be empirically verified and claim comparable or improved benefits relative to the ones that I've already tried.
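(The split-testing mentioned above can be made concrete with a small sketch. The conversion counts are invented for illustration, and a two-proportion z-test is just one standard way a direct marketer might compare a challenger ad against the current "control" ad; nothing here is taken from the comment itself.)

```python
# Minimal split-test sketch: compare a challenger ad's conversion rate against the
# current "control" ad using a two-proportion z-test. All counts are invented.
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

control = (200, 10_000)     # 200 sales from 10,000 mailings of the current ad
challenger = (260, 10_000)  # 260 sales from 10,000 mailings of the new ad

z = two_proportion_z(*control, *challenger)
print(f"control {control[0]/control[1]:.1%} vs challenger {challenger[0]/challenger[1]:.1%}, z = {z:.2f}")
# |z| > 1.96 is the usual threshold for calling the lift significant at the 5% level.
```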
0pwno
Why should a belief be true just because it's useful? Or are you saying people are claiming a belief's usefulness is not true despite the evidence that it's useful?
2pjeby
Neither. I'm saying that a popular attitude of LW culture is to prefer not to "believe" the thing it's useful to believe, if there is any evidence the belief is not actually true, or often even if there is simply no peer-reviewed evidence explicitly associated with said belief. For example, self-fulfilling prophecies and placebo effects. Some people here react with horror to the idea of believing anything they can't statistically validate... some even if the belief has a high probability of making itself come true in the future.
5Nick_Tarleton
My immediate reaction to this paragraph is skepticism that I can believe something, if I don't believe the evidence weighs in its favor; other people might be able to choose what they believe, but I've internalized proper epistemology well enough that it's beyond me. On reflection, though, while I think there is some truth to this, it's also a cached oversimplification that derives its strength from being part of my identity as a rationalist.
-1Vladimir_Nesov
Related to: Belief in Self-Deception, Litany of Tarski.
5pwno
Well, while a self-fulfilling belief might help you accomplish one goal better, it may make you worse off accomplishing another (assuming that belief is not true). It may be the case that some false self-fulfilling beliefs will make you better off throughout your life, but that's hard to prove.
3pjeby
Thank you for eloquently demonstrating precisely what I'm talking about.
4timtyler
Results are neither right nor wrong - they just are.
3JamesCole
To expand a little on what timtyler said, I think you're mixing up beliefs and actions. Doing nothing doesn't make your beliefs less wrong, and placing bets doesn't make your beliefs more right (or wrong). Wanting to be 'less wrong' doesn't mean you should be conservative in your actions.
3HughRistik
I've also had mixed feelings about the concept of being "less wrong." Anyone else? Of course, it is easier to identify and articulate what is wrong than what is right: we know many ways of thinking that lead away from truth, but it is harder to know when ways of thinking lead toward the truth. So the phrase "less wrong" might merely be an acknowledgment of fallibilism. All our ideas are riddled with mistakes, but it's possible to make fewer mistakes or less egregious mistakes. Yet "less wrong" and "overcoming bias" sound kind of like "playing to not lose," rather than "playing to win." There is much more material on these projects about how to avoid cognitive and epistemological errors, rather than about how to achieve cognitive and epistemological successes. Eliezer's excellent post on underconfidence might help us protect an epistemological success once we somehow find one, and protect it even from our own great knowledge of biases, yet the debiasing program of LessWrong and Overcoming Bias is not optimal for showing us how to achieve such successes in the first place. The idea might be that if we run as fast as we can away from falsehood, and look over our shoulder often enough, we will eventually run into the truth. Yet without any basis for moving towards the truth, we will probably just run into even more falsehood, because there are exponentially more possible crazy thoughts than sane thoughts. Process of elimination is really only good for solving certain types of problems, where the right answer is among our options and the number of false options to eliminate is finite and manageable. If we are in search of a Holy Grail, we need a better plan than being able to identify all the things that are not the Holy Grail. Knowing that an African swallow is not the Holy Grail will certainly keep us from failing to find the true Holy Grail because we erroneously mistook a bird for it, but it tells us absolutely nothing about where to actually look for the Holy Grail. The ultimate way to be less wrong is radical skepticism...
2pjeby
And if you play the lottery long enough, you'll eventually win. When your goal is to find something, approach usually works better than avoidance. This is especially true for learning -- I remember reading a book where a seminar presenter described an experiment he did in his seminars, of sending a volunteer out of the room while the group picked an object in the room. After the volunteer returned, their job was to find the object, and a second volunteer would ring a bell either when they got closer or when they got further away. Most of the time, a volunteer receiving only negative feedback gives up in disgust after several minutes of frustration, while the people receiving positive feedback usually identify the right object in a fraction of the time. In effect, learning what something is NOT only negligibly decreases the search space, despite it still being "less wrong". (Btw, I suspect you were downvoted because it's hard to tell exactly what position you're putting forth -- some segments, like the one I quoted, seem to be in favor of seeking less-wrongness, and others seem to go the other way. I'm also not clear how you get from the other points to "the ultimate way to be less wrong is radical skepticism", unless you mean lesswrong.com-style less wrongness, rather than more-rightness. So, the overall effect is more than a little confusing to me, though I personally didn't downvote you for it.)
1HughRistik
Thanks, pjeby, I can see how it might be confusing what I am advocating. I've edited the sentence you quote to show that it is a view I am arguing against, and which seems implicit in an approach focused on debiasing. Yes, this is exactly the point I was making. Rather than trying to explain my previous post, I think I'll try to summarize my view from scratch. The project of "less wrong" seems to be more about how to avoid cognitive and epistemological errors than about how to achieve cognitive and epistemological successes. Now, in a sense, both an error and a success are "wrong," because even what seems like a success is unlikely to be completely true. Take, for instance, the success of Newton's physics, even though it was later corrected by Einstein's physics. Yet even though Newton's physics is "less wrong" than what came before it, I think this is a trivial sense which might mislead us. Cognitively focusing on being "less wrong" without sufficiently developed criteria for how we should formulate or recognize reasonable beliefs will lead to underconfidence, stifled creativity, missed opportunities, and eventually radical skepticism as a reductio ad absurdum. Darwin figured out his theory of evolution by studying nature, not (merely) by studying the biases of creationists or other biologists. Being "less wrong" is a trivially correct description of what occurs in rationality, but I argue that focusing on being "less wrong" is not a complete way to actually practice rationality from the inside, at least, not a rationality that hopes to discover any novel or important things. Of course, nobody on Overcoming Bias or LessWrong actually thinks that debiasing is sufficient for rationality. Nevertheless, for some reason or another, there is an imbalance: more material focuses on avoiding failure modes, and less on seeking success modes.
1HughRistik
At least one person seems to think that this post is in error, and I would very much like to hear what might be wrong with it.
0JGWeissman
Perhaps there are intuitive notions of "less wrong" that are different from "more right", but in a technical sense, they seem to be the same: Accounting for the uncertainty in your own mind only gets you so far, to a certain minimum of wrongness. To do better, to be less wrong, you have to actually be right about the rest of the universe outside your mind.
2Nick_Tarleton
True but irrelevant; this is psychology, not probability theory. Intuitively, to a first approximation, beliefs are either affirmed or not, and there's a difference between affirming fewer false beliefs and more true ones.
1JGWeissman
The fact that psychology can explain how the phrase "less wrong" can be misunderstood does not mean that the misunderstanding is the correct way to interpret that phrase when used by an online community that uses psychology, as well as probability theory, to inform the development of rationality. It does not make sense to interpret the title of our site with the very naivety that we seek to overcome.
3pjeby
That's what I've been saying, actually. Except that the naivety in question is the belief that brains do probability or utility, when it's well established that humans can have both utility and disutility, that they're not the same thing, and that human behavior about them is different. You know, all that loss/win framing stuff? It's not rational to expect human beings to treat "less wrong" as meaning the same thing (in behavioral terms) as "more right". Avoiding wrongness has different emotional affect and different prioritization of behavior and thought than approaching rightness. Think "avoiding a predator" versus "hunting for food". The idea that we can simultaneously have approach and avoidance behaviors and they're differently-motivating is backed by a (yes, peer-reviewed) concept called affective asynchrony. Strong negative or strong positive emotions can switch off the other system, but for the most part, they operate independently. And mistake-avoidance motivation reduces creativity, independence, risk-taking, etc. Heck, I'd be willing to bet some actual cash money that a controlled experiment would show significant behavioral differences between people primed with the terms "less wrong" and "more right", no matter how "rational" they rate themselves to be.
0pjeby
You bet: there's the one where you can be "less wrong" by never believing anything, because there are more possible false beliefs than true ones. You have now achieved perfect less-wrongness, at the cost of never having any more-rightness.
2JGWeissman
You missed the point. The intuitive meaning of "less wrong" you describe is a caricature of the ideal of this community. If by "never believing anything", you mean "don't assign any probability to any event", well then we give a person who does that a score of negative infinity, as wrong as it gets. If you mean they evenly distribute the probability mass amongst all possibilities, that is what we consider maximum entropy, a standard so low that anything worse might be considered "reversed intelligence". As Eliezer explained in the source I quoted earlier, we want to do better than freely confessed ignorance. We really do want actual intelligence, not just the absence of its opposite.
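(The two readings of "never believing anything" above can be restated with a logarithmic scoring rule. This is a generic illustration of the standard log score, not something taken from the thread; the outcome labels and probabilities are made up.)

```python
# Illustrative log score: your reward is log(probability you assigned to what happened).
# Assigning zero probability to the actual outcome scores negative infinity, while
# spreading probability evenly (maximum entropy) gives a low but finite baseline.
import math

def log_score(probs, actual):
    p = probs.get(actual, 0.0)
    return -math.inf if p == 0.0 else math.log(p)

outcomes = ["red", "blue", "green", "yellow"]
actual = "blue"

refuses = {o: 0.0 for o in outcomes}                # "doesn't assign any probability"
uniform = {o: 1 / len(outcomes) for o in outcomes}  # maximum-entropy ignorance
informed = {"blue": 0.7, "red": 0.1, "green": 0.1, "yellow": 0.1}

for name, belief in [("refuses", refuses), ("uniform", uniform), ("informed", informed)]:
    print(f"{name:8s} score: {log_score(belief, actual):.3f}")
```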
2pjeby
It's not a caricature of the actual behavior of many of its members... which notably does not live up to that ideal. No, I mean choosing to never consider the possibility that you might be utterly, horribly wrong in your certainty about what things are true... especially with respect to the things we would prefer to believe are true about ourselves and others. A segment of LW culture applauds the detection and management of superficial biases while being ludicrously blind to the massive bias of the very framework it operates in: the one where truth and reason must prevail at all costs, and where the idea of believing something false -- even for a moment, even in a higher cause -- is unthinkable. Is that a caricature of the Bayesian ideal? No kidding. But I'm not the one who's drawing it. What I'm specifically referring to here is the brigade whose favorite argument is that something or other isn't yet proven "true", and that they should therefore not try it... especially if they spend more time writing about why they shouldn't try something than it would take them to try it. Heck, not just why they shouldn't try something, but why no one should ever try anything that isn't proven. Why, thinking a new thought might be dangerous! And yes, someone actually argued that, in the context of a thread talking about purely-mental experiments that basically amounted to thinking. (Sure, they left themselves weasel room to argue that they weren't saying thoughts were dangerous, and yet they still used it as a fully general argument, applied to the specific case of experimenting with a thought process.) What's that saying about how, if given a choice between changing their mind and trying to prove they don't need to, most people get busy on the proof?
2JGWeissman
So, "never believing anything" means having unwavering certainty? Without knowing what "brigade" or techniques you are referring to, I have to wonder if the people involved were not looking for some absolute proof, but for evidence that a given technique is more likely to be helpful than harmful. Or maybe some of them had tried a bunch of techniques presented with similar claims, with no success, and decided they want to have a good reason for trying out a particular technique instead of the many similar techniques one might propose. They might even think that, if they knew the reasons that someone was proposing them, they might understand the technique better and use it more effectively, or even see that the reasoning was not quite right, but if they fix it, it suggests that a similar technique might work. They might not actually have the particular problem the technique is supposed to solve, and are seeking evidence about if it works for people who do have the problem.
3Nick_Tarleton
Good point, but a priori I wouldn't expect a self-help technique to be harmful in a way that's either hard to notice or hard to reverse. Can you think of some that are, especially ones where it would be hard to predict the harm beforehand from evidence specific to the technique? Not wanting to invest time and effort can be a good reason (then again, if you have time to argue at length in comment threads...), but the existence of similar techniques shouldn't matter. A greater number of options has been shown to lead to less willingness to choose anything (e.g.); beware. (FWIW, I suspect this has to do with a general heuristic to do the most defensible thing instead of the best thing.) Strongly agreed. Generally, though, I agree with pjeby's conclusion (tentatively, but only because so many others here disagree).
0JGWeissman
So, you want an example of a technique that I can argue is harmful, but where it is difficult to predict that harm? You want a known unknown unknown? I don't think I can provide that. But if you look at my assessment of the keeping-cookies-available trick, I explain how there is some possibility of harm and what kinds of evidence one might use to evaluate if the risk is worth the potential benefit. Suppose you have 10 tricks that you might try to solve a particular problem, and that it might take a day to try one trick and evaluate if it worked for you. Would it be a good idea to spend some time to figure out if one of the tricks stands out from the others as more likely to work, either generally or for you in particular? Being able to systematically try the trick that is well supported and understood has some value. Or if, in discussing one aspect of the trick that you think would never work, it turns out you were right that what you understood would not work, but the actual trick is something different, then you have not just saved a lot of time, you have prevented yourself from losing the opportunity to try the real trick.
1Nick_Tarleton
No, an example of a technique that is harmful, but whose harm would have been difficult for a reasonable person to predict in advance. The potential downside of the cookie trick is easy to notice and easy to reverse (well, I guess you can't easily reverse gaining epsilon weight, but you can limit it to epsilon), so as a reason not to try it, it's very weak. I take my point back. If you can only try one thing, it makes sense to just act if there is only one option, but to demand a good reason before wasting your chance if there are multiple options. (Formally, this is because the opportunity cost of failure is greater in the latter case.) Realistically, "willpower to engage in psychological modification" seems like it would often be a limiting factor producing this effect; still, I would expect irrational choice avoidance to be a factor in many cases of people demanding a reason to favor one option.
3pjeby
My point is that this argument is fully general. What is the evidence that empirical rationality is more likely to be helpful than harmful? If we look at the evidence presented here, where the evidence more often suggests that being too rational can hurt your happiness and effectiveness, why is this not treated as a reason to wait for more study of this whole "rationality" business to make sure it's more helpful than harmful? It's really ironic that optimism is as much a mind-killer here as politics and religion. Hell, the fact that religion can be shown to have empirically positive effects on people's lives is often viewed here as a depressing problem, rather than an opportunity to learn something about how brains work. The problem of understanding the god-shaped hole is something people talk a lot about, but very few people are actually doing anything about it.
-2JGWeissman
Well, we have reliable agriculture, reasonably effective transportation infrastructure, flight, telecommunication, powerful computers with lots of useful software because lots of people worked hard to believe things that are true, and used those true beliefs to figure out how to accomplish their goals. Asking for evidence is not a fully general counterargument. It distinguishes arguments that have evidence in their favor from those which don't. And keep in mind, the social process of science, which you seem to think holds you to too high of a standard, is not merely interested in your ideas being right, but that you have effectively communicated them to the extent that other people can verify that they are right. You might discover the greatest anti-akrasia trick ever, but if you can't explain it so that other people can use it and have it work even if you are not there to guide the process, then you have only helped yourself and your clients, and talking about it here is not helping. Of course, you could take the opportunity to figure out how to explain it better, though it would require you to "consider the possibility that you might be utterly, horribly wrong in your certainty about what things are true".
2pjeby
Two things I forgot in my other reply: first, testing on yourself is a higher standard than peer review, if your purpose is to find something that works for you. Second, if this actually were about "my" ideas (and it isn't), I've certainly effectively communicated many of them to the extent of verifiability, since many people have reported here and elsewhere about their experiments with them. But very few of "my" ideas are new in any event -- I have a few new approaches to presentation or learning, sure, maybe some new connections between fields (ev psych + priming + somatic markers + memory-prediction framework + memory reconsolidation, etc.), and a relatively-new emphasis on real-time, personal empirical testing. (I say relatively new because Bandler was advocating extreme testing of this sort 20+ years ago, but for some reason it never caught on in the field at large.) And I'm not aware that any of these ideas is particularly controversial in the scientific community. Nobody's pushing for more individual empirical testing per se, but the "brief therapy" movement that resulted in things like CBT is certainly more focused in that direction than before. (The reason I stopped even bothering to write about any of that, though, is simply that I ended up in some sort of weird loop where people insist on references, and then ignore the ones I supply, even when they're online papers or Wikipedia. Is it any wonder that I would then conclude they didn't really want the references?)
2pjeby
Those are the products of rationalism. I'm asking about evidence that the practice of (extreme) rationalism produces positive effects in the lives of the people who practice it, not the benefits that other people get from having a minority of humans practice it. It is if you also apply the status quo bias to choose which evidence to count. I really wish people wouldn't conflate the discussion of learning and attitude in general with the issue of specific techniques. There is plenty of evidence for how attitudes (of both student and teacher) affect learning, yet somehow the subject remains quite controversial here. (Edited to say "extreme rationalism", as suggested by Nick Tarleton.)
0Nick_Tarleton
You should probably be asking about extreme rationality.
-2Vladimir_Nesov
Evidence is demanded for communicating the change in preferred decision. If I like eating cookies, and so choose to eat cookies, it takes at least a deliberative thought to change my mind. I may have all the data, but changing a decision requires considering it. I may realize that I'm getting overweight, and that most of my calories come from cookies, so I change my mind and start preferring the decision of not eating cookies. If a guy on the forum says, to my disbelief, that stopping eating the cookies in my particular situation will actually make me even more overweight, I won't be able to change my mind as a result of hearing his assertion. I consider what it'd take to change my mind, and present him with a constructive request: find a few good studies supporting your claims, and show them to me. That's what it takes to change my mind, and I can think of no other obvious way for him to convince me to change this decision.
2pjeby
You mean status quo bias, like the argument against the Many-Worlds interpretation? It's funny that you mention this, because I actually know of an author that says something just similar enough to that idea that you could maybe confuse what she says as meaning you should eat the cookies. Specifically, she posits a mechanism which causes some people to eat compulsively when they believe they will not have enough food in the future, regardless of whether they're hungry now. She actually encourages these people to keep stores of indulgence foods available in all places at all times, in order to produce a feeling of security that negates their compulsion to eat now -- in effect, they can literally procrastinate on overeating, because they could now do it "any time". There's no particular moment at which they need to eat up because they're about to be out of reach of food. I bring this up because, if you heard this theory, and then misinterpreted it as meaning you should eat the cookies, then it would be quite logical for you to be quite skeptical, since it doesn't match your experience. However, if you simply observed your past experience of overeating and found a correlation between times when you ate cookies and a pending separation from food (e.g. when being about to go into a long meeting), I would be very disappointed for your rationality if you then chose NOT to try bringing the cookies into the meeting with you, or hiding a stash in the bathroom that you could excuse yourself for a moment to get, or even just focusing on having some right there when you get out of the meeting. And yes, this metaphor is saying that if you think you need studies to validate things that you can observe first in your own past experience, and then test in your present, then you've definitely misunderstood something I've said. (Btw, in case anyone asks, the author is Dr. Martha Beck and the book I'm referring to above is called The Four-Day Win.)
4Eliezer Yudkowsky
I strongly suspect that this trick wouldn't work on me - the problem is that I've taught my brain to deliberately keep a step ahead of this sort of self-deception. Even if I started out by eating a whole pack of cookies, the second pack, that I was just supposed to keep available and feel the availability of, but not eat, would not feel available. If it was truly genuinely available and it was okay to eat it, I would probably eat it. If not, I couldn't convince myself it was available. What I may try is telling myself a true statement when I'm tempted to eat, namely that I actually do have strong food security, and I may try what I interpret as your monoidealism trick, to fill my imagination with thoughts of eating later, to convince myself of this. That might help - if the basic underlying theory of eating to avoid famine is correct. Some of the Seth Roberts paradigm suggests that other parts of our metabolism have programmed us to eat more when food is easily available. We could expect evolution to be less irrational than the taxi driver who quits early on rainy days when there are lots of fares, and works harder and longer when work is harder to come by, in order to make the same minimum every day. Another thought is that it may be a bad situation for your diet to ever allow yourself to be in food competition with someone else - to ever have two people, at least one of whom is trying to diet, eating from the same bag of snacks in a case where the bag is not immediately refilled on being consumed. 'Tis a pity that such theories will never be tested unless the diet-book industry and its victims/prey/readers become something other than what they are now; even if I were to post saying this trick worked, it would only be one more anecdote among millions on the Internet.
4pjeby
IIRC, she only advocated this theory for people who were binging in response to anticipated hunger, and not as a general theory of weight loss. It's only a tiny part of the book as a whole, which also discussed other emotional drivers for eating. Part of her process includes making a log of what you eat, at what time of day, along with what thoughts you were thinking and what emotional and physical responses you were having... along with a reason why the relevant thought might not be true. I haven't tried it myself -- I actually didn't buy the book for weight loss, but because I was intrigued by her hypothesis that it only takes four days to implement a habit (not 21 or 30 as traditional self-help authors claim), provided that the habit doesn't represent any sort of threat to your existing order. For example, most people can easily learn a new route to work or school within four days of moving or changing jobs or schools. That is, it's only habits that conflict in some way with an existing way of doing things that are difficult to form, so her proposal is to use extremely small increments, like her own example of driving to the gym every morning for four days... but just sitting in the parking lot and not actually going in.... then going in and sitting on a bike but not exercising... etc. At each stage, four days of it is supposed to be enough to make what you've already been doing a non-threatening part of your routine. I've used the approach to implement some small habits, but nothing major as yet. Seems promising so far.
0JGWeissman
It seems that keeping cookies constantly available so that one never feels they will be unavailable does not involve any sort of self-deception. One can honestly tell oneself that they don't have to eat the cookie now, it will still be there later. But still, this trick might be harmful to some people. If someone instead will just eat cookies whenever they are available, without any regard to future availability, this will cause that person to eat a lot of cookies. It might help to have some studies that say some percentage of people are the sort this technique helps, and some other percentage are the sort that are harmed, or better yet, identify some observable characteristic that predicts how a person would be affected. With this information, people can figure out if it makes sense for them to risk some time making their problem worse by having more cookies available for a time in order to maybe learn a technique that solves their problem. They might also be able to figure out if they should try some other trick they heard about first.
3pjeby
And if you cross the street, you might get hit by a car. This sort of reasoning goes on and on but never gets you anywhere. If you don't want to do something, you can always find a reason. Sure, that doesn't mean that ANY reason is valid, or that EVERY objection is invalid. However, that too is another step in a chain of reasoning that never ends, and never results in taking action. Instead, you will simply wait and wait and wait for someone else to validate your taking action. So if you don't take status quo bias into consideration, then you have no protection against your own confabulation, because the only way to break the confabulation deadlock is to actually DO something. Even if you don't know what the hell you're doing and try things randomly, you'll improve as long as there's some empirical way to measure your results, and the costs of your inevitable mistakes and failures are acceptable. Hell, look at how far evolution got, doing basically that. An intelligent human being can certainly do better... but ONLY by doing something besides thinking. After all, the platform human thinking runs on is not reliable. In principle, LWers and I agree on this. But in practice, LWers argue as if their brains were reliable reasoning machines, instead of arguing machines! I learned the hard way that my brain's confabulation -- "reasoning" -- is not reliable when it is not subjected to empirical testing. It works well for reducing evidence to pattern and playing with explanatory models, but it's lousy at coming up with ideas in the first place, and even worse at noticing whether those ideas are any good for anything but sounding convincing. One of my pet sayings is that "amateurs guess, professionals test". But "test" in the instrumental world does not mean a review of statistics from double-blind experiments. It means actually testing the thing you are trying to build or fix, in as close to the actual use environment as practical. If my mechanic said statistics show it ...
-2JGWeissman
The fact that one might get hit by a car when crossing the street does not prevent people from crossing the street; it causes them to actually look to see if cars are coming before crossing the street. Did you miss the part where I talked about how the evidence might convince someone the risk is acceptable in their case? Or how it might help them to compare it to another trick that is less risky or more likely to work for them that they could try first? Getting volunteer subjects for a study is different than announcing a trick on the internet and expecting people to try it. Where do you think the data is going to come from if people just try it on their own? How are you going to realize if you have suggested a trick that doesn't work, or that only works for some people, if you accept all anecdotal success stories as confirming its effectiveness, but reject all reports of failure because people just make excuses?
3pjeby
I can only assume you're implying that that's what I do. But as I've already stated, when someone has performed a technique to my satisfaction, and it still doesn't work, I have them try something else. I don't just say, "oh well, tough luck, and it's your fault". There are only a few possibilities regarding an explanation of why "different things work for different people":

1. Some things only work on some people, and this is an unchanging trait attributable to the people themselves,

2. Some things only work on certain kinds of problems, and many problems superficially sound similar but are actually different in their mechanism of operation (so that technique A works on problem A1 but not A2, and the testers/experimenters have not yet discerned the difference between A1 and A2), and

3. Some people have an easier time of learning how to do some things than others, depending in part on how the thing is explained, and what prior beliefs, understandings, etc. they have. (So that even though a test of technique A is being performed, in practice one is testing an unknown set of variant techniques A1, A2,...)

On LW, #1 is a popular explanation, but I have seen much more evidence that makes sense for #2 and #3. (For example, not being able to apply a technique and then later learning it supports #3, and discovering a criterion that predicts which of two techniques will be more likely to work for a given problem supports #2.) Of course, I cannot 100% rule out the possibility that #1 could be true, but it seems like pretty long odds to me. There are so many clear-cut cases of #2 and #3 that, barring actual brain damage or defect, #1 seems like adding unnecessary entities to one's model, without any theoretical or empirical justification whatsoever. More than that, it sounds exactly like attribution error, and an instance of Dweck's "fixed" mindset as well. In other words, we can expect belief in #1 to be associated with a mindset that is highly correlated with ...
1AdeleneDawner
Actually, it might make more sense to try to figure out why it works sometimes and not others, which can even happen in the same person... like, uh, me. If I'm careful about what I keep in the house, I wind up gorging on anything tasty almost as soon as I get it, and buying less healthy 'treats' more often ('just this once', repeatedly) when I go out. If I keep goodies at home, I'll ignore them for a while, but then decide something along the lines of "it'd be a shame to let this go to waste" and eat them anyway. There are different mental states involved in each of those situations, but I don't know what triggers the switch from one to another.
-2Vladimir_Nesov
I mean the argument being too weak to change one's mind about a decision. It communicates the info, changes the level of certainty (a bit), but it doesn't flip the switch.
0[anonymous]
I think this is an excellent point; I'm not sure it's a valid criticism of this community.

The word "ideology" sounds wrong. One of the aspects of x-rationality is hoarding general correct-ideas-recognition power, as opposed to autonomously adhering to a certain set of ideas.

It's the difference between an atheist-fanatic who has a blind conviction in the nonexistence of God and participates in anti-theistic color politics, and a person who has a solid understanding of the natural world, and from this understanding concludes that a certain set of beliefs is ridiculous.

7conchis
As the wiki link points out, the word "ideology" has a fairly neutral sense in which it simply refers to "a way of looking at things", which seems to reflect Byrnema's focus on the underlying assumptions this community brings to things. I don't think it's a stretch to suggest that many of us here probably do share particular ways of looking at things. It's possible that these general ways of looking-at-things do in fact let us track reality better than other ways of looking-at-things; but it's also possible that we have blind spots, and that our shared "ideology" may sometimes get in the way of the x-rationality we aspire to.
2JamesCole
'Ideology' may have a fairly neutral sense (of "a way of looking at things"), but I don't think that is what it usually means to people, or is how it's used in the original post. "A burgeoning ideology needs a lot of faithful support in order to develop" isn't true of all "way[s] of looking at things". "The ideology needs a chance to define itself as it would define itself, without a lot of competing influences watering it down, adding impure elements, distorting it." implies that there isn't much that can be done to defend or reject views other than by having sheer numbers, and I don't think that's (so much) the case here. But I do take byrnema's point that the community does need an initial period to define itself. What this actually reminds me more of is the development of new paradigms. As Kuhn has described, it takes a while for a new paradigm to muster all the resources it needs to fully define and justify itself, and for a fair while it will be unfairly attacked by others who judge it by the tools and criteria of the old paradigm(s). For most people in society, the sort of viewpoint embodied in LW (which you can see as like a new paradigm) is quite different to how they are used to seeing things (which you can see as like an old paradigm).
1AdeleneDawner
Shouldn't we be working on being better at ignoring social signaling than this? Why are you assuming that 'ideology', even given the social-signaling meaning of it, is a bad thing, rather than just a thing? (Thank you for providing a good example of something I've been trying to find a way to point out for the last few days.)
0JamesCole
Hi, sorry, but I'm not clear on how the social signaling you mention relates to my comment. I didn't think my comment said anything about ideology being bad, though if you're interested in my opinion on it, here it is. I take ideology to be where your belief in something is less about you believing it is actually true, and more to do with other factors such as 'because I want to be part of the group who holds these views'. (Please take that description of ideology with a grain of salt... I find it very difficult to describe it briefly.) I think that can have negative consequences.
0Technologos
Ideology, given the social-signaling meaning, is taken to be anti-rational, so naturally it would be something of an insult around here. I'm not sure that social signaling is strictly the point here--insofar as language is only useful intersubjectively, I would instead suggest that we should attempt to communicate a point in a way that leads to the least confusion rather than insisting that we try to drop all the connotations we have had ingrained for the duration of our lives. In part, I think this is why we use jargon--we are defining new words with none of the connotations of the old ones, and this might be helpful in moving us past the effects of those connotations.
3AdeleneDawner
My point wasn't about whether 'ideology' was intended to mean a concept that could be taken as insulting or not. My point was that reacting to it as if it was an insult, and getting defensive, is significantly less rational than taking a more neutral stance. If the claim that there's an ideology here is false, examine the poster's motivation and react appropriately. Taking offense is a subset of this option, which I'd consider valid if they appear to have been malicious, but that doesn't seem to have been the case, and even if it was, taking offense would probably not be the best reaction to the situation. If the claim is true (which I think it is), examine the situation to determine if that's a useful or harmful aspect (I think it's at least partly useful; the coherent ideology makes it easier for new members to get started - but the negative could easily outweigh the positive at higher levels of rationality... but then, learning enough epistemic hygiene to break out of ideologies is a big enough part of that that it may be moot... dunno. ask someone who's further along than I am.) and react appropriately, by either working on a solution or (if necessary) defending the status quo. Taking offense or picking nits about the original comment seems pretty pointless, in this case, when there are better angles of the situation to be working on, and comes across like you're trying to deny a fact. Please bear in mind that I'm using this as an example of this kind of problem; it's not an especially egregious one, it's just convenient.

I have two proposals (which happen to be somewhat contradictory) so I will make them in separate posts.

The first is that the real purpose of this site is to create minions and funding for Eliezer's mad scheme to take over the world. There should be more recognition and consciousness of this underlying agenda.

This is an interesting and worthwhile idea, though TBH I'm not sure I agree with the premise.

The whole "rationality" thing provides more of a framework that a status quo. People who make posts like "Well, I'm a rationalist and a theist, so there! Ha!" do tend to get voted down (when they lack evidence/argument), but I hardly see a problem with this. This community strongly encourages people to provide supporting evidence or argumentation and (interestingly) seems to have no objections to extremely long posts/replies.I have yet to see a ... (read more)

I don't know if this actually counts as a dissenting opinion, since there seems to be a conclusion around here that a little irrationality is okay. But I published a post about the virtues of irrationality (modeled after Yudkowsky's twelve virtues of rationality), found here:

http://antisingularity.wordpress.com/2009/06/05/twelve-virtues-of-irrationality/

I suppose my attempt is to provide a more rational view by including irrationality but that is merely my opinion. I believe that there are good irrational things in the universe and I think that is a dissent... (read more)

4saturn
It seems like, to some extent, you are confusing rationality with being "Spock".
4Vladimir_Nesov
Emotion is not irrational. Luck can't be irrational, because it doesn't exist. Aspects of human thought, such as imagination, are the bedrock of human rationality.
2pjeby
Really? At least one scientist appears to disagree with you:
1Cyan
If we define "luck" as an unusual propensity for fortunate/unfortunate things to happen at random, then Wiseman does not disagree. Wiseman explains the subjective experience of luck in terms of more fundamental character traits that give rise to predictable tendencies. There's nothing irrational about it; arational, maybe, but not irrational.
1pjeby
Yes, exactly. The fact that the typical person's understanding of "luck" does not include a correct theory of how "luck" occurs, doesn't prevent them from observing that there is in fact such a thing and that people vary in their degree of having it. This sort of thing happens a lot, because human brains are very good at picking up certain kinds of patterns about things that matter to them. They're just very bad at coming up with truthful explanations, as opposed to simple predictive models or useful procedures! The crowd that believes in "The Secret" is talking about many of the same things as Wiseman's research; I've seen all 4 of his principles in the LoA literature before. I haven't read his book, but my guess is that I will have already seen better practical instruction in these principles from books that were written by people who claim to be channeling beings from another dimension... which would just go to show how better theories aren't always related to better practices. To be fair, it is a Fast Company piece on the research; I really ought to read the actual book before I judge. Still, from previous experience, scientific advice tends to be dreadfully vague compared to the advice of people who have experience coaching other people at doing something. (i.e. scientific advice is usually much more suggestive than prescriptive, and more about "what" than "how".)
0antisingularity
I agree that emotion is not totally irrational. There are systems to it, most of which we probably don't understand in the slightest. "Relinquish the emotion which rests upon a mistaken belief, and seek to feel fully that emotion which fits the facts" And how am I to know which emotion is the one that fits the facts? If I am cheated, should I be sad or angry (or maybe something else)? Give me an objective way to deal with every emotional situation and then we can call it rational. I still think luck exists and is irrational. And imagination too.
2Jack
Do you mean luck as in the fact that random events occur to randomly distributed individuals, so some will have more good things than bad happen to them and others will have more bad things happen to them than good? Or do you mean that people have some ineffable quality which makes either good things or bad things more likely to occur to them? The first seems obviously true; the second strikes me as quite a claim. By the way, your blog is quite good. As someone with somewhat middle-of-the-road views on singularity issues (more generous than you, more skeptical than most people here), I find your presence here very welcome. I suggest those passing by check out the rest of the articles. It's usually good to read both sides.
0antisingularity
Thanks for the compliments. I had initially been worried that I might be poorly received around here, but people are genuinely encouraging and looking for debate and perspective. As for luck, I am really referring to your first statement: random events happening to distributed individuals. It's just a tendency in the universe, I know, that we happen to call luck. But since we get to decide what's good and what's bad, it seems to me that sometimes really improbably good things will happen (good luck) and sometimes very improbably bad things will happen (bad luck).
0byrnema
Interesting. I see this as some kind of antithesis to rationality; being in some sense exactly what rationalists deny. Sure, we may believe in chaos and unpredictability, but we still believe that rationality is the best way to deal with it. While I can sympathize with the view that the universe is sometimes too complex, I do believe that predictable success is possible to some extent (probabilistically, for example), and that being rational is the way to achieve it. If being irrational predictably gives better results in any specific context, then our rational theory needs to be expanded to include that irrational behavior as rational. My strongest belief is that the theory of rationality can always be expanded in a consistent way to include all behavior that yields success. I realize this is a substantial assumption. I would like to learn more about what sorts of things are nevertheless "beyond" rationality, and whether there are ways to be more rational about them, or whether they are simply separate (so that the label rational/irrational doesn't apply). For example, I think rationalists generally agree that preferences and values are outside rationality.
1antisingularity
"Interesting. I see this as some kind of anti-thesis to rationality; being in some sense exactly what rationalists deny. Sure, we may believe in chaos and unpredictability, but we still believe that rationality is the best way to deal with it." Yes, I suppose you could characterize it as an anti-thesis to rationality. Mostly, I think that rationality is an excellent way to deal with many things. But it is not the solution to every single problem (love is probably the best example of this I can give). As for things beyond rational, well, your second paragraph, you might agree, is beyond rational. It's not irrational, but it's a value judgment about the fact that the theory of rationality can always be expanded. You can't justify it within the theory itself. So I'm not advocating for irrationality as a better means to rationality, simply that they both exist and both have their uses. To believe that you can and should increase your rationality is both rational and great. But to believe that you will always be able to achieve perfect rationality strikes me as a bit irrational.

I would say the direction I most dissent from Less Wrong is that I don't think 'rationality' is inherently anything worth having. It's not that I doubt its relevance for developing more accurate information, nor its potential efficacy in solving various problems, but if I have a rationalistic bent that is mainly because I'm just that sort of person - being irrational isn't 'bad', it's just - irrational.

I would say the sort of terms and arguments I most reject are those with normative-moral content, since (depending on your definition) I either do not beli... (read more)

I'm continually surprised that so many people here take various ideas about morality seriously. For me, rationality is very closely associated with moral skepticism, and this view seems to be shared by almost all the rationalist type people I meet IRL here in northern Europe. Perhaps it has something to do with secularization having come further in Europe than in the US?

The rise of rationality in history has undermined not only religion, but at the same time and for the same reasons, all forms of morality. As I see it, one of the main challenges for people... (read more)

2PhilGoetz
I think you need to define what you mean by "morality" a lot more carefully. It's hard to attribute meaning to the statement "People should act without morals." Even if you mean "Everyone should act strictly within their own self-interest", evolutionary psychology would demand that you define the unit of identity (the body? the gene?), and would smuggle most of what we think of as "morality" back into "self-interest".
2Jess_Riedel
Moral skepticism is not particularly impressive as it's the simplest hypothesis. Certainly, it seems extremely hard to square moral realism with our immensely successful scientific picture of a material universe. The problem is that we still must choose how to act. Without a morality, all we can say is that we prefer to act in some arbitrary way, much as we might arbitrarily prefer one food to another. And... that's it. We can make no criticism whatsoever about the actions of others, not even that they should act rationally. We cannot say that striving for truth is any better than killing babies (or following a religion?) any more than we can say green is a better color than red. At best we can make empirical statements of the form "A person should act in such-and-such manner in order to achieve some outcome". Some people are prepared to bite this bullet. Yet most who say they do continue to behave as if they believed their actions were more than arbitrary preferences.
0Ziphead
My point is that people striving to be rational should bite this bullet. As you point out, this might cause some problems - which is the challenge I propose that rationalists should take on. You may wish to think of your actions as non-arbitrary (that is, justified in some special way, cf. the link Nick Tarleton provided), and you may wish to (non-arbitrarily) criticize the actions of others etc. But wishing doesn't make it so. You may find it disturbing that you can't "non-arbitrarily" say that "striving for truth is better than killing babies". This kind of thing prompts most people to shy away from moral skepticism, but if you are concerned with rationality, you should hold yourself to a higher standard than that.
2Jess_Riedel
I think the problem is much more profound than you suggest. It is not something that rationalists can simply take on with a non-infinitesimal confidence that progress will be made. Certainly not amateur rationalists doing philosophy in their spare time (not that this isn't healthy). I don't mean to say that rationalists should give up, but we have to choose how to act in the meantime. Personally, I find the situation so desperate that I am prepared to simply assume moral realism when I am deciding how to act, with the knowledge that this assumption is very implausible. I don't believe this makes me irrational. In fact, given our current understanding of the problem, I don't know of any other reasonable approaches. Incidentally, this position is reminiscent of both Pascal's wager and of an attitude towards morality and AI which Eliezer claimed to previously hold but now rejects as flawed.
0Nick_Tarleton
OB: "Arbitrary" (Wait, Eliezer's OB posts have been imported to LW? Win!)
7Jess_Riedel
I've read it before. Though I have much respect for Eliezer, I think his excursions into moral philosophy are very poor. They show a lack of awareness that all the issues he raises have been hashed out decades or centuries ago at a much higher level by philosophers, both moral realists and otherwise. I'm sure he believes that he brings some new insights, but I would disagree.
1Technologos
My position may be one of those you criticize. I believe something that bears an approximation to "morality" is both worth adhering to and important. I think a particular kind of morality helps human societies win. Morality, as I understand it, consists of a set of constraints on acceptable utility functions combined with observable signals of those constraints. Do I believe that this type of morality is in any sense ultimately correct? No. In a technical sense, I am a complete and total moral skeptic. However, I do think publicly-observable moral behavior is useful for coordination and cooperation, among other things. To the extent that this makes us better off--to the extent it makes me better off--I would certainly think that even a moral skeptic might find it interesting. Perhaps LWers are "too uncritical toward their moral prejudices." But it's at least worth examining which of those "moral prejudices" are useful, where this doesn't conflict with other, more deeply held values. Finally, morality broadly enough construed is a condition of rationality: if morality is taken to simply be your set of values and preferences, then it is literally necessary to a well-defined utility function, which is itself (arguably) a necessary component of rationality.
0Ziphead
It seems to me that your position can be interpreted in at least two ways. Firstly, you might mean that it is useful to have common standards for behavior to make society run more smoothly and peacefully. I think almost everyone would agree with this, but these common standards might be non-moral. People might consider them simple social conventions that they adopt for reasons of self-interest (to make their interactions with society flow more smoothly), but that have no special metaphysical status and do not supersede their personal values if a conflict arises. Secondly, you might mean that it is useful that people in general are moral realists. The question then remains how you yourself, being "a complete and total moral skeptic", relate to questions of morality in your own life and in communication with people holding similar views. Do you make statements about what is morally right or wrong? Do you blame yourself or others for breaking moral rules? Perhaps you don't, but I get the impression that many LWers do. (In the recent survey, only 10.9% reported that they do not believe in morality, while over 80% reported themselves to support some moral theory.) In regard to the second interpretation, one might also ask: If it works for you to be a moral skeptic in a world of moral realists, why shouldn't it work for other people too? Why wouldn't it work for all people? More to the point, I don't think that morality is very useful. Despite what some feared, people didn't become monsters when they stopped believing in God, and their societies didn't collapse. I don't think any of these things will happen when they stop believing in morality either.
1Technologos
I don't think they do have any "special metaphysical status," and indeed I agree that they are "simple social conventions." Do I make statements about moral rights and wrongs? Only by reference to a framework that I believe the audience accepts. In LW's case, this seems broadly to be utilitarian or some variant. That's precisely my point--morality doesn't have to have any metaphysical status. Perhaps the problem is simply that we haven't defined the term well enough. Regardless, I suspect that more than a few LWers are moral skeptics, in that they don't hold any particular philosophy to be universally, metaphysically right, but they personally value social well-being in some form, and so we can usually assume that helping humanity would be considered positively by a LW audience. As long as everyone's "personal values" are roughly compatible with the maintenance of society, then yes, losing the sense of morality that excludes such values may not be a problem. I was simply including the belief that personal values should not produce antisocial utility functions (that is, utility functions that have a positive term for another person's suffering) as morality. Do I think that these things are metaphysically supported? No. But do I think that with fewer prosocial utility functions, we would likely see much lower utilities for most people? Yes. Of course, whether you care about that depends on how much of a utilitarian you are.
-2byrnema
Certainly, the main ideological tenet of Less Wrong is rationality. However, other tenets slip in, some more justified than others. One of the tenets that I think needs a much more critical view is something I call "reductionism" (perhaps closer to Daniel Dennett's "greedy reductionism" than what you think of). The denial of morality is perhaps one of the best examples of the fallacy of reductionism. Human morality exists and is not arbitrary. Intuitively, the denial of morality is absurd, and ideologies that result in conclusions that are intuitively absurd should require extraordinary justification to keep believing in them. In other words, you must be a rationalist first and a reductionist second. First, science is not reductionist. Science doesn’t claim that everything can be understood by what we already understand. Science makes hypotheses, and if the hypotheses don’t explain everything, it looks for other hypotheses. So far, we don’t understand how morality works. It is a conclusion of reductionism – not science – that morality is meaningless or doesn’t exist. Science is silent on the nature of morality: science is busy observing, not making pronouncements by fiat (or faith). We believe that everything in the world makes sense. That everything is explainable, if only we knew enough and understood enough, by the laws of the universe, whatever they are. (As rationalists, this is the only single, fundamental axiom we should take on faith.) Everything we observe is structured by these laws, resulting in the order of the universe. Thus a falsehood may be arbitrary, but a truth, and any observation, will never be arbitrary, because it must follow these laws. In particular, we observe that there are laws, and order, over all spatial and temporal scales. At the atomic/molecular scale, we have the laws of physics (classical mechanics). At the organism level, we have certain laws of biology (mutation, natural selection, evolution, etc). At the meta-cognitive level,
4Vladimir_Nesov
No, reductionism doesn't lead to denial of morality. Reductionism only denies high-level entities the magical ability to directly influence reality independently of the underlying quarks. It will only insist that morality be implemented in quarks, not that it doesn't exist.
0byrnema
I agree that if morality exists, it is implemented through quarks. This is what I meant by morality not being transcendent. Used in this sense, as the assertion of a single magisterium for the physical universe (i.e., no magic), I think reductionism is another justified tenet of rationality -- part of the consistent ideology. However, what would you call the belief I was criticizing? The one that denies the existence of non-material things? (Of course the "existence" of non-material things is something different than the existence of material things, and it would be useful to have a qualified word for this kind of existence.)
2Apprentice
Eliminative materialism?
0byrnema
Yes, that is quite close. And now that I have a better handle I can clarify: Eliminative materialism is not itself "false" -- it is just an interesting purist perspective that happens to be impracticable. The fallacy is when it is inconsistently applied. Moral skeptics aren't objecting to the existence of morality because it is an abstract idea; they are objecting to it because the intersection of morality with our current logical/scientific understanding reduces to something trivial compared to what we mean when we talk about morality. I think their argument is along the lines of: if we can't scientifically extend morality to include what we do mean (for example, at least label in some rigorous way what it is we want to include), then we can't rationally mean anything more.

One thing that came to mind just this morning: Why is expected utility maximization the most rational thing to do? As I understand it (and I'm a CS, not an Econ, major), prospect theory and the utility function weighting used in it are usually accepted as how most "irrational" people make their decisions. But this might not be because they are irrational but rather because our utility functions do actually behave that way, in which case we should abandon EU and just try to maximize well-being with all the quirks PT introduces (such as loss being more... (read more)

0Velochy
Sorry. I thought about things a little and realized that a few things about prospect theory definitely need to be scrapped as bad ideas -- the probability weighting, for instance. But other quirks (such as loss aversion, or having different utilities for loss vs. gain) might be useful to retain... It would really be good if I knew a bit more about the different decision theories at this point. Does anyone have any good references that would give an overview?
1Technologos
The standard argument in economics against anything other than EU maximization (note that consistent loss-aversion may arise from diminishing marginal utility of money; loss-aversion is only interesting when directionally inconsistent) involves Dutch-booking: the ability to set people up as money pumps and extract money from them by repeatedly offering subjectively preferred choices that violate transitivity. Essentially, EU maximization might be something we want to have because it induces consistency in decision-making. For instance, imagine a preference ordering like the one in Nick_Tarleton's adjacent comment, where a gain of +10 is valued differently from +20 followed by -10. Let us say the agent values a sure +9 the same as +20 followed by -10 (without loss of generality; just pick a number on the left side). Then I can offer you +9 in exchange for the +20-10 bundle repeatedly; you'll accept every time, but each swap leaves you a dollar poorer than you would otherwise have been. The reason that rational risk aversion (which is to say, diminishing marginal utility of money) is not a money pump is that you have to reduce risk every time you extract some expected cash, and that cannot happen forever. Ultimately, then, prospect theory and related work are useful in understanding human decision-making but not in improving it.
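To make the money-pump mechanics concrete, here is a minimal sketch, assuming a simple piecewise-linear loss-averse value function with loss-aversion coefficient λ = 2.25 (a commonly cited Kahneman-Tversky estimate). The value function and parameter are illustrative choices of mine; only the +9/+20/-10 numbers come from the comment above.

```python
# Minimal money-pump sketch against an agent that books gains and losses
# separately with a loss-averse value function (illustrative assumption).

LAMBDA = 2.25  # assumed loss-aversion coefficient

def value(x):
    """Value of a single gain/loss evaluated in isolation."""
    return x if x >= 0 else LAMBDA * x

def accepts_swap():
    # The bundle "+20 then -10" nets +10 dollars, but an agent that values
    # the two legs separately assigns it v(+20) + v(-10) = 20 - 22.5 = -2.5,
    # which is below v(+9) = 9, so it swaps the bundle for a sure +9.
    return value(9) > value(20) + value(-10)

rounds = 100
shortfall = 0
for _ in range(rounds):
    if accepts_swap():
        shortfall += (20 - 10) - 9  # $1 left on the table each round

print(accepts_swap())  # True
print(shortfall)       # 100 -- the counterparty pumps $1 per round
```

The point is only directional: any evaluation rule that treats "+20 then -10" as worse than a sure +9 can be charged a steady fee for the privilege.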
0linkhyrule5
Question - is there a uniqueness proof of VNM optimality in this regard?
0Technologos
VNM utility is a necessary consequence of its axioms but isn't a unique function (the representation is pinned down only up to a positive affine transformation); as such, the ability to prevent Dutch Books derives more from VNM's assumption of a fixed total ordering of outcomes than from anything else.
1Nick_Tarleton
Differing utilities for loss vs. gain introduce an apparently absurd degree of path dependence, in which, say, gaining $10 is perceived differently from gaining $20 and immediately thereafter losing $10. Loss vs. gain asymmetry isn't in conflict with expected utility maximization (though nonlinear probability weighting is), but it is inconsistent with stronger intuitions about what we should be doing. "Different decision theories" is usually used to mean, e.g., causal decision theory vs. evidential decision theory vs. whatever it is Eliezer has developed. Which of these you use is (AFAIK) orthogonal to what preferences you have, so I assume that doesn't answer your real question. Any reference on different types of utilitarianism might be a little more like what you're looking for, but I can't think of anyone who's catalogued different proposed selfish utility functions.
1steven0461
Yes -- the example I've seen is that a loss-averse agent may evaluate a sequence of, say, ten coinflips with -$15/+$20 payoffs positively while evaluating each individual coinflip negatively.
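For concreteness, here is a rough calculation under the same assumed λ = 2.25 piecewise-linear value function as in the sketch above; the parameter is my choice, and only the -$15/+$20 payoffs come from the comment.

```python
# Loss-averse evaluation of one coinflip vs. ten coinflips bundled together.
from math import comb

LAMBDA = 2.25  # assumed loss-aversion coefficient

def value(x):
    return x if x >= 0 else LAMBDA * x

def single_flip_value():
    # 50/50 chance of winning $20 or losing $15.
    return 0.5 * value(20) + 0.5 * value(-15)

def ten_flip_value():
    # Evaluate the *net* payoff of ten flips as a single gamble.
    n = 10
    return sum(
        comb(n, k) * 0.5 ** n * value(20 * k - 15 * (n - k))
        for k in range(n + 1)  # k = number of winning flips
    )

print(single_flip_value())  # -6.875  -> each flip is rejected in isolation
print(ten_flip_value())     # ~ +9.86 -> the bundled sequence is accepted
```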
1Nick_Tarleton
Hmm: I didn't know that. Cool.

"Where do you think Less Wrong is most wrong?"

I don't know where Less Wrong is most "wrong" - I don't have a reliable conclusion about this, and moreover I don't think the Less Wrong community accepts any group of statements without exception - but I can certainly say this: some posts (and sometimes comments) introduce jargon (e.g. Kullback-Leibler distance, utility function, priors, etc.) for not very substantial reasons. I think sometimes people have a little urge to show off and show the world how smart they are. Just relax, okay? We all kno... (read more)

9loqi
I think the tendency to use terms like "utility function" and "prior" stems more from a desire to be precise than to show off. Both have stuck with me as seemingly-useful concepts far outside the space of conversations in which they're potentially intelligible to others. Unless you know it's superfluous, give jargon the benefit of the doubt. When communication is more precise, we all win.
3Sideways
Agreed--most of the arguments in good faith that I've seen or participated in were caused by misunderstandings or confusion over definitions. I would add that once you know the jargon that describes something precisely, it's difficult to go back to using less precise but more understandable language. This is why scientists who can communicate their ideas in non-technical terms are so rare and valuable.
0billswift
I'm not so sure of that, since most of the people that use "utility function" and "prior" can't seem to agree on what they mean. They seem to be more terms of art; the art of showing off.
0steven0461
Huh? A utility function is a map from states/gambles/whatever to real numbers that respects preferences. A prior is a probability assigned without conditioning on evidence. Maybe some terms people use here are for showing off, but these two happen to be clear and useful.
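A minimal sketch of the first definition, with hypothetical states and numbers chosen only to illustrate what "respects preferences" means:

```python
# A utility function u represents a preference order iff
# a is preferred to b exactly when u(a) > u(b). (Hypothetical example.)
from itertools import combinations

preference_order = ["sunny", "cloudy", "rainy"]  # strict preference, best first

# Any strictly decreasing assignment of numbers represents the same preferences.
u = {"sunny": 10.0, "cloudy": 3.5, "rainy": -2.0}

def prefers(a, b):
    return preference_order.index(a) < preference_order.index(b)

# Check the representation condition for every pair of states.
for a, b in combinations(preference_order, 2):
    assert prefers(a, b) == (u[a] > u[b])
print("u respects the preference order")
```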
1Cyan
A prior is a probability distribution assigned prior to conditioning on some specific data. If I learn data1 today and data2 tomorrow, my overnight probability distribution is a posterior relative to data1 and a prior relative to data2. The reason I nitpick this is because the priors we actually talk about here on LW condition on massive amounts of evidence.
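A small illustration of the point, using a Beta-Bernoulli coin model chosen purely for convenience (the data are made up): the same distribution is a posterior relative to data1 and a prior relative to data2.

```python
# Sequential conjugate updating: today's posterior is tomorrow's prior.

def update_beta(alpha, beta, data):
    """Update a Beta(alpha, beta) belief about a coin's bias
    given a list of 0/1 observations."""
    heads = sum(data)
    return alpha + heads, beta + len(data) - heads

prior_today = (1, 1)                            # before seeing anything
data1 = [1, 1, 0, 1]                            # today's observations
overnight = update_beta(*prior_today, data1)    # posterior w.r.t. data1 ...
data2 = [0, 0, 1]                               # tomorrow's observations
final = update_beta(*overnight, data2)          # ... and prior w.r.t. data2

print(overnight)  # (4, 2)
print(final)      # (5, 4)
```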
0timtyler
More nitpicking: the data doesn't really have to be "specified" - at least, it can be presented in the form of a black box with contents that are not yet known, or perhaps not yet even measured.
0conchis
That's not its only meaning. It's not, for example, the definition that a hedonist utilitarian would give (net pleasure-over-pain is not equivalent to preference, unless you're giving preference a very broad interpretation, in which case you've just shifted the ambiguity back a level).
0steven0461
I've seen that called "utility" but never a "utility function".
1conchis
I could go trawling through the literature to get you examples of non-preferentist usages of the words "utility function", but if you're willing to take my word for it, I can assure you that they're pretty common (especially in happiness economics and pre-ordinalist economics, but also quite broadly apart from that). Indeed, it would be very strange if e.g. the hedonist account were a valid definition of utility, but no-one had thought to describe a mapping from states of the world into hedonist-utility as a utility function. Googling "experienced utility function" turns up a few examples, but there are many more.
0steven0461
Guess I'll take your word for it. Not sure I remember seeing that usage for "utility function" on LW, though. ETA: It gets kind of confusing, because if I prefer that people are happy, their happiness becomes my utility, but in a way that doesn't contradict utility functions as a description of preferences.
1conchis
Many uses are ambiguous enough to encompass either definition. If you aren't aware of the possible ambiguity then you're unlikely to notice anything awry - at least up until the point where you run into someone who's using a different default definition, and things start to get messy. (This has happened to me a couple of times.)
0timtyler
I've argued that utilitarians should probably employ surreal-valued utility functions. However, that is hardly a major disagreement. It would be like the creationists arguing that evolution was a theory mired in controversy because of the "punctuated equilibrium" debate.

I think the group focusses too much on epistemic rationality - and not enough on reason.

Epistemic rationality is one type of short-term goal among many - whereas reason is the foundation-stone of rationality. So: I would like to see less about the former and more about the latter.

1conchis
What do you mean by "reason"?
3timtyler
http://en.wikipedia.org/wiki/Reason is fairly reasonable. Deduction, induction and Occam's razor. Processing your sensory inputs to derive an accurate model of the world without actually performing any actions (besides what is necessary to output your results). Reason can be considered to be one part of rationality.
0Matt_Simpson
Isn't that epistemic rationality? I.e., arriving at the correct answer?
0timtyler
No. Epistemic rationality is a type of instrumental rationality which primarily values truth-seeking. To find the truth, you sometimes have to take actions and perform experiments. Reason is more basic, more fundamental.
0Matt_Simpson
so you mean the tools by which we arrive at the correct answer?
0timtyler
Only if those "tools" don't involve doing things. Once you start performing experiments and taking steps to gather more data, then you have gone beyond using reason. If you like, you can imagine a test of reason to match the circumstances of a typical exam - where many ways of obtaining the correct answer are forbidden.
-4[anonymous]
Reason is useless without rationality.
0timtyler
I would rather say that "reason" is a useful concept. They call them "deductive reasoning" and "inductive reasoning" - and those are the correct names for some very useful tools. Anyway, you should be able to make out my request to LessWrong - to talk more about reason, especially when it is reason that is under discussion.

I like this idea. I don't really have anything to contribute to this thread at the moment, though.

Seems along the same lines as the "closet thread" but better.

1ThanatosSavehn
I think this is a problem of Rhetoric. I dialed it back to Plato and Aristotle and have made my way up to "The New Rhetoric: A Treatise on Argumentation". I shall report back if I have any success as I make my way to the here and now. In the meantime I note the following: as a sceptic I hope I'm ready to cast aside any idea, however cherished, that fails of its purpose - that fails the acid bath test of falsifiability. I cast aside the snake handlers that kept Romney from being nominated and I cast aside the "empathy" that leads Sotomayor to believe that single Latina Moms make better judgments than white males. But what's left? Is there really room for a party of Rationalists? Won't the purest rationalist sell out his brethren for a better deal offered by the emotionalists? Isn't that what a good rationalist would do? Are we ultimately the victims of our own good sense? Or are we able to deal with the negative externalities of personal rationalism? And if so, how? Alas, even Spock has now decided "if it feels right, do it!"
-10ivan

I read LW for a few months but I haven't commented yet. This looks like a good place to start.

There are two points in the LW community that seem to gravitate towards ideology, IMHO:

  1. Anti-religion. Some people hold quite rational religious beliefs, which seem to be a big no-no here.

  2. Pro-singularity. Some other people consider the Singularity merely a "sci-fi fantasy", and I have an impression that such views, if expressed here, would make this community irrationally defensive.

I may be completely wrong though :)

-1timtyler
I don't discuss religion much - but here is my list of "Viable Intelligent Design Hypotheses": http://originoflife.net/intelligent_design/
0thomblake
Note that none of the items on the list is an alternative to evolution, which is how ID is presented in the US context.
-2Cyan
I'd replace your item 1 with physicalism. The "rational religious" example you propose might get criticized here, but not for belief in the supernatural.
-2timtyler
FWIW, this doesn't describe me: http://alife.co.uk/essays/the_singularity_is_nonsense/