All of common_law's Comments + Replies

There are lots of brilliant black scientists; I collaborate closely with one. You guys are toxic idiots; you should get out more and meet more smart people.

Looking for mental information in individual neuronal firing patterns is looking at the wrong level of scale and at the wrong kind of physical manifestation. As in other statistical dynamical regularities, there are a vast number of microstates (i.e., network activity patterns) that can constitute the same global attractor, and a vast number of trajectories of microstate-to-microstate changes that will tend to converge to a common attractor. But it is the final quasi-regular network-level dynamic, like a melody played by a million-instrument orchestra, that is the medium of mental information. - Terrence W. Deacon, Incomplete Nature: How Mind Emerged from Matter, pp. 516-517.

Being elaborate or detailed is neither necessary nor sufficient for rigor. The first two describe characteristics of the theory; rigor describes the argument for the theory. To say a theory is rigorous is to say neither more nor less than that it is well argued (with particular emphasis on the argument's tightness).

Whether Freud and Marx argued well may be hard to agree on when we examine their arguments. [Agreement or disagreement on conclusions has a way of grossly interfering with evaluation of argument, with the added complication that evaluation must ... (read more)

0gjm
I agree with your first paragraph, and (in case it was unclear) my point was that the only support you offered for "well argued" over merely "elaborate and detailed" as a description of Marx and Freud was (1) to say that they wrote a lot of intricately-argued stuff and (2) to reiterate the claim that it was rigorous. I don't have the impression that Freud's theories have sustained the agreement of a large minority of serious intellectuals and academics for more than a century. I could agree with half a century, maybe a little more, and that's certainly more influence than most of us will ever have -- but I don't see why it constitutes strong evidence of rigour. Likewise, I think, for Marx. His theories have of course been widely endorsed by people in countries where they formed a quasi-religious orthodoxy, but outside those countries it's been only a small minority (hasn't it?) who have accepted Marxism as a whole. Plenty more have agreed that he got some things right, but getting some things right is another achievement that surely isn't very strong evidence of rigour.

You've drawn a significant distinction, but I don't think degree of rigor defines it. I'm not sufficiently familiar with many of these thinkers to assess their rigorousness, but I am familiar with several, the ones who would often be deemed most important: Einstein, Darwin on the side you describe as rigorous; Freud and Marx on the side you describe as less rigorous. I can't agree that Freud and Marx are less rigorous. Marx makes an argument for his theory of capitalism in three tightly reasoned volumes of Capital, none of the arguments formulaic. Freud dev... (read more)

0gjm
I think you may be confusing "rigorous" with "elaborate" or "detailed". (Or maybe not, in which case you might like to say a few words about why the former, and not only the latter, applies to Marx and Freud.)

Aesthetic ability as such hasn't been extracted as a cognitive ability factor. My guess would be that it's mainly explained by g and the temperamental factor of openness to experience. (I don't know what the empirical data is on this subject, but I think some immersion in the factor-analytic data would prove rewarding.)

[Added.] On aesthetic sense: the late R.B. Cattell (psychologist) devised an IQ test based on which jokes were preferred.

[Added.2] I'm wondering if you're not misinterpreting your personal experience. You say your IQ is only LW-average. You ... (read more)

What's your basis for concluding that verbal-reasoning ability is an important component of mathematical ability—particularly important in more theoretical areas of math?

The research that I recall showed little influence of verbal reasoning on high-level math ability, verbal ability certainly being correlated with math ability but the correlation almost entirely accounted for by g (or R). There's some evidence that spatio-visual ability, rather unimportant for mathematical literacy (as measured by SAT-M, GRE-Q), becomes significant at higher levels of ach... (read more)

You should question your unstated but fundamental premise: one should avoid arguments with "hostile arguers."

A person who argues to convince rather than to understand harms himself, but from his interlocutor's standpoint, dealing with his arguments can be just as challenging and enlightening as arguing with someone more "intellectually honest."

Whether an argument is worthwhile depends primarily on the competence of the arguments presented, which isn't strongly related to the sincerity of the arguer.

3CronoDAS
Another reason one might want to engage with a "hostile arguer" is to convince a third party - any opponent in a live debate before an audience is almost certain to act as a hostile arguer.
9ike
Well, as the post is mainly dealing with hostile arguers that have ultimate power over you, that seems justified. Play around with looking for "challenging and enlightening" arguments when it's not so dangerous.
-5wedrifid

Actually, I think you're wrong in thinking that LW doctrine doesn't dictate heightened scrutiny of the deployment of self-deception. At the same time, I think you're wrong to think false beliefs can seldom be quarantined, compartmentalization being a widely employed defense mechanism. (Cf., any liberal theist.)

Everyone feels a tug toward the pure truth, away from pure instrumental rationalism. Your mistake (and LW's), I think, is to incorporate truth into instrumental rationality (without really having a cogent rationale, given the reality of compartment... (read more)

1the-citizen
LW appears to be mixed on the "truthiness should be part of instrumental rationality" issue.

It seems we disagree on the compartmentalising issue. I believe self-deception can't easily be compartmentalised in the way you describe, because we can't accurately predict, in most cases, where our self-deception might become a premise in some future piece of reasoning. By its nature, we can't correct it at a later date, because we are unaware that our belief is wrong. What's your reasoning regarding compartmentalizing? I'm interested in case I am overlooking something. My experience so far is that a large (50%?) part of LW agrees with you, not me.

This is an interesting argument. In a sense I was treating the ethics as separate in this case. I'd be interested to hear a more detailed version of what you say here. There's a great quote floating around somewhere about studying the truth vs. creating the truth. I can't remember it specifically enough to find it right now... but yes, I agree intellectuals will undermine their abilities if they adopt pure instrumentality.

I don't see how your argument gains from attributing the hard-work bias to stories. (For one thing, you still have to explain why stories express this bias—unless you think it's culturally adventitious.)

The bias seems to me to be a particular case of the fair-world bias and perhaps also the "more is better" heuristic. It seems like you are positing a new bias unnecessarily. (That doesn't detract from the value of describing this particular variant.)

Philosophically, I want to know how you calculate the rational degree of belief in every proposition.

If you automatically assign the axioms an actually unobtainable certainty, you don't get the rational degree of belief in every proposition, as the set of "propositions" includes those not conditioned on the axioms.

0alex_zag_al
Hmm. Yeah, that's tough. What do you use to calculate probabilities of the principles of logic you use to calculate probabilities? Although, it seems to me that a bigger problem than the circularity is that I don't know what kinds of things are evidence for principles of logic. At least for the probabilities of, say, mathematical statements, conditional on the principles of logic we use to reason about them, we have some idea. Many consequences of a generalization being true are evidence for a generalization, for example. A proof of an analogous theorem is evidence for a theorem. So I can see that the kinds of things that are evidence for mathematical statements are other mathematical statements. I don't have nearly as clear a picture of what kinds of things lead us to accept principles of logic, and what kind of statements they are. Whether they're empirical observations, principles of logic themselves, or what.
0hairyfigment
Hmm? If these are physically or empirically meaningful axioms, we can apply regular probability to them. Now, the laws of logic and probability themselves might pose more of a problem. I may worry about that once I can conceive of them being false.

What about the problem that if you admit that logical propositions are only probable, you must admit that the foundations of decision theory and Bayesian inference are only probable (and treat them accordingly)? Doesn't this leave you unable to complete a deduction because of a vicious regress?

2somnicule
I think most formulations of logical uncertainty give axioms and proven propositions probability 1, or 1-minus-epsilon.

A critical mistake in the lead analysis is a false assumption: where there is a causal relation between two variables, they will be correlated. This ignores that causes often cancel out. (Of course, not perfectly, but enough to make raw correlation a generally poor guide to causality.)

I think you have a fundamentally mistaken epistemology, gwern: you don't see that correlations only support causality when they are predicted by a causal theory.
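To make the cancellation claim concrete, here is a minimal sketch (code I'm adding for illustration, assuming NumPy; the variables and coefficients are hypothetical, chosen to produce exact cancellation): X causes Y along a direct path and an equal-and-opposite mediated path, so the raw correlation vanishes even though the causal connection is real.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# X causes Y along two paths chosen to cancel exactly:
#   direct:   X -> Y with coefficient +1
#   mediated: X -> Z -> Y with total effect (+1) * (-1) = -1
x = rng.normal(size=n)
z = x + rng.normal(size=n)        # X -> Z
y = x - z + rng.normal(size=n)    # direct X -> Y, plus Z -> Y

# X genuinely causes Y, yet the raw correlation is ~0.
print(np.corrcoef(x, y)[0, 1])    # ~0, up to sampling noise

# Conditioning on the mediator recovers the structure: regressing
# Y on X and Z together gives coefficients near +1 and -1.
beta, *_ = np.linalg.lstsq(np.column_stack([x, z]), y, rcond=None)
print(beta)                       # approximately [ 1., -1.]
```

(As the reply below points out, such exact cancellation holds only for a measure-zero set of coefficient values; the sketch has to tune the two paths to ±1 to produce it. The dispute is over how often real systems sit close enough to that set to matter.)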

1[anonymous]
If two variables are d-separated given a third, there is no partial correlation between the two, and the converse holds for almost all probability distributions consistent with the causal model. This is a theorem (Pearl 1.2.4). It's true that not all causal effects are identifiable from statistical data, but there are general rules for determining which effects in a model are identifiable (e.g., front-door and back-door criteria). Therefore I don't see how something like "causes often cancel out" could be true. Do you have any mathematical evidence? I see nothing of this "fundamentally mistaken epistemology" that you claim to see in gwern's essay.

“how else could this correlation happen if there’s no causal connection between A & B‽”

The main way to correct for this bias toward seeing causation where there is only correlation follows from this introspection: be more imaginative about how it could happen (other than by direct causation).

[The causation bias (does it have a name?) seems to express the availability bias. So, the corrective is to increase the availability of the other possibilities.]

5gwern
Maybe. I tend to doubt that eliciting a lot of alternate scenarios would eliminate the bias. We might call it 'hyperactive agent detection', borrowing a page from the etiology of religious belief: https://en.wikipedia.org/wiki/Agent_detection which, now that I think about it, might stem from the same underlying belief - that things must have clear underlying causes. In one context it gives rise to belief in gods; in another, to interpreting statistical findings like correlation as causation.
1Nornagest
Seems to me like a special case of privileging the hypothesis?

I intended nothing more than to solve the problem under its literal interpretation. This isn't my beaten path. I don't intend to do more on the subject besides speculating about why an essentially trivial problem of "literal interpretation" has resisted articulation.

I think you'll find the argument is clear without any formalization if you recognize that it is NOT the usual claim that confidence goes down. Rather, it's that the confidence falls below that of its contrary.

In philH's terms, you're engaging in pattern matching rather than taking the argument on its own terms.

2Strilanc
How have I not addressed the argument on its own terms? I agree with basically everything you said, except calling it a solution. You'll run into non-trivial problems when you try to turn it into an algorithm. For example, the case of there being an actual physical mugger is meant to be an example of the more general problem of programs with tiny priors predicting super-huge rewards. A strategy based on "probability of the mugger lying" has to be translated to the general case somehow. You have to prevent the AI from mugging itself.

What you're ignoring is the comparison probability. See philH's comment.

0DanielLC
I'm not sure what you mean. If you mean what I think you mean, I'm ignoring it because I'm going with worst case. Rather than tracking how the probability of someone making the threat reduces slower than the probability of them carrying it out (which means a lower probability of them carrying it out), I'm showing that even if we assume that the probability is one, it's not enough to discount the threat. P(Person is capable of carrying out the threat) is high enough for you to pay it off on its own. The only way for P(Person is capable of carrying out the threat | Person makes the threat) to be small enough to ignore is if P(Person makes the threat) > 1.
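To spell out that last step (my reconstruction of the arithmetic, not text from the thread), write A for "person is capable of carrying out the threat" and B for "person makes the threat":

```latex
P(A \mid B) \;=\; \frac{P(A \wedge B)}{P(B)} \;\ge\; P(A \wedge B),
\qquad\text{since } P(B) \le 1 .
```

If, as in the scenario, someone capable would go on to make the threat, then P(A ∧ B) is close to P(A), which is already high enough to make paying worthwhile; pushing P(A | B) below that joint probability would require P(B) > 1.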

It's accurate. But it's crucial, of course, to see why P(C) comes to dominate P(B), and I think this is what most commenters have missed. (But maybe I'm wrong about that; maybe it's because of pattern matching.) As the threat increases, P(C) comes to dominate P(B) because the threat, when large enough, is evidence against the threatened event occurring.

That is, [it is assumed that] the only plausible reason to state a meganumber-class high utility is to beat someone else's number.

It's the only reason that doesn't cancel out, because it's the only one about which we have any knowledge. The higher the number, the more likely it is that the mugger is playing the "pick the highest number" game. You can imagine scenarios in which picking the highest number has some unknown significance, but they cancel out, in the same way that Pascal's God is canceled by the possibility of contrary gods.

Also why what t

... (read more)

What you present is the basic fallacy of Pascal's Mugging: treating the probabilities of B and of C as independent of the fact that a threat of given magnitude is made.

Your formalism, in other words, doesn't model the argument. The basic point is that Pascal's Mugging can be solved by the same logic that succeeds against Pascal's wager. Pascal concluded that believing in god A was instrumentally rational only by ignoring that there might, with equal consequences, be a god B instead who hated people who worshiped god A.

Pascal's Mugging ignores that giving to the mugger migh... (read more)

4DanielLC
The prior probability of X is 2^-(K-complexity of X). There are more possible universes where they carry out smaller threats, so the K-complexity is lower. What I showed is that, even if there were only a single possible universe where the threat was carried out, it's still simple enough that the K-complexity is small enough that it's worth paying the threatener. You gave a vague argument. Rather than giving a vague counterargument along the same lines, I just ran the math directly. You can argue that P(C|E) decreases all you want, but since I found that the actual value is still too high, it clearly doesn't decrease fast enough. If you want the vague counterargument, it's simple: The probability that it's a lie approaches unity. It just doesn't approach it fast enough. It's a heck of a lot less likely that someone who threatens 3^^^3 lives is telling the truth than someone who's threatening one. It's just not 3^^^3 times less likely.
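A sketch of the arithmetic being run here (my paraphrase, with an illustrative bit-count): under a Solomonoff-style prior, a hypothesis X gets weight 2^-K(X), where K is its description length. A universe in which the threat against 3^^^3 lives is carried out has a short description, so K is modest, say a few thousand bits, while discounting a stake of 3^^^3 would need K on the order of log2(3^^^3) bits:

```latex
\underbrace{2^{-K}}_{\text{prior}} \cdot
\underbrace{3\uparrow\uparrow\uparrow 3}_{\text{stake}}
\;\gg\; \text{cost of paying}
\qquad\text{unless}\qquad
K \gtrsim \log_2\!\left(3\uparrow\uparrow\uparrow 3\right),
```

and no short threat-description comes anywhere near that K.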

You become less skeptical, but that doesn't affect the issue presented, which concerns only the evidential force of the claim itself.

If someone tears the sky asunder, you will be more inclined to believe the threat. But past a certain point, increasing the threat further should decrease your expectation.

4solipsist
OK, so after a certain point, the mugger increasing his threat will cause you to decrease your belief faster. After a certain point, the mugger increasing his threat will cause (threat badness * probability) to decrease. That implies that if he threatens you with a super-exponentially bad outcome, you will assign a super-exponentially small probability to his threat. But super-exponentially small probabilities are a tricky thing. Once you've assigned a super-exponentially small probability to an event, no amount of evidence in the visible universe can make you change your mind. It doesn't matter if the mugger grows wings of fire or turns passers-by into goats; no amount of evidence your eyes and ears are capable of receiving can sway a super-exponentially small probability. If the city around you melts into lava, should you believe the mugger then? How do you quantify whether you should or should not?

This essay makes a correct appraisal of Less Wrong thinking, but it denominates the position confusingly as "natural rights." The conventional designation is "moral realism," with "natural rights" denoting a specific deontological view.

A more charitable reading than that provided by commenters would have understood that all the arguments invoked against natural rights (as well as the arguments attributing natural-rights thinking to Less Wrong) hold for other forms of moral realism, in particular utilitarianism/consequentiali... (read more)

You're mistaken in applying the same standards to personal and deliberative decisions. The decision to enroll in cryonics is different in kind from the decision to promote safe AI for the public good. The first should be based on the belief that cryonics claims are true; the second should be based (ultimately) on the marginal value of advocacy in advancing the discussion. The failure to understand this distinction is a major failing in public rationality. For elaboration, see The distinct functions of belief and opinion.

that's not how the subject is taught

Hope this isn't too off-topic, but I wonder if you have any ideas about why that is.

The main impediment to many far-mode thinkers learning hard (post-calculus) math is the drill and drudgery involved. If you're going to learn hard math, it seems you should, by all means, learn it deeply. That's not the obstacle. The obstacle is that to learn math deeply, you must first learn a lot of it rotely--at least the way it's taught.

In the far-distant past, when I was in school, learning elementary calculus meant rote drilling ... (read more)

1DanArmak
Some degree of this is probably inevitable. Integration in particular has no general closed-form method (unlike differentiation), so there really is no one procedure you can apply to all problems. All you can do is remember a bag of tricks. For differentiation, by contrast, a few general rules allow you to differentiate all the elementary and trigonometric functions, and that's pretty much all you encounter in school.
2JonahS
I think that the point is that more people are capable of routine tasks than of conceptual understanding, and that educational institutions want lots of people to do well in math class on account of a desire for (the appearance of) egalitarianism.

What time period was this? (No need to answer if you'd prefer not to :-) )

Some diligence is necessary, but not as much as it appears based on standard pedagogy. I wish that I could substantiate this in a few lines. If you say something about what math you know/remember, I might be able to point you to some helpful references.

The reason for apparent anomalies is that "holistic" thinking can involve two different styles: pre-attentive thinking and far-mode thinking. That is, you can have cognition that could be described as holistic either by being unreflective (System 1) or by engaging in far-mode forms of reflection (System 2 offloads to System 1.) In Ulric Neisser's terms, what is being called "intuitive" might reflect distinctly deeper or distinctly shallower processing than what is called analytic. I sort this out in The deeper solution to the mystery of... (read more)

Total karma isn't for you, it's for everyone else.

It produces correlated rather than independent judgments of post quality, with the well-known cascading effects. The "system" deliberately introduces what I call belief-opinion confusion.

-4Kawoomba
I don't know who this Mr. Nessov is, but I sure am glad he seems to have very little in common with our Vladimir Nesov.
1wedrifid
Since I'm not Nesov I can only assume you are writing to me because of some argument you are having with someone else in a trollblocked thread. I'm fairly sure Nesov has a button meant for just this kind of thing.
8Vladimir_Nesov
One purpose of (significantly) negative (total) Karma is to indicate when a user should be kicked out. Limiting that damage harms the forum. ("A particularly unpopular posting" is not normally the issue; it's usually the systematic failure to respond to negative feedback, including by ceasing to post in particular modes or at all.)
6wedrifid
Yes. That is precisely what we are talking about. You may argue that once the identities have been identified and can no longer be used simultaneously in the same conversation they are no longer technically sockpuppets. But that distinction doesn't seem terribly important.

Yes. There are incentives for using multiple accounts (whether for sock-puppetry, karma assassination or otherwise). I prefer the case where such practice is sufficiently discouraged that any perpetrators must at least go to the effort of acting differently---maybe by having different pet issues to rant about, rather than fighting the same battle with the same arguments under a different moniker.
0TimS
There's no rule against multiple accounts (AFAIK), but there is a local social norm against multiple accounts. Some folks even dislike Clippy, who I find hilarious.

Apology accepted, but I think it's Dmytry to whom you actually owe it: he's the one you recklessly accused of deceitful self-promotion.

Now there's an excellent example of rationality failure: I'm not Dmytry. Check my profile and my blogs.

130lukeprog

Oh wait, you're that other person with a bunch of different monikers: metaphysicist, srdiamond, etc. Sorry.

Here's a more germane objection: a single vote, in reality (as opposed to in "should universes"), never truly comes even close to deciding an election. When the votes are close to a tie, the courts step in, as in Bush v. Gore. There are recounts and challenges. The power of connections and influence in judicial politics completely overwhelms the effect of a single vote.

Don't you think it perverse to derive the value of voting from the very high value of the outcome of an extraordinary event?

Take an electorate with 1,000,000,000 voters, deciding between A and B. If 550 million vote for A, and 450 million vote for B, then A is 90% likely to win. Conversely, if B leads 550 million to A's 450 million, B is 90% likely to win. With very finely balanced vote totals both candidates have sizable chances at winning depending on the outcomes of recounts, etc (although the actual vote total in a recount certainly matters for the recount and challenge process!).

Say we make a graph, assigning a probability of victory for A for every A vote total between 45... (read more)
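To make the graph idea concrete, here is a minimal sketch (code added for illustration; the logistic smoothing and its scale are my assumptions, calibrated only to the comment's 90%-at-550M figure):

```python
import math

N = 1_000_000_000  # electorate size, from the comment above

def p_victory(votes_for_a: int) -> float:
    """Chance that A wins, given A's vote total, smoothed to reflect
    recounts and challenges near a tie. The logistic form is an
    illustrative assumption, calibrated so a 550M/450M split gives 90%."""
    margin = votes_for_a - N // 2             # A's lead over a dead heat
    scale = 50_000_000 / math.log(9)          # puts P = 0.9 at a +50M lead
    return 1.0 / (1.0 + math.exp(-margin / scale))

# The expected impact of one marginal vote is the local slope of the curve.
for lead in (0, 1_000_000, 100_000_000):
    a = N // 2 + lead
    impact = p_victory(a + 1) - p_victory(a)
    print(f"lead {lead:>11,}: one extra vote shifts P(A wins) by {impact:.3e}")
```

With these parameters, one vote shifts the victory probability by roughly 10^-8 at a dead heat, and the effect falls off rapidly as the expected margin grows.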

I estimate that for most people, voting is worth a charitable donation of somewhere between $100 and $1.5 million. For me, the value came out to around $56,000.

You reason, I think, that since almost everyone has better-than-chance knowledge of which candidate is better, chance is the relevant criterion because, for your vote to be decisive, the other voters would have to have shown themselves (as a whole) to be indifferent between the two outcomes--I find that a convenient way to put it. In the only circumstance where your... (read more)

[This comment is no longer endorsed by its author]

It's just trivial that if voting is rational, political spending is even more rational. It's not germane to use political contributions in proxy for charitable contributions.

7CarlShulman
"It's just trivial that if voting is rational, political spending is even more rational." I clearly explained why this is wrong directly above. If your opportunity cost of time is $50 per hour, voting would take an hour, and it would cost you $400 to elicit a marginal vote of equal expected impact, then you are getting an eightfold multiplier on effort spent voting as opposed to earning money to influence other votes. You need to spend one hour instead of eight hours worth of effort to get the same result. If your opportunity cost of time is lower, or the impact of money on elections is lower, the effect gets more extreme. If your breakeven point is anywhere in that range then voting can make sense for you even while political donation does not.

Do you agree then that it is a potential explanation? If so, what's a more plausible one? It may be the limitations of my imagination, but I don't see one.

1wedrifid
Try.

Doing it because people have emotions is worthy of immense respect? Why?

Emotions are part of the rational process, but you aren't rational in discussion when you're in the grip of a strong, immediate emotion. Since you have the advantage in an argument when you remain calm, it is worthy of respect to forgo that advantage and disengage.

It's in vogue to defend correspondence because 1) it sounds like common sense and 2) it signals rejection of largely discredited instrumentalism. But surely a correspondence theorist should have a theory of the nature of the correspondence. How does a proposition or a verbal string correspond to a state of reality? By virtue of what is it a correct description? We can state a metalinguistic relationship about "Snow is white," but how does this locution hook onto the actual world?

Correspondence theorists think this is a task for a philosophical theor... (read more)

0torekp
Interesting. I am inclined to replicate my compatibility claim at this level too; i.e., the technical solution in the psychology of language will be a philosophical theory of reference (as much as one needs) as well. I'd be interested in references to any of the deflationist discussions of reference you have in mind.

The feeling of 'People who agree with me on X also agree with me on completely unrelated Y' is awesome.

The halo effect may be awesome ... but it's deadly!

1wedrifid
The halo effect is not necessarily either a cause or a consequence of the quoted phenomenon.

Two quibbles that could turn out to be more than quibbles.

  1. The concept of truth you intend to defend isn't a correspondence theory--rather it's a deflationary theory, one in which truth has a purely metalinguistic role. It doesn't provide any account of the nature of any correspondence relationship that might exist between beliefs and reality. A correspondence theory, properly termed, uses a strong notion of reference to provide a philosophical account of how language ties to reality.

  2. You write:

Some pundits have panicked over the point that any judgm

... (read more)

In my humble opinion, snarkiness is a form of rudeness, and we should dispense with it here.

Moreover, since we have a politeness norm, it isn't so clear that the interpretation you offer is charitable!

It's the most appropriate answer to a question that constitutes a rhetorical demand that the reader must generalize from fictional evidence. (Last four words hyperlinked.)

There was no demand to "generalize" from fictional evidence, except to recognize the theoretical possibility of a sociopathic character who is indifferent to status concerns.

The intended question is whether such characters can exist and if so what's their diagnosis. Your response "fictional" would be reasonable if you went on to say, "that's a fiction; such a pat... (read more)

-1JoshuaZ
The simplest minimally charitable interpretation of the remark seems to be saying that in a slightly snarky fashion.
2wedrifid
Thankyou.
0wedrifid
That's an untenable interpretation of the written words, and plain rude. (Claiming to have) mind-read negative beliefs and motives in others and then declaring them publicly tends to be frowned upon. Certainly it is frowned upon by me.

Why would charities behave any differently than profit-making assets? Do you think that charities have fewer uncertainties?

The confusion concerns whose risk is relevant. When you invest in stocks, you want to minimize the risk to your assets. So, you will diversify your holdings.

When you contribute to charities, if rational you should (with the caveats others have mentioned) minimize the risk that a failing charity will prove crucial, not the risk that your individual contribution will be wasted. If you take a broad, utilitarian overview, you incorporat... (read more)

4Larks
This is the most insightful thing I've read on LW today.

The goal we like to aim for here in "dissolving" problems is not just to show that the question was wrongheaded, but thoroughly explain why we were motivated to ask the question in the first place. ¶ If qualia don't exist for anyone, what causes so many people to believe they exist and to describe them in such similar ways? Why does virtually everyone with a philosophical bent rediscover the "hard problem"

I think this objection applies to Dennett or Churchland's account but not to mine. The reason the qualia problem is compelling, on... (read more)

we do all see roughly the same thing: we've got pretty much the same sensory organs & brains to process what is roughly the same data. It seems reasonable to expect that most members of a given species should experience roughly the same picture of the world.

To my disappointment, David Papineau concluded the same, but we can't compare differences in pictures of the world to differences in the brain structure or function because we can have only a single example of a "picture of the world." "Pretty much the same sensory organs & bra... (read more)

The simplest explanation for the universe is that it doesn't exist. It's not popular, because the universe seems to exist. Explanations need to be adequate to the facts, not just simple... Since the inexpressibility of qualia can be accounted for given facts about the limited bandwidth of speech, it does not need to be accounted for all over again on the hypothesis that qualia don't exist.

But can the inexpressibility of qualia be accounted for by such facts as mentioned? That's the question, since the claim here is that the only supposed fact you have t... (read more)

-2Peterdjones
It would be more interesting to put forward a specific objection. I don't think that anything anywhere is better supported. Can you prove the existence of matter, or the falsity of contradictions, without assuming them? What an odd thing to say. The argument for the inexpressibility of qualia is just the persistent inability of anyone to express them -- like the argument against the existence of time machines. An explanation for that inability is what I gave, just as there are speculative theories against time travel. I think that if you unpack "science" and "human practice" you will find elements of "we assume without proving" ... and "we can't help but believe".

The private-language problem ought to tell us that even if raw experiences exist, then we should not expect to have words to describe raw experience.

Wittgenstein's private-language argument, if sound, would obviate 2c. But 3b is based on Wittgenstein's account not succeeding in explaining the absence of private language: 3b claims to be a solution to the private-language problem, one that recognizes Wittgenstein was unsuccessful in solving it.

One can accept materialism while remaining agnostic about whether it can explain qualia, just like one can accept economics without necessarily requiring it to explain physics.

Materialism is a philosophy which claims the primacy of physics. A materialist can be either a reductionist or an eliminativist about qualia.

The analogy to economics is bad because economics doesn't contend that economics is primary over physics, but materialism does contend that the physical is primary over the mental.

1Peterdjones
I don't see why that shouldn't be called physicalism.
0Kaj_Sotala
I suppose I'm using "materialism" in a slightly different way, then - to refer to a philosophy which claims that mental processes (but not necessarily qualia) are a subset of physical processes, and thus explainable by physics.

Reification seems at work in the studies of the placebo effect for antidepressants. It's found that except for severe depressions, antidepressants may have "little or no greater benefit than placebo." The conclusion drawn is either that antidepressants aren't effective or placebos are effective, when the truth is that most depressions have a short-term course, and the placebo group's effects include the spontaneous remissions.

If IQ tests are 'culturally biased', then we would expect the highest scoring group to share the same culture as the test writers.

This assumes that if a test is culture biased, it must be biased in favor of the culture as a whole. A test can be culture biased by hyper-valuing a set of skills prominent in one culture, even if that skill set is stronger in some other culture. If IQ is biased, say, toward "academic culture," even though this is a feature of "white U.S. culture" it may be even more a part of East Asian culture.

What I th... (read more)

If I can try to get you to be more specific--was it perhaps something you recently learned about LW "culture"? Such as was contained in a recently published expose?

I was turned off too.

-4Will_Newsome
I was a Visiting Fellow at SingInst for about two years—I know way more about it than can be found in that article. The reason for the change in my strategies is simply that I learned that I can have most of my comments downvoted while still reaching a good fraction of the more interesting people on LessWrong. It's been a long time since I've cared about LessWrong as a community, I'm only here to interact with the interesting folk. LessWrong is still a hub for them. Even Nick Szabo's been stopping by recently.

The article is obviously embarrassing to E.Y. If he didn't want to see this essay's Google rating improve, it wasn't about some general principle regarding "trolling." That's a pretty pathetic attempt at an excuse. It was something about this article. But what? Everyone thinks it's the "moral" aspect. That may be part of his worry: if so, it suggests that the SIAI/Less Wrong complex has a structure of levels--like say, Scientology--where the behavior of the more "conscious" is hidden from less-conscious followers.

But let me p... (read more)
