All of Oligopsony's Comments + Replies

If it's digitally embedded, even if the "base" module was bad at math in the same way we are, it would be trivial to cybernetically link it to a calculator program, just as us physical humans are cyborgs when we use physical calculators (albeit with a greater delay than a digital being would have to deal with).

Thiel enjoys the spotlight; he's his own boss and could spend all day rolling around in giant piles of money if he wanted to; he's said plenty of things publicly that are way more NRx-y than the monopoly thing, and he's obviously fine.

6James_Miller
A billionaire who enjoys the spotlight does something like buying a professional sports team or media company or running for office. Thiel's behavior is not consistent with him craving publicity.
1eternal_neophyte
Seems that you're arguing that it would be consistent with the behaviour of someone of Thiel's stripe. I don't believe that's fair, it smacks to me of begging the question (i.e. he's a prima donna and therefore his current behaviour is that of a prima donna).

I give more to charity and use spaced repetition systems heavily.

If the demons understand harm and are very clever in figuring out what will lead to it, what happens when we ask them to minimize harm, or maximize utility, or do the opposite of what they would want to do otherwise, or {rigidly specified version of something like this}?

Can we force demons to tell us (for instance) how they'd rank various policy packages in government, what personal choices they'd prefer I make, &c., so we can back-engineer what not to do? They're not infinitely clever, but how clever are they?

6TsviBT
There are ten thousand wrong solutions and four good solutions. You don't get much info from being told a particular bad solution. The opposite of a bad solution is a bad solution.

The issue isn't whether looks are objective (clearly they aren't), but whether judgments of looks are more correlated among the userbase than those of personality.

(Actually, the degree to which personality is correlated is probably the more interesting question here (granting that interestingness isn't particularly objective either). Robin Hanson has pointed to some studies that suggest that "compatibility" isn't really a thing and some people are just easier to get along with than others - the study in question IIRC didn't take selection effects into account, but it remains an interesting hypothesis.)

It was a garbled version of Angkorism, sorry.

1NoriMori1992
I don't get any informative results from looking that up, either.

If your point is that Openness is probably not a thing-in-the-world, I would be inclined to agree, actually.

gwern320

No, that's not my point. I don't know what real thing(s) make up the factor that factor analysis spits out and we label Openness, but my point is that if openness, novelty, new ideas, etc, really were universally valued, then there could be no such factor because everyone would recognize the Openness-loaded questions (they're transparent) and answer maximally. The fact that an Openness factor emerges out of pretty transparent questions shows that in the general population, there are a lot of people who will freely tell you on a questionnaire that they aren... (read more)

Big Five Openness correlates with political liberalism, so cet par it would be weak Bayesian evidence for open-mindedness, even if it is not an example of it.

2gwern
Speaking of Big Five: If everyone thought they were completely open-minded, then how can questionnaires even work for measuring Openness? I mean, have you read or taken one of them? The O questions aren't remotely subtle.

I am completely uninformed on the technical particulars here, so this is idle speculation. But it isn't totally implausible that ideological factors were at play here. By this I don't mean that there were arguments being deployed as soldiers - nothing political, as far as I'm aware, rides upon the two theories - but that worldviews may have primed scientists (acting in entirely good faith) to think of, and see as more reasonable, certain hypotheses. Dialectical materialism, for instance, tends to emphasize (or, by default, think in terms of) qualitative tr... (read more)

8Douglas_Knight
If anything, it seems the opposite to me. The biogenic theory is about swamps that only occurred in particular places in particular geologic periods, whereas the abiogenic theory, though I did not say so, is about a continual process uniform through space and time, except for variation in the porosity of rock, especially capstones, a particularity that is shared with the other theory. The Germanic founders of quantum mechanics did invoke Idealism, and the Soviets criticized them for it, but this was quite explicit.

And that willingness to invest such time might correlate with certain factors.

For present purposes, I suppose it includes any domain including the defense of lying itself.

0TheOtherDave
So, I think "a defense of corrupting intellectual discourse with lies" collapses into looking for a defense of lying more generally... would you agree? I'm not trying to put words in your mouth, just trying to make sure I've understood you.

All this needs the disclaimer that some domains should be lie-free zones. I value the truth and despise those who would corrupt intellectual discourse with lies.

Can anyone point me to a defense of corrupting intellectual discourse with lies (that doesn't resolve into a two-tier model of elites or insiders for whom truth is required and masses/outsiders for whom it is not?) Obviously there is at least one really good reason why espousing such a viewpoint would be rare, but I assume that, by the law of large numbers, there's probably an extant example somewhere.

0Douglas_Knight
Here's something. It's not a defense of lying, but I do think it's an example of advocating lying that does not resolve into elites versus outsiders, in an essay by Gould: 1 2 3 4. It ends with a passage which I read as advocating that the reader indoctrinate himself with the belief. I don't think it's clear whether he thinks it true or false, just too consequential to leave to the facts. This isn't an exhortation to indoctrinate the masses with lies, but for the reader to first indoctrinate himself. I think that this is a common pattern. It's possible that I'm reading this wrong. Perhaps it is a coded message of esoteric knowledge and elites are supposed to know better than to indoctrinate themselves. Indeed, that could apply to any example along these lines. Or perhaps I'm reading too much into those words and they aren't meant to be indoctrination at all. Some nearby passages that argue against that: ---------------------------------------- For anyone else, the object level of the essay came up here (though perhaps for the meta level of another debate). I do think it is a good essay.
4ChrisHallquist
I'm trying to take the idea of not lying in science journals and broaden it to include fields other than science, and public discussion in places other than journals. A specific example would be Christian apologist William Lane Craig (who I've been following long enough to become convinced that the falsehoods he tells are too systematic to all be a matter of self-deception).
2ChristianKl
Do you believe that Sokal was immoral when he wrote his famous paper? There are people who suggest that Bem wrote his latest famous paper for the same reason. If you think that the system is inherently flawed and corrupt and has no error correction built in, the strategy of placing lies into the system to make it blow up makes sense.
5TheOtherDave
Can we taboo "intellectual discourse"? As I think about your question I realize that I'm not sure I understand what that phrase is being used to refer to in this context.

At LessWrong there've been discussions of several different views all described as "radical honesty." No one I know of, though, has advocated Radical Honesty as defined by psychotherapist Brad Blanton, which (among other things) demands that people share every negative thought they have about other people. (If you haven't, I recommend reading A. J. Jacobs on Blanton's movement.) While I'm glad no one here thinks Blanton's version of radical honesty is a good idea, a strict no-lies policy can sometimes have effects that are just as disastrous.

... (read more)
-1hyporational
If someone close to me started being that honest or more importantly submissive with me, the power imbalance would probably upset me much more than any truths exposed. I don't want to control my friends, I want them to challenge me and support me. Alternatively a sudden change like that without obvious submissiveness might make me rather suspicious of what they're hiding behind those little lies. This is not to say there aren't radically honest people who aren't even a bit submissive. I haven't seen such people and they might be rather interesting, but I wouldn't introduce them to anyone else I know. One person I know pretends to be radically honest by telling all kinds of personal stuff even to strangers nobody in their right mind would expose, but is actually full of shit too.

This, but in a more general sense for the first: Pascal thought there were a bunch of sophisticated philosophical reasons that you should be a Catholic; the Wager was just the one he's famous for.

I suspect this was written and is being upvoted in very different senses.

2Viliam_Bur
Yes, it was written in the same text as:
1Jayson_Virissimo
What are the two opposing senses you have in mind?
3VAuroch
Very true. Is it still a rationality quote if it isn't rational at all in the original context, but can be useful out of context?

See also Hanson's less than enthusiastic review.

Amusingly enough, the example of TrollBot that came to mind was the God expounded on in many parts of the New Testament, who will punish you iff you do not unconditionally cooperate with others, including your oppressors.

To provide a concrete example, this seems to suggest that a person who favours the Republicans over the Democrats and expects the Republicans to do well in the midterms should vote for a Libertarian, thereby making the Republicans more dependent on the Tea Party. This is counterintuitive, to say the least.

Is it? Again, I haven't done the math, but look at the behavior of minor parties in parliamentary systems. They typically demand a price for their support. If the Republican will get your vote regardless why should they care about you?

0Chrysophylax
I agree that voting for a third party which better represents your ideals can make the closer main party move in that direction. The problem is that this strategy makes the main party more dependent upon its other supporters, which can lead to identity politics and legislative gridlock. If there were no Libertarian party, for example, libertarian candidates would have stood as Republicans, thereby shifting internal debate towards libertarianism. Another effect of voting for a third party is that it affects the electoral strategy of politically distant main parties. If a main party is beaten by a large enough margin it is likely to try to reinvent itself, or at least to replace key figures. If a large third party takes a share of the votes, especially of those disillusioned with main parties, it may have significant effects on long-term strategies.

Taking arguments more seriously than you possibly should. I feel like I see all the time on rationalist communities people say stuff like "this argument by A sort of makes sense, you just need to frame it in objective, consequentialist terms like blah blah blah blah blah" and then follow with what looks to me like a completely original thought that I've never seen before.

Rather than - or at least in addition to - being a bug, this strikes me as one of charity's features. Most arguments are, indeed, neither original nor very good. Inasmuch as you can substitute them for more original and/or coherent claims, then so much the better, I say.

3asr
Yes. But it's not doing any favors to anybody if you pretend that a new coherent argument is the same as an old incoherent argument. In my experience, the authors of the previous argument are often hesitant to agree with the new rephrasing -- it's not written in the terms they use to understand the world.

Another consideration is the effects of your decision criteria on the lesser evil itself. All else being equal, and assuming your politics aren't so unbelievably unimaginative that you see yourself somewhere between the two mainstream alternatives, you should prefer the lesser evil to be more beholden to its base. The logic of this should be most evident in parliamentary systems, where third party voters can explicitly coordinate and sometimes back and sometimes withdraw support from their nearest mainstream parties, depending on policy concessions.

0gjm
The assumption you appear to be making here is that you're either between the two mainstream parties (and "unbelievably unimaginative") or further out than one of them along the axis running between them, somewhere near to where "their base" lives (and presumably not so unimaginative). I think this assumption is very wrong, and I don't see any reason for accepting it. It's wrong for at least two reasons. (1) Politics is not one-dimensional. You might be fairly near the middle on that primary axis but far from the two mainstream options on some other axis. (2) You may be further out than "between the two mainstream alternatives" but the nearest applicable "base" may be further out and highly undesirable. (For instance, you might consider that the centre ground between Republicans and Democrats is too far left while thinking the Tea Party is too far right -- or, for that matter, too far out on some other axis.)
0Chrysophylax
How would you go about achieving this? The only interpretation that occurs to me is to minimise the number of votes for the less-dispreferred main party subject to the constraint that it wins, thereby making it maximally indebted to (which seems an unlikely way for politicians to think) and maximally (apparently) dependent upon its strongest supporters. To provide a concrete example, this seems to suggest that a person who favours the Republicans over the Democrats and expects the Republicans to do well in the midterms should vote for a Libertarian, thereby making the Republicans more dependent on the Tea Party. This is counterintuitive, to say the least. I disagree with the initial claim. While moving away from centre for an electoral term might lead to short-term gains (e.g. passing something that is mainly favoured by more extreme voters), it might also lead to short-term losses (by causing stalemate and gridlock). In the longer term, taking a wingward stance seems likely to polarise views of the party, strengthening support from diehards but weakening appeal to centrists.

Sure. Or more glibly, does malaria not inhibit economic development?

Educated women have fewer children, reduced childhood mortality means less hedging to reach a desired number of children, the above-noted shift away from agriculture and mandatory public schooling reduce the economic value of child labor, some other stuff.

1pianoforte611
Wait a minute, does providing malaria nets or deworming kits lead to economic development?

Also, deontic concerns about forcing existence on people.

As Apprentice points out the heritability of prosocial behaviors such as cooperativeness, empathy and altruism is 0.5, and I think most people here are aware that IQ has a heritability around that number as well and is a pretty good predictor of life outcomes. If you want to increase the number of people in the world that are like yourself, then having children is a great way of doing so.

I would submit that most people are not very good about judging whether they are prosocial geniuses. (This goe... (read more)

0[anonymous]
I find this to be the most interesting and important one, because to me, everybody who has dead children is technically a killer.
1pianoforte611
"The overall effect of economic development is to greatly reduce fertility." That's very interesting, why is that?

Technically speaking, this seems like an altruistic reason to write something for Ada Lovelace Day, not a selfish one. Unless you're using the term in the trivial sense where "selfish reason to" is pleonastic.

I won't be able to make it today, but I do promise to show up sometime soon so I can return the books I borrowed at the last swap.

For their part, Stalinists have tended to be fond of technical elites as well. However, I suspect that grisly examples may arise simply from the sheer size of the sample; the innumerable cruelties of the premodern world, after all, were chiefly overseen by humanistic elites. It may be that today humanistic values are substantially weaker and more "feminine" (from the perspective of their predecessors), but this may also be part of why existing power structures are less fond of employing them.

(All this, of course, assumes this is a useful dichotomy; the primary avenues for elite recruitment under modern liberalism are business and the legal profession, which straddle the line in some ways.)

Can Blindsight-style Scramblers employ anthropic reasoning?

2Wei Dai
Not sure what Scramblers are exactly, but in 2001 I came up with some ideas about how AIs could do something like anthropic reasoning without being sentient (which I eventually incorporated into UDT). Here's the original post I wrote, which was titled "no need for anthropic reasoning".
2BlindIdiotPoster
To the extent that anthropic reasoning works at all, it doesn't seem like sentience should be needed. To use an analogy, it seems to me that this non-sentient site is sort of using anthropic reasoning.

Doesn't the anthropic principle provide some difficulty for the latter solution as well - why should we find ourselves at the very beginning of such preposterously long lifespans?

Having spoken with you in person (unaware that this was a consciously chosen practice) my experience was mostly that it was cognitively burdensome and that I was mostly worried for you. I suspect this isn't what you're shooting for! (I also classified it alongside my "Will is a troubled genius" model, which may or may not be what you're going for.)

My personal experience is that I tend towards terrible self-destructiveness when I don't get enough human warmth, so this strategy would not be a good debiaser for me. But if you can make it work... actually, this seems like a good thing to get external feedback on whether you make it work. Have you?

"Rather" my butt; there's an incredibly obvious rude reply I could have made, and would have, had I the minimal intelligence to realize it.

0Multiheaded
[Godfuckingdamnit, this supporting response is an experiment in social dynamics. Will LW ascribe any game-theoretical relevance to this here anecdotal data of two comrades sticking together in the face of negative karma? Or is it all part of a larger plot I'm weaving?] [:comradefist:]

If you are much better than the market at predicting how cards will trend, you should probably be working for Star City or some other secondary market giant.

Probably the continuous uptrend in the P9 et al. can be understood as rational if the continued growth of the game is uncertain. There's always the black swan possibility that Wizards will catastrophically fuck up in some way and hence let them tumble down. In addition, the growth of eternal formats is itself limited by the availability of staples. I would suspect there's an upper limit to how expensiv... (read more)

Oh, I'm sure if I keep on my current kick I can dip below a kilokarma.

0A1987dM
That would still not be good evidence that you have a low IQ, rather than just being a dick. Hanlon's razor only goes so far.

Maybe you can call in Gwern to measure my skull shape and really narrow it down.

The biggest barrier that has anything to do with cleverness? Sure.

0A1987dM
The biggest barrier to joining LW all right, but not the biggest barrier to staying on LW long enough to get more than 1000 karma points.
9TimS
Because that is the biggest barrier to new people joining LW.
-2A1987dM
Don't you? If you're a human, it's almost certainly somewhere between 10 and 190; if you made it through high school, it's very likely over 70; if you have a university degree, it's probably over 90; if you haven't won a Nobel Prize or similar, it's probably below 160; must... resist... the temptation of making examples using the words “black” or “Jewish”; and so on.
Dahlen150

Right. Stop. Just stop. I can see right through what you're doing now.

It wasn't a "perfectly reasonable hypothesis", it was meant to reflect badly on me; it was an oblique accusation that I broke the social norm of not calling people stupid, or not arrogantly believing everybody who disagrees with me to be stupid. Of course I don't believe that you, or anybody smart enough to be on LW, would ever give serious consideration to the hypothesis that they're really, truly, honest-to-God dumb; no, you're a bunch of reasonably smart guys that are aware th... (read more)

Being angry is a signal that you're willing to back up your disagreement with consequences of some sort, whether it's violence or a lost friendship. It's also a signal, commensurate with the degree to which it is embarrassing, that this is highly important to you. Why, precisely, is it irrational to respond to this? Did evolution prime us to respond to it because it thought it would be funny? It is, indeed, not obvious to me (though perhaps I have low IQ) that it is astonishingly stupid to be more convinced (behaviorally) by pathos than logos; behavioral r... (read more)

3TimS
Being angry signals lots of things - and if I desire less angry reaction, I need to figure out the function of the anger in this particular context. Dahlen's point seems to be that in the ordinary social context, anger tends to function as an attention-seeking behavior, not a conflict-resolution behavior. In other words, most anger is trying to yank someone's chain. If that is the case, then responding to the anger with more anger is not consistent with having a goal of reducing the amount of anger directed at oneself. Your assertion that anger reactions can have only one function seems likely to be false - and not a charitable or steelman reading of Dahlen's post.
1Dahlen
Look. The basic assumption here is that people would rather not be the targets of a hysterical person's fits, that it's unpleasant to them, and if there were anything they could do to discourage it, they would do it. Another assumption is that people remember which strategy worked on a certain person when they tried to achieve a certain goal in their interaction with them. Therefore, if someone wants to get you to stop being angry at them, and they throw you a hissy fit, and it works, and when they tried to reasonably talk things through with you it didn't work, then they'll remember that the best way to get you to stop being angry at them is to throw you a hissy fit. The next time they'll want you to stop being angry at them, they'll probably throw you another. You don't want this. You're uncomfortable when they do that. Therefore the rational thing to do is not to reinforce that sort of behavior, and reinforce instead the behavior that makes you feel comfortable. I didn't say anything about the rationality of responding to anger per se. I just said that reinforcing a behavior you don't want to be subject to is irrational (and I thought any audience could agree with me on that) and that this particular case belongs to that class of irrational things to do. Why, yes. I earnestly believe that evolution has a sense of humour which influences its "decisions" regarding what sorts of behavioral tendencies to implement in humans. It's disingenuous to suggest an answer to your question which you expect no reasonable person to give. Possibly, but I am not very tempted to fault the quality of my logos for the failure of my attempt at mediation, since the obstacle it had to overcome was of the kind "I don't want to listen to you. (I want to indulge in my anger.)". The only response that the other person would accept of me was to shut up, admit to not quite qualifying as a human being because of my moral faults, feel horrible about it and leave the room. I am inclined to b

With respect to this being a "danger," don't Boltzmann brains have a decision-theoretic weight of zero?

1James_Miller
Why zero? If you came to believe there was a 99.99999% chance you are currently dreaming, wouldn't it affect your choices?

This says to me that early childhood nutrition is the common factor here.

Translating any serious insights into LW-speak by myself is a bit of a daunting task

I like to think my entire tenure here has been something of an attempt at this, although of course I can't say how successful it's been.

(I'd also characterize it as in black rather than clown suits, at least from the inside. Will Newsome and muflax are the clown suit guys here, God bless them.)

Some possibilities: it jets out in one direction, little droplets radiate outwards from all over, there are a bunch of miniature streams going in all directions, there are sorts of sheets of water radiating outward that split into droplets, it does any of these things at a rapid or very slow rate, the water doesn't leave at all

Reasoning: when I wring out a towel it usually all leaves in one big thing, then drips from all over. Does it all leave through one "faucet" because that's where the pressure is or because it's the lowest point? I've never ... (read more)

The central premise of Time on the Cross - that slavery was economically profitable and unlikely to "wither away", and that this had some positive effect on the treatment of the slaves - seems quite plausible to me. (That said, I believe this is only true after the invention of the cotton gin).

The first half of the thesis is most assuredly true. It could be that if not for the invention of the cotton gin, slavery would not have been profitable in the cotton-growing regions of the US South, but slavery was extremely profitable and economically dynamic e... (read more)

For serious (though hardly undisputed) evidence that slavery wasn't, in certain respects, "not all that bad" see Fogel and Engerman's Time on the Cross. Note also that Fogel and Engerman were allowed to say this and that they both remain highly respected academics, despite Engerman existing in just the sort of field that the Sheeple Can't Handle My Thoughtcrime crowd would predict to be most witchhunty.

4TimS
In case it wasn't clear, I think people who think "Slavery wasn't so bad" are widely under-weighing the suffering caused by the violent enforcement of the status quo. Slaves tried to escape all the time, and fugitive slave enforcement was incredibly violent - and the violence was state-sanctioned. I was asking to try to understand how the statement imputed to Bill addressed that issue - because without addressing the violence of fugitive slave enforcement, the statement did not even seem plausible to me. The central premise of Time on the Cross - that slavery was economically profitable and unlikely to "wither away", and that this had some positive effect on the treatment of the slaves - seems quite plausible to me. (That said, I believe this is only true after the invention of the cotton gin). But I find it implausible that this benefit outweighed the negatives of the fugitive slave enforcement in the US.

Had it never been officially discouraged in the first place, I would still expect it to be less popular in 2013 than 1913. Wouldn't you?

-2MugaSofer
Honestly, I'm so uninformed any opinion I have on the subject is almost totally uncertain. I've typed out multiple replies to this comment, and deleted them all because I simply didn't have a high enough confidence rating. Sorry! OTOH, religions can either get more or less popular, so all things being equal (which I doubt they are in real life) lowered popularity is evidence for the laws working.
3gwern
The "secularization hypothesis" is seductive and common, but if you google, you'll see that it's debated whether societies do in fact become less religious as they get richer.

From my experience, I think that your estimate of the odds of encountering a comment "which blows apart their argument" as about 1% is overly optimistic. Maybe in some other fields it's different. At best you can expect a minor correction or a qualification.

That's probably a more accurate way of phrasing things, yeah.

For any given assertion by an expert on a situation you are not an expert on, the probability that your criticism is correct is small. However,

1) this does not mean that the expected value of the criticism is negative, even to the expert. If the expert receives 100 comments, 99 of which are confused and one of which blows apart their argument, then they are probably collectively valuable.

2) if the expert is unusually patient, your comment can present her with an opportunity to correct your confusion.

I would say that the important thing is more humility of presentation than humility of willingness to speak at all.

6Shmi
I agree, it's OK to ask what you think could be a stupid question. It's better than not asking, as you lose a chance to learn. It's not OK to insist that you are right and she is wrong once an explanation has been given, even if it does not make sense to you. Though, given the usual inferential distance problems, it's perfectly fine to ask for clarification. From my experience, I think that your estimate of the odds of encountering a comment "which blows apart their argument" as about 1% is overly optimistic. Maybe in some other fields it's different. At best you can expect a minor correction or a qualification. If the expert is any good, they probably have heard it all before, and if they aren't, their ego would likely prevent them from admitting that they are wrong, anyway.

If Rationality is Winning, or perhaps more explicitly Making The Decisions That Best Accomplish Whatever Your Goals Happen to Be, then Rationality is so large that it swallows everything. Like anything else, spergy LW-style rationality is a small part of this, but it seems to me that anything which one can meaningfully discuss is going to be one such small portion. One could of course discuss Winning In General at a sufficiently high level of abstraction, but then you'd be discussing spergy LW stuff by definition - decision theory, utility, and so on.

If bu... (read more)

1jooyous
This is a really good point and it is also related to Manfred's comment that I don't personally know how to reconcile with some of the points in the article. On one hand, I would like to have a lot of money because a lot of inconvenient things would suddenly become much easier. On the other hand, I would have to do other inconvenient things, like manage a lot of money. Also, I don't think I would be happy doing Oprah's job, even if it resulted in a lot of money. Basically, I would not mind lots of money but it is not currently a priority. So I don't know if I'm actually winning or not, oops. Therefore, a poll! How successful are you? [pollid:426] From a fame, money or bragging rights perspective, how ambitious are your current goals?[pollid:427]
8Jayson_Virissimo
I never did find out if any sizable fraction of Less Wrongers would bite this bullet. That is to say, to affirm the claim that, all else equal, a person with more physical strength is necessarily more rational.

Okay, so at this point we're basically disagreeing over what someone intended by what they say. Unless Julian wants to clarify I'm going to tap out.

0Eugine_Nier
I don't believe so. At least I can't see where your position differs from mine. The difference is you object to my formulation of her position in a way that doesn't make anthropology look good.

Which claim? The one that anthropologists are endorsing is not the one that's politically convenient to them.

or even outright lies

You're misunderstanding Julian's claim, albeit I think for reasons of inferential distance rather than deliberate misreading. The claim was not that anthropology/Sinister Cathedral Orthodoxy endorses inborn gender identity, despite its being wrong, for its political utility to trans rights. Such Orthodoxy is precisely the basis on which he thinks it is wrong. The claim was that activists endorse this false belief for its political utility, and that he and other Sinister Cathedral Agents don't feel particularly obliged to go out of t... (read more)

-2Eugine_Nier
Of course not. Her claim is that the great and noble anthropologists are deceiving the public for the greater good.
0Eugine_Nier
My point is that one way or another the claim obtains the official stamp of approval as "scientific", and that this is an argument to be highly skeptical of anthropological claims with this approval.