And yet not traditional enough to see any problem with the UK's disastrous immigration policy. The BNP exists pretty much entirely because the "conservative" party is more concerned with not being called racists than with doing what the majority of their constituents have been demanding for decades.

"Genuinely desirable" seems like the problem here, in that it's conflating base sexual attraction with a more pragmatic evaluation of someone's prospects.

Beta males certainly have many admirable qualities; they're reliable, productive, and civil, usually friendly and loyal as well. But those qualities, important as they are, are simply not attractive.

Alpha males, on the other hand, are really quite a menace. The Dark Triad traits which make them attractive also mean they are shiftless and poor contributors to society, at least for the most part.

Hence the pattern of "Alpha fucks, Beta bucks." Women want to get the Alpha but will, if forced to by circumstances, trade sex to Betas for resources / security.

In that context, female "Betas" would be the low-risk women men settle for reluctantly while "Alphas" would be high-risk women who are highly sought after.

The problem here is that, as far as I can tell, a "Tell" culture would immediately become a "Lie Ineptly" culture.

Most of the time, in my experience anyway, when you don't want to help someone it's for a reason you couldn't say without nuking or at least damaging the relationship. Even worse, the level of detail / emotion in the "Tell" is much higher than in a straightforward "Ask", which makes the usual evasions seem hollow and requires more elaborate excuses. And most people suck at spontaneous deception, since the only ones of us who get any practice tend to get weeded out of normal society pretty quickly as it is.

"Telling" sounds great if your goal is to quickly burn up your social capital for favors, which can be a smart move if you're not planning on seeing someone again anyway. But you can't really build a useful relationship that way; blunt honesty and bad lies aren't going to get you trust / comfort and without that you're fighting uphill for every little thing.

Something which strikes me is that scientists having science as a job at all is a somewhat new idea; unless I'm wrong, it used to be that a lot of the great naturalists were either independently wealthy aristocrats who pursued scientific inquiry as a hobby, or monks supported by the other brothers of their orders. On the one hand, they worked at their own pace and on topics close to their own interests (hard to imagine Mendel getting grant money, especially with his publishing rate), but on the other hand there are a lot of very bright people who are neither born into much money nor of a particularly religious bent, and who ought to at least be able to consider science.

Still, I think a smart person could sort of split the difference; an ascetic or fraternal order devoted to naturalism might have some appeal and solve a few of the basic problems. New initiates could be put to work reproducing experiments which would otherwise be ignored in the rush to publish unique papers; established scientists who aren't cut out for corporate life or constant grant haggling could relax and focus on their actual jobs; the scientific community would be able to self-regulate with less direct interference from outsiders; and governments or rich individuals who want to appear smart or socially conscious could patronize the order directly, without the intermediary weirdness of setting up organizations of their own to vet applicants. There are concerns with group-think and corruption, but then again it's not like those would be novel issues given what happens in peer-reviewed journals or university departments.

Is this a terrible idea, and if not how would you sell it?

You jest, but from what I understand that's not far off. He wasn't exactly a polygamist, but at the very least a serial philanderer.

Any problems here?

That people are stupefyingly irrational about risks, especially in regard to medicine.

As an example: my paternal grandmother died of a treatable cancer less than a year before I was born, out of a fear of doctors which she had picked up from post-war propaganda about the T4 euthanasia program. Now, this was a woman who was otherwise as healthy as they come, living in America decades after the fact, refusing to go in for treatment because she was worried some oncologist was going to declare a full-blooded German immigrant genetically impure and kill her to improve the Aryan race.

Now, granted, that's a rather extreme case, and she wasn't exactly stable on a good day from what I hear, but the point is that whatever bits of crazy we have get amplified completely out of proportion when medicine comes into it. People already get scared out of seeking treatment by rumors of mythical death panels or autism-causing vaccine programs, so you can only imagine how nutty they would get over even a small risk of actual government-sanctioned murder in hospitals.

(Not to mention that there are quite a lot of people with a perfectly legitimate reason to believe those RNGs might "just happen" to come up in their cases if they went in for treatment; it's not like American bureaucrats have never abused their power to target political enemies before.)

Imagine an agent with an (incorrect) belief that only by killing everyone would the world be the best place possible, and a prior against anything realistically causing it to update away. This would have to be stopped somehow, because of what it thinks (and what that causes it to do).

That doesn't quite follow.

Thinking something does not make it so, and there are a vanishingly small number of people who could realistically act on a desire to kill everyone. The only time you have to be deeply concerned about someone with those beliefs is if they manage to end up in a position of power, and even that just means "stricter controls on who gets access to world-ending power" rather than searching for thoughtcriminals specifically.

So if we assume a measure is invalid, it is useless to us (as an accurate measure anyway; you already pointed out a possible rhetorical use)?

If you'll forgive my saying it, that seems like more of a tautology about measurements in general than an argument about this specific case. If you have evidence that general intelligence as-measured-by-IQ is invalid, or even evidence that people unfamiliar with the field like Dr Atran or Gould take issue with 'reifying' it, that would be closer to what the original question was looking for.

I realize this comes off as a bit rude, but this particular non sequitur keeps coming up and is becoming a bit of a sore spot.

Not to mention that we don't know for sure that there even is a significant population difference here. It could just as easily be one of the traits on which humans, as a species, seem to be generally consistent.

The point I was making, albeit ineptly, is that good research on the topic would be interesting and any potential ideological fallout shouldn't deter people from it.

There's a very commonly accepted line of thought around here whereby any sufficiently good digital approximation of a human brain is that human, in a sort of metaphysical way anyhow, because its model of the brain uses the same underlying algorithms which describe how that brain works.

(It doesn't make much sense to me, since it seems to conflate the mathematical model with the physical reality, but as it's usually expressed as an ethical principle it isn't really under any obligation to make sense.)

The important thing is that once you identify sufficiently good simulations as moral agents, you end up twisting yourself into ethical knots over things like how powerful beings in the far future treat the NPCs in their equivalent of video games. For that reason, and others I'm not going to get into here, it seems like a fairly maladaptive belief even if it were accurate.
