PhilosophyTutor comments on Rational Romantic Relationships, Part 1: Relationship Styles and Attraction Basics - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Assuming for the sake of argument that women are sentient, but also that they have absolutely no free will when it comes to sexual relationships and that they can be piloted like a remote-controlled drone by a man who has cracked the human sexual signalling language (a hypothesis only slightly more extreme than the PUA hypothesis), that would still leave us with the question of how to maximise the utility of these strange, mindless creatures given that they are sentient and their utility counts as much as any other sentient being's.
PUA might be compatible with this if you assume that just by chance the real utility function of the human female just happens to be maximised by the behaviour which maximises the utility of the PUA, which is to say that you maximise the utility of all human females by having a one night stand with them if you find them physically attractive but not inclined to be subservient, and a longer-term relationship with them under some circumstances if you want regular sex and you can manage the relationship so that you are dominant. (We could call this the Weak Gor Hypothesis).
However this has not been demonstrated, and it might turn out that in some cases women are happier if they are communicated with honestly, treated as equal partners in a relationship, given signals that they are high-status in the form of compliments and "romantic" gestures and so forth. If that was the case then ethically some weight would have to be given to these sources of utility, and it would be ethically questionable to talk down such behaviour as "beta" since it would have turned out that the alpha/beta distinction did not match up with a real distinction between utility-maximising and non-utility-maximising behaviour in all cases.
LOL. Given that IRL Goreans (male and female) exist, someone who wants that sort of thing needn't try converting anyone from the general dating pool.
I've paraphrased your comment to make it gender neutral and preference-neutral.
The thing is, what maximizes our happiness isn't always what's predictably enjoyable. (See prospect theory, fun theory, liking vs. wanting, variable reinforcement...) Excitement and variety are very often the spice of life.
Frankly, having a partner who does nothing but worship you is both annoying and unattractive... even though it might sound like a good idea on paper. (For one thing, you can feel pressured to reciprocate.)
I'm reminded of Eliezer's "fun theory" posts about the evolution of concepts of heaven: that if you're a poor farmer then no work to do and streets paved with gold sounds like heaven to you, but once you actually got there, it'd be bloody boring.
In the same way, a lot of romantic ideals for relationships sound like heaven only when you haven't actually gotten there yet.
I think we need to be careful of false dichotomies and straw men, since so much of PUA doctrine/knowledge/dogma (pick your preferred term) is communicated in the form of dichotomies, which I suspect are false to at least a significant extent.
The possibility I advanced was that "women are happier if they are communicated with honestly, treated as equal partners in a relationship, given signals that they are high-status in the form of compliments and "romantic" gestures and so forth". This does not seem to me to be the same thing as saying that women are happier with "a partner who does nothing but worship [them]", although I can see how if you were trained to see relationships in terms of the PUA alpha/beta dichotomy it might seem to be the same thing to you. Most obviously treating someone as an equal partner is inconsistent with doing nothing but worshipping that person.
You also are asserting without evidence that the kind of relationship I just described would not be fun if you were actually in one, which seems to me to contain implicit status attack, since it assumes that I have never been in such a relationship and hence that I am speaking from a position of epistemological disadvantage compared to yourself.
Would I be far wrong if I guessed that your data set for this implicit assumption is based on interacting with a significant number of PUAs? If so the underlying problem may well simply be self-selection bias. The kind of people who have long-term relationships based on honesty, equality and support are probably unlikely to self-select for participation in PUA forums and hence their experiences and viewpoints will be under-represented in those circles compared to their prevalence in the general population.
Actually, it's my observation that men who consciously make an effort to do what you said, actually end up doing what I said, from the point of view of the people they interact with.
That is, they are poorly calibrated and overshoot the mark. (Been there, did that.)
Hm. Sorry - the important piece left out of my explicit reasoning is stated above: namely, that people who think they are "communicating honestly", et al, usually end up doing something completely different. It's the absence of that failure mode which I implicitly assume you've experienced, and which is AFAICT a less common experience for men (with no implied connotations about status), if for no other reason than that women are on average better socially calibrated than men.
Yes, you would. ;-)
Data point: I have been married for 15 years and would not classify myself as a PUA in any sense, although based on what statistics I've read about men in general, I would have to consider myself to have had above-average sexual success (though not drastically so) before I got married -- largely due to behaviors PUAs would've described as social game, direct game, and qualifying. (However, the terms didn't exist at the time, as far as I know -- this was pre-internet for the most part.)
At no time were a lack of honesty, equality, or support a part of what I did or sought, so I'm not sure why you think they are anathema to PUA goals.
PUA literature, like so many other things, is largely what you make of it. When I look at it, I find the parts that are positive, life-affirming, and utility-increasing for everybody involved. So your objections look to me like strawman attacks.
One thing I have observed is that once I've read the parts of PUA theory that sound good (i.e., more politically correct), I find that on reading the less politically-correct things, they are actually advocating similar behaviors, and simply describing them differently. Some use more inflammatory and controversial language laced with all sorts of negative judgments about men and women; others emphasize empathy and helping men to see things from women's point of view (without an added heap of patronizing the women in the process).
And yet, when it comes right down to it, they're still saying to do the same things; it's only the connotations of their speech that are different.
IOW, ISTM that you are arguing with the misogynistic connotations of some fragment of PUA theory that you've encountered; I disagree because the connotations are AFAICT superfluous to functional PUA advice, having had the opportunity to compare misogynistically-connotated and non-misogynistically-connotated descriptions of the same thing.
This is something that PUA and self-help in general have in common, btw: they are best read in such a way as to completely disregard connotation, judgment, and theory, in favor of simply extracting as directly as possible what precise behaviors are being recommended and what predictions are being made regarding the outcomes of those behaviors. Only after determining whether the behavior produces the predicted result, is it worth exploring (or refuting) the advocate's theories about "how" or "why" it works.
Case in point: "The Secret" and other "law of attraction stuff", much of which turns out to be scientifically valid, if (and only if) you completely ignore the nutty theories and focus on behavior and predictions. Richard Wiseman's research into "luck theory" actually demonstrates that the behaviors and attitudes recommended by certain "law of attraction" proponents actually do make you luckier, by increasing the probability that you will notice and exploit serendipitous positive opportunities in your environment.
If Wiseman had simply dismissed "The Secret" as another nutty new-age misinterpretation of physics, that research couldn't have been done. I suggest that if you seriously intend to research PUA (as opposed to making what seem to me like strawman arguments against it), you follow Wiseman's example, and break down whatever you read into concrete behaviors and outcome predictions, minus any theories or political connotations of theories.
I think your position is going to turn out to be unfalsifiable on the point of whether relationships involving honesty, equality and mutual support actually exist. If your response to claims that they exist is to say "Well in my experience they don't exist, the people who think they do are just deluded" I can't provide any evidence that will change your views. After all, I could just be deluded.
As for whether I'm engaging with, and have read, the "real" PUA literature or the "good" PUA literature, I'm not sure whether or not this is an instance of the No True Scotsman argument. There's no question that a large part of the PUA literature and community are misogynist and committed to an ideology that positions themselves as high-status and women and non-PUA men as low-status. As such that part of PUA culture is antithetical to the goals of LW as I understand them since those goals include maximising everyone's utility.
If there's a subset of positive-utility PUA thinking then that criticism does not apply and it's at least possible that if they have scientific data to back up their claims then there is something useful to be found there.
I think it's the PUA advocates' burden of proof to show us that data though, if there really is an elephant of good data pertinent to pursuing high net-utility outcomes in the room. As opposed to some truisms which predate PUA culture by a very long time hidden under an encrustation of placebo superstitions.
Huh? I didn't say those things didn't exist. I said I was not searching for a lack of those things (I even bolded the word "lack" so you wouldn't miss it), and that I don't see why you think that PUA requires such a lack.
Authentic Man Program and Johnny Soporno are the two schools I'm aware of that are strongly in the honesty and empowerment camps, AFAICT, and would constitute the closest things to "true scotsmen" for me. Most other things that I've seen have been a bit of a mixed bag, in that both empathetic and judgmental material (or honest and dishonest) can be found in the same set of teachings.
Of notable interest to LW-ers, those two schools don't advocate even the token dishonesty of false premises for starting a conversation, let alone dishonesty regarding anything more important than that.
(Now, if you want to say that these schools aren't really PUA, then you're going to be the one making a No True Scotsman argument. ;-) )
As I said, I'm less interested in "scientific" evidence than Bayesian evidence. The latter can be disappointingly orthogonal to the former, in that what's generally good scientific evidence isn't always good Bayesian evidence, and good Bayesian evidence isn't always considered scientific.
More to the point, if your goals are more instrumental than epistemic, the reason why a particular thing works is of far less interest than whether it works and how it can be utilized.
I took a quick look at AMP and Soporno's web sites and I'm more than happy to accept them as non-misogynistic dating advice sources aiming for mutually beneficial relationships. I wasn't previously aware of them but I unconditionally accept them as True Scotsmen.
I'm now interested in how useful their advice is, either in instrumental or epistemic terms. Either would be significant, but if there is no hard evidence then the fact that their intentions are in step with those of LW doesn't get them a free pass if they don't have sound methodology behind their claims.
I'm aware Eliezer thinks there's a difference between scientific evidence and Bayesian evidence, but it's my view that this is because he has a slightly unsophisticated understanding of what science is. My own view is that the sole difference between the two is that science commands you to suspend judgment until the evidence against the null hypothesis reaches p < 0.05, at least for the purposes of what is allowed into the scientific canon as provisional fact, whereas Bayesians are more comfortable making bets under greater degrees of uncertainty.
Regardless, if your goals are genuinely instrumental you very much want to figure out what parts of the effect are due to placebo effects and what parts are due to real effects, so you can maximise your beneficial outcomes with a minimum of effort. If PUA is effective to some extent but solely due to placebo effects then it only merits a tiny footnote in a rationalist approach to relationships. If it has effects beyond placebo effects then and only then is there something interesting for rationalists to look at.
There is a word for the problem that results from this way of thinking about instrumental advice. It's called "akrasia". ;-)
Again, if you could get people to do things without taking into consideration the various quirks and design flaws of the human brain (from our perspective), then self-help books would be little more than to-do lists.
In general, when I see somebody worrying about placebo effects in instrumental fields affected by motivation, I tend to assume that they are either:
Inhumanly successful and akrasia-free at all their chosen goals, (not bloody likely),
Not actually interested in the goal being discussed, having already solved it to their satisfaction (ala skinny people accusing fat people of lacking willpower), or
Very interested in the goal, but not actually doing anything about it, and thus very much in need of a reason to discount their lack of action by pointing to the lack of "scientifically" validated advice as their excuse for why they're not doing that much.
Perhaps you can suggest a fourth alternative? ;-)
I'd prefer not to discuss this at the ad hominem level. You can assume for the sake of argument whichever of those three assumptions you prefer is correct, if it suits you. I'm indifferent to your choice - it makes no difference to my utility. I make no assumptions about why you hold the views you do.
My view is that the rationalist approach is to take it apart to see how it works, and then maybe afterwards put the bits that actually work back together with a dollop of motivating placebo effect on top.
The best way to approach research into helping overweight people lose weight is to study human biochemistry and motivation, and see what combinations of each work best. Not to leave the two areas thoroughly entangled and dismiss those interested in disentangling them as having the wrong motivations. I think the same goes for forming and maintaining romantic relationships.
Me either. I was asking you for a fourth alternative on the presumption that you might have one.
FWIW, I don't consider any of those alternatives somehow bad, nor is my intention to use the classification to score some sort of points. People who fall into category 3 are of particular interest to me, however, because they're people who can potentially be helped by understanding what it is they're doing.
To put it another way, it wasn't a rhetorical question, but one of information. If you fall in category 1 or 2, we have little further to discuss, but that's okay. If you fall in category 3, I'd like to help you out of it. If you fall in an as-yet-to-be-seen category 4, then I get to learn something.
So, win, win, win, win, in all four cases.
This is conflating things a bit: my reference to weight loss was pointing out that "universal" weight-loss advice doesn't really exist, so a rationalist seeking to lose weight must personally test alternatives, if he or she cannot afford to wait for science to figure out the One True Theory of Weight Loss.
This presupposes that you already have something that works, which you will not have unless you first test something. Even if you are only testing scientifically-validated principles, you must still find which are applicable to your individual situation and goals!
Heck, medical science uses different treatments for different kinds of cancer, and occasionally different treatments for the same kind of cancer, depending on the situation or the actual results on an individual - does this mean that medical science is irrational? If not, then pointing a finger at the variety of situation-specific PUA advice is just rhetoric, masquerading as reasoning.
No ad hominem fallacy present in grandparent.
Why don't you first describe one, then the other, then contrast them? Then, describe Eliezer's view and contrast that with your position.
I'll try to do it briefly, but it will be a bit tight. Let's see how we go.
Bayes' Theorem is part of the scientific toolbox. Pick up a first year statistics textbook and it will be in there, although not always under that name (look for "conditional probability" or similar constructs). Most of scientific methodology is about ensuring that you do your Bayesian updating right, by correctly establishing the base rate and the probability of your observations given the null hypothesis. (Scientists don't state their P(A), but they certainly have an informal sense of what P(A) is likely to be and are more inclined to question a conclusion if it is unlikely than if it is likely).
If you're doing Bayes right it's the same as doing science, but I think some of the LW groupthink holds that you can do a valid Bayesian update in the absence of a rigorously established base rate, and so they think this is a difference between being a good Bayesian and being a good scientist. I think they are just being bad Bayesians, since updating is no better than guesswork in the absence of a rigorously obtained base rate.
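The point about base rates can be made concrete. Here is a minimal sketch (in Python, with made-up numbers for a generic diagnostic-style test; none of these figures come from the discussion above) of how the very same likelihoods produce wildly different posteriors depending on the prior you plug in:

```python
# Bayes' rule: P(A|B) = P(B|A) * P(A) / P(B),
# where P(B) = P(B|A)*P(A) + P(B|~A)*(1 - P(A)).
def posterior(p_b_given_a, p_b_given_not_a, prior):
    p_b = p_b_given_a * prior + p_b_given_not_a * (1 - prior)
    return p_b_given_a * prior / p_b

# A hypothetical test with 90% sensitivity and a 5% false-positive rate.
# With a rigorously established base rate of 1%, a positive result
# still leaves the hypothesis more likely false than true:
well_calibrated = posterior(0.90, 0.05, 0.01)   # ~0.154

# Guessing a base rate of 50% instead yields a wildly different answer:
guessed = posterior(0.90, 0.05, 0.50)           # ~0.947

print(round(well_calibrated, 3), round(guessed, 3))
```

Same evidence, same likelihood ratio; only the base rate differs, and the conclusion flips. That is the sense in which an update without a rigorously obtained base rate is no better than guesswork.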
Eliezer (based on The Dilemma: Science or Bayes? ) doesn't quite carve up science-culture from ideal-science-methodology the way I do, and infers that there is something wrong with Science because the culture doesn't care about revising instrumentally-indistinguishable models to make them more Eliezer-intuitive. I think this has more to do with trying to win a status war with Science than with any differences in predicted observations that matter.
That doesn't mean it doesn't underlie the entire structure. As an analogy, to get from New York to Miami, one must generally go south. But instructions on how to get there will be a hodgepodge of walk north out of the building, west to the car, drive due east, then turn south... the plane takes off headed east... and turns south... etc. Showing that going south is just one of several directions in which one turns while walking doesn't mean it's conceptually no different from north for getting from New York to Miami. Similarly:
If one is paid to do plumbing, then there is no difference between being a good plumber and a "good Bayesian", and in that sense there is no difference between being a "good Bayesian" and a "good scientist".
In the sense in which it is intended, there is a difference between being a "good Bayesian" and a "good scientist". To continue the analogy, if one must go from Ramsey to JFK airport across the Tappan Zee Bridge, one's route will be on a convoluted path to a bridge that's in a monstrously inconvenient location. It was built there - at great additional expense as that is where the river is widest - to be just outside of the NY/NJ Port Authority's jurisdiction. The best route from Ramsey to Miami may be that way, but that accommodates human failings, and is not the direct route. Likewise for every movement that is made in a direction not as the crow flies. Bayesian laws are the standard by which the crow flies, against which it makes sense to compare the inferior standards that better suit our personal and organizational deficiencies.
Well, yes and no. It's adequately suited for the accumulation of not-false beliefs, but it both could be better instrumentally designed for humans and is not the bedrock of thinking by which anything works. The thing that is essential to the method you described is this: "Scientists...have an informal sense of what P(A) is likely to be and are more inclined to question a conclusion if it is unlikely than if it is likely". What abstraction describes the scientist's thought process, the engine within the scientific method? I suggest it is Bayesian reasoning, but even if it is not, one thing it cannot be is more of the Scientific method, as that would lead to recursion. If it is not Bayesian reasoning, then there are some things I am wrong about, and Bayesianism is a failed complete explanation, and the Scientific method is half of a quite adequate method - but they are still different from each other.
P(B|~A) is inversely related to P(A|B) by Bayes' Rule, so the direction is right - that's why we can make planes that don't fall out of the sky. But just using P(B|~A) isn't what's done, because scientists interject their subjective expectations here and pretend they do not. P(B|~A) doesn't contain whether or not a researcher would have published something had she found a two-tailed rather than a one-tailed test - a complaint about a paper I read just a few hours ago. What goes into p-values necessarily involves the arbitrary classes the scientist has decided evidence would fit in, and then measures his or her surprise at the class of evidence that is found. That's not P(B|~A), it's P(C|~A).
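A small illustration of that last point (the z statistic below is invented for the example, not taken from any paper discussed here): the same observed data yields a "significant" or "non-significant" p-value depending solely on which class of outcomes the researcher decided in advance to count as surprising, e.g. one-tailed versus two-tailed:

```python
import math

def one_tailed_p(z):
    # P(Z >= z) under the null, for a standard normal test statistic.
    return 0.5 * math.erfc(z / math.sqrt(2))

z = 1.8                   # hypothetical observed test statistic
p_one = one_tailed_p(z)   # ~0.036 -> "significant" at the 0.05 level
p_two = 2 * p_one         # ~0.072 -> not significant at the 0.05 level

print(round(p_one, 3), round(p_two, 3))
```

The data are identical in both rows; only the researcher's choice of outcome class C changed, which is exactly the sense in which the reported quantity is P(C|~A) rather than P(B|~A).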
Do you have examples of boundary cases that distinguish a rigorously established one with one that isn't?
If one believes in qualitatively different beliefs, the rigorous and the non-rigorous, one falls into paradoxes such as the lottery paradox. It's important to establish the actual nature of knowledge as probabilistic, and not be tricked into thinking science is a separate non-overlapping magisteria with other things.
With such actually correct understanding of how beliefs should work, we can think about improving our thinking rather than eternally and in vain trying to smooth out a ripple in a rug that has a table on each of its corners, hoping our mistaken view of the world has few harmful implications like "Jesus Christ is God's only son" and not "life begins at conception".
Or, we could not act on our most coherent world-views, only acting according to whatever fragment of thought our non-coherent attention presents to us. Not appealing.
Are you familiar with Michael Polanyi's Personal Knowledge?
His view is only slightly more strict, yet he arrives at some very different conclusions. For example, under your framework Rhine's ESP experiments are scientific hypothesis tests, and under his they are illogical. I am not convinced by Polanyi, but it is far from clear to me how you could show he is wrong. If you know how to show he is wrong and could explain that in a couple paragraphs (or point me to such a document) I would be very interested in reading it.
I'm not familiar with his work, unfortunately.
However a quote from one of the reviews concerns me. The reviewer says:
If that's Polanyi's position it seems both kooky and not immediately relevant to the topic, so unless you can take a shot at explaining what you think Polanyi's insights are that are relevant to the topic at hand I think we should drop this and take it up elsewhere or by other means if you want to talk about it further.
What are some examples of good scientific evidence that isn't good bayesian evidence?
Uh, how about all of parapsychology, aka "the control group for the scientific method". ;-) Psi experiments can reach p < .05 under conventional methods without being good Bayesian evidence, as we've seen recently with that "future priming" psi experiment.
(Note that I said "scientific" not Scientific. ;-) )
Ok, I wouldn't have necessarily classed that as 'good scientific evidence' but it seems to be useful Bayesian evidence so we must be looking at it from different angles.
If they see this behavior from a stranger, they hate it like a bad smell. Yuck.
If they see a lot of this in a relationship, they begin to lose attraction for him, and in the end hate him and cheat on him.
By the way, have you studied game theory? A man who always gives you treats and compliments is signalling his own low value, therefore his treats and compliments are devalued. Yes?
My personal belief is that female utility is maximized by a man who is alpha, who leads them rather than treating them as an equal, who keeps them "on their toes" by flirting with other chicks, but who occasionally surprises them with a big romantic gesture like a surprise weekend break, champagne on ice, hot sex in the penthouse suite. But he doesn't do it all the time, his rewards are unpredictable. This is in line with what game theory would predict.
Note that "utility" is not the same thing as "sexual pleasure".
Perhaps the reason you're being downvoted is because you're confusing game theory with behaviorism. Variable reinforcement schedules, and all that.
Also, I expect if you phrased the last part of your comment, say, as:
"People enjoy a little variety and unpredictability from their partners, and generally prefer not to have to come up with all the ideas for what to do."
It'd be less likely to be perceived as some sort of chauvinism. That statement, as it happens, is true of both men and women.
(Likewise, the first part of your comment describes things that men do in response to women's behavior, despite your writing it as if it were unique to women's response to men.)