PhilosophyTutor comments on Rational Romantic Relationships, Part 1: Relationship Styles and Attraction Basics - Less Wrong

48 Post author: lukeprog 05 November 2011 11:06AM

Comment author: usedToPost 08 November 2011 03:51:30PM *  6 points [-]

Lukeprog, you have produced exactly that which we have been warned against: an article and a paradigm which have all the appearances and dressings of rationality (lots of citations, links to articles on decision theory, rationalist lingo), but which spectacularly fail to actually pursue the truth.

Vladimir_M puts it better than I could:

First, there is the conspicuous omission of any references to the PUA elephant in the room. The body of insight developed by this particular sort of people, whatever its faults, is of supreme practical importance for anyone who wants to formulate practical advice in this area. Without referencing it explicitly, one can either ignore it altogether and thus inevitably talk nonsense, or pretend to speak based solely on official academic literature, which is disingenuous and unfair in its failure to attribute credit and also misleading for those who would like to pursue their own research in the matter...

he continues:

On the whole, the article is based on the premise that an accurate and no-nonsense analysis of the topic will result in something that sounds not just inoffensive, but actually strongly in line with various fashionable and high-status norms and ideals of the broader society. This premise however is flawed, and those who believe that this has in fact been accomplished should apply the powerful debiasing heuristic that says that when a seemingly rational discussion of some deeply problematic and controversial topic sounds pleasant and reassuring, there's probably something fishy going on.

And finally:

So, what about the quality of advice that will be produced by a LW discussion on these topics operating under such constraints of respectability, where disreputable sources of accurate information are tabooed, a pretense must be maintained that the discourse is grounded in officially accredited scholarship and other high-status sources of information, and -- most important of all -- the entire discourse and its bottom line must produce a narrative that is in line with the respectable, high-status views of humanity and society? I am not at all optimistic, especially having seen what has been produced so far!

Yvain is also on point:

shy, nerdy men who can't find anyone who will love them because they radiate submissiveness and non-assertiveness, and women don't find this attractive. Most women do find dominant, high-testosterone people attractive

In Three Worlds Collide, we were introduced to the "Order of Silent Confessors", which is "charged with guarding sanity, not morality". In this post especially, I feel that sanity is lying beaten and abused on the floor. I think we need the "Order of Silent Confessors" now.

As a start, Lukeprog, I think you should include the excerpts by Vladimir_M and Yvain above in your article.

Comment author: PhilosophyTutor 09 November 2011 01:22:07PM 4 points [-]

I should disclose immediately that I am one of the people who find the PUA community distasteful on a variety of levels, intellectual and ethical, and this may colour my viewpoint.

The PUA community may present themselves, and think of themselves, as a "disreputable source of accurate information" but in the absence of controlled trials I don't think the claim to accuracy is well-founded. Sticking strictly to the scientific literature is not so much ignoring the elephant in the room as suspending judgment as to whether the elephant exists until we can turn the lights on.

If it's been said already I apologise, but it seems obvious to me that an ethical rationalist's goals in relationship-seeking should be to seek a relationship that creates maximal utility for both parties, and that scientific evidence about how to find suitable partners and behave in the relationship so as to maximise utility for both partners is a great potential source of human happiness. It's obvious from even the briefest perusal of PUA texts that the PUA community are concerned very much with maximising their own utility and talking down the status of male outgroup members and women in general, but not with honestly seeking means to maximise the utility of all stakeholders.

Given that their methodology is incompatible with scientific reasoning and their attitudes incompatible with maximising global utility for all sentient stakeholders, I think it's quite correct to leave their claims out of a LW analysis of human sexual relationships.

Comment author: Vaniver 09 November 2011 02:00:50PM 4 points [-]

it seems obvious to me that an ethical rationalist's goals in relationship-seeking should be to seek a relationship that creates maximal utility for both parties

It is not clear to me that utilities can be easily compared. What tradeoff between my satisfaction and my partner's satisfaction should I be willing to accept? I can see how to elicit my preferences (for things like partner happiness, relationship duration, and so on) and try to predict how the consequences of my actions will impact my preferences, but I don't quite see how to add utilities, or compare the amount of satisfaction I could provide to multiple potential partners.

It's obvious from even the briefest perusal of PUA texts that the PUA community are concerned very much with maximising their own utility and talking down the status of male outgroup members and women in general, but not with honestly seeking means to maximise the utility of all stakeholders.

It's not clear that they want to talk down the status of women in general. Men becoming more attractive and less annoying to women seems to be better for women, and there's quite a bit in the PUA literature of how to keep a long-term relationship going, if that's what you want to do.

Comment author: PhilosophyTutor 09 November 2011 10:43:33PM *  1 point [-]

You are absolutely right that utilities cannot be easily compared and that this is a fundamental problem for utilitarian ethics.

We can approximate a comparison in some cases using proxies like money, or in some cases by assuming that if we average enough people's considered preferences we can approach a real average preference. However these do not solve the fundamental problem that there is no way of measuring human happiness such that we could say with confidence "Action A will produce a net 10 units of happiness, and Action B will produce a net 11 units of happiness".

In the case of human sexual relationships what you'd really have to do is conduct a longitudinal study looking at variables like reported happiness, incidence of mental illness, incidence of suicide, partner-assisted orgasms per unit time, longevity and so on.

That said this difficulty in totalling up net utilities is not a moral blank cheque. If women report distress after a one night stand with a PUA followed by cessation of contact then that has to be taken as evidence of caused disutility, and you can't remove the moral burden that entails by pointing out that calculating net utility is difficult or postulating that their distress is their fault because they are "entitled"/"in denial"/etc.

Comment author: Vaniver 09 November 2011 10:56:10PM 2 points [-]

conduct a longitudinal study looking at variables like

While this would give people more knowledge about how their actions turn into consequences, this doesn't help people decide which consequences they prefer, and so only weakly helps them decide which actions they prefer.

If women report distress after a one night stand with a PUA followed by cessation of contact then that has to be taken as evidence of caused disutility, and you can't remove the moral burden that entails

So, let's drop the term utility, here, and see if that clarifies the moral burden. Suppose Bob and Alice go to a bar and meet; they both apply seduction techniques; they have sex that night. Alice's interest in Bob increases; Bob's interest in Alice decreases. What moral burdens are on each of them, and where did those moral burdens come from?

Comment author: Prismattic 09 November 2011 11:43:12PM -2 points [-]

The thing is, some (granted, not all) of what falls under PUA or "apply seduction techniques" falls unambiguously into the category of dark arts.

I find it hard to believe that we want to argue that, "Dark arts are bad, except when they can get you laid."

Comment author: wedrifid 12 November 2011 04:24:50AM 4 points [-]

I find it hard to believe that we want to argue that, "Dark arts are bad, except when they can get you laid."

Dark arts AREN'T bad in general! Nor is avadakadavraing anyone that you would have shot with a gun anyway.

Comment author: Vaniver 10 November 2011 01:38:41AM 3 points [-]

I find it hard to believe that we want to argue that, "Dark arts are bad, except when they can get you laid."

Ah. I prefer not to argue "dark arts are bad," rather "dark arts do not illuminate." Tautologies have the virtue of being true.

(Put flippantly, sex is sometimes easier with the lights off.)

Comment author: Prismattic 11 November 2011 03:26:07AM *  1 point [-]

I was using "dark arts" here in the more narrow sense of "techniques designed to subvert the rationality of others by exploiting cognitive biases." I'm not speaking of being an effective flirt, or wearing flattering makeup and clothing. The sort of things I had in mind are, to take a mild example, bringing a slightly less attractive "wingman" to make oneself look more attractive than one would alone, or to take a serious example, whisking a woman from bar to bar to create the illusion of longer-term acquaintance. I see this as wrong for essentially the same reason that spiking someone's drink is wrong if they wouldn't sleep with you sober.

To oversimplify somewhat, I tend to see society as divided into three groups: those who don't generally aspire to rationality (the majority of the population), those who want to share the bounty of rationality to help others overcome their biases (Lesswrong), and those who would instead use their knowledge of rationality to exploit people in the first group. I acknowledge that I am more confused by the current negative karma of my grandparent than the karma of any other comment I have ever made on this site.

Comment author: PhilosophyTutor 12 November 2011 04:05:20AM *  2 points [-]

I acknowledge that I am more confused by the current negative karma of my grandparent than the karma of any other comment I have ever made on this site.

My observation is that most of the posts I have made that criticised PUA or PUA-associated beliefs have been voted down very quickly, but then they have bounced back up over the next day or so such that the overall karma delta is highly positive. One hypothesis that explains it is that there are a certain number of people reviewing this thread at short intervals who are downvoting posts critical of PUA, but that they are not the plurality of posters reviewing this thread.

ETA: Update on this. Posts critical of PUA ideology that are concealed from the main thread either by being voted to -3 or below, or by being a descendant of such, get voted into the ground, and as far as I can see this effect is largely insensitive to the intellectual value or lack thereof of the post. I hypothesise that the general LW readership doesn't bother drilling down to see what's going on in those subthreads and hence their opinions are not reflected in the vote count, while PUA-enthusiasts who vote along ideological lines do bother to drill down.

Posts critical of PUA that are well-written, logical, pertinent and visible to the general readership are voted up, overall.

Comment author: lessdazed 12 November 2011 07:33:11AM 5 points [-]

One explanation is that the first to read your messages are those you responded to, who are the people most likely to notice any poor fit between what they actually said and what they are alleged or implied to have said or believed.

Comment author: wedrifid 12 November 2011 05:04:59AM 0 points [-]

I acknowledge that I am more confused by the current negative karma of my grandparent than the karma of any other comment I have ever made on this site.

I'm shocked that it didn't stay below 0. Forget any point it was trying to make about dating - it sends totally the wrong message about 'lesswrong' attitudes towards 'dark arts'!

Comment author: PhilosophyTutor 10 November 2011 01:47:54AM 0 points [-]

While this would give people more knowledge about how their actions turn into consequences, this doesn't help people decide which consequences they prefer, and so only weakly helps them decide which actions they prefer.

I think it does help if people have pre-existing views about whether they like the internal experience of happiness, mental health, continued life, orgasms and so on, and about whether they can legitimately generalise those views to others. I don't think I would be making an unreasonable assumption if I assumed that an arbitrarily chosen woman in a bar would most likely have a preference for the internal experience of happiness, mental health, continued life, orgasms and so on and hence that conduct likely to bring about those outcomes for her would produce utility and conduct likely to bring about the opposite would produce negative utility.

So, let's drop the term utility, here, and see if that clarifies the moral burden. Suppose Bob and Alice go to a bar and meet; they both apply seduction techniques; they have sex that night. Alice's interest in Bob increases; Bob's interest in Alice decreases. What moral burdens are on each of them, and where did those moral burdens come from?

There is not enough information to say, and your chosen scenario is possibly not the best one for exploring the ethics of PUA behaviour, since it firstly postulates that the female participant is also using seduction techniques (hopefully defined in some more specific sense than just trying to be attractive), and secondly it skips entirely over the ethical question of approaching someone in the first place and possibly getting them to participate in sex acts they may not have planned to engage in. By jumping straight to the next morning and asking what the moral path forward is from that point, this scenario avoids arguably the most important ethical questions about PUA behaviour.

However I will answer the question as posed to avoid accusations that I am simply avoiding it. From a utilitarian perspective the moral burden is simply to maximise utility, so we need to know what Bob and Alice's utility functions are, and what Bob and Alice should reasonably think the other party's utility function is like.

It might well be that Bob has neither the interest nor the ability to sustain a mutually optimal ongoing relationship with Alice, and in that case the utility-maximising path from that point forward, and hence the ethical option, is for Bob to leave and not contact Alice again. However if Bob knew in advance that this was the case, and had reason to believe that Alice's utility function placed a negative value on participating in a one night stand with a person who was not interested in a long-term relationship, then Bob behaved unethically in getting to this position, since he knowingly brought about a negative-utility outcome for a moral stakeholder.

Comment author: Vaniver 10 November 2011 11:11:37AM 1 point [-]

I don't think I would be making an unreasonable assumption if I assumed that an arbitrarily chosen woman in a bar would most likely have a preference for the internal experience of happiness, mental health, continued life, orgasms and so on and hence that conduct likely to bring about those outcomes for her would produce utility and conduct likely to bring about the opposite would produce negative utility.

Knowing that her weights on those things are positive gets me nowhere. What I need to know are their relative strengths, and this seems like an issue where (heterosexual) individuals are least poised to be able to generalize their own experience. It seems likely that a man could go through life thinking that everyone enjoys one night stands and sleeps great afterwards, and not until reading PUA literature realizes that women often freak out after them.

the female participant is also using seduction techniques (hopefully defined in some more specific sense than just trying to be attractive)

Suppose she flirts, or the equivalent (that is, rather than just seeking general attraction, she seeks targeted attraction at some point). If she never expresses any interest, it's unlikely she and Bob will have sex (outside of obviously unethical scenarios).

this scenario avoids arguably the most important ethical questions about PUA behaviour.

What question do you think is most important?

we need to know what Bob and Alice's utility functions are, and what Bob and Alice should reasonably think the other party's utility function is like.

Suppose Bob and Alice both believe that actions reveal preferences.

Bob behaved unethically in getting to this position since he knowingly brought about a negative-utility outcome for a moral stakeholder.

Suppose Alices enjoy one night stands, and Carols regret one night stands, though they agree to have sex after the first date. When Bob meets a woman, he can't expect her to honestly respond whether she's a Carol or an Alice if he asks her directly. What probability does he need that a woman he seduces in a bar will be an Alice for it to be ethical to seduce women in bars?

As well, if he believes that actions reveal preferences, should he expect that one night stands are a net utility gain or loss for Carols?

Comment author: PhilosophyTutor 11 November 2011 02:19:27PM -2 points [-]

Knowing that her weights on those things are positive gets me nowhere. What I need to know are their relative strengths, and this seems like an issue where (heterosexual) individuals are least poised to be able to generalize their own experience. It seems likely that a man could go through life thinking that everyone enjoys one night stands and sleeps great afterwards, and not until reading PUA literature realizes that women often freak out after them.

Hopefully research like that cited in the OP can help with that. In the meantime we have to do the best we can with what we have, and engage in whatever behaviours maximise the expected utility of all stakeholders based on our existing, limited knowledge.

What question do you think is most important?

I think the most important question is "Is it ethical to obtain sex by deliberately faking social signals, given what we know of the consequences for both parties of this behaviour?". A close second would be "Is it ethical to engage in dominance-seeking behaviour in a romantic relationship?".

Suppose Alices enjoy one night stands, and Carols regret one night stands, though they agree to have sex after the first date. When Bob meets a woman, he can't expect her to honestly respond whether she's a Carol or an Alice if he asks her directly. What probability does he need that a woman he seduces in a bar will be an Alice for it to be ethical to seduce women in bars?

One approach would be to multiply the probability you have an Alice by the positive utility an Alice gets out of a one night stand, and multiply the probability that you have a Carol by the negative utility a Carol gets out of a one night stand, and see which figure was larger. That would be the strictly utilitarian approach to the question as proposed.

If we're allowed to try to get out of the question as proposed, which is poor form in philosophical discussion and smart behaviour in real life, a good utilitarian would try to find ways to differentiate Alices and Carols, and only have one night stands with Alices.

A possible deontological approach would be to say "Ask them if they are an Alice or a Carol, and treat them as the kind of person they present themselves to be. If they lied it's their fault".

The crypto-sociopathic approach would be to say "This is all very complicated and confusing, so until someone proves beyond any doubt I'm hurting people I'll just go on doing what feels good to me".
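The strictly utilitarian comparison described a few paragraphs up (weight each outcome by its probability and compare) can be sketched in a few lines of Python. The probabilities and utility values below are purely illustrative assumptions, not figures from anyone in this thread:

```python
# Expected-utility comparison for the Alice/Carol scenario.
# All numeric weights are illustrative assumptions.

def expected_utility(p_alice, u_alice, u_carol):
    """Expected utility of a one night stand with a randomly met woman,
    given the probability she is an Alice (who enjoys it) versus a Carol
    (who regrets it)."""
    p_carol = 1.0 - p_alice
    return p_alice * u_alice + p_carol * u_carol

# Suppose, purely for illustration, Alices gain +1 unit and Carols lose -2.
assert expected_utility(0.8, 1.0, -2.0) > 0   # mostly Alices: net positive
assert expected_utility(0.5, 1.0, -2.0) < 0   # coin flip: net negative
```

The break-even probability depends entirely on the assumed magnitudes, which is exactly the measurement problem discussed above.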

Comment author: wedrifid 11 November 2011 02:49:57PM *  6 points [-]

I think the most important question is "Is it ethical to obtain sex by deliberately faking social signals, given what we know of the consequences for both parties of this behaviour?".

"Deliberately faking social signals"? But, but, that barely makes any sense. They are signals. You give the best ones you can. Everybody else knows that you are trying to give the best signals that you can and so can make conclusions about your ability to send signals and also what other signals you will most likely give to them and others in the future. That is more or less what socializing is. I suppose blatant lies in a context where lying isn't appropriate and elaborate creation of false high status identities could be qualify - but in those case I would probably use a more specific description.

A close second would be "Is it ethical to engage in dominance-seeking behaviour in a romantic relationship?".

A third would be "could the majority of humans have a romantic relationship without dominance-seeking behavior?" and the fourth: "would most people find romantic relationships anywhere near as satisfying without dominance-seeking behavior?" (My money is on the "No"s.)

Comment author: NancyLebovitz 11 November 2011 03:05:04PM 3 points [-]

One more question: what principles would help establish how much dominance-seeking behavior is enough to break the relationship, or in some other way cause more damage than it's worth, considering that part of dominance is ignoring feedback that it's unwelcome?

Comment author: Vaniver 14 November 2011 04:42:14AM 1 point [-]

I think the most important question is "Is it ethical to obtain sex by deliberately faking social signals, given what we know of the consequences for both parties of this behaviour?".

This question seems malformed. "Deliberately faking social signals" is vague- but is typically not something that's unethical (is it unethical to exaggerate?). "What we know of the consequences" is unclear- what's our common knowledge?

A close second would be "Is it ethical to engage in dominance-seeking behaviour in a romantic relationship?".

Yes.

That would be the strictly utilitarian approach to the question as proposed.

And, of course, you saw the disconnect between your original statement and your new, more correct one.

Right?

If we're allowed to try to get out of the question as proposed, which is poor form in philosophical discussion and smart behaviour in real life, a good utilitarian would try to find ways to differentiate Alices and Carols, and only have one night stands with Alices.

The reason I asked that question is because you put forth the claim that Bob's fault was knowingly causing harm to someone. That's not the real problem, though- people can ethically knowingly cause harm to others in a wide variety of situations, under any vaguely reasonable ethical system. Any system Bob has for trying to determine the difference between Alices and Carols will have some chance of failure, and so it's necessary to use standard risk management, not shut down.

Comment author: PhilosophyTutor 15 November 2011 12:19:29AM 1 point [-]

This question seems malformed. "Deliberating faking social signals" is vague- but is typically not something that's unethical (Is it unethical to exaggerate?). "What we know of the consequences" is unclear- what's our common knowledge?

Rhetorical questions are a mechanism that allows us to get out of making declarative statements, and when you find yourself using them that should be an immediate alert signal to yourself that you may be confused or that your premises bear re-examination.

Deceiving others to obtain advantage over them is prima facie unethical in many spheres of life, and I think Kant would say that it is always unethical. Some role-ethicists would argue that when playing roles such as "salesperson", "advertiser" or "lawyer" that you have a moral license or even obligation to deceive others to obtain advantage but these seem to me like rationalisations rather than coherent arguments from supportable prior principles. Even if you buy that story in the case of lawyers, however, you'd need to make a separate case that romantic relationships are a sphere where deceiving others to obtain advantage is legitimate, as opposed to unethical.

PUA is to a large extent about spoofing social signals, in the attempt to let young, nerdy, white-collar IT workers signal that they have the physical and psychological qualities to lead a prehistoric tribe and bring home meat. The PUA mythology tries to equivocate between spoofing the signals to indicate that you have such qualities and actually having such qualities but I think competent rationalists should be able to keep their eye on the ball too well to fall for that. Consciously and subconsciously women want an outstanding male, not a mediocre one who is spoofing their social signals, and being able to spoof social signals does not make you an outstanding male.

Yes.

Okay. We come from radically different ethical perspectives such that it may be unlikely that we can achieve a meeting of minds. I feel that dominance-seeking in romantic relationships is a profound betrayal of trust in a sphere where your moral obligations to behave well are most compelling.

And, of course, you saw the disconnect between your original statement and your new, more correct one. Right?

Can you point me to the text that you take to be "my original statement" and the text you take to be "my new, more correct statement"? There may be a disconnect but I'm currently unable to tell what text these constructs are pointing to, so I can't explicate the specific difficulty.

The reason I asked that question is because you put forth the claim that Bob's fault was knowingly causing harm to someone. That's not the real problem, though- people can ethically knowingly cause harm to others in a wide variety of situations, under any vaguely reasonable ethical system.

People can ethically and knowingly burn each other to death in a wide variety of situations under any vaguely reasonable ethical system too, so that statement is effectively meaningless. It's a truly general argument. (Yes, I exclude from reasonableness any moral system that would stop you burning one serial killer to death to prevent them bringing about some arbitrarily awful consequence if there were no better ways to prevent that outcome).

Any system Bob has for trying to determine the difference between Alices and Carols will have some chance of failure, and so it's necessary to use standard risk management, not shut down.

We agree completely on that point, but it seems to me that a substantial subset of PUA practitioners and methodologies are aiming to deliberately increase the risk, not manage it. Their goals are to maximise the percentage of Alices who sleep with the PUA and also to maximise the percentage of Carols who sleep with the PUA.

It doesn't seem unreasonable to go further and say that in large part the whole point of PUA is to bed Carols. Alices are up for a one night stand anyway, so manipulating them to suspend their usual protective strategies and engage in a one night stand with you would be as pointless as peeling a banana twice. It's only the Carols who are not normally up for a one night stand that you need to manipulate in the first place. Hence that subset of PUA is all about maximising the risk of doing harm, not minimising that risk.

(Note that these ethical concerns are orthogonal to, not in conflict with, my equally serious methodological concerns about whether it's rational to think PUA performs better than placebo given the available evidence).

Comment author: lessdazed 11 November 2011 02:36:58PM 1 point [-]

until someone proves beyond any doubt

What about just "until someone proves scientifically"?

Comment author: PhilosophyTutor 12 November 2011 03:26:45AM -2 points [-]

What about just "until someone proves scientifically"?

Even that weaker position still seems incompatible with actually being a utility-maximising agent, since there is prima facie evidence that inducing women to enter into a one-night-stand against their better judgment leads to subsequent distress on the part of the women reasonably often.

A disciple of Bayes and Bentham doesn't go around causing harm up until someone else shows that it's scientifically proven that they are causing harm. They do whatever maximises expected utility for all stakeholders based on the best evidence available at the time.

Note that this judgment holds regardless of the relative effectiveness of PUA techniques compared to placebo. Even if PUA is completely useless, which would be surprising given placebo effects alone, it would still be unethical to seek out social transactions that predictably lead to harm for a stakeholder without greater counterbalancing benefits being obtained somehow.

Comment author: TheOtherDave 09 November 2011 03:40:02PM 0 points [-]

So, this gets at something that frequently confuses me when people start talking about personal utilities.

It seems that if I can reliably elicit the strength of my preferences for X and Y, and reliably predict how a given action will modify the X and Y in my environment, then I can reliably determine whether to perform that action, all else being equal. That seems just as true for X = "my happiness" and Y = "my partner's happiness" as it is for X = "hot fudge" and Y = "peppermint".

But you seem to be suggesting that that isn't true... that in the first case, even if I know the strengths of my preferences for X and Y and how various possible actions lead to X and Y, there's still another step ("adding the utilities") that I have to perform before I can decide what actions to perform. Do I understand you right?

If so, can you say more about what exactly that step entails? That is... what is it you don't know how to do here, and why do you want to do it?

Comment author: Vaniver 09 November 2011 06:11:01PM 1 point [-]

You're missing four letters. Call the strength of your preferences for X and Y A and B, and call your partner's preferences for X and Y C and D. (This assumes that you and your partner both agree on your happiness measurements.)

I agree there's a choice among available actions which maximizes AX+BY, and that there's another choice that maximizes CX+DY. What I think is questionable is ascribing meaning to (A+C)X+(B+D)Y.

Notice there are an infinite number of A,B pairs that output the same action, and an infinite number of C,D pairs that output the same action, but when you put them together your choice of A,B and C,D pairs matters. What scaling to choose is also a point of contention, since it can alter actions.
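The scaling problem just described can be made concrete with a toy example. The weights and options below are illustrative assumptions: rescaling an individual's (A, B) pair never changes that individual's choice, but it can change the choice selected by the summed function (A + C)X + (B + D)Y:

```python
# Sketch of the scaling ambiguity: (A, B) and (kA, kB) rank an
# individual's options identically, yet the "summed" joint utility
# can rank them differently depending on which scaling is chosen.
# All weights here are illustrative assumptions.

def best_option(weights, options):
    """Pick the (x, y) option maximizing the weighted sum a*x + b*y."""
    a, b = weights
    return max(options, key=lambda xy: a * xy[0] + b * xy[1])

options = [(3, 0), (0, 2)]  # (my happiness, partner's happiness)

# Rescaling my weights never changes *my* choice...
assert best_option((1, 0), options) == best_option((5, 0), options)

# ...but it changes the choice of the summed function.
partner = (0, 3)  # partner's weights: cares only about their own happiness
mine_small, mine_big = (1, 0), (5, 0)
summed = lambda m: (m[0] + partner[0], m[1] + partner[1])
assert best_option(summed(mine_small), options) != best_option(summed(mine_big), options)
```

With my weights scaled small, the summed function (1, 3) picks (0, 2); scaled large, (5, 3) picks (3, 0) - same individual preferences, different joint "optimum".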

Comment author: TheOtherDave 09 November 2011 07:06:19PM 0 points [-]

So, we're assuming here that there's no problem comparing A and B, which means these valuations are normalized relative to some individual scale. The problem, as you say, is with the scaling factor between individuals. So it seems I end up with something like (AX + BY + FCX + FDY), where F is the value of my partner's preferences relative to mine. Yes?

And as you say, there's an infinite number of Fs and my choice of action depends on which F I pick.

And we're rejecting the idea that F is simply the strength of my preference for my partner's satisfaction. If that were the case, there'd be no problem calculating a result... though of course no guarantee that my partner and I would calculate the same result. Yes?

If so, I agree that coming up with a correct value for F sure does seem like an intractable, and quite likely incoherent, problem.

Going back to the original statement... "an ethical rationalist's goals in relationship-seeking should be to seek a relationship that creates maximal utility for both parties" seems to be saying F should approximate 1. Which is arbitrary, admittedly.

Comment author: Vaniver 09 November 2011 10:45:14PM 0 points [-]

And we're rejecting the idea that F is simply the strength of my preference for my partner's satisfaction. If that were the case, there'd be no problem calculating a result... though of course no guarantee that my partner and I would calculate the same result. Yes?

Yes. If you and your partner agree (that is, A/B = C/D), then there's no trouble. If you disagree, though, there's no objectively correct way to determine the correct action.

Going back to the original statement... "an ethical rationalist's goals in relationship-seeking should be to seek a relationship that creates maximal utility for both parties" seems to be saying F should approximate 1. Which is arbitrary, admittedly.

Possibly, though many cases with F=1 seem like things PhilosophyTutor would find unethical. It seems more meaningful to look at A and B.

Comment author: wedrifid 09 November 2011 02:47:05PM 7 points [-]

Given that their methodology is incompatible with scientific reasoning

Not something you have shown (or something that appears remotely credible).

and their attitudes incompatible with maximising global utility for all sentient stakeholders,

Not much better and also not a particularly good reason to exclude an information source from an analysis. (An example of a good reason would be "people say a bunch of prejudicial nonsense for all sorts of reasons and everybody concerned ends up finding it really, really annoying").

Comment author: usedToPost 09 November 2011 09:07:49PM 9 points [-]

Given that their methodology is incompatible with scientific reasoning

They write stuff on their version of ArXiv (called pick-up forums) then they go out and try it, and if it works repeatably it is incorporated into PU-lore.

What definition of science did you have in mind that this doesn't fit?

Comment author: PhilosophyTutor 09 November 2011 10:19:47PM 18 points [-]

There are a significant number of methodological problems with their evidence-gathering.

PUAs don't change just one variable at a time, nor do they keep strict track of what they change and when so that they could run a multivariate regression analysis. Instead they change lots of variables at once. A PUA would advocate that a "beta" change their clothes, scent, social environment(s), social signalling strategies and so forth all at once and see if their sexual success rate changed. However, even if this works, you don't know which changes did what.

The people doing the observation are the same people conducting the experiment which is obviously incompatible with proper blinding.

The people reporting the data stand to gain social status in the PUA hierarchy if they report success, and hence have an incentive to misreport their actual data. When a PUA reports that they successfully obtained coitus on one out of six attempts using a given methodology it is reasonable to suspect that some such reports come from people who actually took sixteen attempts, or from people who failed to obtain coitus given sixteen attempts and went home to angrily masturbate and then post on a PUA forum that they had obtained success. We can't tell what the real success rate is without observing PUAs in the wild.

Even assuming honest reporting it seems intuitively likely that PUAs, like believers in psychic powers, are prone to reporting their hits and forgetting their misses. It's a known human trait to massage our internal data this way and barring rigorous methodological safeguards it's a safe assumption that this will bias any reported results.

There's no comparison with a relevant base rate, which is a classic example of the base rate fallacy in action. We don't know what the success rate for a well-groomed, well-spoken person who does not employ PUA social signalling tactics is compared with a similarly groomed and comported person using PUA social signalling tactics, for example.

A successful PUA was mentioned as having obtained coitus ~300 times out of ~10 000 approaches. That's useless unless we know what success rate other methodologies would have produced. In any case people aren't naturally such good statisticians that they can detect variations in frequency in a phenomenon that occurs one time in 33 at best with a sample size for a given experiment in the tens at most.
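The point about small samples can be illustrated numerically (the 3% rate comes from the ~300-in-~10,000 figure above; the doubled 6% rate is invented for comparison): with only a few dozen approaches, the observed outcomes under the two rates are nearly indistinguishable.

```python
# How often would an informal "experiment" of 30 approaches yield 0-2
# successes, under a 3% true success rate versus a doubled 6% rate?

from math import comb

def binom_pmf(k, n, p):
    """Probability of exactly k successes in n independent trials."""
    return comb(n, k) * p ** k * (1 - p) ** (n - k)

n = 30  # one informal "experiment" worth of approaches
for p in (0.03, 0.06):
    prob_low = sum(binom_pmf(k, n, p) for k in (0, 1, 2))
    print(f"p = {p}: P(at most 2 successes) = {prob_low:.2f}")
```

Both rates put the bulk of their probability on the same handful of outcomes (0 to 2 successes), so nothing in a sample of thirty approaches reliably tells you whether your methodology doubled your odds or did nothing at all.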

PUA mythology seems to me to have built-in safeguards against falsifiability. If a woman rejects a PUA then it can be explained away as her being "entitled" or "conflicted" or something similar. If a woman chooses a "beta" over a PUA then it can be explained away in similar terms or by saying that she has low self-esteem and doesn't think she is worthy of an "alpha", and/or postulating that if an "alpha" came along she would of course engage in an extra-marital affair with the "alpha". As long as the PUAs are obtaining sex some of the time, or are claiming they are doing so, their theories aren't falsifiable.

We shouldn't trust a PUA's reported opinion about their ability to obtain sex more often than chance any more than we should trust a claimed psychic's reported opinion about their ability to predict the future more often than chance. Obviously our prior probability that they are reporting true facts about the universe should be higher for the PUA since their claims do not break the laws of physics, but their testimony should not give us strong reason to shift our prior.

Comment author: steven0461 10 November 2011 02:31:51AM 5 points [-]

You're assuming that there's no feedback other than a single yes/no bit per approach.

Comment author: pjeby 09 November 2011 11:36:34PM 2 points [-]

PUA mythology seems to me to have built-in safeguards against falsifiability. ... As long as the PUAs are obtaining sex some of the time, or are claiming they are doing so, their theories aren't falsifiable.

Note that this may be a feature, not a bug: a PUA with unwavering belief in their method will likely exude more confidence, regardless of the method employed.

I remember one pickup guru describing how when he was younger, he'd found this poem online that was supposed to be the perfect pickup line... and the first few times he used it, it was, because he utterly believed it would work. Later, he had to find other methods that allowed him to have a similar level of belief.

As has been mentioned elsewhere on LW, belief causes people to act differently -- often in ways that would be difficult or impossible to convincingly fake if you lacked the belief. (e.g. microexpressions, muscle tension, and similar cues)

To put it another way, even the falsifiability of PUA theory is subject to testing: i.e., do falsifiable PUA theories work better or worse than unfalsifiable ones? If unfalsifiable ones produce better results, then it's a feature, not a bug. ;-)

Comment author: PhilosophyTutor 09 November 2011 11:51:12PM 6 points [-]

Only in the same sense that the placebo effect is a "feature" of evidence-based medicine.

It's okay if evidence-based medicine gets a tiny, tiny additional boost from the placebo effect. It's good, in fact.

However when we are trying to figure out whether or not a treatment works we have to be absolutely sure we have ruled out the placebo effect as the causative factor. If we don't do that then we can never find out which are the good treatments that have a real effect plus a placebo effect, and which are the fake treatments that only have a placebo effect.

Only if it turned out that method absolutely, totally did not matter and only confidence in the method mattered would it be rational to abandon the search for the truth and settle for belief in an unfalsifiable confidence-booster. It seems far more likely to me that there will in fact be approach methods that work better than others, and that only by disentangling the confounding factor of confidence from the real effect could you figure out what the real effect was and how strong it was.

Comment author: pjeby 10 November 2011 12:03:27AM 1 point [-]

It seems far more likely to me that there will in fact be approach methods that work better than others, and that only by disentangling the confounding factor of confidence from the real effect could you figure out what the real effect was and how strong it was.

This really, really underestimates the number of confounding factors. For any given man, the useful piece of information is what method will work for him, for women that:

  1. Would be happy with him, and
  2. He would be happy with

(Where "with" is defined as whatever sort of relationship both are happy with.)

This is a lot of confounding factors, and it's pretty central to the tradeoff described in this post: do you go for something that's inoffensive to lots of people, but not very attractive to anyone, or something that's actually offensive to most people, but very attractive to your target audience?

You can't do group randomized controls with something where individuality actually does count.

This is especially true of PUA advice like, "be in the moment" and "say something that amuses you". How would you test these bits of advice, for example, while holding all other variables unchanged? By their very definition, they're going to produce different behavior virtually every time you act on them.

Comment author: PhilosophyTutor 10 November 2011 01:01:44AM 2 points [-]

There are two classes of claim here we need to divide up, but they share a common problem. First the classes, then the problem.

The first class is claims that are simply unfalsifiable. If there is no way even in theory that a proposition could be confirmed or falsified then that proposition is simply vacuous. There is nothing to say about it except that rational agents should discard the claim as meaningless and move on. If any element of PUA doctrine falls into this category then for LW purposes we should simply flag it as unfalsifiable and move on.

The second class is claims that are hard to prove or disprove because there are multiple confounding factors, but which with proper controls and a sufficiently large sample size we could in theory confirm or disconfirm. If a moderate amount of cologne works better than none at all or a large amount of cologne, for example, then if we got enough men to approach enough women then eventually if there's a real effect we should be able to get a data pool that shows statistical significance despite those confounding effects.
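To put a rough number on "enough men approaching enough women" (effect sizes invented for illustration), the standard two-proportion sample-size formula gives the scale of data collection required:

```python
# Back-of-the-envelope power calculation for the cologne example:
# approaches needed per condition to detect a lift from a 3% to a 5%
# success rate, two-sided, at the 5% significance level with 80% power.

from math import sqrt, ceil

def n_per_arm(p1, p2, z_alpha=1.96, z_beta=0.84):
    """Sample size per group for a two-proportion z-test (normal approx.)."""
    pbar = (p1 + p2) / 2
    numerator = (z_alpha * sqrt(2 * pbar * (1 - pbar))
                 + z_beta * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return ceil(numerator / (p1 - p2) ** 2)

print(n_per_arm(0.03, 0.05))  # on the order of 1,500 approaches per condition
```

That is thousands of carefully logged approaches for a single two-condition comparison of a single variable, which is exactly the arduous legwork nobody appears to have done.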

The common problem both classes of claims have is that a rationalist is immediately going to ask someone who proposes such a claim "How do you think you know this?". If a given claim is terribly difficult to confirm or disconfirm, and nobody has yet done the arduous legwork to check it, it's very hard to see how a rational agent could think it is true or false. The same goes except more strongly for unfalsifiable claims.

For a PUA to argue that X is true, but that X is impossible to prove, is to open themselves up to the response "How do you know that, if it's impossible to prove?".

Comment author: pjeby 10 November 2011 01:57:09AM *  3 points [-]

If there is no way even in theory that a proposition could be confirmed or falsified then that proposition is simply vacuous. There is nothing to say about it except that rational agents should discard the claim as meaningless and move on. If any element of PUA doctrine falls into this category then for LW purposes we should simply flag it as unfalsifiable and move on.

Sure... as long as you separate predictions from theory. When you reduce a PUA theory to what behaviors you expect someone believing that theory would produce, or what behaviors, if successful, would result in people believing such theories, you then have something suitable for testing, even if the theory is nonsensical on its face.

Lots of people believe in "The Secret" because it appears to produce results, despite the theory being utter garbage. But then, it turns out that some of what's said is consistent with what actually makes people "luckier"... so there was a falsifiable prediction after all, buried under the nonsense.

If a group of people claim to produce results, then reduce their theory to more concrete predictions first, then test that. After all, if you discard alchemy because the theory is bunk, you miss the chance to discover chemistry.

Or, in more LW-ish speak: theories are not evidence, but even biased reports of actual experience are evidence of something. A Bayesian reductionist should be able to reduce even the craziest "woo" into some sort of useful probabilistic information... and there's a substantial body of PUA material that's considerably less "woo" than the average self-help book.

In the simplest form, this reduction could be just: person A claims that they were unsuccessful with women prior to adopting some set of PUA-trained behaviors. If the individual has numbers (even if somewhat imprecise) and there are a large number of people similar to person A, then this represents usable Bayesian evidence for that set of behaviors (or the training itself) being useful to persons with similar needs and desires as person A.
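This kind of update can be sketched in a toy calculation (all probabilities invented): even reports we suspect are inflated count as evidence, so long as a glowing report is at least somewhat more likely if the training works than if it doesn't.

```python
# Toy Bayesian update on self-reported success after PUA training.

def update(prior, p_report_if_works, p_report_if_not):
    """Posterior P(training works) after one favorable report."""
    num = prior * p_report_if_works
    return num / (num + (1 - prior) * p_report_if_not)

posterior = 0.10      # skeptical prior that the training helps
p_if_works = 0.8      # chance of a glowing report if it really works
p_if_not = 0.6        # still high if it doesn't: bragging, selective memory

for _ in range(5):    # five favorable reports, treated as independent
    posterior = update(posterior, p_if_works, p_if_not)

print(round(posterior, 3))  # the 0.10 prior climbs to roughly 0.32
```

Because the likelihood ratio (0.8/0.6) is small, each report moves the posterior only modestly, and what it supports is "something about the intervention helped person A", not any particular PUA theory.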

This is perfectly usable evidence that doesn't require us to address the theory or its falsifiability at all.

Now, it is not necessarily evidence for the validity of person A's favorite PUA theory!

Rather, it is evidence that something person A did differently was helpful for person A... and it remains an open question to determine what actually caused the improvement. For example, could it simply be that receiving PUA training somehow changes people? That it motivates them to approach women repeatedly, resulting in more confidence and familiarity with approaching women? Any number of other possible factors?

In other words, the actual theory put forth by the PUAs doing the teaching shouldn't necessarily be at the top of the list of possibilities to investigate, even if the teaching clearly produces results...

And using theory-validity as a screening method for practical advice is pretty much useless, if you have "something to protect" (in LW speak). That is, if you need a method that works in an area where science is not yet settled, you cannot afford to discard practical advice on the basis of questionable theory: you will throw out way too much of the available information. (This applies to the self-help field as much as PUA.)

Comment author: PhilosophyTutor 10 November 2011 02:16:42AM -1 points [-]

I'm perfectly happy to engage with PUA theories on that level, but the methodological obstacles to collecting good data are still the same. So the vital question is still the same, which is "How do these people think they know these things?".

The only difference is that instead of addressing the question to the PUA who believes specific techniques A, B and C bring about certain outcomes, we address it to the meta-PUA who believes that although techniques A, B and C are placebos, belief in their efficacy has measurable effects.

However PUA devotees might not want to go down this argumentative path because the likely outcome is admitting that much of the content on PUA sites is superstition, and that the outcomes of the combined arsenal of PUA tips and techniques cannot currently be distinguished from the outcomes of a change of clothes, a little personal grooming and asking a bunch of women to go out with you.

PUA devotees like to position themselves as gurus with secret knowledge. If it turns out that the entire edifice is indistinguishable from superstition then they would be repositioned as people with poor social skills and misogynist world-views who reinvented a very old wheel and then constructed non-evidence-based folk beliefs around it.

So depending on the thesis you are arguing for, it might be safer to argue that PUA techniques do have non-placebo effects.