thomblake comments on Rational Romantic Relationships, Part 1: Relationship Styles and Attraction Basics - Less Wrong

48 Post author: lukeprog 05 November 2011 11:06AM


Comment author: thomblake 16 November 2011 03:25:15PM *  0 points [-]

Can I thus generalize your objection that the optimal course of action for achieving X is impossible to discuss sans ethics in the first analysis?

Yes. Discussing the optimal course of action for achieving X is absolutely under the purview of ethics. Else you're not really finding what's optimal. Editing grandparent.

ETA: Leaving the first 'PUA', since the fact that it is about courses of action motivates the rest.

Comment author: [deleted] 16 November 2011 03:49:04PM *  3 points [-]

I agree that finding the optimal course of action for humans doesn't mean much if it doesn't include ethics. But in order to do that, humans often construct and reason within systems that don't include ethics in their optimization criteria.

There is a sometimes subtle but important difference between thinking and discussing "this is the optimal course of action when optimizing for X and only X" and saying "you obviously should be optimizing for X and only X."

I argue that the former is sometimes a useful tool for the latter, because it allows one to survey how the space of possible actions differs as you take away or add an axiom to your goals. Without it, it is impossible to think about what people who might have such axioms would do, or how dangerous or benign to your goals the development of such goal-seeking systems might be. You need, in essence, to know the opportunity costs of many of your axioms and how you will measure up, either in a game-theoretic sense (because you may find yourself in a conflict with such an agent and need to assess its capabilities and strategic options) or in an evolutionary sense (where you wish to understand how much fitness your values have in the set of all possible values, and how concerned you need to be with evolution messing up your long-term plans).

In short, I think that it is generally not unethical to think about how a specific hypothetical unethical mind would think. It may be risky in some marginal cases, but it is also potentially very rewarding in expected utility.

One can say that while this is fine in theory, in practice it is quite risky for people, with their poor-quality minds. But let me point out that people are generally biased against stabbing 10 people and similar unpleasant courses of action. One can perhaps say that a substantial minority isn't, and that self-styled ethical agents using their cognitive capacity to emulate the thinking of (in their judgement) unethical agents, and sharing that knowledge willy-nilly with others, will leave unethical agents with costs low enough and efficiency great enough to cancel out or overwhelm the gains of the "ethical agents".

This, however, seems to lead towards a generalized argument against all rationality and all sharing of knowledge, because all of it involves "morally constrained" agents potentially sharing the fruits of their cognitive work with less constrained agents who then outcompete them in the struggle to order the universe into certain states. I maintain this is a meaningless fear: unless there is good evidence that sharing particular knowledge (say, schematics for a nuclear weapon or death ray) or rationality-enhancing techniques will cause more harm than good, one can rely on more people being biased against doing harmful things than not, and thus using the knowledge for non-harmful purposes. And this forum is a pretty selected group; I would argue the potential for abuse among its readers is much lower than average. Also, how in the world are we supposed to be concerned about nuclear weapons or death rays if we have no good idea whether they are even possible? Can you ethically condemn, in strong terms, the construction of a non-functioning death ray? Is it worth invading a country to stop the construction of a non-functioning death ray?

And note that at this point I'm already blowing the risks way out of proportion, because quite honestly the disutility from a misused death ray is orders of magnitude larger than anything that can arise from what amounts to some unusually practical tips on improving one's social life.

Comment author: thomblake 16 November 2011 03:55:09PM *  0 points [-]

But humans in order to do that often construct and reason within systems that don't include ethics in their optimization criteria.

How can something not include "ethics" in its "optimization criteria"? Do you just mean that you're looking at a being with a utility function that does not include the putative human universals?

ETA: Confusion notwithstanding, I generally agree with the parent.

EDIT: (responding to edits)

This however seems to lead towards a generalize argument against all rationality and sharing of knowledge, because all of it involves "morally constrained" agents potentially sharing the fruits of their cognitive work with less constrained agents who then out compete them in the struggle to order the universe into certain states.

I actually wasn't thinking anything along those lines.

people are generally biased against stabbing 10 people and similar unpleasant courses of action

Sure, but people do unhealthy / bad things all the time, and are biased in favor of many of them. I'm not supposing that someone might "use our power for evil" or something like that. Rather, I think we should include our best information.

A discussion of how best to ingest antifreeze should not go by without someone mentioning that it's terribly unhealthy to ingest antifreeze, in case a reader didn't know that. Antifreeze is very tasty and very deadly, and children will drink a whole bottle if they don't know any better.

Comment author: [deleted] 16 November 2011 04:34:07PM *  3 points [-]

Sure, but people do unhealthy / bad things all the time, and are biased in favor of many of them. I'm not supposing that someone might "use our power for evil" or something like that. Rather, I think we should include our best information.

Our disagreement seems to boil down to:

  • A = the net cost of silly, biased human brains letting should cloud their assessment of is.
  • B = the net cost of silly, biased human brains letting is cloud their assessment of should.

Statement: among Less Wrong readers, P(A > B) > P(B > A).

I say TRUE. You say FALSE.

Do you (and the readers) agree with this interpretation of the debate?

Comment author: thomblake 16 November 2011 04:39:49PM -2 points [-]

Do you (and the readers) agree with this interpretation of the debate?

I don't.

My point is that a discussion of PUA, by its nature, is a discussion of "should". The relevant questions are things like "How does one best achieve X?" Excluding ethics from that discussion is wrong, and probably logically inconsistent.

I'm actually still a bit unclear on what you are referring to by this "letting should cloud their assessment of is" and its reverse.

Comment author: [deleted] 16 November 2011 04:54:50PM *  3 points [-]

Should and is assessed as they should be:

  • I have a good map, it shows the best way to get from A to B and ... also C. I shouldn't go to C, it is a nasty place.

Should and is assessed as they unfortunately often are:

"letting should cloud their assessment of is"

  • I don't want to go to C so I shouldn't draw out that part around C on my map. I hope I still find a good way to B and don't get lost.

and its reverse.

  • I have a good map, it shows the best way to get from A to B and ... also C. Wow, C's really nearby, let's go there!
Comment author: thomblake 16 November 2011 05:01:38PM *  0 points [-]

Aha. I agree. Do people really make that mistake a lot around here?

Also note that 'should' goes on the map too. Not just in the "here be dragons" sense, but also indicated by the characterization that the way is "best".

Comment author: [deleted] 16 November 2011 05:05:27PM *  2 points [-]

Do people really make that mistake a lot around here?

There is a whole host of empirically demonstrated biases in humans that work in these two directions under different circumstances. LWers may be aware of many of them, but they are far from immune.

Also note that 'should' goes on the map too. Not just in the "here be dragons" sense, but also indicated by the characterization that the way is "best".

Agreed, but should is coloured in with different ink on the map than is. I admit mapping should can prove to be as much of a challenge as mapping is.

I see my proposal of two quarantined threads as a proposal to stop messing up the map by all of us drawing with the same colour at the same time: first draw out "is" in black, and then, once the colour is dry, add in "should" with red so we don't forget where we want to go. Then use that as our general-purpose map, and update both red and black as we go along our path and new, previously unavailable empirical evidence meets our eyes.

Comment author: thomblake 16 November 2011 09:41:54PM 1 point [-]

I see my proposal of two quarantined threads as a proposal to stop messing up the map by all of us drawing with the same colour at the same time: first draw out "is" in black, and then, once the colour is dry, add in "should" with red so we don't forget where we want to go. Then use that as our general-purpose map, and update both red and black as we go along our path and new, previously unavailable empirical evidence meets our eyes.

I didn't point this out before, but this is actually a good argument in favor of the 'ethics later' approach. It makes no sense to start drawing paths on your map before you've filled in all of the nodes. (Counterargument: assume stochasticity / a non-fully-observable environment).

Also, if this technique actually works, it should be able to be applied to political contexts as well. PUA is a relatively safer area to test this, since while it does induce mind-killing (a positive feature for purposes of this test) it does not draw in a lot of negative attention from off-site, which is one of the concerns regarding political discussion.

I am majorly in favor of researching ways of reducing/eliminating mind-killing effects.

Comment author: thomblake 16 November 2011 05:11:22PM 1 point [-]

I see my proposal of two quarantined threads as a proposal to stop messing up the colours: first draw out "is" in black, and then, once the colour is dry, add in "should" with red so we don't forget where we want to go. Then use that as our general-purpose map.

So an analogous circumstance would be: if we were constructing a weighted directed graph representing routes between cities, we'd first put in all the nodes and connections and weights in black ink, and then plan the best route and mark it in red ink?

If so, that implies the discussion of "PUA" would include equal amounts of "X results in increased probability of the subject laughing at you" and "Y results in increased probability of the subject slapping you" and "Z results in increased probability of the subject handing you an aubergine".

If the discussion is not goal-directed, I don't see how it could be useful, especially for such a large space as human social interaction.
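The weighted-graph framing above can be sketched concretely. This is a minimal illustration in Python, with city names, connections, and weights invented for the example: the "is" step records the factual map, and only a separate "should" step plans the best route over it.

```python
import heapq

# "Is" (black ink): the factual map -- nodes, edges, and weights.
# These cities and distances are invented for illustration.
roads = {
    "A": {"B": 5, "C": 2},
    "B": {"D": 1},
    "C": {"B": 1, "D": 7},
    "D": {},
}

def best_route(graph, start, goal):
    """'Should' (red ink): Dijkstra's algorithm over the finished map."""
    queue = [(0, start, [start])]  # (cost so far, current node, path taken)
    seen = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for neighbour, weight in graph[node].items():
            if neighbour not in seen:
                heapq.heappush(queue, (cost + weight, neighbour, path + [neighbour]))
    return None  # no route exists

print(best_route(roads, "A", "D"))  # -> (4, ['A', 'C', 'B', 'D'])
```

The point of the separation is that changing the goal (routing to C instead of D, say) re-runs only the red-ink step; the black-ink facts stay untouched.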

Comment author: [deleted] 16 November 2011 05:14:52PM *  1 point [-]

If the discussion is not goal-directed

But it would be goal directed: "To catalogue beliefs and practices of PUAs and how well they map to reality."

Without breaking the metaphor: we are taking someone else's map and comparing it to our own. Our goal is to update our map wherever their map of reality (black ink) is clearly better, or at the very least to learn that their map sucks. And to make this harder, we aren't one individual but a committee comparing the two maps. Worse, some of us love their black ink more than their red one, and vice versa, and can't shut up about it. Let's set up separate work meetings for the two issues, so we know that black-ink arguments have no place in meeting number 2, and that anyone making them there is indulging his interests at the expense of good map-making.

The reason why I favour black first is that, going red first, we risk drawing castles in the clouds rather than realizable destinations.

Comment author: [deleted] 16 November 2011 04:49:47PM *  1 point [-]

I don't.

I didn't mean to misrepresent your position or the debate so far. I was just trying to communicate how I'm seeing the debate. Hope you didn't take my question the wrong way! :)

Comment author: thomblake 16 November 2011 04:50:54PM *  2 points [-]

Hope you didn't take my question the wrong way!

Not at all (I think).

Comment author: wedrifid 16 November 2011 04:02:44PM *  3 points [-]

Ethics is a whole different thing from putative human universals. Very few things that I would assert as ethics would I claim to be human universals. "Normative human essentials" might fit in that context. (By way of illustration, we all likely consider 'Rape Is Bad' an essential ethical value, but I certainly wouldn't say that's a universal human thing. Just that the ethics of those who don't think Rape Is Bad suck!)

Comment author: [deleted] 16 November 2011 04:09:31PM 3 points [-]

Different ethical systems are possible to implement even on "normal" human hardware (which is far from the set of all humans!). We have ample evidence in favour of this hypothesis. I think Westerners in particular seem especially apt to forget to think of this when convenient.

Comment author: wedrifid 16 November 2011 04:12:50PM 1 point [-]

Different ethical systems are possible to implement even on "normal" human hardware (which is far from the set of all humans!).

I think I agree with what you are saying, but I can't be sure. Could you clarify it for me a tad? (It seems like a word is missing or something.)

I think Westerners in particular seem especially apt to forget to think of this when convenient.

Westerners certainly seem to forget this type of thing. Do others really not so much?

Comment author: [deleted] 16 November 2011 04:47:43PM *  4 points [-]

I think I agree with what you are saying, but I can't be sure. Could you clarify it for me a tad? (It seems like a word is missing or something.)

Humans can and do value different things. Sometimes, even when they start out valuing the same things, different experiences/circumstances lead them to systematize this into different, but similarly consistent, ethical systems.

Westerners certainly seem to forget this type of thing. Do others really not so much?

Modern Westerners often identify their values as being the product of reason, which must be universal. While this isn't exactly rare, it is, I think, less pronounced in most human cultures throughout history. A more common explanation than "they just haven't sat down, thought about it, and seen we are right yet" is "they are wicked" (i.e. they have different values). Which obviously has its own failure modes, just not this particular one.

Comment author: Emile 16 November 2011 05:03:39PM 2 points [-]

It would be interesting to trace the relationship between the idea of universal moral value and the idea of universal religion. Moldbug argues that the latter pretty much spawned the former (that's probably a rough approximation), though I don't trust his scholarship on the history of ideas that much. I don't know to what extent the ancient Greeks and Romans and Chinese and Arabs considered their values to be universal (though apparently Roman legal scholars had the concept of "natural law", which they got from the Greeks, and which seems to map pretty closely to that idea, independently of Christianity and related universal religions).

Comment author: wedrifid 16 November 2011 04:54:40PM 1 point [-]

Thank you. And yes, I wholeheartedly agree!

Comment author: TheOtherDave 16 November 2011 04:12:27PM 1 point [-]

I suspect you meant "I certainly wouldn't say"... confirm?

Comment author: wedrifid 16 November 2011 04:20:18PM 0 points [-]

Confirm.

Comment author: thomblake 16 November 2011 04:13:07PM 0 points [-]

That's not very helpful to me.

Ethics can arguably be reduced to "what is my utility function?" and "how can I best optimize for it?" So for a being not to include ethics in its optimization criteria, I'm confused what that would mean. I had guessed Konkvistador was referring to some sort of putative human universals.

I'm still not sure what you mean when you say their ethics suck, or what criteria you use when alleging something as ethics.

Comment author: wedrifid 16 November 2011 04:19:38PM 3 points [-]

That's not very helpful to me.

Ethics aren't about putative human universals. I'm honestly not sure how to most effectively explain that since I can't see a good reason why putative human universals came up at all!

I had guessed Konkvistador was referring to some sort of putative human universals.

Cooperative tribal norms seem more plausible. Somebody thinking their ethics are human universals requires that they, well, be totally confused about what is universal about humans.

Comment author: thomblake 16 November 2011 04:25:12PM -1 points [-]

Somebody thinking their ethics are human universals requires that they, well, are totally confused about what is universal about humans.

Not at all. Many prominent ethicists and anthropologists and doctors agree that there are human universals. And any particular fact about a being has ethical implications.

For example, humans value continued survival. There are exceptions and caveats, but this is something that occurs in all peoples and amongst nearly all of the population for most of their lives (that last bit you can just about get a priori). Also, drinking antifreeze will kill a person. Thus, "one should not drink antifreeze without a damn good reason" is a human universal.

Comment author: lessdazed 16 November 2011 04:35:07PM *  0 points [-]

prominent ethicists

If these people don't frequently disagree with others about ethics, they become unemployed.

This group's opinions on the subject are less correlated with reality than most groups'.

ETA: I have no evidence for this outside of some of their outlandish positions, the reasons for which I have some guesses for but have not looked into, and this is basically a rhetorical argument.

Comment author: thomblake 16 November 2011 04:54:12PM 1 point [-]

Actually, if anything I think I'd be more likely to believe that the actual job security ethicists enjoy tends to decrease their opinions' correlation with reality, as compared to the beliefs about their respective fields of others who will be fired if they do a bad job.

Comment author: thomblake 16 November 2011 04:43:05PM *  0 points [-]

If these people don't frequently disagree with others about ethics, they become unemployed.

I don't believe this, and am not aware of any evidence that it's the case.

If it was intended to be merely a formal argument, compare:

If prominent mathematicians don't frequently disagree with others about math, they become unemployed.

This group's opinions on the subject are less correlated with reality than most groups'.

ETA: Note that many prominent ethicists are tenured, and so don't get fired for anything short of overt crime.

Comment author: wedrifid 16 November 2011 04:53:10PM 1 point [-]

If it was intended to be merely a formal argument, compare:

If prominent mathematicians don't frequently disagree with others about math, they become unemployed.

This group's opinions on the subject are less correlated with reality than most groups'.

I thought you had an overwhelming point there until my second read. Then I realized that the argument would actually be reasonable if the premise weren't bogus. In fact, it would be much stronger than the one about ethicists: if mathematicians did have to constantly disagree with other people about maths, it would be far better to ask an intelligent amateur about maths than a mathematician.

You can't use an analogy to an argument which uses a false premise that would support a false conclusion as a reason why arguments of that form don't work!

Comment author: wedrifid 16 November 2011 04:37:36PM *  0 points [-]

Which reality was it that the ethicists were not correlated with again? Oh, right, making factual claims about universals of human behavior. I don't disbelieve you.

Comment author: wedrifid 16 November 2011 04:31:07PM 0 points [-]

Somebody thinking their ethics are human universals requires that they, well, are totally confused about what is universal about humans.

Not at all. Many prominent ethicists and anthropologists and doctors agree that there are human universals. And any particular fact about a being has ethical implications.

This does not refute!

Comment author: thomblake 16 November 2011 04:32:46PM *  -1 points [-]

This does not refute!

No duh. Though it does suggest.

Comment author: wedrifid 16 November 2011 04:34:10PM *  0 points [-]

No duh.

Was the "Not at all." some sort of obscure sarcasm?

EDIT: At time of this comment the quote was entirety of parent - although I would have replied something along those lines anyway I suppose.

Comment author: DoubleReed 16 November 2011 04:26:06PM 0 points [-]

What would be an example of a "Normative human essential"?

Comment author: [deleted] 16 November 2011 04:37:11PM *  0 points [-]

Killing young children is bad.

Comment author: thomblake 16 November 2011 04:31:54PM 0 points [-]

My guess is you wanted another example?

Comment author: DoubleReed 16 November 2011 04:51:05PM -1 points [-]

Yeah, Konkvistador supplied it well.

Comment author: [deleted] 16 November 2011 04:10:32PM *  0 points [-]

Sorry for posting the first few paragraphs and then immediately editing to add the later ones. It was a long post and I wanted to stop at several points but I kept getting hit by "one more idea/comment/argument" moments.