
Comment author: Lumifer 31 May 2016 05:49:47PM 0 points [-]

it looks like you're making a complaint about the logical structure of the criticism

Nope. I'm making a guess that this particular argument looked like a good soldier and so was sent into battle; a mirror-image argument would also look like a good soldier and would also be sent into the same battle. Logical structure is an irrelevant detail X-/

Comment author: Houshalter 31 May 2016 05:47:09PM 0 points [-]

Right, but what about the people who say they strongly believe in cryonics, have income high enough to afford it (and the insurance isn't that expensive actually), yet haven't signed up? I.e. "cryocrastinators". There are a lot of those on the survey results every year.

I believe this was the argument used: that Lesswrongers aren't very instrumentally rational, or good at actually getting things done. Again, I can't find the post in question; it's possible it was deleted.

Comment author: Lumifer 31 May 2016 05:46:10PM *  0 points [-]

It's not a media issue. Think about how much empathy and attention Jesus and his army of saints consumed X-)

But generally speaking, I don't buy the "empathizing with your neighbor gives them 10 points of utility, but doesn't give you anything" assertion. That's not how human interaction works.

Comment author: V_V 31 May 2016 05:02:55PM *  0 points [-]

If you know that it is a false memory then the experience is not completely accurate, though it may be perhaps more accurate than what human imagination could produce.

Comment author: skeptical_lurker 31 May 2016 04:56:46PM 0 points [-]

Thanks! (and thanks to sixesandsevens)

Comment author: moridinamael 31 May 2016 04:48:30PM *  0 points [-]
Comment author: gjm 31 May 2016 04:35:17PM 0 points [-]

That's no-win given that ideas generally held on LW imply that we should sign up for cryonics.

There's nothing necessarily unfair about that. Suppose some group's professed beliefs imply that the sun goes around the earth; then you may say that members of the group are inconsistent if they aren't geocentrists, and crazy if they are. No win, indeed, but the problem is that their group's professed beliefs imply something crazy.

In this case, I don't think it's clear there is such a thing as LW's professed beliefs, it's not clear that if there are they imply that we should sign up for cryonics, and I don't think signing up for cryonics is particularly crazy. So I'm not exactly endorsing the no-win side of this. But it looks like you're making a complaint about the logical structure of the criticism that would invalidate some perfectly reasonable criticisms of (other?) groups and their members.

Comment author: gjm 31 May 2016 04:28:42PM 1 point [-]

they encourage a lazy style of reading

They enable a lazy style of reading. They also enable the reverse: a style of reading where the reader knows ahead of time that certain of their buttons are about to be pushed, and takes measures in advance to minimize the effect.

For my part, I find both helpful. Sometimes it's clear that something is unlikely to be worth my time to read because it's entirely based on premises I don't accept. Sometimes it's clear that while the author's position is very different from mine, they have interesting things to say that I might find helpful. Sometimes their position is very different from mine and I read on in the hope that if I'm wrong I can be corrected. All of these require different attitudes while reading.

(Of course one can do without. But the more mental effort the author kindly saves me from expending in figuring out whether their piece is worth reading, whether I need to be reading it with an eye to revising my most deeply held beliefs, etc., etc., the more I can give to the actual content of what they've written.)

Comment author: cousin_it 31 May 2016 04:23:07PM 0 points [-]

Fair point. But did the media always draw such a big proportion of the attention we could've spent on each other?

Comment author: gjm 31 May 2016 04:18:16PM 1 point [-]

What people are you talking about?

Schoolteachers teach formulaic writing because (1) it's easy to teach formulas and hard to teach actual clear thinking and good writing style, (2) it's easy to assess writing against a formula and hard to assess actual clear thinking and good writing style, (3) writing to a formula is relatively easy to do, compared with writing well without one, and (4) most schoolchildren's baseline writing skills are so terrible that giving them a formula and saying "do it like this" makes for a considerable improvement.

Schoolteachers suffering from déformation professionnelle may think formulaic writing is good writing. Their pupils may think the same, having been taught that way; hopefully those who end up doing much writing will learn better in due course.

Aside from that -- does anyone actually "think formulaic writing is good writing"? I don't see anyone here saying it is. What I do see is some people saying "this article was hard to read and would have been improved by more indication of where it's going, the sort of thing that writing-by-formula tends to encourage". I hope you can see the difference between "formulaic writing is good" and "this specific element of one kind of formulaic writing is actually often a good idea".

I disagree firmly with their use

Fair enough. But note that buybuydandavis's complaint isn't really "there isn't a thesis statement" but "after a couple of paragraphs, I have no idea where this is going": a thesis statement would be one way to address that, but not the only one. (And your own articles on LW, thesis statements or no, seem to me to have the key property BBDD is complaining casebash's lacks: it is made clear from early on where the article's going, and there are sufficient signposts to keep the reader on track. Possible exception: "The Winding Path", which you say was an aesthetic experiment.)

Comment author: Lumifer 31 May 2016 04:16:57PM *  1 point [-]

I had previously heard criticism of Lesswrong, that if we really believe in cryonics, it's irrational that so few are signed up.

It's a standard no-win situation: if too few have signed up, LW people are irrational; and if many have signed up, LW is a cult.

Comment author: Lumifer 31 May 2016 04:06:57PM 0 points [-]

Consider how old and universal story-telling is. Humans have felt empathy for fictional characters since forever.

Comment author: Ilverin 31 May 2016 04:05:12PM 1 point [-]
Comment author: gjm 31 May 2016 04:04:23PM 0 points [-]

So you [...]

Nope.

I noticed my defensive reflexes doing their thing. Then I (1) continued to read the article while dealing appropriately with those defensive reflexes, and (2) mentioned the uprising of those reflexes as evidence that the author had not successfully made a non-political-mindkilling article out of whatever potentially-mindkilling issue s/he had in mind.

a problem, not an excuse

I'm not sure what you mean by that, but if you mean that you think the original article killed my mind (and that rather than trying to avoid that I just said "politics is the mindkiller so I couldn't help myself" or something) then I invite you to show me some evidence for that.

Comment author: root 31 May 2016 03:16:43PM 0 points [-]

Mainly because I don't read rational fictions. I can't call myself even sufficiently rational so the whole point of a rationalfic would be lost on me. I've read HPMOR. It was nice. I just felt that I've missed something on a different level because it seems (to me) to have a large amount of praise.

Comment author: Dagon 31 May 2016 03:14:10PM -1 points [-]

No. Policy may be about incentives. Politics is mostly about misdirection of attention and taking advantage of tribal instincts to gloss over individual incentives.

Comment author: Bound_up 31 May 2016 03:06:21PM 0 points [-]

Hmm...I would be open to an alternative.

But what I've got in mind is: if someone were suddenly to acquire an extra 100 flaws, this would indeed be a loss; they would feel worse walking down the street as people glance at them, they would lose social status, people would judge them as less honest, kind, intelligent, etc.

So they are losing social status and they're losing other people thinking well of their appearance, and, like any other loss, they will tend to fear it more than they would value gains of equal size.

And that's what people DO experience, in a less dramatic way. You could say, perhaps, that it's because we have the ability to alter our appearance that the problem exists, because sometimes we look better than at other times, and we'll tend to focus on the flaws that make the difference.

Comment author: Lumifer 31 May 2016 02:59:16PM *  0 points [-]

they encourage a lazy style of reading

Laziness is a virtue :-P

There are a great many things available for me to read and I would prefer to figure out whether I want to read a particular piece before finishing it. There are way too many idiots who managed to figure out how a keyboard works.

Comment author: Bound_up 31 May 2016 02:56:23PM 0 points [-]

My experience might add a little support to that.

I know someone who self-perceives below how others perceive them, but who, when pressed, accurately predicts that they will be found attractive by most people.

Unfortunately, this doesn't keep the negative self-perception (whatever level they believe it on) from making them feel bad.

Comment author: OrphanWilde 31 May 2016 02:55:31PM 0 points [-]

Politics is 95% incentives.

Comment author: Bound_up 31 May 2016 02:52:55PM 0 points [-]

Additionally, it was suggested during editing (though I did leave it out) that I talk about the mere-exposure effect, where people like what's familiar.

A full understanding of all the factors going into self-perception would include things which contribute to AND detract from a positive self-perception, with mere-exposure and other effects biasing the answer up, and excessive attention to flaws and probably other phenomena biasing the answer down.

I might imagine we end up with a "net" self-perception, an amalgamation of all the effects. For some people, that net perception might be biased up. Indeed, while I'm very hesitant to draw too many conclusions from the study you provide from the 1800's, it is POSSIBLE that the majority of people have a net self-perception biased up.

Still leaving millions of people, several of whom I know, who could benefit from the ideas in this article, I think.

And if I had to guess, in 1878, people, on average, were probably more satisfied with their appearance than we are now.

Comment author: OrphanWilde 31 May 2016 02:52:12PM -1 points [-]

Considering the responses I observe, I'm going to say - well done. You've made people deeply uncomfortable without giving them a specific reason to be uncomfortable. Granted, in typical Less Wrongian fashion, they'd rather criticize you than take an opportunity to observe their own minds.

Other readers: If you're trying hard to figure out which side you should take based on the real-world analogue you think this could be representing... well, you're mind-killed. Take this as a learning opportunity in how to be less mind-killed. The correct stance is not the stance you already hold, and by trying to find a real-world analogue, you're admitting that your view is being informed, not by rationality, but by tribal politics.

Comment author: Bound_up 31 May 2016 02:50:45PM 0 points [-]

Mmm, good point!

Now, I might imagine, in that scenario, that they still self-perceive as less beautiful because of all the attention they're giving their flaws.

But a side effect of no longer doing so and no longer self-perceiving negatively might be a decrease in their effectiveness in countering those flaws...

Comment author: OrphanWilde 31 May 2016 02:47:47PM 0 points [-]

I'm puzzled as to why people think formulaic writing is good writing.

Thesis statements tell the reader whether they agree with the work or not in advance. I disagree firmly with their use, as they encourage a lazy style of reading in which you decide before you begin reading whether or not you're going to discard the evidence before you, or consider it.

Comment author: Bound_up 31 May 2016 02:47:35PM 0 points [-]

Yes, you make a very good point.

I'm very careful about what exactly I'm recommending.

The gist is that we should all know how beautiful we are.

Which some people interpret as meaning we should all think we're beautiful.

But I think it probably better if we all know exactly how beautiful we are.

Naturally, "beautiful" is not really the point, per se. The idea is, whatever aesthetic you're judging, if you want to embody it (or if other people want you to embody it), then deviations from it will be considered negatively, and loss aversion will focus your attention on those deviations. It applies to whatever you might be judging about yourself.

Comment author: OrphanWilde 31 May 2016 02:44:46PM -2 points [-]

So you noticed your defensive reflexes rising up, and spent effort trying to decide what you should be defensive about, instead of taking the opportunity to try to analyze and relax your defensive reflexes?

"Politics is the mindkiller" is a problem, not an excuse.

Comment author: Lumifer 31 May 2016 02:43:43PM 0 points [-]

Yes and my question was how does he know? If he never had that amount of money available to him, his guesstimate of how much utility he will be able to gain from it is subject to doubt. People do change, especially when their circumstances change.

Comment author: Viliam 31 May 2016 02:22:55PM 0 points [-]

Doesn't this suffer from a similar problem as group selection?

Imagine that the first mutant gets lucky and has 20 children; 10 of them inherited the "help your siblings" genes, and 10 of them did not. Does this give an advantage to the nice children over the non-nice ones? Well, only in the next generation... but then again, some children in the next generation will have the gene and some will not... and this feels like there is always an immediate disadvantage that is supposed to get balanced by an advantage in the next generation, except that the next generation also has an immediate disadvantage...

Uhm, let's reverse it. Imagine that everyone has the "help your siblings" gene, in the most simple version that makes them take a given fraction of their resources and distribute it indiscriminately among all siblings. Now we get one mutant that does not have this gene. Then, this mutant has an advantage over their siblings; the siblings give resources to the mutant, not receiving anything in return. Yeah, the mutant is causing some damage to the siblings, reducing the success of their genes. But we don't care about genes in general here, only about the one specific "don't help your siblings" allele; and this allele clearly benefits from being a free-rider. And then it reproduces with someone else, who is still an altruist, and again 50% of the mutant's children inherit the gene and get an advantage over their siblings.

So we get the group-selectionist situation where families of nice individuals prosper better than mixed families, but within each mixed family the non-nice individuals prosper better. This would need a mathematical model, but I suspect that unless the families are small, geographically isolated, and therefore heavily interbreeding, the nice genes would lose to the non-nice genes.
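The "this would need a mathematical model" part can at least be sketched. Below is a minimal toy version of the sharing scheme described above: altruists give away a fixed fraction of their resources, split evenly among their siblings, and help is worth more to the recipient than it costs the donor (otherwise sharing is zero-sum and altruism can never pay off at the family level). The parameter values `share` and `benefit` are illustrative assumptions, not anything from the comment.

```python
def family_payoffs(genotypes, share=0.3, benefit=2.0, base=1.0):
    """Resources each sibling ends up with.

    genotypes[i] is True if sibling i carries the "help your siblings"
    allele. Altruists give away a fraction `share` of their base
    resources, split evenly among the other siblings; help is worth
    `benefit` times its cost to the recipient.
    """
    n = len(genotypes)
    # altruists pay the cost of sharing; free-riders keep everything
    resources = [base - base * share if g else base for g in genotypes]
    for i in range(n):
        # count altruistic siblings other than i; each sends an equal slice
        donors = sum(1 for j, g in enumerate(genotypes) if g and j != i)
        if donors:
            resources[i] += donors * base * share * benefit / (n - 1)
    return resources

mixed = family_payoffs([True, False])   # one altruist, one free-rider
pure = family_payoffs([True, True])     # all altruists
```

With these numbers the free-rider in the mixed family ends up with 1.6 units against its altruist sibling's 0.7, yet the all-altruist family averages 1.3 per head against the mixed family's 1.15: the free-rider beats its sibling within the family, while all-altruist families do best between families, which is exactly the tension described above.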

Comment author: cousin_it 31 May 2016 02:22:36PM *  0 points [-]

Here's a little example of prisoner's dilemma that I just thought up, which shows how mass media might contribute to modern loneliness:

Let's assume that everyone has a fixed budget of attention and empathy. Empathizing with imaginary Harry Potter gives you 1 point of utility. Empathizing with your neighbor gives them 10 points of utility, but doesn't give you anything, because your neighbor isn't as interesting as Harry Potter. So everyone empathizes with Harry Potter instead of their neighbor, and everyone is lonely.

Does that sound right? What can society do to get out of that trap?
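The payoff structure in the comment above can be written out as a tiny two-player game, using exactly the numbers given (1 utilon from Harry Potter, 10 from being empathized with by your neighbor); this is just a sketch to make the dilemma explicit:

```python
def empathy_payoff(my_choice, their_choice):
    """My utility, given where I and my neighbor each spend our
    fixed budget of empathy: "potter" or "neighbor"."""
    mine = 1 if my_choice == "potter" else 0   # Harry Potter gives me 1 point
    if their_choice == "neighbor":
        mine += 10                             # my neighbor's empathy gives me 10
    return mine
```

Checking the four outcomes shows the standard prisoner's dilemma shape: "potter" strictly dominates (1 beats 0, and 11 beats 10), yet mutual "neighbor" yields 10 each while mutual "potter" yields only 1 each, so individually rational play leaves everyone lonely.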

Comment author: Stuart_Armstrong 31 May 2016 02:13:37PM 0 points [-]

Humans also have a distinction between alief and belief, which seems to map closely here. Most people believe that stoves are hot and that torture is painful. However, they'd only alieve them if they'd experienced either one. So part of experiencing qualia might be moving things to the alief level.

So what would we say about a Mary that has never touched anything hot, and has not only a full objective understanding of what hotness is and what kinds of items are hot, but has trained her instincts to recoil in the correct way from hot objects, etc... It would seem that that kind of Mary (objective knowledge+correct aliefs) would arguably learn nothing from touching a hot stove.

It also strikes me as interesting that the argument is made only about totally new sensations or qualia. When I'm looking at something red, as I am right now, I'm experiencing the qualia, yet not knowing anything new. So any gain of info that Mary has upon seeing red for the first time can only be self-knowledge.

In response to comment by V_V on The AI in Mary's room
Comment author: Stuart_Armstrong 31 May 2016 02:00:06PM 0 points [-]

Interesting point...

Comment author: username2 31 May 2016 01:57:53PM 0 points [-]

Counterexample: someone uses this account to ask a question and upvotes people who give helpful replies.

Comment author: gjm 31 May 2016 01:28:38PM 1 point [-]

I agree with other commenters that this reads like an obfuscated version of some real-world issue (perhaps A and B are white and black people in the USA or men and women or something?), and it ends up (for me, at least) not working well either as an oblique commentary on any real-world issue or as an abstract discussion of how to think well: it feels like politics and therefore stirs up the same defensive reflexes, the obfuscation makes it hard to be sure what the actual point is, I'm wasting brainpower trying to "decode" what I'm reading, and it's full of incidental details that I can't tell whether I need to be keeping track of (because they're probably highly relevant if this is a coded discussion of some real-world issue, but not so relevant if they're just illustrations of a general principle or even just details added for verisimilitude).

I propose the following principle: the mind-killing-ness of politics can't be removed merely by light obfuscation, so if you want to talk about a hot-button issue (or to talk about a more general point for which the hot-button issue provides a good illustration) it's actually usually better to be explicit about what that issue is. Even if only to disavow it by saying something like "I stumbled onto this issue when arguing about correlations between race and abortion among transgender neoreactionaries, but I think it applies more generally. Please try not to be distracted by any political applications you may see -- they aren't the point and I promise I'm not trying to smuggle anything past your defences.".

As to the actual point the article is (explicitly) making: I agree but it seems kinda obvious. Of course considering the incentives on all sides may be difficult to do when you're in the middle of a political battle, but I'm not sure that having read an article like this will help much in that situation.

Comment author: rayalez 31 May 2016 01:23:46PM *  0 points [-]
  1. Thanks!

  2. It works well on my iPad, haven't tested it on the phones yet. I will.

  3. There are links to author's RSS feed in the post footer and on the profile pages.

Is there a reason you don't want to use the site? I'd appreciate any feedback or ideas on how I can make it better.

Comment author: CronoDAS 31 May 2016 01:16:37PM 0 points [-]

Hmmm... Easier to find stories than on r/rational, I'll give it that...

Comment author: gjm 31 May 2016 01:02:48PM 0 points [-]

It seems to me that it doesn't weigh against it very much. A genetic change that causes a not-too-big increase in altruistic behaviour towards likely kin is unlikely to hurt your chances of survival and reproduction a lot.

The first organism with the genetic change doesn't need to be exceptionally well supplied with offspring or anything. (Unless this is an r-selected species for which surviving at all is exceptionally lucky; in that case, it needs to be about as lucky as the bearer of any other not-too-dramatic genetic change has to be.)

Comment author: root 31 May 2016 12:54:15PM *  0 points [-]

Not going to use it but:

  1. Good job on not having a javascript hell

  2. Some people might like a mobile view (if there isn't one already)

  3. No RSS feeds?

Comment author: gjm 31 May 2016 12:36:53PM 0 points [-]

There are a few links on the wiki. If none of them is what you're after, could you possibly say a little more about what was in the article you're looking for? (Was it, e.g., making the same sort of point as Eliezer's "Fallacy of Gray", or disagreeing with it, or saying some completely different thing about "continuous thinking"?)

Comment author: gjm 31 May 2016 12:12:44PM *  1 point [-]

imagine that you are literally the first organism

If the immediate consequences of the genetic change in question aren't terribly deleterious then that first organism may very well have offspring, even without it conferring any particular advantage. And now those offspring do have siblings who share the gene.

[EDITED to add: oops, saw Viliam's comment in Recent Comments and replied to it without noticing others had also done so making the same point.]

Comment author: gjm 31 May 2016 12:08:40PM 1 point [-]

There might possibly be other differences between their lifestyle and ours besides the lack of mirrors.

Comment author: toomanymetas 31 May 2016 11:32:40AM 0 points [-]

I have been using anki to install something like Trigger Action Plans for more than half a year and it's been working great. Wrote a blog post about it: http://guzey.com/blog/thought-patterns-marginal

tldr: create a deck with max interval of 1 day.

Comment author: Viliam 31 May 2016 09:57:17AM 0 points [-]

There are probably people living somewhere in jungle without mirrors.

Comment author: philh 31 May 2016 09:44:59AM *  0 points [-]

Did you offer any suggestions of things she could buy you? Cash doesn't count because mumblereasons. It sounds to me like your sister acted poorly, especially in getting your parents to contribute. But did you make it easy for her to act well?

I too would prefer simply receiving cash, but I've accepted that that's not happening, so I have an Amazon wishlist. It mostly has books and graphic novels. Graphic novels in particular make a good gift for me, because they're often a little more expensive than I'd like to spend on them myself.

(I feel like some people dislike even buying presents from a list, but you can at least suggest categories of things.)

Comment author: Pimgd 31 May 2016 07:59:58AM 1 point [-]

Maybe it doesn't help when you're the only one, but that doesn't matter; your species is one that has multiple children, and the mutation was so small it occurred in multiple children? ... And if that's too high a complexity penalty, there could be an alternative: say it is a trait which got spread due to a resource boom in a population (the resource boom makes it likely for even disadvantaged mutations to survive), and then individuals with the trait managed to find each other and be more fit?

... Just conjecture, though.

Comment author: Romashka 31 May 2016 06:37:27AM 0 points [-]

No, it does not. The less faith people put into the 'evolutionary explanation', the more water it holds. Everything that is not forbidden is allowed; as long as the two versions both exist, there is no better one.

Comment author: SquirrelInHell 31 May 2016 03:37:11AM 0 points [-]

My prior is that most people in the situation described in the post wouldn't have thought of this method as a way of resolving the tension they experience.

OK, so I have different background assumptions: to me it looks like the simplest way to complete the pattern ("how improve my self-esteem?" -> "think about your strong points") conveniently established by countless self-help slogans etc.

Comment author: kitimat 31 May 2016 03:05:43AM 0 points [-]

Help request. I am looking for an article/posting that I once read, the topic of which was reasoning about continuums, like Less Wrong's Fallacy of Grey. I think I originally found the article through a link on Less Wrong but I have been unable to locate it. Any suggestions?

Comment author: Gleb_Tsipursky 31 May 2016 02:28:19AM 0 points [-]

as if they didn't already know that!

My prior is that most people in the situation described in the post wouldn't have thought of this method as a way of resolving the tension they experience. What do you think?

Comment author: PECOS-9 31 May 2016 02:15:23AM 0 points [-]

Aside from allergies, also consider whether the digestive trouble could be due to anxiety or other psychological issues.

Comment author: SquirrelInHell 31 May 2016 02:14:26AM 0 points [-]

OK, here's one example of something that is not covered: someone can feel that by focusing on their flaws, they get the benefit of putting more effort into presenting their best side, and improving their look. So they wouldn't want to stop concentrating on the flaws.

I mean, there's a lot of psychology/social pressures/doublethink/self-image/etc. issues around this. I anticipate that simply telling people "from now on, concentrate more on your positive sides!" does not solve the problem in most cases, and can even sound condescending (as if they didn't already know that!).

Comment author: SquirrelInHell 31 May 2016 02:00:43AM 0 points [-]

Another experiment: arrange for a group of people to live in an environment with no mirrors or other ways to see themselves, for a long time. Compare with people exposed to mirrors. Ask detailed questions about estimates of attractiveness, weak points, feelings of dissatisfaction, self-doubt, inferiority etc.

But this is also hard to arrange.

Comment author: SquirrelInHell 31 May 2016 01:58:25AM *  0 points [-]

Is there any research on the "first person" view that you mention? As I'm no scientist, I've only dealt with the already firmly established findings like loss aversion.

I do not know of any research on this directly. However, there is strong support for people's reported opinions being influenced by sitting in front of a mirror. So I just make educated guesses from the tangentially related research.

I've only dealt with the already firmly established findings like loss aversion.

Yup - you are playing it safe. However, this does not satisfy my curiosity.

You quote negativity/loss aversion bias as an explanation, but do you think it is the most accurate explanation?

Comment author: SquirrelInHell 31 May 2016 01:38:22AM *  0 points [-]

OK, first a disclaimer.

My model of this is based only on the several people I'm close enough to that I get accurate reports about their private thoughts.

I have high confidence in their reports being as true to the internal experiences as they managed to communicate, but the sample is small and might not reflect the "average".

Based on this, I make the following bold claim (with moderate confidence):

The bias in question works by a sort of a doublethink: the subjects do in fact also have a roughly accurate estimate of their beauty somewhere in their heads, and when asked publicly, they will not report their inner experience of doubt.

If you ask a bunch of people who have issues with self-perception of beauty to fill a survey about it, they will tend to answer the questions by taking the "outsider view" (at least, unless the questions in the survey are very cleverly phrased).

Comment author: ygrt 30 May 2016 11:48:12PM 0 points [-]

Option A could be justified if you take emotional utility into account. You might trade $10 to avoid the regret you will feel in 75% of cases if you choose option B. This could hold even more true for larger sums.

By the way, I would choose option B because I don't think this self-indulging attitude is beneficial in the long run.

Comment author: Luke_A_Somers 30 May 2016 11:32:19PM 0 points [-]

Measuring it would be a ridiculously exhaustive task, but it seems like evolution has already performed the measurement for us.

Comment author: Gleb_Tsipursky 30 May 2016 09:53:12PM 2 points [-]

So is crying yourself to sleep :P

Comment author: Viliam 30 May 2016 07:23:39PM 1 point [-]

For a transhumanist, this is just a temporary inconvenience anyway. :P

Comment author: Viliam 30 May 2016 07:21:08PM 0 points [-]

I feel like this increases the amount of lucky coincidence needed. Not only do I have to randomly get the right mutation, but I also need to have many children (surviving to the age when they can help each other) for reasons completely unrelated to having the mutation. Actually, the mutation may be a bit harmful in the second step, because I may give some of my resources to my siblings instead of my children.

Unfortunately, I am not familiar enough with mathematical models of evolution to evaluate how much this extra burden weighs against your hypothesis.

Comment author: pcm 30 May 2016 06:15:08PM 0 points [-]

I suggest reading Henrich's book The Secret of our Success. It describes a path to increased altruism that doesn't depend on any interesting mutation. It involves selection pressures acting on culture.

Comment author: mwengler 30 May 2016 05:35:24PM 1 point [-]

My first thoughts reading your post are 1) You start WAY TOO LATE IN THE GAME. You are essentially talking about altruism as a conscious choice which means you are well into the higher mammals.

Virtually every sexually reproducing creature devotes resources to reproduction that could have been conserved for individual survival. As you move up in complexity, you have animals feeding their young and performing other services for them. As would be expected with all evolved cooperation, the energy and cost you expend raising your young produces a more survivable young and so is net cost effective at getting the next generation going, which is pretty much what spreads genes.

How big of a leap is it from a mama bird regurgitating food into her baby's mouth to you helping your neighbor hunt for woolly mammoth?

If you were the first organism to get the gene to feed your babies or do whatever expanded their survivability, then obviously that is how that gene propagates, your babies have the gene.

As you get to the more complex forms of altruism of primates and humans, you also get to strong feedback mechanisms against non-cooperators and free-riders. The system may not be perfect but I think it allows a path from feeding babies or burying eggs in the sand to modern altruism in humans, where no weird "how do we start this" behaviors bump up to stop things.

Comment author: bentarm 30 May 2016 05:26:34PM *  4 points [-]

My first thought on reading this was that given that people tend to be overconfident in just about every other area of their lives, I would find it exceedingly surprising if it were in fact the case that people's estimates of their own attractiveness were systematically lower than the estimates of others. I notice that there isn't actually a citation for this claim anywhere in the article.

Indeed, having looked for some evidence, this was the first study I could find that attempted to investigate the claim directly: Mirror, mirror on the wall…: self-perception of facial beauty versus judgement by others. To quote the abstract:

Our results show proof for a strikingly simple observation: that individuals perceive their own beauty to be greater than that expressed in the opinions of others (p < 0.001).

In other words, the phenomenon that you "explain" in this article is literally the opposite of the truth, at least for the people in that study.

Your strength as a rationalist is your ability to be more confused by fiction than by reality. Yes, surely some people under-estimate their own attractiveness, but if the explanation for this is cognitive biases which are present in everyone, how do we explain the people in this study who make exactly the opposite error? If you are equally good at explaining any outcome, you have zero knowledge, etc, etc.

Comment author: Gleb_Tsipursky 30 May 2016 05:06:07PM 1 point [-]

Don't worry, on LW no one cares how you look in meatspace ;-)

Comment author: Gleb_Tsipursky 30 May 2016 04:35:31PM 0 points [-]

Right on for individual insight! The Collaborative Truth-Seeking strategy is only for cases when you disagree with someone and want to figure out the best approach for going forward to get at the truth.

Comment author: Dagon 30 May 2016 03:40:44PM 0 points [-]

Don't forget selection bias. Even if purely objective and accurate measurements are possible, almost everyone thinks they're less beautiful than everyone else.

Amy's self-image is likely formed by seeing herself in a mirror before and during grooming, whereas she sees her friends and others mostly already made-up. Selection bias (a true average over non-representative observations) leads her to believe that she's on average less well groomed than anyone else.

Similarly with others' reactions. Rejections and bad experiences tend to be private, while successful interactions are more often shared and reviewed with others. A straight average of all your experiences, compared with those experiences that people have shared with you, will make those others seem better off.

And, of course "beautiful" is a fairly poorly-defined word. It's not a very good target for debiasing, as it's very hard to measure an improvement in estimation. How does your post change if Amy is, in fact, less generally attractive than her friends?

Actually, what's the goal here? The usual recommendation on this topic isn't to assess oneself accurately, but rather to worry less and believe yourself beautiful. Are you trying to increase readers' self-confidence, or to help them rationally decide whether to put more effort into their appearance, or something else?

Comment author: Vaniver 30 May 2016 03:40:10PM 0 points [-]

I have now.

Comment author: g_pepper 30 May 2016 03:32:52PM 0 points [-]

Richard Dawkins' 1976 book The Selfish Gene contains, among other things, some interesting discussions about how many altruistic behaviors might have arisen through natural selection.

Comment author: Val 30 May 2016 03:19:59PM 1 point [-]

Imagine that you are literally the first organism who by random mutation achieved a gene for "helping those who help you"

Not all information is encoded genetically. Many kinds of information have to be learned from the parents or from society.

Comment author: AspiringRationalist 30 May 2016 03:10:17PM 1 point [-]

If you have a gene that makes you help you siblings, your offspring are reasonably likely to get it too, which benefits their siblings (also your offspring).

Comment author: ImNotAsSmartAsIThinK 30 May 2016 01:22:52PM 0 points [-]

That's what I mean by complexity, yeah.

I don't know if I made this clear, but the point I'm making is independent of which high-level principles explain things; it matters only that they are high-level. The ancestors that competed across history to produce the organism of interest are not small parts making up a big thing, unless you subscribe to a causal reductionism where you use causes instead of internal moving parts. But I don't like calling this reductionism (or even a theory, really) because it's, as I said, a species of causality, broadly construed.

Comment author: Coacher 30 May 2016 12:18:51PM *  0 points [-]

How do you solve interpersonal problems when neither sides can see themselves as the one in fault?

Is there any other kind?

Comment author: Viliam 30 May 2016 11:53:57AM *  0 points [-]

Looking at the answers...

  • We need multiple AIs, equal to each other, to limit each other from becoming too dangerous.

  • Here are some fictional examples of x-risk, including Vogons and Cthulhu. This said, to control AI we need something like Asimov's laws, only better.

  • Something like inverse reinforcement learning, but more research is required.

  • The AI must care about consensus. Just like democracy. Or human brain.

  • The main danger from technological change is economical. On the technical side the answer is deep learning.

  • The previous answers are all wrong, read Yudkowsky to understand why. The solution is to interface human brains with computers, and evolve into cyborgs, ultimately leaving our original bodies behind.

  • The AI needs both logic and emotions. We can make it safe by giving it limited computing power.

  • The AI needs emotional intelligence. It needs to love and to be loved. Also, I had a chihuahua once.

  • We need to educate the AI just like we would educate an autistic child.

  • We should treat the AI as a foreign visitor, with hospitality. Here is a quote about foreigners by Derrida.

  • Each AI should have a unique identifier and a kill switch that the government can use to shut it down.

  • Make the AI police itself. Build it in space, so it doesn't compete for energy with humans. Don't let it self-modify.

  • I am an AI researcher, and you are just projecting your human fears into computers. The real risk is autonomous weapon systems, but that has nothing to do with computers becoming self-aware.

  • We should build a hierarchy of AIs that will police its rogue members.

  • Tool AI.

  • We need high-quality research, development, and testing; redundant protections against system failure; and keeping engineers from sabotaging the project.

  • Read Superintelligence by Nick Bostrom.

  • The AI should consist of multiple subagents, each with limited timespan, trading information with each other. Any part showing signs of exponential growth will be removed automatically. Humans will be able to override the system at any time.

  • etc.

Comment author: Viliam 30 May 2016 10:17:15AM *  1 point [-]

At school, I was taught that the correct way to write is...

Summary-introduction.

The main text of the article.

Summary-conclusion.

...so that at the beginning people have an idea about what will be said (so they can focus on the important parts instead of tangents), and at the end they can review and remember the important points.

There is a slightly modified version for teachers, where as an introduction you ask motivating questions, such as "how could we do X?", and then you proceed by a lesson that includes how to do X.

However, out of school, when I was writing short stories, I was told that this is the part of school education that is most important to unlearn for writers. You do not write stories like this, because they will be super boring -- the introduction will contain unnecessary spoilers, and the conclusion will just repeat what you already know if you paid any attention to the story. The lesson is that text written with a different purpose requires different structure.

Instead, here is what works for stories:

Something short and impressive, even if it is completely out of context, to capture the audience.

The main text of the story, at the beginning seemingly unrelated to the introduction, but later the situation from the introduction appears in the story. (The exact place depends on the length of the story, for short stories it is about 90-95%, for a novel it must be soon enough lest the reader forgets the introduction completely.)

tl;dr -- how you write should reflect your expectations why and how people will read your text; for example textbook vs fiction, but also tutorial vs reference, etc.

Comment author: AstraSequi 30 May 2016 09:19:38AM *  0 points [-]

This can be illustrated by the example of evolution I mentioned: An evolutionary explanation is actually anti-reductionist; it explains the placement of nucleotides in terms of mathematics like inclusive genetic fitness and complexities like population ecology.

This doesn't acknowledge the other things explained on the same grounds. It's a good argument if the principles were invented for the single case you're explaining, but here they're universal. If you want to include inclusive genetic fitness in the complexity of the explanation, I think you need to include everything it's used for in the complexity of what's being explained.

Comment author: Viliam 30 May 2016 09:11:31AM 0 points [-]

Amy says “I don’t think I’m very beautiful.”

“Of course you’re beautiful!” they reassure her.

Never happened to me. I guess I realize what that means, and now I'm gonna cry myself to sleep.

(Just kidding.)

Comment author: Viliam 30 May 2016 09:06:22AM *  1 point [-]

This works only in comments.

In articles, there is an "Insert/edit link" button in the toolbar. Click the button, paste the link into the "Link URL" field (and leave the remaining fields unmodified).

Yes, LW uses two completely different systems for editing articles and editing comments.

Comment author: Viliam 30 May 2016 09:04:37AM -1 points [-]

And I think the bias occurs when interacting with videos/photos/mirror reflections/etc. of yourself, not just the "first person" view.

I wonder how people would react to photos/videos of themselves if they didn't know it was themselves.

I admit such experiment could be difficult to arrange. But not impossible. Imagine filming people through hidden camera somewhere. Then... several weeks later... invite them to experiment where they will be shown videos of random people, and they have to quickly judge how attractive they are (e.g. by pressing a button). Show them a series of videos, including a short video of themselves.

The hypothesis in this article suggests that people would judge themselves as attractive if they didn't know it was themselves. (Also, this technique could be useful therapeutically.)

Comment author: Viliam 30 May 2016 08:51:24AM 3 points [-]

It's almost three months since a mysterious benefactor offered to donate to MIRI but insisted on doing it through other LW members contacted via private messages.

So, I'm curious... Did anyone cooperate? Is there a story to share?

Comment author: Viliam 30 May 2016 08:44:53AM *  0 points [-]

Some people believe that altruism evolved through helping your relatives, or through helping others so that they help you in return. I was thinking about it; on the surface the idea looks good -- if you already have this system in place, it is easy to see how it benefits those involved -- but that doesn't explain how the system could have appeared in the first place. Does anyone know the standard answer?

Imagine that you are literally the first organism who by random mutation achieved a gene for "helping those who help you". How specifically does this gene increase your fitness, if there is no one else to reciprocate?

Or imagine that you are literally the first organism who by random mutation achieved a gene for "helping your siblings". How specifically does this gene increase your fitness, or the fitness of the gene itself, if your siblings do not have a copy of this gene?

In other words, it seems simple to explain how these kinds of altruism can work when they are already an established system, but it is more difficult to explain how it could work when it is new.

And this all is a huge simplification; for example, I doubt that "helping those who help you" could be achieved by a single mutation, since it involves multiple parts like "noticing that someone helped you", "remembering the individual who helped you" and "helping the individual who helped you in the past". Plus the problem of how to start this chain of mutual cooperation.

My guess is that... nygehvfz pbhyq unir ribyirq guebhtu frkhny fryrpgvba. Yrg'f rkcynva vg ol funevat sbbq jvgu bguref. Svefg, vaqvivqhnyf abgvpr jub vf tbbq ng tngurevat sbbq, naq gurl ribyir nggenpgvba gbjneqf tbbq sbbq pbyyrpgbef. Gung znxrf vzzrqvngr frafr orpnhfr vg vapernfrf fheiviny bs gur puvyqera, vs gurl nyfb trg gur trarf tbbq sbe tngurevat sbbq. Nsgre guvf nggenpgvba rkvfgf jvguva gur fcrpvrf, gur arkg fgrc pbhyq or fvtanyyvat: vs lbh unir fbzr rkgen sbbq lbh qba'g npghnyyl arrq, oevat vg naq ivfvoyl qebc vg arne bgure vaqvivqhnyf, fb gung bguref abgvpr lbh unir zber sbbq guna lbh pna rng. Ntnva, guvf znxrf vzzrqvngr frafr, orpnhfr vg znxrf lbh zber nggenpgvir. Abgvpr ubj arvgure "urycvat lbh eryngvirf" abe "urycvat gubfr jub uryc lbh" jnf arprffnel gb ribyir urycvat vaqvfpevzvangryl. Npghnyyl, gubfr pbhyq unir ribyirq yngre, nf shegure vzcebirzragf bs be nqqvgvbaf gb gur vaqvfpevzvangr urycvat.
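(The paragraph above is ROT13-encoded, the usual spoiler convention here. For anyone who'd rather decode it programmatically than paste it into a web tool, a minimal sketch using Python's built-in codec:)

```python
import codecs

def rot13(text: str) -> str:
    """ROT13 shifts each letter 13 places; applying it twice returns the original."""
    return codecs.encode(text, "rot13")

# Round-trip example: encode, then decode by encoding again.
encoded = rot13("Hello")       # "Uryyb"
decoded = rot13(encoded)       # "Hello"
print(rot13("nygehvfz"))       # first word of the spoiler above
```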

Comment author: Bound_up 30 May 2016 06:34:35AM 0 points [-]

Hmm...maybe.

As I understand it, loss aversion is just a specific kind of negativity bias. Is that right, do you think?

Comment author: jaime2000 30 May 2016 06:07:14AM *  0 points [-]

I have noticed one more issue. In "Efficient Charity: Do Unto Others…" the symbol "£" is twice corrupted into "ÂŁ". This is not an ebook-wide problem, since "Searching for One-Sided Tradeoffs" and "A Modest Proposal" both use the correct symbol. Apparently it is simply a problem with the source; the copy of the post at the Effective Altruism Forum has this error, but the copy of the post at LessWrong has the correct symbol.
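(That "ÂŁ" is a classic mojibake pattern: the two UTF-8 bytes for "£", 0xC2 0xA3, were decoded with a single-byte code page. The "Ł" suggests Windows-1250, where 0xA3 maps to "Ł", though that codec is my guess. A minimal Python sketch of the corruption and its reversal, under that assumption:)

```python
# "£" (U+00A3) encodes to two bytes in UTF-8.
original = "£"
utf8_bytes = original.encode("utf-8")          # b'\xc2\xa3'

# Misreading those bytes as Windows-1250 produces the corrupted text.
corrupted = utf8_bytes.decode("cp1250")
print(corrupted)                               # ÂŁ

# The repair simply reverses the mis-decoding.
repaired = corrupted.encode("cp1250").decode("utf-8")
print(repaired)                                # £
```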

View more: Next