Less Wrong: Open Thread, September 2010

3 Post author: matt 01 September 2010 01:40AM

This thread is for the discussion of Less Wrong topics that have not appeared in recent posts. If a discussion gets unwieldy, celebrate by turning it into a top-level post.

Comments (610)

Comment author: beriukay 25 September 2010 09:50:05PM 4 points [-]

I participated in a survey directed at atheists some time ago, and the report has come out. They didn't mention me by name, but they referenced me in their 15th endnote, which regarded questions they said were spiritual in nature. Specifically, the question was whether we believe in the possibility of human minds existing outside of our bodies. From the way they worded it, apparently I was one of the few not-spiritual people who believed there were perfectly naturalistic mechanisms for separating consciousness from bodies.

Comment author: Pavitra 23 September 2010 05:30:32AM 3 points [-]

In light of the news that apparently someone or something is hacking into automated factory control systems, I would like to suggest that the apocalypse threat level be increased from Guarded (lots of curious programmers own fast computers) to Elevated (deeply inconclusive evidence consistent with a hard takeoff actively in progress).

Comment author: jimrandomh 23 September 2010 08:23:40PM *  3 points [-]

It looks a little odd for a hard takeoff scenario - it seems to be prevalent only in Iran, it seems configured to target a specific control system, and it uses 0-days wastefully (I see a claim that it uses four 0-days and two stolen certificates). On the other hand, this is not inconsistent with an AI going after a semiconductor manufacturer and throwing in some Iranian targets as a distraction.

My preference ordering is friendly AI, humans, unfriendly AI; my probability ordering is humans, unfriendly AI, friendly AI.

Comment author: PeerInfinity 20 September 2010 08:19:31PM *  2 points [-]

Is there enough interest for it to be worth creating a top level post for an open thread discussing Eliezer's Coherent Extrapolated Volition document? Or other possible ideas for AGI goal systems that aren't immediately disastrous to humanity? Or is there a top level post for this already? Or would some other forum be more appropriate?

Comment author: Relsqui 16 September 2010 10:37:14PM *  8 points [-]

I'm a translator between people who speak the same language, but don't communicate.

People who act mostly based on their instincts and emotions, and those who prefer to ignore or squelch those instincts and emotions[1], tend to have difficulty having meaningful conversations with each other. It's not uncommon for people from these groups to end up in relationships with each other, or at least working or socializing together.

On the spectrum between the two extremes, I am very close to the center. I have an easier time understanding the people on each side than their counterparts do, it frustrates me when they miscommunicate, and I want to help. This includes general techniques (although there are some good books on that already), explanations of words or actions which don't appear to make sense, and occasional outright translation of phrases ("When they said X, they meant what you would have called Y").

Is this problem, or this skill, something of interest to the LW community at large? In the several days I've been here it's come up on comment threads a couple times. I have some notes on the subject, and it would be useful for me to get feedback on them; I'd like to some day compile them into a guide written for an audience much like this one. Do you have questions about how to communicate with people who think very much unlike you, or about specific situations that frustrate you? Would you like me to explain what appears to be an arbitrary point of etiquette? Anything else related to the topic which you'd like to see addressed?

In short: "I understand the weird emotional people who are always yelling at you, but I'm also capable of speaking your language. Ask me anything."


[1] These are both phrased as pejoratively as I could manage, on purpose. Neither extreme is healthy.

Comment author: Rain 30 September 2010 02:45:40PM 1 point [-]

I wanted to say thank you for providing these services. I like performing the same translations, but it appears I'm unable to be effective in a text medium; I require immediate feedback, body language, etc. When I saw some of your posts on old articles, apparently just as you arrived, I thought to myself that you would genuinely improve this place in ways that I've been thinking were essential.

Comment author: Relsqui 30 September 2010 06:25:44PM 1 point [-]

Thanks! That's actually really reassuring; that kind of communication can be draining (a lot of people here communicate naturally in a way which takes some work for me to interpret as intended). It is good to hear that it seems to be doing some good.

Comment author: beriukay 25 September 2010 10:09:37PM 2 points [-]

One issue I've frequently stumbled across is people who make claims that they have never truly considered. When I ask for more information, point out obvious (to me) counterexamples, or ask them to explain why they believe it, they get defensive and in some cases quite offended. Some don't ever want to talk about issues because they feel like discussing their beliefs with me is like being subject to some kind of Inquisition. It seems to me that people of this cut believe that to show you care about someone, you should accept anything they say with complete credulity. Have you found good ways to get people to think about what they believe without making them defensive? Do I just have to couch all my responses in fuzzy words? Using weasel words always seemed disingenuous to me, but if saying things like "Idunno, I'm just saying it seems to me, and I might be wrong, that maybe gays are people and deserve all the rights that people get, you know what I'm saying?" is what it takes to get someone to actually consider the opposition, maybe it's worth it?

Comment author: Relsqui 26 September 2010 03:05:53AM *  10 points [-]

I've been on the other side of this, so I definitely understand why people react that way--now let's see if I understand it well enough to explain it.

For most people, being willing to answer a question or identify a belief is not the same thing as wanting to debate it. If you ask them to tell you one of their beliefs and then immediately try to engage them in justifying it to you, they feel baited and switched into a conflict situation, when they thought they were having a cooperative conversation. You've asked them to defend something very personal, and then are acting surprised when they get defensive.

Keep in mind also that most of the time in our culture, when one person challenges another one's beliefs, it carries the message "your beliefs are wrong." Even if you don't state that outright--and even in the probably rare cases when the other person knows you well enough to understand that isn't your intent--you're hitting all kinds of emotional buttons which make you seem like an aggressor. This is the result of how the other person is wired, but if you want to be able to have this kind of conversation, it's in your interest to work with it.

The corollary to the implied "your beliefs are wrong" is "I know better than you" (because that's how you would tell that they're wrong). This is an incredibly rude signal to send to--well, anyone, but especially to another adult. Your hackles probably rise too when someone signals that they're superior to you and you don't agree; this is the same thing.

The point, then, is not that you need to accept what people you care about say with credulity. It's that you need to accept it with respect. You do not have any greater value than the person you're talking to (even if you are smarter and more rational), just like they don't have any greater value than you (even if they're richer and more attractive). Even if you really were by some objective measure a better person (which is, as far as I can tell, a useless thing to consider), they don't think so, and acting like it will get you nowhere.

Possibly one of the hardest parts of this to swallow is that, when you're choosing words for the purpose of making another person remain comfortable talking to you, whether their beliefs are a good reflection of reality is not actually important. Obviously they think so, and merely contradicting them won't change that (nor should it). So if you sound like you're just trying to convince them that they're wrong, even if that isn't what you mean to do, they might just feel condescended to and walk away.

None of this means that you can't express your own beliefs vehemently ("gay people deserve equal rights!"). It just means that when someone expresses one of theirs, interrogating them bluntly about their reasons--especially if they haven't questioned them before--is more likely to result in defensiveness than in convincing them or even productive debate. This may run counter to your instincts, understandably, but there it is.

No fuzzy words in the world will soften your language if their inflection reveals intensity and superiority. Display real respect, including learning to read your audience and back off when they're upset. (You can always return to the topic another time, and in fact, occasional light conversations will probably do a better job with this sort of person than one long intense one.) If you aren't able to show genuine respect, well, I don't blame them for refusing to discuss their beliefs with you.

Comment author: Morendil 17 September 2010 07:17:21AM 1 point [-]

Yes please.

Does the term "bridger" ring a bell for you? (It's from Greg Egan's Diaspora, in case it doesn't, and you'd have to read it to get why I think that would be an apt name for what you're describing.)

Comment author: TobyBartels 16 September 2010 08:45:24PM *  1 point [-]

Since the Open Thread is necessarily a mixed bag anyway, hopefully it's OK if I test Markdown here

test deleted

Comment author: NancyLebovitz 12 September 2010 12:10:40PM 7 points [-]

I just discovered (when looking for a comment about an Ursula Vernon essay) that the site search doesn't work for comments which are under a "continue this thread" link. This makes site search a lot less useful, and I'm wondering if that's a cause of other failed searches I've attempted here.

Comment author: jimmy 16 September 2010 07:10:14AM 1 point [-]

I've noticed this too. There's no easy way to 'unfold all', is there?

Comment author: gwern 12 September 2010 01:50:03PM 2 points [-]

The Onion parodies cyberpunk by describing our current reality: http://www.theonion.com/articles/man-lives-in-futuristic-scifi-world-where-all-his,17858/

Comment author: simplicio 12 September 2010 05:10:27AM 3 points [-]

In light of XFrequentist's suggestion in "More Art, Less Stink," would anyone be interested in a post consisting of a summary & discussion of Cialdini's Influence?

This is a brilliant book on methods of influencing people. But it's not just Dark Arts - it also includes defense against the Dark Arts!

Comment author: allenwang 12 September 2010 04:11:17AM 1 point [-]

I have been following this site for almost a year now and it is fabulous, but I haven't felt an urgent need to post to the site until now. I've been working on a climate change project with a couple of others and am in desperate need of some feedback.

I know that climate change isn't a particularly popular topic on this website (but I'm not sure why, maybe I missed something, since much of the website seems to deal with existential risk. Am I really off track here?), but I thought this would be a great place to air these ideas. Our approach tries to tackle the irrational tangle that many of our institutions appear to be caught up in, so I thought this would be the perfect place to get some expertise. The project is kind of at a standstill, and it really needs some advice and leads (and collaborators), so please feel free to praise, criticize, advise, or even join.

I saw orthonormal's "welcome to LessWrong post," so I guess this is where to post before I build up enough points. I hope it isn't too long of an introductory post for this thread?

The aim of the project is to achieve a population that is more educated in the basics of climate change science and policy, with the hope that a more educated voting public will be a big step towards achieving the policies necessary to deal with climate change.

The basic problem of educating the public about climate change is twofold. First, people sometimes get trapped into “information cocoons” (I am using Cass Sunstein’s terminology from his book Infotopia). Information cocoons are created when the news and information people seek out and surround themselves with is biased by what they already know. They are either completely unaware of competing evidence or if they are, they revise their network of beliefs to deny the credibility of those who offer it rather than consider it serious evidence. Usually, this is because they believe it is more probable that those people are not credible than that they could be wrong. This problem has always existed, and has perhaps increased since the rise of the personalized web. People who are trapped in information cocoons of denial of anthropogenic climate change will require much more evidence and counterarguments before they can begin to revise an entire network of beliefs that support their current conclusions.

Second, the population is uneducated about climate change because they lack the incentive to learn about the issues. Although we would presumably benefit if everyone were to take the time to thoroughly understand the issue, the individual cost and benefit of doing so actually runs the other way. Because the benefits of better policies accrue to everybody, but the costs are borne by the individual, people have an incentive to free ride, to let everybody else worry about the issue because either way, their individual contribution means little, and everybody else can make the informed decision. But of course, with everybody reasoning in this way there is a much lower level of education on these issues than optimal (or even necessary to create the necessary change, especially if there are interest groups with opposing goals).

The solution is to institute some system that can crack into these information cocoons and at the same time provide wide-ranging personal incentives for participating. For the former, we propose to develop a layman’s guide to climate change science and economic and environmental policy. Many such guides already exist, although we have some different ideas about how to make ours more transparent to criticism and more thorough in its discussion of the epistemic uncertainty surrounding the whole issue. There is definitely a lot we can learn from LessWrong on this point. Also, I think we have a unique idea about developing a system of personal incentives; I will discuss this latter issue first.

Comment author: Cyan 11 September 2010 06:34:33PM *  2 points [-]

Nine years ago today, I was just beginning my post-graduate studies. I was running around campus trying to take care of some registration stuff when I heard that unknown parties had flown two airliners into the WTC towers. It was surreal -- at that moment, we had no idea who had done it, or why, or whether there were more planes in the air that would be used as missiles.

It was big news, and it's worth recalling this extraordinarily terrible event. But there are many more ordinary terrible events that occur every day, and kill far more people. I want to keep that in mind too, and I want to make the universe a less deadly place for everyone.

(If you feel like voting this comment up, please review this first.)

Comment author: beriukay 11 September 2010 04:23:50AM 4 points [-]

I'm taking a grad level stat class. One of my classmates said something today that nearly made me jump up and loudly declare that he was a frequentist scumbag.

We were asked to show that a coin toss fit the criteria of some theorem that talked about mapping subsets of a sigma algebra to form a well-defined probability. Half the elements of the set were taken care of by default (the whole set S and its complement { }), but we couldn't make any claims about the probability of getting Heads or Tails from just the theorem. I was content to assume the coin was fair, or at least assign some likelihood distribution.

But not my frequentist archnemesis! He let it be known that he would level half the continent if the probability of getting Heads wasn't determined by his Expectation divided by the number of events. The number of events. Of an imaginary coin toss. Determine that toss' probability.
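For what it's worth, the setup in question is small enough to check directly. Here is a quick sketch (in Python, with my own names rather than the theorem's) of a well-defined probability measure on the sigma-algebra of a single coin toss, where P(Heads) is a free parameter to be assigned by the observer, not something the measure-theoretic machinery itself determines:

```python
from itertools import chain, combinations

# Sample space and its power set (the natural sigma-algebra for a finite space).
S = frozenset({"H", "T"})
sigma_algebra = [frozenset(c) for c in chain.from_iterable(
    combinations(S, r) for r in range(len(S) + 1))]

def make_measure(p_heads):
    """Return a probability measure over the sigma-algebra.
    p_heads is a free parameter (a prior, if you like); nothing in the
    theorem pins it to a long-run frequency of imaginary tosses."""
    point = {"H": p_heads, "T": 1.0 - p_heads}
    return {A: sum(point[x] for x in A) for A in sigma_algebra}

P = make_measure(0.5)  # assuming a fair coin, as in the comment

# Kolmogorov's axioms, checked by brute force on the finite algebra:
assert P[frozenset()] == 0.0 and P[S] == 1.0      # null set and whole space
assert all(v >= 0 for v in P.values())            # non-negativity
assert P[frozenset({"H"})] + P[frozenset({"T"})] == P[S]  # additivity
```

Any value of `p_heads` in [0, 1] passes the same checks, which is the point: the theorem constrains the structure of the measure, not the probability of Heads.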

It occurs to me that there was a lot of set up for very little punch line in that anecdote. If you are unamused, you are in good company. I ordered R to calculate an integral for me today, and it politely replied: "Error in is.function(FUN) : 'FUN' is missing"

Comment author: gwern 11 September 2010 12:11:27AM 1 point [-]

NYT magazine covers engineers & terrorism: http://www.nytimes.com/2010/09/12/magazine/12FOB-IdeaLab-t.html

Comment author: datadataeverywhere 10 September 2010 08:12:03PM *  2 points [-]

An observer is given a box with a light on top, and given no information about it. At time t0, the light on the box turns on. At time tx, the light is still on.

At time tx, what information can the observer be said to have about the probability distribution of the duration for which the light stays on? Obviously the observer has some information, but how is it best quantified?

For instance, the observer wishes to guess when the light will turn off, or find the best approximation of E(X | X > tx-t0), where X ~ duration of light being on. This is guaranteed to be a very uninformed guess, but some guess is possible, right?

The observer can establish a CDF of the probability of the light turning off at time t; for t <= tx, p=0. For t > tx, 0 < p < 1, assuming that the observer can never be certain that the light will ever turn off. What goes on in between is the interesting part, and I haven't the faintest idea how to justify any particular shape for the CDF.
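One way to make the question concrete is to pick a prior and condition on it. As a sketch (my own assumption, not a derivation: a proper log-uniform prior on [a, b] standing in for ignorance of the light's time scale), the conditional CDF and median fall right out:

```python
import math

def conditional_cdf(t, s, a=1e-3, b=1e6):
    """P(X <= t | X > s) under an assumed log-uniform prior on [a, b].
    s is the elapsed on-time; a and b are arbitrary prior bounds that a
    real observer would have to choose somehow.
    (a drops out once we condition on X > s, provided s > a.)"""
    if t <= s:
        return 0.0
    t = min(t, b)
    return math.log(t / s) / math.log(b / s)

def conditional_median(s, a=1e-3, b=1e6):
    """Median total duration given the light is still on at elapsed time s:
    the geometric mean of s and the prior's upper bound."""
    return math.sqrt(s * b)

# The longer the light has been on, the longer we expect it to stay on:
assert conditional_median(10.0) > conditional_median(1.0)
# The conditional CDF is 0 at t = s and reaches 1 at the prior bound b:
assert conditional_cdf(10.0, 10.0) == 0.0
assert abs(conditional_cdf(1e6, 10.0) - 1.0) < 1e-12
```

The qualitative behavior (estimates that scale with elapsed time) survives any scale-invariant-ish prior; the specific numbers, of course, depend entirely on the assumed bounds, which is exactly the "what shape to justify" problem the comment raises.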

Comment author: Wei_Dai 10 September 2010 07:27:28AM *  16 points [-]

An Alternative To "Recent Comments"

For those who may be having trouble keeping up with "Recent Comments" or finding the interface a bit plain, I've written a Greasemonkey script to make it easier/prettier. Here is a screenshot.

Explanation of features:

  • loads and threads up to 400 most recent comments on one screen
  • use [↑] and [↓] to mark favored/disfavored authors
  • comments are color coded based on author/points (pink) and recency (yellow)
  • replies to you are outlined in red
  • hover over [+] to view single collapsed comment
  • hover over/click [^] to highlight/scroll to parent comment
  • marks comments read (grey) based on scrolling
  • shows only new/unread comments upon refresh
  • date/time are converted to your local time zone
  • click comment date/time for permalink

To install, first get Greasemonkey, then click here. Once that's done, use this link to get to the reader interface.

ETA: I've placed the script in the public domain. Chrome is not supported.

Comment author: Wei_Dai 10 September 2010 08:35:57AM 4 points [-]

Here's something else I wrote a while ago: a script that gives all the comments and posts of a user on one page, so you can save them to a file or search more easily. You don't need Greasemonkey for this one, just visit http://www.ibiblio.org/weidai/lesswrong_user.php

I put in a 1-hour cache to reduce server load, so you may not see the user's latest work.

Comment author: eugman 10 September 2010 12:51:28PM 4 points [-]

Can anyone suggest any blogs giving advice for serious romantic relationships? I think a lot of my problems come from a poor theory of mind for my partner, so things like the 5 love languages and material on attachment styles have been useful.

Thanks.

Comment author: Relsqui 16 September 2010 09:31:45PM 4 points [-]

I have two suggestions, which are not so much about romantic relationships as they are about communicating clearly; given your example and the comments below, though, I think they're the kind of thing you're looking for.

The Usual Error is a free ebook (or nonfree dead-tree book) about common communication errors and how to avoid them. (The "usual error" of the title is assuming by default that other people are wired like you--basically the same as the typical psyche fallacy.) It has a blog as well, although it doesn't seem to be updated much; my recommendation is for the book.

If you're a fan of the direct practical style of something like LW, steel yourself for a bit of touchy-feeliness in UE, but I've found the actual advice very useful. In particular, the page about the biochemistry of anger has been really helpful for me in recognizing when and why my emotional response is out of whack with the reality of the situation, and not just that I should back off and cool down, but why it helps to do so. I can give you an example of how this has been useful for me if you like, but I expect you can imagine.

A related book I'm a big fan of is Nonviolent Communication (no link because its website isn't of any particular use; you can find it at your favorite book purveyor or library). Again, the style is a bit cloying, but the advice is sound. What this book does is lay out an algorithm for talking about how you feel and what you need in a situation of conflict with another person (where "conflict" ranges from "you hurt my feelings" to gang war).

I think it's noteworthy that following the NVC algorithm is difficult. It requires finding specific words to describe emotions, phrasing them in a very particular way, connecting them to a real need, and making a specific, positive, productive request for something to change. For people who are accustomed to expressing an idea by using the first words which occur to them (almost everyone), this requires flexing mental muscles which don't see much use. I think of myself as a good communicator, and it's still hard for me to follow NVC when I'm upset. But the difficulty is part of the point--by forcing you to stop and rethink how you talk about the conflict, it forces you to see it in a way that's less hindered by emotional reflex and more productive towards understanding what's going on and finding a solution.

Neither of these suggestions requires that your partner also read them, but it would probably help. (It just keeps you from having to explain a method you're using.)

If you find a good resource for this which is a blog, I'd be interested in it as well. Maybe obviously, this topic is something I think a lot about.

Comment author: eugman 20 September 2010 03:49:51PM *  1 point [-]

Both look rather useful, thanks for the suggestions. Also, Google Books has Nonviolent Communication.

Comment author: rhollerith_dot_com 10 September 2010 06:40:43PM *  1 point [-]

I could point to some blogs whose advice seems good to me, but I won't because I think I can help you best by pointing only to material (alas no blogs though) that has actually helped me in a serious relationship -- there being a huge difference in quality between advice of the form "this seems true to me" and advice of the form "this actually helped me".

What has helped me more in my relationships than any other information is the non-speculative part of the consensus among evolutionary psychologists on sexuality, because it provides a vocabulary for me to express hypotheses (about particular situations I was facing) and a way to winnow the field of prospective hypotheses and bits of advice I get online before choosing which to test. In other words, ev psych allows me to dismiss many ideas so that I do not incur the expense of testing them.

However, I needed a lot of free time to master that material. Probably the best way to acquire it is to read the chapters on sex in Robert Wright's Moral Animal. I read that book slowly and carefully over 12 months or so, and it was definitely worth the time and energy. Actually, the material in Moral Animal on friendship (reciprocal altruism) is very much applicable to serious relationships too, and the material on sex and friendship together forms about half the book.

Before I decided to master basic evolutionary psychology in 2000, the advice that helped me the most was from John Gray, author of Men Are From Mars, Women Are From Venus.

Analytic types will mistrust author and speaker John Gray because he is glib and charismatic (the Maharishi or such who founded Transcendental Meditation once offered to make Gray his successor and the inheritor of his organization) but his pre-year-2000 advice is an accurate map of reality IMHO. (I probably only skimmed Mars and Venus, but I watched long televised lectures on public broadcasting that probably covered the same material.)

Comment author: cousin_it 09 September 2010 10:25:53PM *  2 points [-]

The gap between inventing formal logic and understanding human intelligence is as large as the gap between inventing formal grammars and understanding human language.

Comment author: Vladimir_Nesov 10 September 2010 06:54:26PM 1 point [-]

Human intelligence, certainly; but just intelligence, I'm not so sure.

Comment author: datadataeverywhere 09 September 2010 03:14:37PM 2 points [-]

How diverse is Less Wrong? I am under the impression that we disproportionately consist of 20-35 year old white males, more disproportionately on some axes than on others.

We obviously over-represent atheists, but there are very good reasons for that. Likewise, we are probably over-educated compared to the populations we are drawn from. I venture that we have a fairly weak age bias, and that can be accounted for by generational dispositions toward internet use.

However, if we are predominantly white males, why are we? Should that concern us? There's nothing about being white, or female, or Hispanic, or deaf, or gay that prevents one from being a rationalist. I'm willing to bet that after correcting for socioeconomic correlations with ethnicity, we still don't make par. Perhaps naïvely, I feel like we must explain ourselves if this is the case.

Comment author: [deleted] 16 September 2010 06:40:35PM *  2 points [-]

I don't know why you presume that, because we are mostly 25-35-something White males, a reasonable proportion of us are not deaf, gay, or disabled. (One of the top-level posts is by someone who will soon deal with being perhaps limited to communicating with the world via computer.)

I smell a whiff of that weird American memplex of minority and diversity that my third-world mind isn't quite used to, but which I seem to encounter more and more often; you know, the one that, for example, uses the word "minority" to describe women.

Also, I decline the invitation to defend this community for its lack of diversity; I don't see it, a priori, as a thing in need of a large part of our attention. Rationality is universal, not in the sense of being equally valued in different cultures, but certainly in being universally effective (rationalists should win). One should certainly strive to keep a site dedicated to refining the art free of unnecessary barriers to other people; I think we should eventually translate many articles into Hindi, Japanese, Chinese, Arabic, German, Spanish, Russian, and French. However, it's ridiculous to imagine that our demographics will somehow come to resemble the socioeconomically adjusted mix of unspecified ethnicities that you seem to be hunting for once we eliminate all such barriers.

I assure you White Westerners have their very, very insane spots, and we deal with them constantly, but God, for starters, isn't among them. Look at the GSS or various sources on Wikipedia and consider how much more of a thought-stopper and a boo light atheism is for a large part of the world. What should the existing population of LessWrong do? Refrain from bashing theism? This might incur downvotes, but Westerners did come up with the scientific method and did contribute disproportionately to the fields of statistics and mathematics. Is it so unimaginable that the developed world (Iceland, Italy, Switzerland, Finland, America, Japan, Korea, Singapore, Taiwan, etc.) and its majority demographics still have a more rationality-friendly climate overall (due to the caprice of history) than basically any other part of the world? I freely admit my own native culture (though I'm probably thoroughly Westernised by now, due to late-childhood influences of mass media and education) is probably less rational than the Anglo-Saxon one.

However, simply going on a "crusade" to make other cultures more rational first, because they are "clearly" more in need, would send terribly bad signals, invite self-delusion, and is perhaps a bad idea for humanitarian reasons besides.

Sex ratio: There are some differences in aptitude, psychology, and interests that ensure that compsci and mathematics, at least at the higher levels, will remain disproportionately male for the foreseeable future (until human modification takes off). This obviously affects our potential pool of recruits.

Age: People grow more conservative as they age, and LessWrong is, firstly, available only on a relatively new medium and, secondly, takes a novel approach to popularizing rationality. Also, as people age, the mind unfortunately does deteriorate. Very few people have an IQ high enough to master difficult fields before they are 15, and even their interests are somewhat affected by their peers.

I am sure I am rationalizing at least a few of these points. However, I need to ask: is pursuing some popular concept of diversity truly cost-effective at this point? Why did you, for example, not commend LW on its inclusion of non-neurotypicals, who are often excluded in some segments of society? And why do you only bemoan the under-representation of the groups everyone else does; is that really a rational approach? Why don't we study where in the memespace we might find truly valuable perspectives and focus on those? Maybe they overlap with the popular kinds, maybe they don't, but can we really trust popular culture, and especially the standard political discourse, on this?

Comment author: datadataeverywhere 17 September 2010 05:19:36AM *  1 point [-]

If you read my comment, you would have seen that I explicitly assume that we are not under-represented among deaf or gay people.

I smell a whiff of that weird American memplex [...] you know the one that for example uses the word minority to describe women.

If less than 4% of us are women, I am quite willing to call that a minority. Would you prefer me to call them an excluded group?

but God for starters isn't among them

I specifically brought up atheists as a group that we should expect to over-represent. I'm also not hunting for equal-representation among countries, since education obviously ought to make a difference.

There are some differences in aptitude, psychology and interests that ensure that compsci and mathematics, at least at the higher levels will remain disproportionately male

That seems like it ought to get many more boos around here than mentioning the western world as the source of the scientific method. I ascribe differences in those to cultural influences; I don't claim that aptitude isn't a factor, but I don't believe it has been or can easily be measured given the large cultural factors we have.

age

This also doesn't bother me, for reasons similar to yours. As a friend of mine says, "we'll get gay rights by outliving the homophobes".

why do you only bemoan the under-representation of groups everyone else does?

Which groups should I pay more attention to? This is a serious question, since I haven't thought too much about it. I neglect non-neurotypicals because they are overrepresented in my field, so I tend to expect them amongst similar groups.

I wasn't actually intending to bemoan anything with my initial question, I was just curious. I was also shocked when I found out that this is dramatically less diverse than I thought, and less than any other large group I've felt a sort of membership in, but I don't feel like it needs to be demonized for that. I certainly wasn't trying to do that.

Comment author: wedrifid 18 September 2010 06:50:07PM *  2 points [-]

That seems like it ought to get many more boos around here than mentioning the western world as the source of the scientific method. I ascribe differences in those to cultural influences;

Given new evidence from the ongoing discussion I retract my earlier concession. I have the impression that the bottom line preceded the reasoning.

Comment author: datadataeverywhere 18 September 2010 10:22:41PM *  2 points [-]

I expected your statement to get more boos for the same reason that you expected my premise in the other discussion to be assumed because of moral rather than evidence-based reasons. That is, I am used to other members of your species (I very much like that phrasing) taking very strong and sudden positions condemning suggestions of inherent inequality between the sexes, regardless of whether they have a rational basis. I was not trying to boo your statement myself.

That said, I feel like I have legitimate reasons to oppose suggestions that women are inherently weaker in mathematics and related fields. I mentioned one immediately below the passage you quoted. If you insist on supporting that view, I ask that you start doing so by citing evidence, and then we can begin the debate from there. At minimum, I feel like if you are claiming women to be inherently inferior, the burden of proof lies with you.

Edit: fixed typo

Comment author: Will_Newsome 19 September 2010 05:56:35AM 4 points [-]

Mathematical ability is most remarked on at the far right of the bell curve. It is very possible (and there's lots of evidence to support the argument) that women simply have lower variance in mathematical ability. The average is the same. Whether or not 'lower variance' implies 'inherently weaker' is another argument, but it's a silly one.

I'm much too lazy to cite the data, but a quick Duck Duck Go search or maybe Google Scholar search could probably find it. An overview with good references is here.
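The variance point above can be illustrated with a quick back-of-the-envelope calculation. This is a purely hypothetical sketch with made-up parameters (a 100/15 scale and an assumed 10% difference in standard deviation, not measured values), just to show how a modest variance gap translates into a large ratio at the far tail even when the means are identical:

```python
import math

def tail_fraction(mean, sd, cutoff):
    """P(X > cutoff) for a normal distribution, via the complementary error function."""
    z = (cutoff - mean) / sd
    return 0.5 * math.erfc(z / math.sqrt(2))

# Hypothetical parameters: equal means, a 10% difference in standard deviation.
cutoff = 145.0  # three standard deviations above a mean of 100 on a 100/15 scale
p_high_var = tail_fraction(100.0, 15.0, cutoff)   # sd = 15
p_low_var = tail_fraction(100.0, 13.5, cutoff)    # sd = 13.5 (10% lower)
print(p_high_var / p_low_var)
```

Under these assumed numbers the higher-variance group outnumbers the lower-variance one roughly 3:1 beyond the +3-sigma cutoff, despite identical averages; the ratio grows even larger further out in the tail. The real-world parameters are, of course, exactly what is under dispute.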

Comment author: [deleted] 19 September 2010 11:25:06PM 3 points [-]

Is mathematical ability a bell curve?

My own anecdotal experience has been that women are rare in elite math environments, but don't perform worse than the men. That would be consistent with a fat-tailed rather than normal distribution, and also with higher computed variance among women.

Also anecdotal, but it seems that when people come from an education system that privileges math (like Europe or Asia as opposed to the US) the proportion of women who pursue math is higher. In other words, when you can get as much social status by being a poli sci major as a math major, women tend not to do math, but when math is very clearly ranked as the "top" or "most competitive" option throughout most of your educational life, women are much more likely to pursue it.

Comment author: Will_Newsome 20 September 2010 12:06:59AM 4 points [-]

Is mathematical ability a bell curve?

I have no idea; sorry, saying so was bad epistemic hygiene. I thought I'd heard something like that but people often say bell curve when they mean any sort of bell-like distribution.

Also anecdotal, but it seems that when people come from an education system that privileges math (like Europe or Asia as opposed to the US) the proportion of women who pursue math is higher.

I'm left confused as to how to update on this information... I don't know how large such an effect is, nor what the original literature on gender difference says, which means that I don't really know what I'm talking about, and that's not a good place to be. I'll make sure to do more research before making such claims in the future.

Comment author: datadataeverywhere 19 September 2010 06:34:32PM 2 points [-]

I'm not claiming that there aren't systematic differences in position or shape of the distribution of ability. What I'm claiming is that no one has sufficiently proved that these differences are inherent.

I can think of a few plausible non-genetic influences that could reduce variance, but even if none of those come into play, there must be others that are also possibilities. Do you see why I'm placing the burden of proof on you to show that differences are biologically inherent, but also why I believe that this is such a difficult task?

Comment author: wedrifid 19 September 2010 07:03:49PM 1 point [-]

Do you see why I'm placing the burden of proof on you to show that differences are biologically inherent

Either because you don't understand how bayesian evidence works or because you think the question is social political rather than epistemic.

, but also why I believe that this is such a difficult task?

That was the point of making the demand.

You cannot change reality by declaring that other people have 'burdens of proof'. "Everything is cultural" is not a privileged hypothesis.

Comment author: Perplexed 19 September 2010 07:24:33PM 1 point [-]

Do you see why I'm placing the burden of proof on you to show that differences are biologically inherent

Either because you don't understand how bayesian evidence works or because you think the question is social political rather than epistemic.

It might have been marginally more productive to answer "No, I don't see. Would you explain?" But, rather than attempting to other-optimize, I will simply present that request to datadataeverywhere. Why is the placement of "burden" important? With this supplementary question: Do you know of evidence strongly suggesting that different cultural norms might significantly alter the predominant position of the male sex in academic mathematics?

... but also why I believe that this is such a difficult task?

I can certainly see this as a difficult task. For example, we can imagine that fictional rational::Harry Potter and Hermione were both taught as children that it is ok to be smart, but that only Hermione was instructed not to be obnoxiously smart. This dynamic, by itself, would be enough to strongly suppress the number of women who rise to the highest levels in math.

But producing convincing evidence in this area is not an impossible task. For example, we can empirically assess the impact of the above mechanism by comparing the number of bright and very bright men and women who come from different cultural backgrounds.

Rather than simply demanding that your interlocutor show his evidence first, why not go ahead and show yours?

Comment author: datadataeverywhere 20 September 2010 02:47:31AM 1 point [-]

But producing convincing evidence in this area is not an impossible task. For example, we can empirically assess the impact of the above mechanism by comparing the number of bright and very bright men and women who come from different cultural backgrounds.

I agree, and this was what I meant. Distinguishing between nature and nurture, as wedrifid put it, is a difficult but not impossible task.

Why is the placement of "burden" important? With this supplementary question: Do you know of evidence strongly suggesting that different cultural norms might significantly alter the predominant position of the male sex in academic mathematics?

I hope I answered both of these in my comment to wedrifid below. Thank you for bothering to take my question at face value (as a question that requests a response), instead of deciding to answer it with a pointless insult.

Comment author: wedrifid 19 September 2010 04:43:18AM 4 points [-]

If you insist on supporting that view

Absolutely not. In general people overestimate the importance of 'intrinsic talent' on anything. The primary heritable component of success in just about anything is motivation. Either g or height comes second depending on the field.

Comment author: datadataeverywhere 19 September 2010 05:13:42AM 2 points [-]

I agree. I think it is quite obvious that ability is always somewhat heritable (otherwise we could raise our pets as humans), but this effect is usually minimal enough to not be evident behind the screen of either random or environmental differences. I think this applies to motivation as well!

And that was really what my claim was; anyone who claims that women are inherently less able in mathematics has to prove that any measurable effect is distinguishable from and not caused by cultural factors that propel fewer women to have interest in mathematics.

Comment author: wedrifid 19 September 2010 05:20:18AM 1 point [-]

I think this applies to motivation as well!

It doesn't. (Unfortunately.)

Comment author: datadataeverywhere 19 September 2010 05:29:06AM *  1 point [-]

Am I misunderstanding, or are you claiming that motivation is purely an inherited trait? I can't possibly agree with that, and I think even simple experiments are enough to disprove that claim.

Comment author: wedrifid 19 September 2010 08:42:57AM 3 points [-]

Am I misunderstanding, or are you claiming that motivation is purely an inherited trait?

Misunderstanding. Expanding the context slightly:

I agree. I think it is quite obvious that ability is always somewhat heritable (otherwise we could raise our pets as humans), but this effect is usually minimal enough to not be evident behind the screen of either random or environmental differences. I think this applies to motivation as well!

It doesn't. (Unfortunately.)

When it comes to motivation, the differences between people are not trivial. When it comes to the particular instance of differences between the sexes, there are powerful differences in motivating influences. Most human motives are related to sexual signalling and gaining social status. The optimal actions to achieve these goals are significantly different for males and females, which is reflected in which things are the most motivating. It most definitely should not be assumed that motivational differences are purely cultural; it would be astonishing if they were.

Comment author: [deleted] 18 September 2010 06:40:01PM *  2 points [-]

I neglect non-neurotypicals because they are overrepresented in my field, so I tend to expect them amongst similar groups.

How do you know that non-neurotypicals aren't over- or under-represented on Less Wrong, in the same way that you know the groups you bemoan as lacking are under-represented relative to your field?

Is it just because being neurotypical is harder to measure and define? I concede that measuring who is a woman or a man, or who is considered black and who is considered Asian, is in the average case easier than measuring who is neurotypical. But when it comes to definitions, those concepts seem to be in the same order of magnitude of fuzziness as being neurotypical (sex a little less, race a bit more).

Also, you previously established that you don't want to compare Less Wrong's diversity to the entire population of the world. I'm going to tentatively assume that you also accept that academic background will affect whether people can grasp, or are interested in learning, certain key concepts needed to participate.

My question now is: why don't we crunch the numbers instead of yelling "too many!", "too few!", or "just right!"? We know from which countries and in what numbers visitors come, we know the educational distributions in most of them, and we know how large a fraction of this group is proficient enough in English to participate meaningfully on Less Wrong.

This is ignoring the fact that the only data we have on sex or race is a simple self reported poll and our general impression.

But if we crunch the numbers and the probability densities end up looking pretty similar given the best data we can find, why shouldn't the burden of proof fall on the one proposing policy or action, to show that we are indeed wasting potential on Less Wrong and that their proposal would improve our odds of progressing towards becoming more rational? And if we are promoting our members' values, even when they aren't neutral or positive towards reaching our objectives, why don't we spell them out, as long as they truly are common? I'm certain there are a few: perhaps the value of life and existence (though these have been questioned and debated here too), or perhaps some utilitarian principles.

But how do we know that any position people take would really reflect their values and wouldn't just be status signalling? Heck, many people who profess that their values include (or don't include) a certain inherent "goodness" to existence probably do so for signalling reasons and would quickly change their minds in a different situation!

Not even mentioning the general effect of the mindkiller.

But like I have stated before, there are certainly many spaces where we can optimize the stated goal by outreach. This is why I think this debate should continue but with a slightly different spirit. More in line with, to paraphrase you:

Which groups should we pay more attention to? This is a serious question, since we haven't thought too much about it.

Comment author: [deleted] 18 September 2010 06:39:42PM *  2 points [-]

I ascribe differences in those to cultural influences; I don't claim that aptitude isn't a factor, but I don't believe it has been or can easily be measured given the large cultural factors we have.

But if we can't measure the cultural factors and account for them, why presume a blank-slate approach? Especially since there is sexual dimorphism in the nervous and endocrine systems themselves.

I think you got stuck on the aptitude point. To elaborate: considering that humans aren't a very sexually dimorphic species (there are near relatives that are less so, however; gibbons, for example), I'm pretty sure the mean g (if such a thing exists) of men and women is probably about the same. There are, however, other aspects of succeeding at compsci or math than general intelligence.

Assuming that men and women carrying exactly the same memes will respond, on average, identically to identical situations is an extraordinary claim. I'm struggling to come up with an evolutionary model that would square this with what is known (for example, the greater historical reproductive success of the average woman vs. the average man, which we can read from the distribution of genes). If I were presented with empirical evidence, then this would be just too bad for the models; but in the absence of meaningful measurement (by your account), why not assign greater probability to the outcome predicted by the same models that work so well when tested against other empirical claims?

I would venture to state that this case is especially strong for preferences.

And if you are trying to fine-tune the situations and memes presented to each gender so as to balance this, how can one demonstrate that this isn't a step away from, rather than toward, improving Pareto efficiency? And if it's not, why proceed with it?

Also to admit a personal bias I just aesthetically prefer equal treatment whenever pragmatic concerns don't trump it.

Comment author: lmnop 18 September 2010 07:41:09PM *  4 points [-]

But if we can't measure the cultural factors and account for them

We can't directly measure them, but we can get an idea of how large they are and how they work.

For example, consider the gender difference in empathic abilities. While women score higher on empathy on self-report tests, the difference is much smaller on direct tests of ability, and often nonexistent on tests of ability where participants aren't told that it's empathy being tested. And then there's the motivation of seeming empathetic. One of the best empathy tests I've read about is Ickes', which worked like this: two participants meet in a room and have a brief conversation, which is taped. Then they go into separate rooms and the tape is played back to them twice. The first time, they jot down the times at which they remember feeling various emotions. The second time, they jot down the times at which they think their partner was feeling an emotion, and what it was. Then the records are compared, and each participant receives an accuracy score. When the test is run like this, there is no difference in ability between men and women. However, a difference emerges when another factor is added: each participant is asked to write a "confidence level" for each prediction they make. In that procedure, women score better, presumably because their desire to appear empathetic (write down higher confidence levels) causes them to put more effort into the task. But where do desires to appear a certain way come from? At least partly from cultural factors that dictate how each gender is supposed to appear. This is probably the same reason why women are overconfident, relative to men, in self-reporting their empathic abilities.

The same applies to math. Among women and men with the same math ability as scored on tests, women will rate their own abilities much lower than the men do. Since people do what they think they'll be good at, this will likely affect how much time these people spend on math in future, and the future abilities they acquire.

And then there's priming. Asian American women do better on math tests when primed with their race (by filling in a "race" bubble at the top of the test) than when primed with their gender (by filling in a "sex" bubble). More subtly, priming affects people's implicit attitudes towards gender-stereotyped domains too. People are often primed about their gender in real life, each time affecting their actions a little, which over time will add up to significant differences in the paths they choose in life in addition to that which is caused by innate gender differences. Right now we don't have enough information to say how much is caused by each, but I don't see why we can't make more headway into this in the future.

Comment author: [deleted] 18 September 2010 05:51:43PM *  2 points [-]

If less than 4% of us are women, I am quite willing to call that a minority. Would you prefer me to call them an excluded group?

I'm talking about the Western memplex whose members use the word minority when describing women in general society, even though women represent a clear numerical majority.

I suspected that you were using the word minority in that sense, rather than in the more clearly defined sense of a numerical minority.

Sometimes when talking about groups we can avoid discussing which meaning of the word we are employing.

Example: Discussing the repression of the Mayan minority in Mexico.

While other times we can't do this.

Example: Discussing the history and current relationship between the Arab upper class minority and slavery in Mauritania.

This (age) also doesn't bother me, for reasons similar to yours.

Ah, apologies I see I carried it over from here:

How diverse is Less Wrong? I am under the impression that we disproportionately consist of 20-35 year old white males, more disproportionately on some axes than on others.

You explicitly state later that you are particularly interested in this axis of diversity

However, if we are predominately white males, why are we?

Perhaps this would be more manageable if we looked at each of the axes of variability you raise independently, as far as that is possible? Again, this is why I was previously confused by the phrase "groups we usually consider adding diversity": are there certain groups that are inherently associated with the word diversity? Are we using the word diversity to mean something like "proportionate representation of certain kinds of people in all groups", or are we using it in line with "infinite diversity in infinite combinations", where if you create a mix of 1 part people A and 4 parts people B, and have it coexist and cooperate with another mix of 2 parts people A and 3 parts people B, where previously all groups were of the first kind, you create a kind of meta-diversity (using the word diversity in its politically charged meaning)?

I specifically brought up atheists as a group that we should expect to over-represent. I'm also not hunting for equal-representation among countries, since education obviously ought to make a difference.

Then why are you hunting for equal representation on LW between different groups united by a space as arbitrary as one defined by borders?

mentioning the western world as the source of the scientific method.

While many important components of the modern scientific method did originate among scholars in Persia and Iraq in the medieval era, its development over the past 700 years has been disproportionately seen in Europe and later its colonies. I would argue its adoption was part of the reason for the later (let's say the last 300 years) technological superiority of the West.

Edit: I wrote up quite a long wall of text. I'm just going to split it into a few posts, both to make it more readable and to give me a better sense of what is getting up- or downvoted based on its merits or lack thereof.

Comment author: timtyler 09 September 2010 08:12:51PM *  8 points [-]

How diverse is Less Wrong?

You may want to check the survey results.

Comment author: Relsqui 16 September 2010 09:38:28PM 1 point [-]

Thank you; that was one of the things I'd come to this thread to ask about.

Comment author: datadataeverywhere 09 September 2010 09:19:55PM 2 points [-]

Thank you very much. I looked for but failed to find this when I went to write my post. I had intended to start with actual numbers, assuming that someone had previously asked the question. The rest is interesting as well.

Comment author: gwern 09 September 2010 04:05:23PM *  9 points [-]

This sounds like the same question as why are there so few top-notch women in STEM fields, why there are so few women listed in Human Accomplishment's indices*, why so few non-whites or non-Asians score 5 on AP Physics, why...

In other words, here be dragons.

* just Lady Murasaki, if you were curious. It would be very amusing to read a review of The Tale of Genji by Eliezer or a LWer. My own reaction by the end was horror.

Comment author: datadataeverywhere 09 September 2010 04:26:32PM 3 points [-]

That's absolutely true. I've worked for two US National Labs, and both were monocultures. At my first job, the only woman in my group (20 or so) was the administrative assistant. At my second, the numbers were better, but at both, there were literally no non-whites in my immediate area. The inability to hire non-citizens contributes to the problem---I worked for Microsoft as well, and all the non-whites were foreign citizens---but it's not as if there aren't any women in the US!

It is a nearly intractable problem, and I think I understand it fairly well, but I would very much like to hear the opinion of LWers. My employers have always been very eager to hire women and minorities, but the numbers coming out of computer science programs are abysmal. At Less Wrong, a B.S. or M.S. in a specific field is not a barrier to entry, so our numbers should be slightly better. On the other hand, I have no idea how to go about improving them.

The Tale of Genji has gone on my list of books to read. Thanks!

Comment author: gwern 09 September 2010 04:58:40PM *  6 points [-]

At Less Wrong, a B.S. or M.S. in a specific field is not a barrier to entry, so our numbers should be slightly better.

Yes, but we are even more extreme in some respects; many CS/philosophy/neurology/etc. majors reject the Strong AI Thesis (I've asked), while it is practically one of our dogmas.

The Tale of Genji has gone on my list of books to read. Thanks!

I realize that I was a bit of a tease there. It's somewhat off topic, but I'll include (some of) the hasty comments I wrote down immediately upon finishing:

The prevalence of poems & puns is quite remarkable. It is also remarkable how tired they all feel; in Genji, poetry has lost its magic and has simply become another stereotyped form of communication, as codified as a letter to the editor or small talk. I feel fortunate that my introductions to Japanese poetry have usually been small anthologies of the greatest poets; had I first encountered court poetry through Genji, I would have been disgusted by the mawkish sentimentality & repetition.

The gender dynamics are remarkable. Toward the end, one of the two then-main characters becomes frustrated and casually has sex with a serving lady; it's mentioned that he liked sex with her better than with any of the other servants. Much earlier in Genji (it's a good thousand pages, remember), Genji simply rapes a woman, and the central female protagonist, Murasaki, is kidnapped as a girl, and he marries her while she is still what we would consider a child. (I forget whether Genji sexually molests her before the pro forma marriage.) This may be a matter of non-relativistic moral appraisal, but I get the impression that in matters of sexual fidelity, rape, and children, Heian-era morals were not much different from my own, which makes the general impunity all the more remarkable. (This is the 'shining' Genji?) The double standards are countless.

The power dynamics are equally remarkable. Essentially every speaking character is nobility, low or high, or Buddhist clergy (and very likely nobility anyway). The characters spend next to no time on 'work' like running the country, despite many main characters ranking high in the hierarchy and holding ministerial ranks; the Emperor in particular does nothing except party. All the households spend money like mad, and just expect their land-holdings to send in the cash. (It is a signal of their poverty that the Uji household ever even mentions how much less money is coming from their lands than used to.) The Buddhist clergy are remarkably greedy & worldly; after the death of the father of the Uji household, the abbot of the monastery he favored sends the grief-stricken sisters a note - which I found remarkably crass - reminding them that he wants the customary gifts of valuable textiles.

The medicinal practices are utterly horrifying. They seem to consist, one and all, of the following algorithm: 'while sick, pay priests to chant.' If chanting doesn't work, hire more priests. (One remarkable freethinker suggests that a sick woman eat more food.) Chanting is, at least, not outright harmful like bloodletting, but it's still sickening to read through dozens of people dying amidst chanting. In comparison, the bizarre superstitions that guide many characters' activities (trapping them in their houses on inauspicious days) are practically unobjectionable.

Comment author: NancyLebovitz 09 September 2010 04:41:08PM *  6 points [-]

I've been thinking that there are parallels between building FAI and Talmud-- it's an effort to manage an extremely dangerous, uncommunicative entity through deduction. (An FAI may be communicative to some extent. An FAI which hasn't been built yet doesn't communicate.)

Being an atheist doesn't eliminate cultural influence. Survey for atheists: which God do you especially not believe in?

I was talking about FAI with Gene Treadwell, who's black. He was quite concerned that the FAI would be sentient, but owned and controlled.

This doesn't mean that either Eliezer or Gene are wrong (or right for that matter), but it suggests to me that culture gives defaults which might be strong attractors. [1]

He recommended recruiting Japanese members, since they're more apt to like and trust robots.

I don't know about explaining ourselves, but we may need more angles on the problem just to be able to do the work.

[1] See also Timothy Leary's S.M.I.2L.E. -- Space Migration, Increased Intelligence, Life Extension. Robert Anton Wilson said that it was a match for Catholic hopes of going to heaven, being transfigured, and living forever.

Comment author: [deleted] 16 September 2010 06:50:35PM *  3 points [-]

He recommended recruiting Japanese members, since they're more apt to like and trust robots.

He has a very good point. I was surprised that more Japanese or Koreans hadn't made their way to Less Wrong. This was my motivation for first proposing that we recruit translators for Japanese and Chinese and begin working towards the goal of making at least the Sequences available in many languages.

Not being a native speaker of English proved a significant barrier for me in some respects. The first noticeable one was spelling; however, I solved that problem by outsourcing this part of the system known as Konkvistador to the browser. ;) Other, more insidious forms of miscommunication and cultural difficulty persist.

Comment author: Wei_Dai 18 September 2010 07:48:05PM 3 points [-]

I'm not sure that it's a language thing. I think many (most?) college-educated Japanese, Koreans, and Chinese can read and write in English. We also seem to have more Russian LWers than Japanese, Koreans, and Chinese combined.

According to a page gwern linked to in another branch of the thread, among those who got 5 on AP Physics C in 2008, 62.0% were White and 28.3% were Asian. But according to the LW survey, only 3.8% of respondents were Asian.

Maybe there is something about Asian cultures that makes them less overtly interested in rationality, but I don't have any good ideas what it might be.

Comment author: Vladimir_Nesov 19 September 2010 08:06:56AM 1 point [-]

I'm not sure that it's a language thing. I think many (most?) college-educated Japanese, Koreans, and Chinese can read and write in English. We also seem to have more Russian LWers than Japanese, Koreans, and Chinese combined.

All LW users display near-native control of English, which won't be as universal, and which typically requires years-long consumption of English content. The English-speaking world is the default source of non-Russian content for Russians, but that might not be the case for native Asians (what's your impression?)

Comment author: Wei_Dai 20 September 2010 06:33:38PM 2 points [-]

My impression is that for most native Asians, the English-speaking world is also their default source of non-native-language content. I have some relatives in China, and to the extent they do consume non-Chinese content, they consume English content. None of them consume enough of it to obtain near-native control of English though.

I'm curious, what kind of English content did you consume before you came across OB/LW? How typical do you think that level of consumption is in Russia?

Comment author: Perplexed 16 September 2010 06:55:55PM 1 point [-]

Unfortunately, browser spell checkers usually can't help you to spell your own name correctly. ;) That is one advantage to my choice of nym.

Comment author: cousin_it 09 September 2010 04:20:20PM *  6 points [-]

However, if we are predominately white males, why are we?

Ignoring the obviously political issue of "concern", it's fun to consider this question on a purely intellectual level. If you're a white male, why are you? Is the anthropic answer ("just because") sufficient? At what size of group does it cease to be sufficient? I don't know the actual answer. Some people think that asking "why am I me" is inherently meaningless, but for me personally, this doesn't dissolve the mystery.

Comment author: datadataeverywhere 09 September 2010 04:30:23PM 4 points [-]

The flippant answer is that a group size of 1 lacks statistical significance; at some group size, that ceases to be the case.

I asked not from a political perspective. In arguments about diversity, political correctness often dominates. I am actually interested in, among other things, whether a lack of diversity is a functional impairment for a group. I feel strongly that it is, but I can't back up that claim with evidence strong enough to match my belief. For a group such as Less Wrong, I have to ask what we miss due to a lack of diversity.
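To put the significance point in concrete terms, here's a quick back-of-the-envelope sketch with an entirely made-up base rate (I have no survey data on the actual recruiting pool; the 30% figure below is purely illustrative):

```python
from math import comb

def binom_sf(k, n, p):
    """P(X >= k) for X ~ Binomial(n, p), by direct summation."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Hypothetical base rate: suppose 30% of the relevant pool is white
# and male. How surprising is a group that is *entirely* white male,
# as a function of group size?
base_rate = 0.30
for n in [1, 5, 10, 50]:
    p_all = binom_sf(n, n, base_rate)  # P(all n members are white male)
    print(f"group size {n:3d}: P(all white male) = {p_all:.2e}")
# At n=1 this is just the base rate (0.30) - no evidence of anything.
# At n=50 it's 0.3**50, about 7e-27 - crying out for an explanation.
```

So a group of one tells you nothing, while a homogeneous group of fifty is a huge anomaly under the null hypothesis - which is exactly where the selection-effect question becomes interesting.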

Comment author: cousin_it 09 September 2010 04:45:48PM *  5 points [-]

The flippant answer is that a group size of 1 lacks statistical significance; at some group size, that ceases to be the case.

The flippant answer to your answer is that you didn't pick LW randomly out of the set of all groups. The fact that you, a white male, consistently choose to join groups composed mostly of white males - and then inquire about diversity - could have any number of anthropic explanations from your perspective :-) In the end it seems to loop back into why are you, you again.

ETA: apparently datadataeverywhere is female.

Comment author: Perplexed 09 September 2010 04:35:54PM 4 points [-]

I generally agree with your assessment. But I think there may be more East and South Asians than you think, more 36-80s and more 15-19s too. I have no reason to think we are underrepresented in gays or in deaf people.

My general impression is that women are not made welcome here - the level of overt sexism is incredibly high for a community that tends to frown on chest-beating. But perhaps the women should speak for themselves on that subject. Or not. Discussions on this subject tend to be uncomfortable. Sometimes it seems that the only good they do is to flush some of the more egregious sexists out of the closet.

Comment author: timtyler 09 September 2010 08:09:57PM *  2 points [-]

But perhaps the women should speak for themselves on that subject.

We have already had quite a lot of that.

Comment author: Perplexed 09 September 2010 08:44:56PM 2 points [-]

OMG! A whole top-level-posting. And not much more than a year ago. I didn't know. Well, that shows that you guys (and gals) have said all that could possibly need to be said regarding that subject. ;)

But thx for the link.

Comment author: timtyler 09 September 2010 08:48:13PM *  1 point [-]

It does have about 100 pages of comments. Consider also the "links to followup posts" in line 4 of that article. It all seemed to go on forever - but maybe that was just me.

Comment author: Perplexed 09 September 2010 08:54:39PM 2 points [-]

Ok. Well, it is on my reading list now. Again, thx.

Comment author: Emile 09 September 2010 04:25:55PM 2 points [-]

There's nothing about being white, or female, or hispanic, or deaf, or gay that prevents one from being a rationalist.

I may be wrong, but I don't expect the proportion of gays in LessWrong to be very different from the proportion in the population at large.

Comment author: thomblake 16 September 2010 08:00:03PM 5 points [-]

I may be wrong, but I don't expect the proportion of gays in LessWrong to be very different from the proportion in the population at large.

My vague impression is that the proportion of people here with sexual orientations that are not in the majority in the population is higher than that of such people in the population.

This is probably explained completely by LW's tendency to attract <strike>weirdos</strike> people who are willing to question orthodoxy.

Comment author: Perplexed 09 September 2010 03:26:45AM 1 point [-]

Wow! I just lost 50 points of karma in 15 minutes. I haven't made any top level posts, so it didn't happen there. I wonder where? I guess I already know why.

Comment author: Perplexed 12 September 2010 07:25:43PM 1 point [-]

And now my karma has jumped by more than 300 points! WTF? I'm pretty sure this time that someone went through my comments systematically upvoting. If that was someone's way of saying "thank you" ... well ... you are welcome, I guess. But isn't that a bit much?

Comment author: RobinZ 09 September 2010 03:49:14AM 3 points [-]

While katydee's story is possible (and probable, even), it is also possible that someone is catching up on their Less Wrong reading for a substantial recent period and issuing many votes (up and down) in that period. Some people read Less Wrong in bursts, and some of those are willing to lay down many downvotes in a row.

Comment author: katydee 09 September 2010 03:43:54AM 2 points [-]

It is possible that someone has gone through your old comments and systematically downvoted them-- I believe pjeby reported that happening to him at one point.

In the interest of full disclosure, I have downvoted you twice in the last half hour and upvoted you once. It's possible that fifty other people think like me, but if so you should have very negative karma on some posts and very positive karma on others, which doesn't appear to be the case.

Comment author: Perplexed 09 September 2010 03:55:27AM 2 points [-]

I think you are right about the systematic downvoting. I've noticed and not minded the downvotes on my recent controversial postings. No hard feelings. In fact, no real hard feelings toward whoever gave me the big hit - they are certainly within their rights and I am certainly currently being a bit of an obnoxious bastard.

Comment author: jacob_cannell 09 September 2010 07:04:04AM 1 point [-]

That happened to me three days ago or so, after my last top level post. At the time said post was at -6 or so, and my karma was at 60-something. Then, within a space of < 10 minutes, my karma dropped to zero (actually I think it went substantially negative). So what is interesting to me is the timing.

I refresh or click on links pretty quickly. It felt like my karma dropped by more than 50 points instantly (as if someone had dropped my karma in one hit), rather than someone or a number of people 'tracking me'.

However, I could be mistaken, and I'm not certain I wasn't away from my computer for 10 minutes or something. Is there some way for high karma people to adjust someone's karma? Seems like it would be useful for troll control.

Comment author: gwern 08 September 2010 01:07:52PM *  5 points [-]

Relevant to our akrasia articles:

If obese individuals have time-inconsistent preferences then commitment mechanisms, such as personal gambles, should help them restrain their short-term impulses and lose weight. Correspondence with the bettors confirms that this is their primary motivation. However, it appears that the bettors in our sample are not particularly skilled at choosing effective commitment mechanisms. Despite payoffs of as high as $7350, approximately 80% of people who spend money to bet on their own behaviour end up losing their bets.

http://www.marginalrevolution.com/marginalrevolution/2010/09/should-you-bet-on-your-own-ability-to-lose-weight.html

Comment author: Sniffnoy 08 September 2010 08:29:39PM 1 point [-]

I recall someone claiming here earlier that they could do anything if they bet they could, though I can't find it right now. Useful to have some more explicit evidence about that.

Comment author: DSimon 07 September 2010 08:28:02PM *  13 points [-]

I'm interested in video game design and game design in general, and also in raising the rationality waterline. I'd like to combine these two interests: to create a rationality-focused game that is entertaining or interesting enough to become popular outside our clique, but that can also effectively teach a genuinely useful skill to players.

I imagine that it would consist of one or more problems which the player would have to be rational in some particular way to solve. The problem has to be:

  • Interesting: The prospect of having to tackle the problem should excite the player. Very abstract or dry problems would not work; very low-interaction problems wouldn't work either, even if cleverly presented (e.g. you could do Newcomb's problem as a game with plenty of lovely art and window dressing... but the game itself would still only be a single binary choice, which would quickly bore the player).

  • Dramatic in outcome: The difference between success and failure should be great. A problem in which being rational gets you 10 points but acting typically gets you 8 points would not work; the advantage of applying rationality needs to be very noticeable.

  • Not rigged (or not obviously so): The player shouldn't have the feeling that the game is designed to directly reward rationality (even though it is, in a sense). The player should think that they are solving a general problem with rationality as their asset.

  • Not allegorical: I don't want to raise any likely mind-killing associations in the player's mind, like politics or religion. The problem they are solving should be allegorical to real world problems, but to a general class of problems, not to any specific problems that will raise hackles and defeat the educational purpose of the game.

  • Surprising: The rationality technique being taught should not be immediately obvious to an untrained player. A typical first session should involve the player first trying an irrational method, seeing how it fails, and then eventually working their way up to a rational method that works.

A lot of the rationality-related games that people bring up fail some of these criteria. Zendo, for example, is not "dramatic in outcome" enough for my taste. Avoiding confirmation bias and understanding something about experimental design makes one a better Zendo player... but in my experience not as much as just developing a quick eye for pattern recognition and being able to read the master's actions.

Anyone here have any suggestions for possible game designs?

Comment author: khafra 09 September 2010 11:01:37AM *  5 points [-]

I'm not sure if transformice counts as a rationalist game, but it appears to be a bunch of multiplayer coordination problems, and the results seem to support ciphergoth's conjecture on intelligence levels.

Comment author: Emile 09 September 2010 12:03:46PM 2 points [-]

Transformice is awesome :D A game hasn't made me laugh that much for a long time.

And it's about interesting, human things, like crowd behaviour and trusting the "leader" and being thrust in a position of responsibility without really knowing what to do ... oh, and everybody dying in funny ways.

Comment author: Emile 08 September 2010 10:29:11PM 8 points [-]

Note also the Wiki page, with links to previous threads (I just discovered it, and I don't think I had noticed the previous threads. This one seems better!)

One interesting game topic could be building an AI. Make it look like a nice and cutesy adventure game, with possibly some little puzzles, but once you flip the switch, if you didn't get absolutely everything exactly right, the universe is tiled with paperclips/tiny smiley faces/tiny copies of Eliezer Yudkowsky. That's more about SIAI propaganda than rationality though.

One interesting thing would be to exploit the conventions of video games but make actual winning require seeing through those conventions. For example, have a score, and certain actions give you points, with nice shiny feedback and satisfying "shling!" sounds, while some actions that are vitally important get no feedback at all.

For example (to keep with the "build an AI" example), say you can hire scientists, and each scientist's profile page lists plenty of impressive certifications (stats like "experiment design", "analysis", "public speaking", etc.), plus some filler text about what they did their thesis on and boring stuff like that (think: stats get big icons and sit at the top; the filler text looks like boring background text). Once you've hired the scientists, you get various bonuses (money, prestige points, experiments), but the only one of those factors that matters at the end of the game is whether a scientist is "not stupid", and the only way to tell that is from various tell-tale signs of "stupid" in the "boring" filler text - things like also having a degree in theology, or having published a paper on homeopathy ... stuff that would indeed be a bad sign for a scientist, but that nothing in the game ever tells you is bad.

So basically the idea would be that the rules of the game you're really playing wouldn't be the ones you would think at first glance, which is a pretty good metaphor for real life too.

It needs to be well-designed enough so that it's not "guessing the programmer's password", but that should be possible.

Making a game around experiment design would be interesting too - have some kind of physics / chemistry / biology system that obeys some rules (mostly about transformations, not "real" physics with motion and collisions etc.), have game mechanics that allow something like experimentation, and have a general context (the feedback you get, what other characters say, what you can buy) that points toward a slightly wrong understanding of reality. This is bouncing off Silas's ideas - things that people say are good for you may not really be so, etc.

Here again, you can exploit the conventions of video games to mislead the player. For example, red creatures like eating red things, blue creatures like eating blue things, etc. - but the rule doesn't always hold.

Comment author: PeerInfinity 09 September 2010 05:23:22PM 2 points [-]

"once you flip the switch, if you didn't get absolutely everything exactly right, the universe is tiled with paperclips/tiny smiley faces/tiny copies of Eliezer Yudkowsky."

See also: The Friendly AI Critical Failure Table

And I think all of the other suggestions you made in this comment would make an awesome game! :D

Comment author: DSimon 08 September 2010 10:55:28PM *  4 points [-]

Here again, you can exploit the conventions of video games to mislead the player.

I think this is a great idea. Gamers know lots of things about video games, and they know them very thoroughly. They're used to games that follow these conventions, and they're also (lately) used to games that deliberately avert or meta-comment on these conventions for effect (i.e. Achievement Unlocked), but there aren't too many games I know of that set up convincingly normal conventions only to reveal that the player's understanding is flawed.

Eternal Darkness did a few things in this area. For example, if your character's sanity level was low, you the player might start having unexpected troubles with the interface, e.g. the game would refuse to save on the grounds that "It's not safe to save here", the game would pretend that it was just a demo of the full game, the game would try to convince you that you accidentally muted the television (though the screaming sound effects would still continue), and so on. It's too bad that those effects, fun as they were, were (a) very strongly telegraphed beforehand, and (b) used only for momentary hallucinations, not to indicate that the original understanding the player had was actually the incorrect one.

Comment author: Raemon 09 September 2010 02:39:21AM 17 points [-]

The problem is that, simply put, such games generally fail on the "fun" meter.

There is a game called "The Void," which begins with the player dying and going to a limbo like place ("The Void"). The game basically consists of you learning the rules of the Void and figuring out how to survive. At first it looks like a first person shooter, but if you play it as a first person shooter you will lose. Then it sort of looks like an RPG. If you play it as an RPG you will also lose. Then you realize it's a horror game. Which is true. But knowing that doesn't actually help you to win. What you eventually have to realize is that it's a First Person Resource Management game. Like, you're playing StarCraft from first person as a worker unit. Sort of.

The world has a very limited resource (Colour) and you must harvest, invest and utilize Colour to solve all your problems. If you waste any, you will probably die, but you won't realize that for hours after you made the initial mistake.

Every NPC in the game will tell you things about how the world works, and every one of those NPCs (including your initial tutorial) is lying to you about at least one thing.

The game is filled with awesome flavor, and a lot of awesome mechanics. (Specifically mechanics I had imagined independently and wanted to make my own game regarding). It looked to me like one of the coolest sounding games ever. And it was amazingly NOT FUN AT ALL for the first four hours of play. I stuck with it anyway, if for no other reason than to figure out how a game with such awesome ideas could turn out so badly. Eventually I learned how to play, and while it never became fun it did become beautiful and poignant and it's now one of my favorite games ever. But most people do not stick with something they don't like for four hours.

Toying with players' expectations sounds cool to the people who understand how the toying works, but is rarely fun for the players themselves. I don't think that's an insurmountable obstacle, but if you're going to attempt to do this, you need to really fathom how hard it is to work around. Most games telegraph everything for a reason.

Comment author: Emile 09 September 2010 02:52:35PM 1 point [-]

Huh, sounds very interesting! So my awesome game concept would give rise to a lame game, eh?

*updates*

I hadn't heard of that game, I might try it out. I'm actually surprised a game like that was made and commercially published.

Comment author: NihilCredo 25 September 2010 10:29:33PM *  2 points [-]

It was made by a Russian developer which is better known for its previous effort, Pathologic, a somewhat more classical first-person adventure game (albeit very weird and beautiful, with artistic echoes from Brecht to Dostoevskij), but with a similar problem of being murderously hard and deceptive - starving to death is quite common. Nevertheless, in Russia Pathologic had acceptable sales and excellent critical reviews, which is why Ice-Pick Lodge could go on with a second project.

Comment author: Raemon 09 September 2010 11:19:22PM *  4 points [-]

It's a good game, just with a very narrow target audience. (This site is probably a good place to find players who will get something out of it, since you have higher than average percentages of people willing to take a lot of time to think about and explore a cerebral game).

Some specific lessons I'd draw from that game and apply here:

  1. Don't penalize failure too hard. The Void's single biggest issue (for me) is that even when you know what you're doing you'll need to experiment and every failure ends with death (often hours after the failure). I reached a point where every time I made even a minor failure I immediately loaded a saved game. If the purpose is to experiment, build the experimentation into the game so you can try again without much penalty (or make the penalty something that is merely psychological instead of an actual hampering of your ability to play the game.)

  2. Don't expect players to figure things out without help. There's a difference between a game that teaches people to be rational and a game that simply causes non-rational people to quit in frustration. Whenever there's a rational technique you want people to use, spell it out. Clearly. Over and over (because they'll miss it the first time).

The Void actually spells out everything as best they can, but the game still drives players away because the mechanics are simply unlike any other game out there. Most games rely on an extensive vocabulary of skills that players have built up over years, and thus each instruction only needs to be repeated once to remind you of what you're supposed to be doing. The Void repeats instructions maybe once or twice, and it simply isn't enough to clarify what's actually going on. (The thing where NPCs lie to you isn't even relevant till the second half of the game. By the time you get to that part you've either accepted how weird the game is or you've quit already).

My sense is that the best approach would be to start with a relatively normal (mechanics-wise) game, and then have NPCs that each encourage specific applications of rationality, but each of which has a rather narrow mindset and so may give bad advice for specific situations. But your "main" friend continuously reminds you to notice when you are confused, and consider which of your assumptions may be wrong. (Your main friend will eventually turn out to be wrong/lying/unhelpful about something, but only the once and only towards the end when you've built up the skills necessary to figure it out).

Huh, sounds very interesting! So my awesome game concept would give rise to a lame game, eh?

This was my experience with the Void exactly. Basically all the mechanics and flavors were things I had come up with on my own and wanted to make games out of, and I'm really glad I played the Void first, because I might have wasted a huge chunk of time making a really bad game if I hadn't gotten to learn from their mistakes.

Comment author: Emile 08 September 2010 11:21:35PM 1 point [-]

Riffing off my weird biology / chemistry thing: a game based on the breeding of weird creatures, by humans freshly arrived on the planet (add some dimensional travel if you want to justify weird chemistry - I'm thinking of Tryslmaistan).

The catch is (spoiler warning!), the humans got the wrong rules for creature breeding, and some plantcrystalthingy they think is the creatures' food is actually part of their reproduction cycle, where some essential "genetic" information passes.

And most of the things that look like in-game help and tutorials are actually wrong, and based on a model that's more complicated than the real one (it's just a model that's closer to earth biology).

Comment author: SilasBarta 07 September 2010 10:11:35PM 8 points [-]

Here's an idea I've had for a while: Make it seem, at first, like a regular RPG, but here's the kicker -- the mystical, magic potions don't actually do anything that's indistinguishable from chance.

(For example, you might have some herb combination that "restores HP", but whenever you use it, you strangely lose HP that more than cancels what it gave you. If you think this would be too obvious, rot13: In the game Earthbound, bar vgrz lbh trg vf gur Pnfrl Wbarf ong, naq vgf fgngf fnl gung vg'f ernyyl cbjreshy, ohg vg pna gnxr lbh n ybat gvzr gb ernyvmr gung vg uvgf fb eneryl gb or hfryrff.)

Set it in an environment like 17th-century England where you have access to the chemicals and astronomical observations they did (but give them fake names to avoid tipping off users, e.g., metallia instead of mercury/quicksilver), and are in the presence of a lot of thinkers working off of astrological and alchemical theories. Some would suggest stupid experiments ("extract aurum from urine -- they're both yellow!") while others would have better ideas.

To advance, you have to figure out the laws governing these things (which would be isomorphic to real science) and put this knowledge to practical use. The insights that had to be made back then are far removed from the clean scientific laws we have now, so it would be tough.

It would take a lot of work to e.g. make it fun to discover how to use stars to navigate, but I'm sure it could be done.
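To make the "indistinguishable from chance" trick concrete, here's a minimal sketch with entirely made-up combat rules (not from any actual game): the potion visibly heals, but a hidden drain cancels it, so only aggregated experiments reveal that it does nothing.

```python
import random

def fight(use_potion, rng):
    """One combat encounter under made-up rules: start at 50 HP and
    take 10 rounds of 0-9 damage. The 'healing' potion visibly
    restores 15 HP but silently drains it back over the fight."""
    hp = 50
    if use_potion:
        hp += 15          # the effect the player sees on screen
        hp -= 15          # the hidden cancelling drain
    for _ in range(10):
        hp -= rng.randrange(10)
    return hp > 0         # survived?

rng = random.Random(0)
trials = 2000
with_potion = sum(fight(True, rng) for _ in range(trials)) / trials
without = sum(fight(False, rng) for _ in range(trials)) / trials
print(f"survival with potion:    {with_potion:.2%}")
print(f"survival without potion: {without:.2%}")  # about the same
```

A player who keeps no records sees the heal animation and concludes the potion works; a player who runs a few hundred fights each way sees identical survival rates - which is exactly the experimental habit the game would be rewarding.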

Comment author: CronoDAS 08 September 2010 10:00:19PM *  4 points [-]

To advance, you have to figure out the laws governing these things (which would be isomorphic to real science) and put this knowledge to practical use. The insights that had to be made back then are far removed from the clean scientific laws we have now, so it would be tough.

Or you could just go look up the correct answers on gamefaqs.com.

Comment author: JGWeissman 08 September 2010 10:06:46PM 2 points [-]

So the game should generate different sets of fake names for each time it is run, and have some variance in the forms of clues and which NPC's give them.

Comment author: CronoDAS 08 September 2010 10:08:56PM 4 points [-]

Ever played Nethack? ;)

Comment author: humpolec 08 September 2010 12:21:47PM *  11 points [-]

For example, you might have some herb combination that "restores HP", but whenever you use it, you strangely lose HP that more than cancels what it gave you.

What if instead of being useless (by having an additional cancelling effect), magical potions etc. had no effect at all? If HP isn't explicitly stated, you can make the player feel like he's regaining health (e.g. by some visual cues), but in reality he'd die just as often.

Comment author: steven0461 07 September 2010 10:41:07PM *  6 points [-]

I think in many types of game there's an implicit convention that they're only going to be fun if you follow the obvious strategies on auto-pilot and don't optimize too much or try to behave in ways that would make sense in the real world, and breaking this convention without explicitly labeling the game as competitive or a rationality test will mostly just be annoying.

The idea of having a game resemble real-world science is a good one and not one that as far as I know has ever been done anywhere near as well as seems possible.

Comment author: SilasBarta 07 September 2010 11:37:48PM 1 point [-]

Good point. I guess the game's labeling system shouldn't deceive you like that, but it would need characters who promote non-functioning technology, after some warning that not everyone is reliable - that these people aren't the tutorial.

Comment author: DSimon 08 September 2010 09:28:02PM *  6 points [-]

Best I think would be if the warning came implicitly as part of the game, and a little ways into it.

For example: The player sees one NPC Alex warn another NPC Joe that failing to drink the Potion of Feather Fall will mean he's at risk of falling off a ledge and dying. Joe accepts the advice and drinks it. Soon after, Joe accidentally falls off a ledge and dies. Alex attempts to rationalize this result away, and (as subtly as possible) shrugs off any attempts by the player to follow conversational paths that would encourage testing the potion.

Player hopefully then goes "Huh. I guess maybe I can't trust what NPCs say about potions" without feeling like the game has shoved the answer at them, or that the NPCs are unrealistically bad at figuring stuff out.

Comment author: SilasBarta 09 September 2010 12:48:14PM 1 point [-]

Exactly -- that's the kind of thing I had in mind: the player has to navigate through rationalizations and be able to throw out an unreliable claim despite bold attempts to protect it from being proven wrong.

So is this game idea something feasible and which meets your criteria?

Comment author: DSimon 09 September 2010 02:55:33PM 3 points [-]

I think so, actually. When I start implementation, I'll probably use an Interactive Fiction engine as another person on this thread suggested, because (a) it makes implementation a lot easier and (b) I've enjoyed a lot of IF but I haven't ever made one of my own. That would imply removing a fair amount of the RPG-ness in your original suggestion, but the basic ideas would still stand. I'm also considering changing the setting to make it an alien world which just happens to be very much like 17th century England except filled with humorous Rubber Forehead Aliens; maybe the game could be called Standing On The Eyestalks Of Giants.

On the particular criteria:

  • Interesting: I think the setting and the (hopefully generated) buzz would build enough initial interest to carry the player through the first frustrating parts where things don't seem to work as they are used to. Once they get the idea that they're playing as something like an alien Newton, that ought to push up the interest curve again a fair amount.

  • Not (too) allegorical: Everybody loves making fun of alchemists. Now that I think of it, though, maybe I want to make sure the game is still allegorical enough to modern-day issues so that it doesn't encourage hindsight bias.

  • Dramatic/Surprising: IF has some advantages here in that there's an expectation already in place that effects will be described with sentences instead of raw HP numbers and the like. It should be possible to hit the balance where being rational and figuring things out gets the player significant benefits (Dramatic), but the broken theories being used by the alien alchemists and astrologists are convincing enough to fool the player at first into thinking certain issues are non-puzzles (Surprising).

  • Not rigged: Assuming the interface for modelling the game world's physics and doing experiments is sophisticated enough, this should prevent the feeling that the player can win by just finding the button marked "I Am Rational" and hitting it. However, I think this is the trickiest part programming-wise.

I'm going to look into IF programming a bit to figure out how implementable some of this stuff is. I won't and can't make promises regarding timescale or even completability, however: I have several other projects going right now which have to take priority.

Comment author: SilasBarta 09 September 2010 03:40:43PM *  1 point [-]

Thanks, I'm glad I was able to give you the kind of idea you were looking for, and that someone is going to try to implement this idea.

I'm also considering changing the setting to make it an alien world which just happens to be very much like 17th century England

Good -- that's what I was trying to get at. For example, you would want a completely different night sky; you don't want the gamer to be able to spot the Big Dipper (or Southern Cross, for our Aussie friends) and then use existing ephemeris data. The planet should have a different tilt, or perhaps be the moon of another planet, so the player can't just say, "LOL, I know the heliocentric model, my planet is orbiting the sun, problem solved!"

Different magnetic field too, so they can't just say, "lol, make a compass, it points north".

I'm skeptical, though, about how well text-based IF can accomplish this -- the text-only interface is really constraining, and would have to tell the user all of the salient elements explicitly. I would be glad to help on the project in any way I can, though I'm still learning complex programming myself.

Also, something to motivate the storyline would be like: You need to come up with better cannonballs for the navy (i.e. have to identify what increases a metal's yield energy). Or come up with a way of detecting counterfeit coins.

Comment author: humpolec 07 September 2010 09:00:57PM 8 points [-]

RPGs (and roguelikes) can involve a lot of optimization/powergaming; the problem is that powergaming could be called rational already. You could

  • explicitly make optimization a part of game's storyline (as opposed to it being unnecessary (usually games want you to satisfice, not maximize) and in conflict with the story)
  • create some situations where the obvious rules-of-thumb (gather strongest items, etc.) don't apply - make the player shut up and multiply
  • create situations in which the real goal is not obvious (e.g. it seems like you should power up as always, but the best choice is to focus on something else)

Sorry if this isn't very fleshed-out, just a possible direction.

Comment author: Perplexed 07 September 2010 10:44:57PM 3 points [-]

Dramatic in outcome:

One way to achieve this is to make it a level-based puzzle game. Solve the puzzle suboptimally, and you don't get to move on. Of course, that means that you may need special-purpose programming at each level. On the other hand, you can release levels 1-5 as freeware, levels 6-20 as Product 1.0, and levels 21-30 as Product 2.0.

Not allegorical:

The puzzles I am thinking of are in the field of game theory, so the strategies will include things like not cooperating (because you don't need to in this case), making and following through on threats, and similar "immoral" actions. Some people might object on ethical or political grounds. I don't really know how to answer except to point out that at least it is not a first-person shooter.

Surprising:

Game theory includes many surprising lessons - particularly things like the handicap principle, voluntary surrender of power, rational threats, and mechanism design. Coalition games are particularly counter-intuitive, but, with experience, intuitively understandable.

But you can even teach some rationality lessons before getting into games proper. Learn to recognize individuals, for example. Not all cat-creatures you encounter are the same character. You can do several problems involving probabilities and inference before the second player ever shows up.
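A sketch of the kind of single-player inference problem this suggests, before games proper: you meet a cat-creature with a torn ear and must judge whether it's the same individual you met earlier. The numbers here are invented for illustration:

```python
# Bayes' rule on a toy "recognize the individual" problem.
p_same = 0.5            # prior: half of encounters are repeat encounters
p_torn_if_same = 0.9    # the cat-creature you met before had a torn ear
p_torn_if_new = 0.1     # torn ears are rare in the general population

# P(same | torn) = P(torn | same) P(same) / P(torn)
posterior = (p_torn_if_same * p_same) / (
    p_torn_if_same * p_same + p_torn_if_new * (1 - p_same))

print(round(posterior, 2))  # 0.9
```

A level designer could vary the prior and the likelihoods to make the "obvious" identification wrong, which is exactly the kind of lesson the game is meant to teach.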

Comment author: steven0461 07 September 2010 09:53:11PM *  3 points [-]

Text adventures seem suitable for this sort of thing, and are relatively easy to write. They're probably not as good for mass appeal, but might be OK for mass nerd appeal. For these purposes, though, I'm worried that rationality may be too much of a suitcase term, consisting of very different groups of subskills that go well with very different kinds of game.

Comment author: JamesAndrix 08 September 2010 04:51:55AM 1 point [-]

Have there been any articles on what's wrong with the Turing test as a measure of personhood? (even in its least convenient form)

In short the problems I see are: False positives, false negatives, ignoring available information about the actual agent, and not reliably testing all the things that make personhood valuable.

Comment author: Larks 08 September 2010 06:25:11AM 3 points [-]

False positives, false negatives

This sounds pretty exhaustive.

Comment author: khafra 07 September 2010 01:31:41PM 3 points [-]

The Science of Word Recognition, by a Microsoft researcher, contains tales of reasonably well-done Science gone persistently awry, to the point that the discredited version is today the most popular one.

Comment author: Clippy 07 September 2010 02:19:16PM 2 points [-]

That's a really good article, the Microsoft humans really know their stuff.

Comment author: steven0461 07 September 2010 04:58:42AM *  7 points [-]

In the spirit of "the world is mad" and for practical use, the NYT has an article titled Forget What You Know About Good Study Habits.

Comment author: Matt_Simpson 07 September 2010 03:57:41PM 1 point [-]

Something I had learned myself that the article supported: taking tests increases retention.

Something I learned from the article: varying study location increases retention.

Comment author: MartinB 07 September 2010 04:27:34AM 1 point [-]

Did anyone here read Buckminster Fuller's Synergetics? And if so, did you understand it?

Comment author: utilitymonster 07 September 2010 01:07:06AM 1 point [-]

Question about Solomonoff induction: does anyone have anything good to say about how to associate programs with basic events/propositions/possible worlds?

Comment author: Document 06 September 2010 08:23:32PM 2 points [-]

Friday's Wondermark comic discusses a possible philosophical paradox that's similar to those mentioned at Trust in Bayes and Exterminating life is rational.

Comment author: Nisan 07 September 2010 05:11:53AM 1 point [-]

You beat me to it :)

Comment author: knb 06 September 2010 06:16:03PM *  2 points [-]

There was recently a discussion regarding Sex at Dawn. I skimmed this book at a friend's house, and realized that its central idea depends on a group selection hypothesis. (The idea being that our noble savage bonobo-like hunter-gatherer ancestors evolved a preference for paternal uncertainty as this led to better in-group cooperation.) This was never stated in the sequence of posts on the book. Can someone who has read the book confirm/deny the accuracy of my impression that the book's thesis relies on a group selection hypothesis?

Comment author: teageegeepea 06 September 2010 01:24:54AM 2 points [-]

Since Eliezer has talked about the truth of reductionism and the emptiness of "emergence", I thought of him when listening to Robert Laughlin on EconTalk (near the end of the podcast). Laughlin was arguing that reductionism is experimentally wrong and that everything, including the universal laws of physics, is really emergent. I'm not sure if that means "elephants all the way down" or what.

Comment author: Will_Sawin 06 September 2010 02:53:29AM 2 points [-]

It's very silly. What he's saying is that there are properties at high levels of organization that don't exist at low levels of organization.

As Eliezer says, emergence is trivial. Everything that isn't quarks is emergent.

His "universality" argument seems to be that different parts can make the same whole. Well of course they can.

He certainly doesn't make any coherent arguments. Maybe he does in his book?

Comment author: Perplexed 06 September 2010 03:10:35AM 3 points [-]

Yet another example of a Nobel prize winner in disagreement with Eliezer within his own discipline.

What is wrong with these guys?

Why, if they would just read the sequences, they would learn the correct way to use words like "reduction" and "emergence" in physics.

Comment author: khafra 07 September 2010 02:44:30AM 2 points [-]

To be fair, "reductionism is experimentally wrong" is a statement that would raise some argument among Nobel laureates as well.

Comment author: Perplexed 07 September 2010 03:16:02AM 2 points [-]

Argument from some Nobelists. But agreement from others. Google on the string "Philip Anderson reductionism emergence" to get some understanding of what the argument is about.

My feeling is that everyone in this debate is correct, including Eliezer, except for one thing - you have to realize that different people use the words "reductionism" and "emergence" differently. And the way Eliezer defines them is definitely different from the way the words are used (by Anderson, for example) in condensed matter physics.

Comment author: khafra 07 September 2010 05:28:59AM 1 point [-]

If the first hit is a fair overview, I can see why you're saying it's a confusion in terms; the only outright error I saw was confusing "derivable" with "trivially derivable."

If you're saying that nobody important really tries to explain things by just saying "emergence" and handwaving the details, like EY has suggested, you may be right. I can't recall seeing it.

Of course, I don't think Eliezer (or any other reductionist) has said that throwing away information so you can use simpler math isn't useful when you're using limited computational power to understand systems which would be intractable from a quantum perspective, like everything we deal with in real life.

Comment author: JamesAndrix 05 September 2010 08:22:54PM 4 points [-]

Finally prompted by this, but it would be too off-topic there:

http://lesswrong.com/lw/2ot/somethings_wrong/

The ideas really started forming around the recent 'public relations' discussions.

If we want to change people's minds, we should be advertising.

I do like long drawn out debates, but most of the time they don't accomplish anything and even when they do, they're a huge use of personal resources.

There is a whole industry centered around changing people's minds effectively. They have expertise in this, and they do it way better than we do.

Comment author: Perplexed 05 September 2010 11:02:21PM 1 point [-]

The ideas really started forming around the recent 'public relations' discussions.

If we want to change people's minds, we should be advertising.

My guess is that "Harry Potter and the Methods of Rationality" is the best piece of publicity the SIAI has ever produced.

I think that the only way to top it would be a Singularity/FAI-themed computer game.

How about a turn-based strategy game where the object is to get deep enough into the singularity to upload yourself before a uFAI shows up and turns the universe into paper clips?

Maybe it would work, and maybe not, but I think that the demographic we want to reach is 4chan - teenage hackers. We need to tap into the "Dark Side" of the Cyberculture.

Comment author: ata 05 September 2010 11:46:07PM *  8 points [-]

How about a turn-based strategy game where the object is to get deep enough into the singularity to upload yourself before a uFAI shows up and turns the universe into paper clips?

I don't think that would be very helpful. Advocating rationality (even through Harry Potter fanfiction) helps because people are better at thinking about the future and existential risks when they care about and understand rationality. But spreading singularity memes as a kind of literary genre won't do that. (With all due respect, your idea doesn't even make sense: I don't think "deep enough into the singularity" means anything with respect to what we actually talk about as the "singularity" here (successfully launching a Friendly singularity probably means the world is going to be remade in weeks or days or hours or minutes, and it probably means we're through with having to manually save the world from any remaining threats), and if a uFAI wants to turn the universe into paperclips, then you're screwed anyway, because the computer you just uploaded yourself into is part of the universe.)

Unfortunately, I don't think we can get people excited about bringing about a Friendly singularity by speaking honestly about how it happens purely at the object level, because what actually needs to be done is tons of math (plus some outreach and maybe paper-writing and book-writing and eventually a lot of coding). Saving the world isn't actually going to be an exciting ultimate showdown of ultimate destiny, and any marketing and publicity shouldn't be setting people up for disappointment by portraying it as such... and it should also be making it clear that even if existential risk reduction were fun and exciting, it wouldn't be something you do for yourself because it's fun and exciting, and you don't do it because you get to affiliate with smart/high-status people and/or become known as one yourself, and you don't do it because you personally want to live forever and don't care about the rest of the world, you do it because it's the right thing to do no matter how little you personally get out of it.

So we don't want to push the public further toward thinking of the singularity as a geek / sci-fi / power-fantasy / narcissistic thing (I realize some of those are automatic associations and pattern completions that people independently generate, but that's to be resisted and refuted rather than embraced). Fiction that portrays rationality as virtuous (and transparent, as in the Rationalist Fanfiction Principle) and that portrays transhumanistic protagonists that people can identify with (or at least like) is good because it makes the right methods and values salient and sympathetic and exciting. Giving people a vision of a future where humanity has gotten its shit together as a thing-to-protect is good; anything that makes AI or the Singularity or even FAI seem too much like an end in itself will probably be detrimental, especially if it is portrayed anywhere near anthropomorphically enough for it to be a protagonist or antagonist in a video game.

Maybe it would work, and maybe not, but I think that the demographic we want to reach is 4chan - teenage hackers. We need to tap into the "Dark Side" of the Cyberculture.

Only if they can be lured to the Light Side. The *chans seem rather tribal and amoral (at least the /b/s and the surrounding culture; I know that's not the entirety of the *chans, but they have the strongest influence in those circles). If the right marketing can turn them from apathetic tribalist sociopaths into altruistic globalist transhumanists, then that's great, but I wouldn't focus limited resources in that direction. Probably better to reach out to academia; at least that culture is merely inefficient rather than actively evil.

Comment author: Perplexed 06 September 2010 12:26:51AM 2 points [-]

I don't think that would be very helpful. [And here is why...]

I am impressed. A serious and thoughtful reply to a maybe serious, but definitely not thoughtful, suggestion. Thank you.

If the right marketing can turn them [the *chans] from apathetic tribalist sociopaths into altruistic globalist transhumanists, then that's great, but I wouldn't focus limited resources in that direction. Probably better to reach out to academia; at least that culture is merely inefficient rather than actively evil.

"Actively evil" is not "inherently evil". The action currently is over on the evil side because the establishment is boring. Anti-establishment evil is currently more fun. But what happens if the establishment becomes evil and boring? Could happen on the way to a friendly singularity. Don't rule any strategies out. Thwarting a nascent uFAI may be one of the steps we need to take along the path to FAI.

Comment author: ata 06 September 2010 01:02:05AM *  5 points [-]

I am impressed. A serious and thoughtful reply to a maybe serious, but definitely not thoughtful, suggestion. Thank you.

Thank you for taking it well; sometimes I still get nervous about criticizing. :)

"Actively evil" is not "inherently evil". The action currently is over on the evil side because the establishment is boring. Anti-establishment evil is currently more fun. But what happens if the establishment becomes evil and boring? Could happen on the way to a friendly singularity. Don't rule any strategies out. Thwarting a nascent uFAI may be one of the steps we need to take along the path to FAI.

I've heard the /b/ / "Anonymous" culture described as Chaotic Neutral, which seems apt. My main concern is that waiting for the right thing to become fun for them to rebel against is not efficient. (Example: Anonymous's movement against Scientology began not in any of the preceding years when Scientology was just as harmful as always, but only once they got an embarrassing video of Tom Cruise taken down from YouTube. "Project Chanology" began not as anything altruistic, but as a morally-neutral rebellion against what was perceived as anti-lulz. It did eventually grow into a larger movement including people who had never heard of "Anonymous" before, people who actually were in it to make the world a better place whether the process was funny or not. These people were often dismissed as "moralfags" by the 4chan old-timers.) Indeed they are not inherently evil, but when morality is not a strong consideration one way or the other, it's too easy for evil to be more fun than good. I would not rely on them (or even expect them) to accomplish any long-term good when that's not what they're optimizing for.

(And there's the usual "herding cats" problem — even if something would normally seem fun to them, they're not going to be interested if they get the sense that someone is trying to use them.)

Maybe some useful goal that appeals to their sensibilities will eventually present itself, but for now, if we're thinking about where to direct limited resources and time and attention, putting forth the 4chan crowd as a good target demographic seems like a privileged hypothesis. "Teenage hackers" are great (I was one!), but I'm not sure about reaching out to them once they're already involved in 4chan-type cultures. There are probably better times and places to get smart young people interested.

Comment author: xamdam 05 September 2010 06:53:09PM 1 point [-]

Looks like an interesting course from MIT:

Reflective Practice: An Approach for Expanding Your Learning Frontiers

Is anyone familiar with the approach, or with the professor?

Comment author: Sniffnoy 04 September 2010 08:06:14AM 3 points [-]
Comment author: David_Allen 04 September 2010 07:17:26AM *  1 point [-]

The Idea

I am working on a new approach to creating knowledge management systems. An idea that I backed into as part of this work is the context principle.

Traditionally, the context principle states that a philosopher should always ask for a word's meaning in terms of the context in which it is being used, not in isolation.

I've redefined this to make it more general: Context creates meaning and in its absence there is no meaning.

And I've added the corollary: Domains can only be connected if they have contexts in common. Common contexts provide shared meaning and open a path for communication between disparate domains.

Possible Topics

I'm considering posting on how the context principle relates to certain topics. Right now I'm researching and collecting notes.

Possible topics to relate the context principle to:

  • explicit and tacit knowledge
  • theory of computation
  • debate and communication
  • rationality
  • morality
  • natural and artificial intelligence
  • "emergence"

My Request

I am looking for general feedback from this forum on the context principle and on my possible topics. I have only started working through the sequences so I am interested in specific pointers to posts I should read.

Perplexed has already started this off with his reply to my Welcome to Less Wrong! (2010) introduction.

Comment author: CronoDAS 04 September 2010 07:00:13AM *  1 point [-]
Comment author: taw 04 September 2010 06:04:29AM 1 point [-]

A question about modal logics.

Temporal logics are quite successful in terms of expressiveness and applications in computer science, so I thought I'd take a look at some other modal logics - in particular deontic logics, which deal with obligations, rules, and deontological ethics.

It seems like an obvious approach, as we want to have "is"-statements, "ought"-statements, and statements relating what "is" with what "ought" to be.

What I found was rather disastrous, far worse than with neat and unambiguous temporal logics: low expressiveness, ambiguous interpretations, far too many paradoxes that seem to be more about failing to specify the underlying logic correctly than about actual problems, and no convergence on a single deontic logic that works.

After reading all this, I made a few quick attempts at defining a logic of obligations, just to be sure it's not some sort of collective insanity, but they all ran into very similar problems extremely quickly.

Now, I'm in no way deontologically inclined, but if I were, this would really bother me. If it's really impossible to formally express obligations, this kind of ethics is built on an extremely flimsy basis. Consequentialism has plenty of problems in practice, but at least in hypothetical scenarios it's very easy to model correctly. Deontic logic seems to lack even that.

Is there any kind of deontic logic that works well that I missed? I'm not talking about solving FAI, constructing universal rules of morality or anything like it - just about a language that expresses exactly the kind of obligations we want, and which works well in simple hypothetical worlds.
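To make the kind of paradox I mean concrete, here is a toy sketch (my own illustration, not from any particular paper) of Ross's paradox in Standard Deontic Logic, where O(φ) is read as "φ holds at every deontically ideal world". Because O distributes over logical consequence, O(mail the letter) entails O(mail the letter or burn it), which sounds absurd as an obligation:

```python
# Toy Kripke-style model of Standard Deontic Logic. Worlds and
# propositions are invented for illustration.
worlds = {
    "w1": {"mail": True,  "burn": False},  # the letter gets mailed
    "w2": {"mail": False, "burn": True},   # the letter gets burned
    "w3": {"mail": False, "burn": False},  # nothing happens
}

ideal = {"w1"}  # only the mailing world is deontically ideal

def obligatory(phi):
    """O(phi): phi holds at all ideal worlds."""
    return all(phi(worlds[w]) for w in ideal)

mail = lambda w: w["mail"]
mail_or_burn = lambda w: w["mail"] or w["burn"]

print(obligatory(mail))          # True: you ought to mail the letter
print(obligatory(mail_or_burn))  # True: "you ought to mail it or burn it"
```

Since mail-or-burn holds wherever mail does, the second obligation follows automatically from the first - the logic cannot distinguish the obligation we intended from the weakened one we didn't.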

Comment author: Will_Newsome 03 September 2010 11:02:19AM *  10 points [-]

I want to write a post about an... emotion, or pattern of looking at the world, that I have found rather harmful to my rationality in the past. The closest thing I've found is 'indignation', defined at Wiktionary as "An anger aroused by something perceived as an indignity, notably an offense or injustice." The thing is, I wouldn't consider the emotion I feel to be 'anger'. It's more like 'the feeling of injustice' in its own right, without the anger part. Frustration, maybe. Is there a word that means 'frustration aroused by a perceived indignity, notably an offense or injustice'? Like, perhaps the emotion you may feel when you think about how pretty much no one in the world or no one you talk to seems to care about existential risks. Not that you should feel the emotion, or whatever it is, that I'm trying to describe -- in the post I'll argue that you should try not to -- but perhaps there is a name for it? Anyone have any ideas? Should I just use 'indignation' and then define what I mean in the first few sentences? Should I use 'adjective indignation'? If so, which adjective? Thanks for any input.

Comment author: [deleted] 09 September 2010 02:52:36AM 4 points [-]

Righteous indignation is a good word for it.

I, personally, see it as one of the emotional capacities of a healthy person. Kind of like lust. It can be misused, it can be a big time-waster if you let it occupy your whole life, but it's basically a sign that you have enough energy. If it goes away altogether, something may be wrong.

I had a period a few years ago of something like anhedonia. The thing is, I also couldn't experience righteous indignation, or nervous worry, or ordinary irritability. It was incredibly satisfying to get them back. I'm not a psychologist at all, but I think of joy, anger, and worry (and lust) as emotions that require energy. The miserably lethargic can't manage them.

So that's my interpretation and very modest defense of righteous indignation. It's not a very practical emotion, but it is a way of engaging personally with the world. It motivates you in the minimal way of making you awake, alert, and focused on something. The absence of such engagement is pretty horrible.

Comment author: brian_jaress 05 September 2010 10:11:18AM 3 points [-]

I've seen "moral indignation," which might fit (though I think "indignation" still implies anger). I've also heard people who feel that way describe the object of their feelings as "disgusting" or "offensive," so you could call it "disgust" or "being offended." Of course, those people also seemed angry. Maybe the non-angry version would be called "bitterness."

As soon as I wrote the paragraph above, I felt sure that I'd heard "moral disgust" before. I googled it and the second link was this. I don't know about the quality of the study, but you could use the term.

Comment author: David_Allen 04 September 2010 06:02:09AM *  3 points [-]

In myself, I have labeled the rationality blocking emotion/behavior as defensiveness. When I am feeling defensive, I am less willing to see the world as it is. I bind myself to my context and it is very difficult for me to reach out and establish connections to others.

I am also interested in ideas related to rationality and the human condition. Not just about the biases that arise from our nature, but about approaches to rationality that work from within our human nature.

I have started an analysis of Buddhism from this perspective. At its core (ignoring the obvious mysticism), I see sort of a how-to guide for managing the human condition. If we are to be rational we need to be willing to see the world as it is, not as we want it to be.

Comment author: komponisto 04 September 2010 12:54:13AM 4 points [-]

Interestingly enough, this sounds like the emotion that (finally) induced me to overcome akrasia and write a post on LW for the first time, which initiated what has thus far been my greatest period of development as a rationalist.

It's almost as if this feeling is to me what plain anger is to Harry Potter(-Evans-Verres): something which makes everything seem suddenly clearer.

It just goes to show how difficult the art of rationality is: the same technique that helps one person may hinder another.

Comment author: jimrandomh 03 September 2010 07:08:17PM 6 points [-]

I noticed this emotion cropping up a lot when I read Reddit, and stopped reading it for that reason. It's too easy to, for example, feel outraged over a video of police brutality, but not notice that it was years ago and in another state and already resolved.

Comment author: Eliezer_Yudkowsky 03 September 2010 07:07:18PM 5 points [-]

Sounds related to the failure class I call "living in the should-universe".

Comment author: Will_Newsome 03 September 2010 10:51:03PM *  3 points [-]

It seems to be a pretty common and easily corrected failure mode. Maybe you could write a post about it? I'm sure you have lots of useful cached thoughts on the matter.

Added: Ah, I'd thought you'd just talked about it at LW meetups, but a Google search reveals that the theme is also in Above-Average AI Scientists and Points of Departure.

Comment author: Airedale 03 September 2010 03:08:20PM *  8 points [-]

The words righteous indignation in combination are sufficiently well-recognized as to have their own wikipedia page. The page also says that righteous indignation has overtones of religiosity, which seems like a reason not to use it in your sense. It also says that it is akin to a "sense of injustice," but at least for me, that phrase doesn't have as much resonance.

Edited to add this possibly relevant/interesting link I came across, where David Brin describes self-righteous indignation as addictive.

Comment author: Perplexed 03 September 2010 04:20:22PM 4 points [-]

which seems like a reason not to use it in your sense.

Strikes me as exactly the reason you should use it. What you are describing is indignation, it is righteous, and it is counterproductive in both rationalists and less rational folks for pretty much the same reasons.

Comment author: wedrifid 03 September 2010 12:09:08PM *  4 points [-]

Should I just use 'indignation' and then define what I mean in the first few sentences?

That could work well when backed up with a description of just what you will be using the term to mean.

I will be interested to read your post - from your brief introduction here I think I have had similar observations about emotions that interfere with thought, independent of raw overwhelm from primitives like anger.

Comment author: Perplexed 03 September 2010 04:05:53PM *  1 point [-]

Over on a cognitive science blog named "Child's Play", there is an interesting discussion of theories regarding human learning of language. These folks are not Bayesians (except for one commenter who mentions Solomonoff induction), so some bits of it may make you cringe, but the blogger does provide links to some interesting research pdfs.

Nonetheless, the question about which they are puzzled regarding humans does raise some interesting questions regarding AIs, whether they be of the F persuasion or whether they are practicing uFs. The questions are:

  • Are these AIs born speaking English, Chinese, Arabic, Hindi, etc., or do they have to learn these languages?
  • If they learn these languages, do they have to pass some kind of language proficiency test before they are permitted to use them?
  • Are they born with any built in language capability or language learning capability at all?
  • Are the "objective functions" with which we seek to leash AIs expressed in some kind of language, or in something more like "object code"?
Comment author: Kaj_Sotala 02 September 2010 09:04:37PM *  22 points [-]

It seems to me, based on purely anecdotal experience, that people in this community are unusually prone to feeling that they're stupid if they do badly at something. Scott Adams' The Illusion of Winning might help counteract becoming too easily demotivated.

Let's say that you and I decide to play pool. We agree to play eight-ball, best of five games. Our perception is that what follows is a contest to see who will do something called winning.

But I don't see it that way. I always imagine the outcome of eight-ball to be predetermined, to about 95% certainty, based on who has practiced that specific skill the most over his lifetime. The remaining 5% is mostly luck, and playing a best of five series eliminates most of the luck too.

I've spent a ridiculous number of hours playing pool, mostly as a kid. I'm not proud of that fact. Almost any other activity would have been more useful. As a result of my wasted youth, years later I can beat 99% of the public at eight-ball. But I can't enjoy that sort of so-called victory. It doesn't feel like "winning" anything.

It feels as meaningful as if my opponent and I had kept logs of the hours we each had spent playing pool over our lifetimes and simply compared. It feels redundant to play the actual games.

I see the same thing with tennis, golf, music, and just about any other skill, at least at non-professional levels. And research supports the obvious, that practice is the main determinant of success in a particular field.

As a practical matter, you can't keep logs of all the hours you have spent practicing various skills. And I wonder how that affects our perception of what it takes to be a so-called winner. We focus on the contest instead of the practice because the contest is easy to measure and the practice is not.

Complicating our perceptions is professional sports. The whole point of professional athletics is assembling freaks of nature into teams and pitting them against other freaks of nature. Practice is obviously important in professional sports, but it won't make you taller. I suspect that professional sports demotivate viewers by sending the accidental message that success is determined by genetics.

My recommendation is to introduce eight-ball into school curricula, but in a specific way. Each kid would be required to keep a log of hours spent practicing on his own time, and there would be no minimum requirement. Some kids could practice zero hours if they had no interest or access to a pool table. At the end of the school year, the entire class would compete in a tournament, and they would compare their results with how many hours they spent practicing. I think that would make real the connection between practice and results, in a way that regular schoolwork and sports do not. That would teach them that winning happens before the game starts.

Yes, I know that schools will never assign eight-ball for homework. But maybe there is some kid-friendly way to teach the same lesson.

ETA: I don't mean to say that talent doesn't matter: things such as intelligence matter more than Adams gives them credit for, AFAIK. But I've noticed in many people (myself included) a definite tendency to overvalue intelligence relative to practice.

Comment author: Jonathan_Graehl 24 September 2010 11:00:35PM 1 point [-]

I'm guilty of a sort of fixation on IQ (not actual scores or measurements of it). I have an unhealthy interest in food, drugs and exercises (physical and mental) that are purported to give some incremental improvement. I see this in quite a few folks here as well.

To actually accomplish something, more important than these incremental IQ differences are: effective high-level planning and strategy, practice, time actually spent trying, finding the right collaborators, etc.

I started playing around with some IQ-test-like games lately and was initially a little let down with how low my performance (percentile, not absolute) was on some tasks at first. I now believe that these tasks are quite specifically-trainable (after a few tries, I may improve suddenly, but after that I can, but choose not to, steadily increase my performance with work), and that the population actually includes quite a few well-practiced high-achievers. At least, I prefer to console myself with such thoughts.

But, seeing myself scored as not-so-smart in some ways, I started to wonder what difference it makes to earn a gold star that says you compute faster than others, if you don't actually do anything with it. Most people probably grow out of such rewards at a younger age than I did.

Comment author: [deleted] 09 September 2010 02:33:41AM 3 points [-]

I like this anecdote.

I never valued intelligence relative to practice, thanks to an upbringing that focused pretty heavily on the importance of effort over talent. I'm more likely to feel behind, insufficiently knowledgeable to the point that I'm never going to catch up. I don't see why it's necessarily a cheerful observation that practice makes a big difference to performance. It just means that you'll never be able to match the person who started earlier.

Comment author: jimrandomh 03 September 2010 01:30:47PM 6 points [-]

It seems to me, based on purely anecdotal experience, that people in this community are unusually prone to feeling that they're stupid if they do badly at something.

This is certainly true of me, but I try to make sure that the positive feeling of having identified the mistakes and improved outweighs the negative feeling of having needed the improvement. Tsuyoku Naritai!

Comment author: hegemonicon 03 September 2010 03:59:53AM *  6 points [-]

people in this community are unusually prone to feeling that they're stupid if they do badly at something

I suspect this is a result of the tacit assumption that "if you're not smart enough, you don't belong at LW". If most members are anything like me, this combined with the fact that they're probably used to being "the smart one" makes it extremely intimidating to post anything, and extremely de-motivational if they make a mistake.

In the interests of spreading the idea that it's ok if other people are smarter than you, I'll say that I'm quite certainly one of the less intelligent members of this community.

I've noticed in many people (myself included) a definite tendency to overvalue intelligence relative to practice.

Practice and expertise tend to be domain-specific - Scott isn't any better at darts or chess after playing all that pool. Even learning things like metacognition tends not to apply outside of the specific domain you've learned it in. Intelligence is one of the only things that gives you a general problem-solving/task-completion ability.

Comment author: xax 03 September 2010 09:07:19PM 1 point [-]

Intelligence is one of the only things that gives you a general problem solving/task completion ability.

Only if you've already defined intelligence as not domain-specific in the first place. Conversely, meta-cognition about a person's own learning processes could help them learn faster in general, which has many varied applications.

Comment author: Daniel_Burfoot 03 September 2010 03:47:37AM 4 points [-]

I don't mean to say that talent doesn't matter: things such as intelligence matter more than Adams gives them credit for

I think the relative contribution of intelligence vs. practice varies substantially depending on the nature of the particular task. A key problem is to identify tasks as intelligence-dominated (the smart guy always wins) vs. practice-dominated (the experienced guy always wins).

As a first observation about this problem, notice that clearly definable or objective tasks (chess, pool, basketball) tend to be practice-dominated, whereas more ambiguous tasks (leadership, writing, rationality) tend to be intelligence-dominated.

Comment author: Kaj_Sotala 03 September 2010 08:38:20AM 2 points [-]

I think the relative contribution of intelligence vs. practice varies substantially depending on the nature of the particular task.

This is true. Intelligence research has shown that intelligence is more useful for more complex tasks, see e.g. Gottfredson 2002.

Comment author: Wei_Dai 02 September 2010 11:51:21PM 1 point [-]

But I've noticed in many people (myself included) a definite tendency to overvalue intelligence relative to practice.

I'm not sure I agree with that. In what areas do you see overvalue of intelligence relative to practice and why do you think there really is overvalue in those areas?

I've noticed for example that people's abilities to make good comments on LW do not seem to improve much with practice and feedback from votes (beyond maybe the first few weeks or so). Does this view represent an overvalue of intelligence?

Comment author: Kaj_Sotala 03 September 2010 08:45:02AM *  5 points [-]

In what areas do you see overvalue of intelligence relative to practice and why do you think there really is overvalue in those areas?

I should probably note that my overvaluing of intelligence is more of an alief than a belief. Mostly it shows up if I'm unable to master (or at least get a basic proficiency in) a topic as fast as I'd like to. For instance, on some types of math problems I get quickly demotivated and feel that I'm not smart enough for them, when the actual problem is that I haven't had enough practice on them. This is despite the intellectual knowledge that I could master them, if I just had a bit more practice.

I've noticed for example that people's abilities to make good comments on LW do not seem to improve much with practice and feedback from votes (beyond maybe the first few weeks or so). Does this view represent an overvalue of intelligence?

That sounds about right, though I would note that there's a huge amount of background knowledge that you need to absorb on LW. Not just raw facts, either, but ways of thinking. The lack of improvement might partially be because some people have absorbed that knowledge when they start posting and some haven't, and absorbing it takes such a long time that the improvement happens too slowly to notice.

Comment author: wedrifid 03 September 2010 09:25:10AM *  3 points [-]

I've noticed for example that people's abilities to make good comments on LW do not seem to improve much with practice and feedback from votes (beyond maybe the first few weeks or so). Does this view represent an overvalue of intelligence?

That's interesting. I hadn't got that impression, but I haven't looked too closely at such trends either. There are a few people whose comments have improved dramatically, but the difference seems to be social development and not necessarily their rational thinking - so perhaps you have a specific kind of improvement in mind.

I'm interested in any further observations on the topic by yourself or others.

Comment author: Houshalter 02 September 2010 10:28:40PM 1 point [-]

Yes, I know that schools will never assign eight-ball for homework. But maybe there is some kid-friendly way to teach the same lesson.

Make them play some kind of simplified RPG until they realise the only achievement is how much time they put into doing mindless repetitive tasks.

Comment author: Sniffnoy 03 September 2010 07:32:38PM *  3 points [-]

There's a large difference between the "leveling up" in such games, where you gain new in-game capabilities, and actually getting better, where your in-game capabilities stay the same but you learn to use them more effectively.

ETA: I guess perhaps a better way of saying it is, there's a large difference between the causal chains time->winning, and time->skill->winning.

Comment author: mattnewport 02 September 2010 10:34:37PM 9 points [-]

Make them play some kind of simplified RPG until they realise the only achievement is how much time they put into doing mindless repetitive tasks.

I imagine lots of kids play Farmville already.

Comment author: Kaj_Sotala 03 September 2010 08:53:06AM *  3 points [-]

Those games don't really improve any sort of skill, though, and neither does anyone expect them to. To teach kids this, you need a game where you as a player pretty much never stop improving, so that having spent more hours on the game actually means you'll beat anyone who has spent less.

Go might work.

Comment author: rwallace 03 September 2010 12:57:16PM 5 points [-]

There are schools that teach Go intensively from an early age, so that a 10-year-old student from one of those schools is already far better than a casual player like me will ever be, and it just keeps going up from there. People don't seem to get tired of it.

Every time I contemplate that, I wish all the talent thus spent could be spent instead on schools providing similarly intensive teaching in something useful like science and engineering. What could be accomplished if you taught a few thousand smart kids to be dan-grade scientists by age 10 and kept going from there? I think it would be worth finding out.

Comment author: Christian_Szegedy 08 September 2010 07:08:36AM *  2 points [-]

I agree with you. I also think that there are several reasons for that:

First, competitive games (intellectual or physical sports) are easier to select and train for, since the objective function is much clearer.

The second reason is more cultural: if you train your child in something more useful like science or mathematics, people will say: "Poor kid, are you trying to make a freak out of him? Why can't he have a childhood like anyone else?" Traditionally, there is much less opposition to music, art or sport training. Perhaps they are viewed as "fun activities."

Thirdly, it also seems that academic success is a function of more variables: communication skills, motivation, perspective, taste, wisdom, luck, etc. So early training will yield much less of a head start than in a more constrained area like sports or music, where it is almost mandatory for success (in some of those areas, age 10, or even 6, is almost too late to begin seriously).

Comment author: NihilCredo 06 September 2010 01:43:50AM 3 points [-]

A somewhat related, impactful graph.

Of course, human effort and interest is far from perfectly fungible. But your broader point retains a lot of validity.

Comment author: realitygrill 03 September 2010 04:18:06AM 5 points [-]

This is perhaps a bit facetious, but I propose we try to contact Alice Taticchi (Miss World Italy 2009) and introduce her to LW. Reason? She said she'd "bring without any doubt my rationality", among other things, when asked what qualities she would bring to the competition.

Comment author: Oscar_Cunningham 03 September 2010 09:14:09AM *  1 point [-]

Someone made a page that automatically collects high karma comments. Could someone point me at it please?

Comment author: Kazuo_Thow 04 September 2010 07:01:09AM 1 point [-]

Here's the Open Thread comment where Daniel Varga made the page and its source code public. I don't know how often it's updated.

Comment author: wedrifid 03 September 2010 09:29:02AM 1 point [-]

They did? I've been wishing for something like that myself. I'd also like another page that collects just my high karma comments. Extremely useful feedback!

Comment author: Morendil 02 September 2010 10:21:35PM 5 points [-]

I have argued in various places that self-deception is not an adaptation evolved by natural selection to serve some function. Rather, I have said self-deception is a spandrel, which means it’s a structural byproduct of other features of the human organism. My view has been that features of mind that are necessary for rational cognition in a finite being with urgent needs yield a capacity for self-deception as a byproduct. On this view, self-deception wasn’t selected for, but it also couldn’t be selected out, on pain of losing some of the beneficial features of which it’s a byproduct.

Neil Van Leeuwen, Why Self-Deception Research Hasn't Made Much Progress

Comment author: b1shop 02 September 2010 05:01:47PM *  6 points [-]

I just listened to Robin Hanson's pale blue dot interview. It sounds like he focuses more on motives than I do.

Yes, if you give most/all people a list of biases, they will use it less like a list of potential pitfalls and more like a list of accusations. Yes, most, if not all, aren't perfect truth-seekers for reasons that make evolutionary sense.

But I wouldn't mind living in a society where using biases/logical fallacies results in a loss of status. You don't have to be a truth-seeker to want to seem like a truth-seeker. Striving to overcome bias still seems like a good goal.

Edit: For example, someone can be a truth-seeking scientist if they are doing it to answer questions or if they're doing it for the chicks.

Comment author: blogospheroid 02 September 2010 12:32:25PM 3 points [-]

Idea - Existential risk fighting corporates

People of normal IQ are advised to work their normal day jobs, in the best competency that they have, and after setting aside enough money for themselves, contribute to the prevention of existential risk. That is a good idea if the skills of the people here are getting their correct market value, and if there is such a diversity of skills that they cannot form a sensible corporation together.

Also, consider that as we make the world's corporations more agile, we bring closer the moment where an unfriendly optimization process might just be let loose.

But just consider the small probability that some of the rationalists come together as a non-profit corporation to contribute to mitigating existential risk. There are many reasons our kind cannot cooperate. Also, the fact is that coordination is hard.

But if we could, then with the latest in decision theory, argument diagrams (1, 2, 3), and internal futarchy (after the size of the corporation gets big), we could create a corporation that wins. There are many people from the world of software here. Within the corporation itself, there is no need to stick to legacy systems. We could interact with the best coordination software and keep the corporation "sane".

We can create products and services like any for-profit corporation and sell them at market rates, but use the surplus to mitigate existential risk. In other words, it is difficult, but in the Everett branches where x-rationalists manage a synergistic outcome, it might be possible to strengthen the funding of existential risk mitigation considerably.

Some criticisms of this idea which I could think of

  • The corporation becomes a lost cause. Goodhart's law kicks in and the original purpose of forming the corporation is lost.
  • People are polite when in a situation where no important decisions are being made (like an internet forum like lesswrong), but if actual productivity is involved, they might get hostile when someone lowers their corporate karma. Perfect internet buddies might become co-workers who hate each other's guts.
  • The argument that there is no possibility of synergy. The present situation, where rational people spread over the world and in different situations are money pumping from less rational people around them is better.
  • People outside the corporation might mentally slot existential risk as a kooky topic that "that creepy company talks about all the time" and not see it as a genuine issue that diverse persons from different walks of life are interested in.

and so on..

But still, my question is - shouldn't we at least consider the possibilities of synergy in the manner indicated?

Comment author: wedrifid 02 September 2010 01:45:11PM *  1 point [-]

This would be more likely to work if you completely took out the 'for existential risk' part. Find a way to cooperate with people effectively "to make money". No need to get religion all muddled up in it.

Comment author: JohnDavidBustard 02 September 2010 01:15:08PM 2 points [-]

Apologies if this question seems naive but I would really appreciate your wisdom.

Is there a reasonable way of applying probability to analogue inference problems?

For example, suppose two substances A and B are being measured using a device which produces an analogue value C. Given a history of analogue values, how does one determine the probability of each substance? Unless the analogue values match exactly, how can historical information contribute to the answer without making assumptions about the shape of the probability density function created by A or B? If this assumption must be made, how can it be reasonably determined, and crucially, what events could occur that would lead to it being changed?

A real example would be that the PDF is often modelled as a Gaussian distribution, but more recent approaches tend to use different distributions because of outliers. This seems like the right thing to do, because our visual sense of distribution can easily identify such points, but is there any more rigorous justification?

Is, in effect, the selection of the underlying model the real challenge of rational decision making, not the inference rules?

Comment author: Perplexed 02 September 2010 01:57:10PM 4 points [-]

Is there a reasonable way of applying probability to analogue inference problems?

Your examples certainly show a grasp of the problem. The solution is first sketched in Chapter 4.6 of Jaynes.

Is, in effect, the selection of the underlying model the real challenge of rational decision making, not the inference rules?

Definitely. Jaynes finishes deriving the inference rules in Chapter 2 and illustrates how to use them in Chapter 3. The remainder of the book deals with "the real challenge". In particular Chapters 6, 7, 12, 19, and especially 20. In effect, you use Bayesian inference and/or Wald decision theory to choose between underlying models pretty much as you might have used them to choose between simple hypotheses. But there are subtleties, ... to put things mildly. But then classical statistics has its subtleties too.
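A toy illustration of the model-choice point above (my own sketch, not taken from Jaynes; the readings and parameters are made up): compare the total log-likelihood of a Gaussian versus a heavy-tailed (Cauchy) noise model on device readings containing one outlier. The heavy-tailed model is far less "surprised" by the outlier, which is the rigorous version of the intuition that Gaussians handle outliers badly.

```python
import math

# Hypothetical device readings: tightly clustered around 1.0, plus one outlier.
data = [1.0, 1.1, 0.9, 1.05, 0.95, 5.0]

def loglik_gauss(xs, mu, sigma):
    """Total log-likelihood under a Gaussian noise model."""
    return sum(-0.5 * math.log(2 * math.pi * sigma ** 2)
               - (x - mu) ** 2 / (2 * sigma ** 2) for x in xs)

def loglik_cauchy(xs, mu, gamma):
    """Total log-likelihood under a heavy-tailed Cauchy noise model."""
    return sum(-math.log(math.pi * gamma * (1 + ((x - mu) / gamma) ** 2))
               for x in xs)

# Same location and scale for both models, chosen to fit the cluster.
lg = loglik_gauss(data, 1.0, 0.1)
lc = loglik_cauchy(data, 1.0, 0.1)
print(lg, lc)  # the Gaussian's log-likelihood is wrecked by the single outlier
```

With flat priors over the two models, the difference in total log-likelihood is exactly the log of the Bayes factor between them, so this kind of comparison is the Bayesian answer to "which noise model should I assume?"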

Comment author: JanetK 02 September 2010 07:54:27AM 1 point [-]

The penny has just dropped! When I first encountered LessWrong, the word 'Rationality' did not stand out. I interpreted it to mean its everyday meaning of careful, intelligent, sane, informed thought (in keeping with 'avoiding bias'). But I have become more and more uncomfortable with the word because I see it having a more restricted meaning in the LW context. At first, I thought this was an economic definition of the 'rational' behaviour of the selfish and unemotional ideal economic agent. But now I sense an even more disturbing definition: rational as opposed to empirical. As I use scientific evidence as the most important arbiter of what I believe, I would find the anti-empirical idea of 'rational' a big mistake.

Comment author: thomblake 02 September 2010 05:19:34PM 3 points [-]

The philosophical tradition of 'Rationalism' (opposed to 'Empiricism') is not relevant to the meaning here. Though there is some relationship between it and "Traditional Rationality" which is referenced sometimes.

Comment author: Emile 02 September 2010 08:15:56AM 2 points [-]

But now I sense an even more disturbing definition: rational as opposed to empirical.

I don't think that's how most people here understand "rationalism".

Comment author: JanetK 02 September 2010 09:09:40AM 1 point [-]

I don't think that's how most people here understand "rationalism".

Good

Comment author: timtyler 02 September 2010 08:39:23AM *  1 point [-]

There is at least one post about that - though I don't entirely approve of it.

Occam's razor is not exactly empirical. Evidence is involved - but it does let you choose between two theories both of which are compatible with the evidence without doing further observations. It is not empirical - in that sense.

Comment author: Kenny 03 September 2010 10:56:39PM 2 points [-]

Occam's razor isn't empirical, but it is the economically rational decision when you need to use one of several alternative theories (that are exactly "compatible with the evidence"). Besides, "further observations" are inevitable if any of your theories are actually going to be used (i.e. to make predictions [that are going to be subsequently 'tested']).
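The Bayesian version of this point can be made concrete with a toy calculation (illustrative numbers only, my own example): a model flexible enough to explain many possible observations must spread its predictive probability thinner, so when both models fit the observed data, the narrower model comes out ahead without any further observations.

```python
from fractions import Fraction

# Possible observations: integers 1..8.
# Simple model: predicts only values 1..2 (uniform over 2 outcomes).
# Complex model: can explain anything in 1..8 (uniform over 8 outcomes).
simple = {x: Fraction(1, 2) for x in (1, 2)}
complex_ = {x: Fraction(1, 8) for x in range(1, 9)}

observed = 2
prior = Fraction(1, 2)  # equal prior odds between the two models

p_simple = prior * simple.get(observed, Fraction(0))
p_complex = prior * complex_.get(observed, Fraction(0))

posterior_simple = p_simple / (p_simple + p_complex)
print(posterior_simple)  # 4/5: the narrower model wins when both fit
```

This "Occam factor" is where the razor stops being a free-floating aesthetic preference and becomes ordinary probability arithmetic.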

Comment author: kodos96 02 September 2010 08:33:28AM 1 point [-]

But now I sense an even more disturbing definition: rational as opposed to empirical.

Ummmmmmmm.... no.

The word "rational" is used here on LW in essentially its literal definition (which is not quite the same as its colloquial everyday meaning).... if anything it is perhaps used by some to mean "bayesian"... but bayesianism is all about updating on (empirical) evidence.

Comment author: JanetK 02 September 2010 08:56:11AM 1 point [-]

According to my dictionary: rationalism 1. Philos. the theory that reason is the foundation of certainty in knowledge (opp. empiricism, sensationalism)

This is there as well as: rational 1. of or based on reasoning or reason

So although there are other (more everyday) definitions also listed at later numbers, the opposition to empirical is one of the literal definitions. The Bayesian updating thing is why it took me a long time to notice the other anti-scientific tendency.

Comment author: timtyler 03 September 2010 07:55:54AM *  2 points [-]

I wouldn't say "anti-scientific" - but it certainly would be good if scientists actually studied rationality more - and so were more rational.

With lab equipment like the human brain, you have really got to look into its strengths and weaknesses - and read the manual about how to use it properly.

Personally, when I see material like Science or Bayes - my brain screams: false dichotomy: Science and Bayes! Don't turn the scientists into a rival camp: teach them.

Comment author: wedrifid 02 September 2010 08:17:38AM *  1 point [-]

But now I sense an even more disturbing definition: rational as opposed to empirical. As I use scientific evidence as the most important arbiter of what I believe, I would find the anti-empirical idea of 'rational' a big mistake.

Indeed. It is heresy in the extreme! Burn them!

Comment author: JamesAndrix 02 September 2010 01:49:42AM 3 points [-]

I would like to see more on fun theory. I might write something up, but I'd need to review the sequence first.

Does anyone have something that could turn into a top level post? or even a open thread comment?

Comment author: JohnDavidBustard 02 September 2010 08:10:52AM *  10 points [-]

I used to be a professional games programmer and designer, and I'm very interested in fun. There are a couple of good books on the subject: A Theory of Fun and Rules of Play. As a designer I spent many months analyzing sales figures for both computer games and other conventional toys. The patterns within them are quite interesting: for example, children's toys pass from amorphous learning tools (bright objects and blobby humanoids), through mimicking parents (accurate baby dolls), to mimicking older children (sexualised dolls and makeup). My ultimate conclusion was that fun takes many forms whose source can ultimately be reduced to what motivates us. In effect, fun things are mental hacks of our intrinsic motivations. I gave a couple of talks on my take on what these motivations are. I'd be happy to repeat this material here (or upload and link to the videos if people prefer).

Comment author: Mass_Driver 02 September 2010 05:13:26PM 3 points [-]

I found Rules of Play to be little more than a collection of unnecessary (if clearly-defined) jargon and glittering generalities about how wonderful and legitimate games are. Possibly an alien or non-neurotypical who had no idea what a game was might gather some idea of games from reading the book, but it certainly didn't do anything for me to help me understand games better than I already do from playing them. Did I miss something?

Comment author: JohnDavidBustard 02 September 2010 05:41:35PM *  5 points [-]

Yes, I take your point. There isn't a lot of material on fun, and game design analysis is often very genre-specific. I like Rules of Play not so much because it provides great insight into why games are fun, but more as a first step towards being a bit more rigorous about what game mechanics actually are. There is definitely a lot further to go, and there is a tendency to ignore the cultural and psychological motivations (e.g. why being a gangster and free-roaming mechanics work well together) in favour of analysing abstract games. However, it is fascinating to imagine a minimal game; in fact, some of the most successful game titles have stripped the interactions down to their most basic motivating mechanics (Farmville or Diablo, for example).

To provide a concrete example, I worked on a game (Medievil Resurrection) where the player controlled a crossbow in a minigame. By adjusting the speed and acceleration of the mapping between joystick and bow, the sensation of controlling it passed through distinct stages. As the parameters approached the sweet spot, my mind (and that of other testers) experienced a transition from feeling I was controlling the bow indirectly to feeling like I was holding the bow. Deviating slightly around this value adjusted its perceived weight, but there was a concrete point at which this sensation was lost.

Although Rules of Play does not cover this kind of material, it did feel to me like an attempt to examine games in a more general way, so that these kinds of elements could be extracted from their genre-specific contexts and be understood in isolation.

Comment author: komponisto 02 September 2010 02:15:57AM *  5 points [-]

I've long had the idea of writing a sequence on aesthetics; I'm not sure if and when I'll ever get around to it, however. (I have a fairly large backlog of post ideas that have yet to be realized.)

Comment author: Kaj_Sotala 01 September 2010 04:46:23PM 15 points [-]

Neuroskeptic's Help, I'm Being Regressed to the Mean is the clearest explanation of regression to the mean that I've seen so far.

Comment author: Snowyowl 02 September 2010 01:24:54PM *  6 points [-]

Wow. I thought I understood regression to the mean already, but the "correlation between X and Y-X" is so much simpler and clearer than any explanation I could give.
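The "correlation between X and Y-X" framing can be checked numerically with a quick simulation (a sketch; the setup and numbers are mine): give everyone a stable underlying trait, add independent measurement noise to two test sittings, and the change score Y-X correlates negatively with X even though nobody's trait moved at all.

```python
import random

random.seed(0)
n = 100_000

# Each person: a stable trait t, measured twice with independent noise.
pairs = []
for _ in range(n):
    t = random.gauss(0, 1)       # underlying trait (never changes)
    x = t + random.gauss(0, 1)   # first noisy measurement
    y = t + random.gauss(0, 1)   # second noisy measurement
    pairs.append((x, y - x))     # first score vs. apparent "change"

def corr(data):
    """Pearson correlation of the (a, b) pairs in data."""
    xs = [a for a, b in data]
    ys = [b for a, b in data]
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    cov = sum((a - mx) * (b - my) for a, b in data)
    vx = sum((a - mx) ** 2 for a in xs)
    vy = sum((b - my) ** 2 for b in ys)
    return cov / (vx * vy) ** 0.5

print(corr(pairs))  # close to -0.5: high first scores "regress" on retest
```

With equal trait and noise variances the analytic value is exactly -1/2: Cov(X, Y-X) = -Var(noise), so extreme first measurements are partly noise and the retest falls back toward the mean with no real change anywhere.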

Comment author: Vladimir_M 02 September 2010 04:00:14AM *  2 points [-]

When I tried making sense of this topic in the context of the controversies over IQ heritability, the best reference I found was this old paper:

Brian Mackenzie, Fallacious use of regression effects in the I.Q. controversy, Australian Psychologist 15(3):369-384, 1980

Unfortunately, the paper failed to achieve any significant impact, probably because it was published in a low-key journal long before Google, and it's now languishing in complete obscurity. I considered contacting the author to ask if it could be put for open access online -- it would be definitely worth it -- but I was unable to find any contact information; it seems like he retired long ago.

There is also another paper with a pretty good exposition of this problem, which seems to be a minor classic, and is still cited occasionally:

Lita Furby, Interpreting regression toward the mean in developmental research, Developmental Psychology, 8(2):172-179, 1973

Comment author: homunq 01 September 2010 03:52:49PM *  17 points [-]

I had a top-level post which touched on an apparently-forbidden idea downvoted to a net of around -3 and then deleted. This left my karma pinned (?) at 0 for a few months. I am not sure of the reasons for this, but suspect that the forbidden idea was partly to blame.

My karma is now back up to where I could make a top-level post. Do people think that a discussion forum on the moderation and deletion policies would be beneficial? I do, even if we all had to do silly dances to avoid mentioning the specifics of any forbidden idea(s). In my opinion, such dances are both silly and unjustified; but I promise that I'd do them and encourage them if I made such a post, out of respect for the evident opinions of others, and for the asymmetrical (though not one-sided) nature of the alleged danger.

I would not be offended if someone else "took the idea" and made such a post. I also wouldn't mind if the consensus is that such a post is not warranted. So, what do you think?

Comment author: [deleted] 16 September 2010 10:39:14PM *  3 points [-]

A minute in Konkvistador's mind:

Again the very evil mind shattering secret, why do I keep running into you?

This is getting old, lots of people seem to know about it. And a few even know the evil soul wrecking idea.

The truth is out there, my monkey brain can't cope with the others having a secret they're not willing to share, they may bash my skull in with a stone! I should just mass PM the guys who know about the secret in an inconspicuous way. They will drop hints, they are weak. Also, traces of the relevant texts have to still be online.

That job advert seems to be the kind a rather small subset of organizations would put out.

That is just paranoid don't even think about that.

XXX asf ag agdlqog hh hpoq fha r wr rqw oipa wtrwz wrz wrhz. W211!!

Yay posting on Lesswrong feels like playing Call of Cthulhu!

....

These are supposed to be not only very smart, but very rational people, people you have a high opinion of, who seem to take the idea very seriously. They may be trying to manipulate you. There may be a non-trivial possibility of them being right.

....

I suddenly feel much less enthusiastic about life extension and cryonics.

Comment author: thomblake 16 September 2010 10:53:45PM 2 points [-]

I do have access to the forbidden post, and have no qualms about sharing it privately. I actually sought it out actively after I heard about the debacle, and was very disappointed when I finally got a copy to find that it was a post that I had already read and dismissed.

I don't think there's anything there, and I know what people think is there, and it lowered my estimation of the people who took it seriously, especially given the mean things Eliezer said to Roko.

Comment author: [deleted] 16 September 2010 11:06:05PM *  3 points [-]

Can I haz evil soul crushing idea plz?

But to be serious: yes, if I find the idea foolish yet people take it seriously, this reduces my optimism as well, just as much as malice on the part of the LessWrong staff or just plain real dark secrets would, since I take Clippy to be a serious and very scary threat (I hope you don't take too much offence, Clippy; you are a wonderful poster). I should have stated that too. But to be honest, it would be much less fun knowing the evil soul-crushing self-fulfilling prophecy (tm); the situation around it is hilarious.

What really catches my attention however is the thought experiment of how exactly one is supposed to quarantine a very very dangerous idea. Since in the space of all possible ideas, I'm quite sure there are a few that could prove very toxic to humans.

The LW members that take it seriously are doing a horrible job of it.

Comment author: NancyLebovitz 20 September 2010 04:24:12PM 1 point [-]

Upvoted for the cat picture.

Comment author: xamdam 05 September 2010 07:21:48PM 4 points [-]

Do people think that a discussion forum on the moderation and deletion policies would be beneficial?

Yes. I think that lack of policy 1) reflects poorly on the objectivity of moderators, even if in appearance only 2) diverts too much energy into nonproductive discussions.

Comment author: Relsqui 16 September 2010 10:08:42PM 3 points [-]

reflects poorly on the objectivity of moderators

As a moderator of a moderately large social community, I would like to note that moderator objectivity is not always the most effective way to reach the desired outcome (an enjoyable, productive community). Yes, we've compiled a list of specific actions that will result in warnings, bans, and so forth, but someone will always be able to think of a way to be an asshole which isn't yet on our list--or which doesn't quite match the way we worded it--or whatever. To do our jobs well, we need to be able to use our judgment (which is the criterion for which we were selected as moderators).

This is not to say that I wouldn't like to see a list of guidelines for acceptable and unacceptable LW posts. But I respect the need for some flexibility on the editing side.

Comment deleted 05 September 2010 11:07:30AM *  [-]
Comment author: NihilCredo 05 September 2010 11:53:35AM 6 points [-]

As a rather new reader, my impression has been that LW suffers from a moderate case of what in the less savory corners of the Internet would be known as CJS (circle-jerking syndrome).

At the same time, if one is willing to play around this aspect (which is as easy as avoiding certain threads and comment trees), there are discussion possibilities that, to the best of my knowledge, are not matched anywhere else - specifically, the combination of a low effort-barrier to entry, a high average thought-to-post ratio, and a decent community size.

Comment author: Perplexed 01 September 2010 06:47:49PM *  14 points [-]

Do people think that a discussion forum on the moderation and deletion policies would be beneficial?

I would like to see a top-level post on moderation policy. But I would like for it to be written by someone with moderation authority. If there are special rules for discussing moderation, they can be spelled out in the post and commenters can abide by them.

As a newcomer here, I am completely mystified by the dark hints of a forbidden topic. Every hypothesis I can come up with as to why a topic might be forbidden founders when I try to reconcile it with the fact that the people doing the forbidding are not stupid.

Self-censorship to protect our own mental health? Stupid. Secrecy as a counter-intelligence measure, to safeguard the fact that we possess some counter-measure capability? Stupid. Secrecy simply because being a member of a secret society is cool? Stupid, but perhaps not stupid enough to be ruled out. On the other hand, I am sure that I haven't thought of every possible explanation.

It strikes me as perfectly reasonable if certain topics are forbidden because discussion of such topics has historically been unproductive, has led to flame wars, etc. I have been wandering around the internet long enough to understand and even appreciate somewhat arbitrary, publicly announced moderation policies. But arbitrary and secret policies are a prescription for resentment and for time wasted discussing moderation policies.

Edit: typo correction - insert missing words