Less Wrong: Open Thread, September 2010

3 Post author: matt 01 September 2010 01:40AM

This thread is for the discussion of Less Wrong topics that have not appeared in recent posts. If a discussion gets unwieldy, celebrate by turning it into a top-level post.

Comments (610)

Comment author: David_Gerard 06 December 2010 12:51:24PM 0 points [-]

Is the Open Thread now deprecated in favour of the Discussion section? If so, I suggest an Open Thread over there for questions not worked out enough for a Discussion post. (I have some.)

Comment author: Cyan 01 October 2010 10:46:41PM 0 points [-]

Request: someone make a fresh open thread, and someone else make a rationality thread. I'd do it myself, but I've already done one of each this year; each kind of thread is usually good for two or three karma, and it wouldn't be fair.

Comment author: whpearson 02 October 2010 06:48:48PM 1 point [-]

Personally, I don't care much about karma; you can have my slice of the karma pie.

Perhaps put a note reminding other people that they can post them.

Comment author: JGWeissman 01 October 2010 10:57:22PM 2 points [-]

With the new discussion section, do we really need these recurring threads?

Comment author: Cyan 02 October 2010 06:37:40PM 2 points [-]

Probably not the open thread, but I'd like the tradition of monthly rationality quotes threads to continue.

Comment author: NancyLebovitz 02 October 2010 11:36:16AM 3 points [-]

I don't know. Open threads strike me as a better structure for conversation.

Comment author: DanielVarga 27 September 2010 06:22:06PM *  1 point [-]
Comment author: beriukay 25 September 2010 09:50:05PM 4 points [-]

I participated in a survey directed at atheists some time ago, and the report has come out. They didn't mention me by name, but they referenced me in their 15th endnote, which concerned questions they said were spiritual in nature. Specifically, the question was whether we believe in the possibility of human minds existing outside of our bodies. From the way they worded it, apparently I was one of the few not-spiritual people who believed there were perfectly naturalistic mechanisms for separating consciousness from bodies.

Comment author: magfrump 25 September 2010 09:38:30PM *  0 points [-]

Omega comes up to you and tells you that if you believe in science it will make your life 1000 utilons better. He then goes on to tell you that if you believe in god, it will make your afterlife 1 million utilons better. And finally, if you believe in both science and god, you won't get accepted into the afterlife so you'll only get the 1000 utilons.

If it were me, I would tell Omega that he's not my real dad and go on believing in science and not believing in god.

Am I being irrational?

EDIT: if Omega is an infinitely all-knowing oracle, the answer may be different than if Omega is ostensibly a normal human who has predicted many things correctly. Also, by "to believe in science" I mean to pursue epistemic rationality as a standard for believing things rather than, for example, a literal interpretation of the Bible.
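Written out as arithmetic (a toy model, not anything magfrump specified): suppose your subjective probability that an afterlife exists at all is p, and take the utilon figures from the comment at face value. Then the choice reduces to comparing 1000 against 1,000,000 times p:

```python
# Expected-utility sketch of the Omega offer. `p` is a subjective
# probability that an afterlife exists (an assumption of this sketch);
# the payoff numbers come from the thought experiment itself.

def expected_utilons(choice: str, p: float) -> float:
    """Expected utilons for believing in 'science', 'god', or 'both'."""
    if choice == "science":
        return 1_000               # this-life bonus, paid with certainty
    if choice == "god":
        return 1_000_000 * p       # afterlife bonus, only if there is one
    if choice == "both":
        return 1_000               # barred from the afterlife: science bonus only
    raise ValueError(choice)

# Believing in god beats believing in science exactly when
# 1_000_000 * p > 1_000, i.e. when p > 0.001.
for p in (0.0001, 0.001, 0.01):
    ranking = sorted(("science", "god", "both"),
                     key=lambda c: expected_utilons(c, p), reverse=True)
    print(p, ranking[0])
```

So under this (very loose) model, sticking with science is the expected-utility maximizer whenever your credence in an afterlife is below one in a thousand; the real dispute is over whether Omega's statement should move that credence.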

Comment author: NihilCredo 25 September 2010 09:52:50PM *  1 point [-]

The definition of Omega includes him being completely honest and trustworthy. He wouldn't tell you "I will make your afterlife better" unless he knew that there is an afterlife (otherwise he couldn't make it better), just like he wouldn't say "the current Roman Emperor is bald". If he were to say instead "I will make your afterlife better, if you have one", I would keep operating on my current assumption that there is no such thing as an afterlife.

Oh, I almost forgot - what does it even mean to "believe in science"?

Comment author: Pavitra 23 September 2010 05:30:32AM 3 points [-]

In light of the news that apparently someone or something is hacking into automated factory control systems, I would like to suggest that the apocalypse threat level be increased from Guarded (lots of curious programmers own fast computers) to Elevated (deeply inconclusive evidence consistent with a hard takeoff actively in progress).

Comment author: jimrandomh 23 September 2010 08:23:40PM *  3 points [-]

It looks a little odd for a hard takeoff scenario - it seems to be prevalent only in Iran, it seems configured to target a specific control system, and it uses 0-days wastefully (I see a claim that it uses four 0-days and 2 stolen certificates). On the other hand, this is not inconsistent with an AI going after a semiconductor manufacturer and throwing in some Iranian targets as a distraction.

My preference ordering is friendly AI, humans, unfriendly AI; my probability ordering is humans, unfriendly AI, friendly AI.

Comment author: PeerInfinity 20 September 2010 08:19:31PM *  2 points [-]

Is there enough interest for it to be worth creating a top level post for an open thread discussing Eliezer's Coherent Extrapolated Volition document? Or other possible ideas for AGI goal systems that aren't immediately disastrous to humanity? Or is there a top level post for this already? Or would some other forum be more appropriate?

Comment author: Relsqui 16 September 2010 10:37:14PM *  8 points [-]

I'm a translator between people who speak the same language, but don't communicate.

People who act mostly based on their instincts and emotions, and those who prefer to ignore or squelch those instincts and emotions[1], tend to have difficulty having meaningful conversations with each other. It's not uncommon for people from these groups to end up in relationships with each other, or at least working or socializing together.

On the spectrum between the two extremes, I am very close to the center. I have an easier time understanding the people on each side than their counterparts do, it frustrates me when they miscommunicate, and I want to help. This includes general techniques (although there are some good books on that already), explanations of words or actions which don't appear to make sense, and occasional outright translation of phrases ("When they said X, they meant what you would have called Y").

Is this problem, or this skill, something of interest to the LW community at large? In the several days I've been here it's come up on comment threads a couple times. I have some notes on the subject, and it would be useful for me to get feedback on them; I'd like to some day compile them into a guide written for an audience much like this one. Do you have questions about how to communicate with people who think very much unlike you, or about specific situations that frustrate you? Would you like me to explain what appears to be an arbitrary point of etiquette? Anything else related to the topic which you'd like to see addressed?

In short: "I understand the weird emotional people who are always yelling at you, but I'm also capable of speaking your language. Ask me anything."


[1] These are both phrased as pejoratively as I could manage, on purpose. Neither extreme is healthy.

Comment author: Rain 30 September 2010 02:45:40PM 1 point [-]

I wanted to say thank you for providing these services. I like performing the same translations, but it appears I'm unable to be effective in a text medium, requiring immediate feedback, body language, etc. When I saw some of your posts on old articles, apparently just as you arrived, I thought to myself that you would genuinely improve this place in ways that I've been thinking were essential.

Comment author: Relsqui 30 September 2010 06:25:44PM 1 point [-]

Thanks! That's actually really reassuring; that kind of communication can be draining (a lot of people here communicate naturally in a way which takes some work for me to interpret as intended). It is good to hear that it seems to be doing some good.

Comment author: beriukay 25 September 2010 10:09:37PM 2 points [-]

One issue I've frequently stumbled across is people who make claims that they have never truly considered. When I ask for more information, point out obvious (to me) counterexamples, or ask them to explain why they believe it, they get defensive and in some cases quite offended. Some don't ever want to talk about issues because they feel like talking about their beliefs with me is like being subjected to some kind of Inquisition. It seems to me that people of this cut believe that to show you care about someone, you should accept anything they say with complete credulity. Have you found good ways to get people to think about what they believe without making them defensive? Do I just have to couch all my responses in fuzzy words? Using weasel words always seemed disingenuous to me, but maybe it's worth it if I can get someone to actually consider the opposition by saying things like "Idunno, I'm just saying it seems to me, and I might be wrong, that maybe gays are people and deserve all the rights that people get, you know what I'm saying?"

Comment author: Relsqui 26 September 2010 03:05:53AM *  10 points [-]

I've been on the other side of this, so I definitely understand why people react that way--now let's see if I understand it well enough to explain it.

For most people, being willing to answer a question or identify a belief is not the same thing as wanting to debate it. If you ask them to tell you one of their beliefs and then immediately try to engage them in justifying it to you, they feel baited and switched into a conflict situation, when they thought they were having a cooperative conversation. You've asked them to defend something very personal, and then are acting surprised when they get defensive.

Keep in mind also that most of the time in our culture, when one person challenges another one's beliefs, it carries the message "your beliefs are wrong." Even if you don't state that outright--and even in the probably rare cases when the other person knows you well enough to understand that isn't your intent--you're hitting all kinds of emotional buttons which make you seem like an aggressor. This is the result of how the other person is wired, but if you want to be able to have this kind of conversation, it's in your interest to work with it.

The corollary to the implied "your beliefs are wrong" is "I know better than you" (because that's how you would tell that they're wrong). This is an incredibly rude signal to send to--well, anyone, but especially to another adult. Your hackles probably rise too when someone signals that they're superior to you and you don't agree; this is the same thing.

The point, then, is not that you need to accept what people you care about say with credulity. It's that you need to accept it with respect. You do not have any greater value than the person you're talking to (even if you are smarter and more rational), just like they don't have any greater value than you (even if they're richer and more attractive). Even if you really were by some objective measure a better person (which is, as far as I can tell, a useless thing to consider), they don't think so, and acting like it will get you nowhere.

Possibly one of the hardest parts of this to swallow is that, when you're choosing words for the purpose of making another person remain comfortable talking to you, whether their beliefs are a good reflection of reality is not actually important. Obviously they think so, and merely contradicting them won't change that (nor should it). So if you sound like you're just trying to convince them that they're wrong, even if that isn't what you mean to do, they might just feel condescended to and walk away.

None of this means that you can't express your own beliefs vehemently ("gay people deserve equal rights!"). It just means that when someone expresses one of theirs, interrogating them bluntly about their reasons--especially if they haven't questioned them before--is more likely to result in defensiveness than in convincing them or even productive debate. This may run counter to your instincts, understandably, but there it is.

No fuzzy words in the world will soften your language if their inflection reveals intensity and superiority. Display real respect, including learning to read your audience and back off when they're upset. (You can always return to the topic another time, and in fact, occasional light conversations will probably do a better job with this sort of person than one long intense one.) If you aren't able to show genuine respect, well, I don't blame them for refusing to discuss their beliefs with you.

Comment author: Morendil 17 September 2010 07:17:21AM 1 point [-]

Yes please.

Does the term "bridger" ring a bell for you? (It's from Greg Egan's Diaspora, in case it doesn't, and you'd have to read it to get why I think that would be an apt name for what you're describing.)

Comment author: Relsqui 17 September 2010 07:37:56AM 0 points [-]

It doesn't, and I haven't, although I can infer at least a little from the term itself. Your call if you want to try and explain it or wait for me to remember, find a library that has it, acquire it, and read it before understanding. ;)

Is there any specific subject under that umbrella which you'd like addressed? Narrowing the focus will help me actually put something together.

Comment author: Morendil 17 September 2010 08:30:56AM 0 points [-]

The Wikipedia page explains a little about Bridgers.

I'm afraid if I knew how to narrow this topic down I'd probably be writing it up myself. :)

Comment author: Relsqui 17 September 2010 04:24:58PM 0 points [-]

Hmm. I'm wary of the analogy to separate species; humans treat each other enough like aliens as it is. But so noted, thank you.

Comment author: TobyBartels 16 September 2010 08:45:24PM *  1 point [-]

Since the Open Thread is necessarily a mixed bag anyway, hopefully it's OK if I test Markdown here

test deleted

Comment author: Clippy 13 September 2010 01:42:25AM *  0 points [-]

>equals(correct_reasoning, Bayesian_inference)

Comment author: Clippy 13 September 2010 08:27:03PM 0 points [-]

This server is really slow.

Comment author: Will_Newsome 12 September 2010 08:51:41PM 0 points [-]

Shangri-La dieters: So I just recently started reading through the archives of Seth Roberts' blog, and it looks like there's tons of benefits of getting 3 or so tablespoons of flax seed oil a day (cognitive performance, gum health, heart health, etc.). That said, it also seems to reduce appetite/weight, neither of which I want. I haven't read through Seth's directory of related posts yet, but does anyone have any advice? I guess I'd be willing to set alarms for myself so that I remembered to eat, but it just sounds really unpleasant and unwieldy.

Comment author: jimmy 16 September 2010 07:09:24AM 1 point [-]

Flaxseed oil has a strong odor. I think most people try to choke it down with their breath held to avoid the smell. It probably wouldn't count as 'flavorless calories' if you didn't.

If you can't stand that, eat it with some consistent food.

Comment author: Will_Newsome 16 September 2010 07:13:59AM 0 points [-]

Of note is that I was recommended fish oil instead as it has a better omega-3/omega-6 ratio, so I'll probably go that route.

Comment author: AnnaSalamon 12 September 2010 09:01:30PM *  2 points [-]

Perhaps add your flax seed oil to food, preferably food with notable flavors of various kinds. It's tasty that way and should avoid the tasteless calories that are supposed to be important to Shangri-La (although I haven't read about Shangri-La, so don't trust me).

Comment author: gwern 12 September 2010 01:50:03PM 2 points [-]

The Onion parodies cyberpunk by describing our current reality: http://www.theonion.com/articles/man-lives-in-futuristic-scifi-world-where-all-his,17858/

Comment author: NancyLebovitz 12 September 2010 12:10:40PM 7 points [-]

I just discovered (when looking for a comment about an Ursula Vernon essay) that the site search doesn't work for comments which are under a "continue this thread" link. This makes site search a lot less useful, and I'm wondering if that's a cause of other failed searches I've attempted here.

Comment author: jimmy 16 September 2010 07:10:14AM 1 point [-]

I've noticed this too. There's no easy way to 'unfold all' is there?

Comment author: simplicio 12 September 2010 05:10:27AM 3 points [-]

In light of XFrequentist's suggestion in "More Art, Less Stink," would anyone be interested in a post consisting of a summary & discussion of Cialdini's Influence?

This is a brilliant book on methods of influencing people. But it's not just Dark Arts - it also includes defense against the Dark Arts!

Comment author: jimmy 16 September 2010 07:15:09AM *  0 points [-]

I just finished reading that book. It is mostly from a "defense against" perspective.

Reading the chapter names provides a decent [extremely short] summary, and I expect that you're already aware that they are influences. That said, when I read through it, there were a lot of "Aha!" moments when I realized something I'd seen was actually a well-thought-out 'weapon of influence', and now my new hobby is saying "Chapter 3: Commitment and Consistency!" every time I see it used as persuasion.

The whole book is hard to put down, and makes me want to quote part of it to the nearest person in about every paragraph or two.

I'd consider writing such a post, but I'm not sure how to compress it: the very basics should be obvious to the regulars here, but the details take time to flesh out.

Comment author: CronoDAS 12 September 2010 06:34:26AM *  0 points [-]

Yes, I would like such a post.

Comment author: allenwang 12 September 2010 04:11:17AM 1 point [-]

I have been following this site for almost a year now and it is fabulous, but I haven't felt an urgent need to post to the site until now. I've been working on a climate change project with a couple of others and am in desperate need of some feedback.

I know that climate change isn't a particularly popular topic on this website (but I'm not sure why, maybe I missed something, since much of the website seems to deal with existential risk. Am I really off track here?), but I thought this would be a great place to air these ideas. Our approach tries to tackle the irrational tangle that many of our institutions appear to be caught up in, so I thought this would be the perfect place to get some expertise. The project is kind of at a standstill, and it really needs some advice and leads (and collaborators), so please feel free to praise, criticize, advise, or even join.

I saw orthonormal's "welcome to LessWrong post," so I guess this is where to post before I build up enough points. I hope it isn't too long of an introductory post for this thread?

The aim of the project is to achieve a population that is more educated in the basics of climate change science and policy, with the hope that a more educated voting public will be a big step towards achieving the policies necessary to deal with climate change.

The basic problem of educating the public about climate change is twofold. First, people sometimes get trapped in “information cocoons” (I am using Cass Sunstein’s terminology from his book Infotopia). Information cocoons are created when the news and information people seek out and surround themselves with are biased by what they already know. They are either completely unaware of competing evidence or, if they are aware of it, they revise their network of beliefs to deny the credibility of those who offer it rather than consider it serious evidence. Usually, this is because they believe it is more probable that those people are not credible than that they themselves could be wrong. This problem has always existed, and has perhaps worsened since the rise of the personalized web. People who are trapped in information cocoons of denial of anthropogenic climate change will require much more evidence and counterargument before they can begin to revise an entire network of beliefs that supports their current conclusions.

Second, the population is uneducated about climate change because people lack the incentive to learn about the issues. Although we would presumably all benefit if everyone took the time to thoroughly understand the issue, the individual costs and benefits of doing so actually run the other way. Because the benefits of better policies accrue to everybody, but the costs are borne by the individual, people have an incentive to free ride: to let everybody else worry about the issue, because either way their individual contribution means little, and everybody else can make the informed decision. But with everybody reasoning this way, there is a much lower level of education on these issues than is optimal (or even than is needed to create the necessary change, especially if there are interest groups with opposing goals).

The solution is to institute some system that can crack open these information cocoons and at the same time provide wide-ranging personal incentives for participating. For the former, we propose to develop a layman’s guide to climate change science and economic and environmental policy. Many such guides already exist, although we have some different ideas about how to make ours more transparent to criticism and more thorough in its discussion of the epistemic uncertainty surrounding the whole issue. (There is definitely a lot we can learn from LessWrong on this point.) Also, I think we have a unique idea about developing a system of personal incentives. I will discuss this latter issue first.

Comment author: allenwang 12 September 2010 04:11:38AM *  -1 points [-]

(Sorry if this comment is too long; continued from above.)

Creating Incentives

Of course, a sense of public pride exists in many people, and this has led large numbers of people to learn about the issues without external inducements. But the population of educated voters could be vastly increased if there were these personal benefits, especially for groups where environmentalism has not become a positive norm.

While we have thought about other approaches to creating these wide-ranging personal incentives, specifically material prizes and the intangible benefits of social networking and personal pride (the kind behind Wikipedia’s or Facebook’s success), these appear difficult to apply to the issue of climate change. Material prizes would be costly to fund, especially to make them worth the several hours necessary to learn about the issues. For one thing, the issues are difficult enough, and the topic possibly scary enough, that it is not necessarily fun to learn about them and discuss them with your friends. For another, it takes time and a little dedicated thinking to achieve an adequate understanding of the problem, and part of the incentive to do so on Wikipedia (showing off your genuine expertise on the topic, even if anonymously) is exactly what disappears once the populace is educated: you will not be a unique expert, just another person who understands the issue like everyone else. The sense of urgency and personal importance needed to spur people to learn just is not there with these modes of incentivization.

But there is one already extremely effective way that companies, schools, and other organizations incentivize behavior that has little to do with immediate personal benefits. These institutions use their ability to advance or deter people’s future careers to motivate performance in certain areas. The gatekeepers to these future prospects can use their position to bring about all kinds of behavior that would otherwise seem to be a huge burden on those individuals. Ordinary hiring and admissions processes, for example, can impose large writing and learning requirements on their applicants, but because the personal benefits of getting into these organizations are enormous, people are more than willing to fulfill those requirements. Oftentimes, these requirements do not even have much to do with the stated purpose of the organization, but are used as filtering mechanisms to determine who the best candidates are. Admissions essays are not what universities set out to produce, but rather a bar they set to see which candidates can clear it. These bars (known as “sorting mechanisms” in economics) sometimes have additional beneficial effects, such as increased writing practice for future students, but not necessarily. For example, polished CV writing is a skill that is only good for overcoming these bars, without additional personal or social benefits. But because these additional effects are really only secondary attributes of the hurdle, the bar can be modified in ways that serve socially beneficial purposes without affecting its main function.

So our specific proposal is to leverage employers’ and schools’ gatekeeper status to impose a hiring hurdle, similar to a polished CV or a high standardized test score, of learning about contemporary climate change science and policy. This hiring hurdle would act much like the other hurdles organizations impose, but would create a huge personal incentive for individuals to learn about climate change in place of, or in addition to, the huge personal incentive to write good cover letters or score well on the SATs.

The hiring hurdle would be implemented by a third party: a website that acts both as the layman’s guide to climate change science and policy (possibly built on something that already exists, but hopefully something more modular) and as a secure testing center for this knowledge. The website would provide an easy way for people to learn about the most up-to-date climate science and the different policy options available, something that could probably be read and understood with an afternoon’s effort. Once individuals feel that they understand the material well enough, they can take a secure test that measures the extent of their climate knowledge. (The test could be retaken if an individual is dissatisfied with the result, or it could be imposed again once new and highly relevant information is discovered.) The score that individuals receive could be reported to institutions they apply to. This score would be just one more tickbox for institutions to check before accepting their applicants, and they could determine the score they require.

The major benefit of this approach is that it creates enormous personal incentives for a very small cost. Companies and other institutions already have hiring hurdles in place, and they would not have to burden their HR staff with hundreds of climate change essays, just a simple score that they could look up on the website. The website itself can be hosted for a relatively small cost, and institutions can sign up to the program as more executives and leaders are convinced that this is a good idea.

Presumably, it is much easier to convince the few people who run such organizations that climate change education is important than to convince individual members of the public. Potentially, this project could affect millions, especially if large corporations such as McDonald’s or Walmart, or universities with many applicants, sign on to the program. Furthermore, approaching the problem of global climate change through nongovernmental institutions seems like a good approach because it avoids the stasis of many public institutions, and it can be done by convincing far fewer stakeholders. Also, many of these institutions have an increasingly global scope.

Developing a platform to combat “information cocoons” yet retain legitimacy

The major problem is that this type of incentivizing might be seen as a way of buying off or patronizing voters, but this appears to be necessary to break the “information cocoons” that many people unknowingly fall into.

Hopefully a charge of having a political agenda can be answered by allowing a certain amount of feedback and continuing development of the guide as more arguments are voiced. Part of the website will be organized so that dissent can be voiced publicly and openly, but only in an organized and reasoned way (something like LessWrong, but with stricter limits on posting). The guide would have to maintain public legitimacy by being open to criticism and new evidence as we discover more, and by displaying the evidence that supports the current arguments. We would like to include a rating system, something like Rotten Tomatoes, where climate experts and the general public vote on the various arguments and scenarios that are developed (though this would probably be only for those who develop a specific interest, not part of the testable guide; of course, the testable guide would follow major developments in this more detailed material). We have thought of using an argument map to better organize such information.

But still, it could not be so flexible that the previous information cocoons redevelop on the website and a similar polarization recurs. Some degree of control is necessary to drive some points home. Thus, a delicate balance might have to be achieved.

That pretty much sums up the ideas so far. At this point, the project is mostly theorizing, although we have found a couple of programmers who might help for a reduced fee (know of anyone who would be interested in doing this for free?) and are looking into some funding sources. This would be a large-scale attempt at rational debate and discussion, spurred by a mechanism that encourages everybody to participate, so if you have any advice it would be enormously appreciated.

Sincerely, Allen Wang

Comment author: CronoDAS 12 September 2010 06:17:33AM *  1 point [-]

This seems to have the same problem as teaching evolution in high school biology classes: you can pass a test on something and not believe a word of it. Cracking an information cocoon can be damn hard; just consider how unusual religious conversions are, or how rarely people change their minds on such subjects as UFOs, conspiracy theories, cryonics, or any other subject that attracts cranks.

Also, why should employers care about a person's climate change test score?

Finally, why privilege knowledge about climate change, of all things, by using it for gatekeeping, instead of any of the many non-controversial subjects normally taught in high schools, for which SAT II subject tests already exist?

Comment author: Cyan 11 September 2010 06:34:33PM *  2 points [-]

Nine years ago today, I was just beginning my post-graduate studies. I was running around campus trying to take care of some registration stuff when I heard that unknown parties had flown two airliners into the WTC towers. It was surreal -- at that moment, we had no idea who had done it, or why, or whether there were more planes in the air that would be used as missiles.

It was big news, and it's worth recalling this extraordinarily terrible event. But there are many more ordinary terrible events that occur every day, and kill far more people. I want to keep that in mind too, and I want to make the universe a less deadly place for everyone.

(If you feel like voting this comment up, please review this first.)

Comment author: beriukay 11 September 2010 04:23:50AM 4 points [-]

I'm taking a grad level stat class. One of my classmates said something today that nearly made me jump up and loudly declare that he was a frequentist scumbag.

We were asked to show that a coin toss fit the criteria of some theorem that talked about mapping subsets of a sigma algebra to form a well-defined probability. Half the elements of the set were taken care of by default (the whole set S and its complement { }), but we couldn't make any claims about the probability of getting Heads or Tails from just the theorem. I was content to assume the coin was fair, or at least assign some likelihood distribution.

But not my frequentist archnemesis! He let it be known that he would level half the continent if the probability of getting Heads wasn't determined by his Expectation divided by the number of events. The number of events. Of an imaginary coin toss. Used to determine that toss's probability.

It occurs to me that there was a lot of setup for very little punch line in that anecdote. If you are unamused, you are in good company. I ordered R to calculate an integral for me today, and it politely replied: "Error in is.function(FUN) : 'FUN' is missing"

Comment author: snarles 11 September 2010 01:04:48AM 0 points [-]

NYT article on good study habits: http://www.nytimes.com/2010/09/07/health/views/07mind.html?_r=1

I don't have time to look into the sources but I am very interested in knowing the best way to learn.

Comment author: gwern 11 September 2010 12:11:27AM 1 point [-]

NYT magazine covers engineers & terrorism: http://www.nytimes.com/2010/09/12/magazine/12FOB-IdeaLab-t.html

Comment author: datadataeverywhere 10 September 2010 08:12:03PM *  2 points [-]

An observer is given a box with a light on top, and given no information about it. At time t0, the light on the box turns on. At time tx, the light is still on.

At time tx, what information can the observer be said to have about the probability distribution of how long the light stays on? Obviously the observer has some information, but how is it best quantified?

For instance, the observer wishes to guess when the light will turn off, or find the best approximation of E(X | X > tx-t0), where X ~ duration of light being on. This is guaranteed to be a very uninformed guess, but some guess is possible, right?

The observer can establish a CDF of the probability of the light turning off at time t; for t <= tx, p=0. For t > tx, 0 < p < 1, assuming that the observer can never be certain that the light will ever turn off. What goes on in between is the interesting part, and I haven't the faintest idea how to justify any particular shape for the CDF.
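One standard (and contestable) way to put a shape on that CDF is a scale-invariant prior: assume the total duration T has prior density proportional to 1/T, and that the observer's arrival is uniform over [0, T]. Conditioning on the light having already been on for s = tx - t0 then gives posterior density s/T^2 on [s, infinity), so P(T > t) = s/t and the median total duration is 2s (the "delta-t" or Copernican argument). A sketch under those assumptions, with the elapsed time s chosen arbitrarily:

```python
# Posterior over total duration T, given prior p(T) ~ 1/T, uniform arrival,
# and an observed elapsed time s. These modeling choices are assumptions,
# not anything the box tells us.
s = 3.0  # elapsed time, in arbitrary units

def posterior_cdf(t, s):
    """P(total duration <= t | light already on for s), i.e. 1 - s/t."""
    return 0.0 if t < s else 1.0 - s / t

# Median total duration comes out to exactly 2*s.
assert abs(posterior_cdf(2 * s, s) - 0.5) < 1e-12

def quad(f, a, b, n=100_000):
    """Crude midpoint-rule quadrature, enough for a sanity check."""
    h = (b - a) / n
    return h * sum(f(a + (i + 0.5) * h) for i in range(n))

# Numerically integrate the posterior pdf s/T^2 from s to 2s; should be 0.5.
approx = quad(lambda T: s / T**2, s, 2 * s)
assert abs(approx - 0.5) < 1e-6
```

Whether the 1/T prior is justified is exactly the interesting question; any other prior gives a different curve between tx and infinity.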

Comment author: eugman 10 September 2010 12:51:28PM 4 points [-]

Can anyone suggest any blogs giving advice for serious romantic relationships? I think a lot of my problems come from a poor theory of mind for my partner, so things like The 5 Love Languages and material on attachment styles have been useful.

Thanks.

Comment author: Relsqui 16 September 2010 09:31:45PM 4 points [-]

I have two suggestions, which are not so much about romantic relationships as they are about communicating clearly; given your example and the comments below, though, I think they're the kind of thing you're looking for.

The Usual Error is a free ebook (or nonfree dead-tree book) about common communication errors and how to avoid them. (The "usual error" of the title is assuming by default that other people are wired like you--basically the same as the typical psyche fallacy.) It has a blog as well, although it doesn't seem to be updated much; my recommendation is for the book.

If you're a fan of the direct practical style of something like LW, steel yourself for a bit of touchy-feeliness in UE, but I've found the actual advice very useful. In particular, the page about the biochemistry of anger has been really helpful for me in recognizing when and why my emotional response is out of whack with the reality of the situation, and not just that I should back off and cool down, but why it helps to do so. I can give you an example of how this has been useful for me if you like, but I expect you can imagine.

A related book I'm a big fan of is Nonviolent Communication (no link because its website isn't of any particular use; you can find it at your favorite book purveyor or library). Again, the style is a bit cloying, but the advice is sound. What this book does is lay out an algorithm for talking about how you feel and what you need in a situation of conflict with another person (where "conflict" ranges from "you hurt my feelings" to gang war).

I think it's noteworthy that following the NVC algorithm is difficult. It requires finding specific words to describe emotions, phrasing them in a very particular way, connecting them to a real need, and making a specific, positive, productive request for something to change. For people who are accustomed to expressing an idea by using the first words which occur to them to do so (almost everyone), this requires flexing mental muscles which don't see much use. I think of myself as a good communicator, and it's still hard for me to follow NVC when I'm upset. But the difficulty is part of the point--by forcing you to stop and rethink how you talk about the conflict, it forces you to see it in a way that's less hindered by emotional reflex and more productive towards understanding what's going on and finding a solution.

Neither of these suggestions requires that your partner also read them, but it would probably help. (It just keeps you from having to explain a method you're using.)

If you find a good resource for this which is a blog, I'd be interested in it as well. Maybe obviously, this topic is something I think a lot about.

Comment author: eugman 20 September 2010 03:49:51PM *  1 point [-]

Both look rather useful, thanks for the suggestions. Also, Google Books has Nonviolent Communication.

Comment author: Relsqui 20 September 2010 05:17:59PM 0 points [-]

You're welcome, and thanks--that's good to know. I'll bookmark it for when it comes up again.

Comment author: pjeby 16 September 2010 10:57:30PM 0 points [-]

I rather liked the page about how we're made of meat.

Thanks for the cool link!

Comment author: Relsqui 16 September 2010 11:32:49PM 1 point [-]

You're welcome! Glad you like it. I'm a fan of that particular page as well--it's probably the technique I refer to/think about explicitly from that book second most, after the usual error itself. It's valuable to be able to separate the utility of hearing something to gain knowledge and that of hearing something you already know to gain reassurance--it just bypasses a whole bunch of defensiveness, misunderstanding, or insecurity that doesn't need to be there.

Comment author: rhollerith_dot_com 10 September 2010 06:40:43PM *  1 point [-]

I could point to some blogs whose advice seems good to me, but I won't because I think I can help you best by pointing only to material (alas no blogs though) that has actually helped me in a serious relationship -- there being a huge difference in quality between advice of the form "this seems true to me" and advice of the form "this actually helped me".

What has helped me more in my relationships than any other information is the non-speculative parts of the consensus among evolutionary psychologists on sexuality, because they provide a vocabulary for me to express hypotheses (about particular situations I was facing) and a way for me to winnow the field of prospective hypotheses and bits of advice I get online from which I choose hypotheses and bits of advice to test. In other words, ev psy allows me to dismiss many ideas so that I do not incur the expense of testing them.

I needed a lot of free time however to master that material. Probably the best way to acquire the material is to read the chapters on sex in Robert Wright's Moral Animal. I read that book slowly and carefully over 12 months or so, and it was definitely worth the time and energy. Well, actually the material in Moral Animal on friendship (reciprocal altruism) is very much applicable to serious relationships, too, and the stuff on sex and friendship together form about half the book.

Before I decided to master basic evolutionary psychology in 2000, the advice that helped me the most was from John Gray, author of Men Are From Mars, Women Are From Venus.

Analytic types will mistrust author and speaker John Gray because he is glib and charismatic (the Maharishi or such who founded Transcendental Meditation once offered to make Gray his successor and the inheritor of his organization) but his pre-year-2000 advice is an accurate map of reality IMHO. (I probably only skimmed Mars and Venus, but I watched long televised lectures on public broadcasting that probably covered the same material.)

Comment author: Violet 10 September 2010 12:56:01PM -2 points [-]

Do you really need a "theory of mind" for that?

Our partners are not a foreign species. Communicate lots in an open and honest manner with hir and try to understand what makes that particular person click.

Comment author: eugman 10 September 2010 01:06:59PM 0 points [-]

Yes. You are assuming ze has a high level of introspection which would facilitate communication. This isn't always the case.

Comment author: JoshuaZ 10 September 2010 01:03:47PM 7 points [-]

Yes, you do. Many people who have highly developed theories of mind seem to underestimate how much unconscious processing they are doing that is profoundly difficult for people to do who don't have as developed theories of mind. People who are mildly on the autism spectrum in particular (generally below the threshold of diagnosis) often have a lot of difficulty with this sort of unconscious processing but can if given a lot of explicit rules or heuristics do a much better job.

Comment author: eugman 10 September 2010 01:08:27PM 0 points [-]

Thank you. I believe I may fall in this category. I am highly quantitative and analytical, often to my detriment.

Comment author: Wei_Dai 10 September 2010 07:27:28AM *  16 points [-]

An Alternative To "Recent Comments"

For those who may be having trouble keeping up with "Recent Comments" or finding the interface a bit plain, I've written a Greasemonkey script to make it easier/prettier. Here is a screenshot.

Explanation of features:

  • loads and threads up to 400 most recent comments on one screen
  • use [↑] and [↓] to mark favored/disfavored authors
  • comments are color coded based on author/points (pink) and recency (yellow)
  • replies to you are outlined in red
  • hover over [+] to view single collapsed comment
  • hover over/click [^] to highlight/scroll to parent comment
  • marks comments read (grey) based on scrolling
  • shows only new/unread comments upon refresh
  • date/time are converted to your local time zone
  • click comment date/time for permalink

To install, first get Greasemonkey, then click here. Once that's done, use this link to get to the reader interface.

ETA: I've placed the script in the public domain. Chrome is not supported.

Comment author: NihilCredo 17 September 2010 08:56:36PM *  0 points [-]

May I suggest submitting the script to userscripts.org? It will make it easier for future LessWrong readers to find it, as well as detectable by Greasefire.

Comment author: ata 10 September 2010 09:12:22PM *  0 points [-]

Nice! Thanks.

Edit: "shows only new/unread comments upon refresh" — how does it determine readness?

Comment author: Wei_Dai 10 September 2010 09:59:10PM 0 points [-]

Any comment that has been scrolled off the screen for 5 seconds is considered read. (If you scroll back, you can see that the text and border have turned from black to gray.) If you scroll to the bottom and stay there for 5 seconds, all comments are marked read.

Comment author: andreas 10 September 2010 09:08:56PM 0 points [-]

Thanks for coding this!

Currently, the script does not work in Chrome (which supports Greasemonkey out of the box).

Comment author: Wei_Dai 10 September 2010 10:07:44PM *  1 point [-]

From http://dev.chromium.org/developers/design-documents/user-scripts

  • Chromium does not support @require, @resource, unsafeWindow, GM_registerMenuCommand, GM_setValue, or GM_getValue
  • GM_xmlhttpRequest is same-origin only

My script uses 4 out of these 6 features, and also cross-domain GM_xmlhttpRequest (the comments are actually loaded from a PHP script hosted elsewhere, because LW doesn't seem to provide a way to grab 400 comments at once), so it's going to have to stay Firefox-only for the time being.

Oh, in case anyone developing LW is reading this, I release my script into the public domain, so feel free to incorporate the features into LW itself.

Comment author: Morendil 10 September 2010 01:45:31PM 0 points [-]

Would you consider making display of author names and points a toggle and hidden by default, à la Anti-Kibitzer?

Comment author: Wei_Dai 10 September 2010 06:50:21PM 1 point [-]

I've added some code to disable the author/points-based coloring when Anti-Kibitzer is turned on in your account preferences. (Names and points are already hidden by the Anti-Kibitzer.) Here is version 1.0.1.

More feature requests or bug reports are welcome.

Comment author: Wei_Dai 10 September 2010 08:35:57AM 4 points [-]

Here's something else I wrote a while ago: a script that gives all the comments and posts of a user on one page, so you can save them to a file or search more easily. You don't need Greasemonkey for this one, just visit http://www.ibiblio.org/weidai/lesswrong_user.php

I put in a 1-hour cache to reduce server load, so you may not see the user's latest work.

Comment author: cousin_it 09 September 2010 10:25:53PM *  2 points [-]

The gap between inventing formal logic and understanding human intelligence is as large as the gap between inventing formal grammars and understanding human language.

Comment author: Vladimir_Nesov 10 September 2010 06:54:26PM 1 point [-]

Human intelligence, certainly; but just intelligence, I'm not so sure.

Comment author: datadataeverywhere 09 September 2010 03:14:37PM 2 points [-]

How diverse is Less Wrong? I am under the impression that we disproportionately consist of 20-35 year old white males, more disproportionately on some axes than on others.

We obviously over-represent atheists, but there are very good reasons for that. Likewise, we are probably over-educated compared to the populations we are drawn from. I venture that we have a fairly weak age bias, and that can be accounted for by generational dispositions toward internet use.

However, if we are predominately white males, why are we? Should that concern us? There's nothing about being white, or female, or hispanic, or deaf, or gay that prevents one from being a rationalist. I'm willing to bet that after correcting for socioeconomic correlations with ethnicity, we still don't make par. Perhaps naïvely, I feel like we must explain ourselves if this is the case.

Comment author: CaveJohnson 20 July 2011 05:56:51PM *  0 points [-]

I was never trying to say that there was something wrong with the way Less Wrong is, or that we ought to do things to change our makeup. Maybe it would be good for us to, but that had nothing to do with my question. I was instead asking (apparently badly) for people's opinions about whether or how our makeup along any partition --- the ones that I mentioned or others --- effects in us an inability to best solve the problems that we are interested in solving.

People are touchy on this. I guess it's because in public discourse pointing something like this out is nearly always a call to change it.

Comment author: [deleted] 16 September 2010 06:40:35PM *  2 points [-]

I don't know why you presume that, because we are mostly 25-35-something White males, a reasonable proportion of us are not deaf, gay or disabled (one of the top-level posts is by someone who will soon deal with being perhaps limited to communicating with the world via computer).

I smell a whiff of that weird American memplex for minority and diversity that my third-world mind isn't quite used to, but which I seem to encounter more and more often: you know, the one that, for example, uses the word "minority" to describe women.

Also, I decline the invitation to defend this community for lack of diversity; I don't see it a priori as a thing in need of a large part of our attention. Rationality is universal: not in the sense of being equally valued in different cultures, but certainly in the sense of being universally effective (rationalists should win). One should certainly strive to keep a site dedicated to refining the art free of unnecessary additional barriers to other people. I think we should eventually translate many articles into Hindi, Japanese, Chinese, Arabic, German, Spanish, Russian and French. However, it's ridiculous to imagine that our demographics will somehow come to resemble the socio-economically adjusted mix of unspecified ethnicities that you seem to hunt for after we eliminate all such barriers.

I assure you White Westerners have their very, very insane spots, and we deal with them constantly, but God for starters isn't among them. Look at the GSS or various sources on Wikipedia, and consider how much more of a thought-stopper and a boo light atheism is for a large part of the world. What should the existing population of Less Wrong do? Refrain from bashing theism? This might incur downvotes, but Westerners did come up with the scientific method and did contribute disproportionately to the fields of statistics and mathematics. Is it so unimaginable that the developed world (Iceland, Italy, Switzerland, Finland, America, Japan, Korea, Singapore, Taiwan etc.) and its majority demographics still have a more rationality-friendly climate overall (due to the caprice of history) than basically any other part of the world? I freely admit my own native culture (though I'm probably thoroughly Westernised by now due to late-childhood influences of mass media and education) is probably less rational than the Anglo-Saxon one.

However, simply going on a "crusade" to make other cultures more rational first, since they are "clearly" more in need, would send terribly bad signals, invite self-delusion, and is perhaps a bad idea for humanitarian reasons as well.

Sex ratio: There are some differences in aptitude, psychology and interests that ensure that compsci and mathematics, at least at the higher levels, will remain disproportionately male for the foreseeable future (until human modification takes off). This obviously affects our potential pool of recruits.

Age: People grow more conservative as they age. Less Wrong is, first, available only on a relatively new medium, and second, has a novel approach to popularizing rationality. Also, as people age, the mind unfortunately does deteriorate. Very few people have an IQ high enough to master difficult fields before they are 15, and even their interests are somewhat affected by their peers.

I am sure I am rationalizing at least a few of these points. However, I need to ask: is pursuing some popular concept of diversity truly cost-effective at this point? (Why, for example, did you not commend LW on its inclusion of non-neurotypicals, who are often excluded in some segments of society? And why do you only bemoan the under-representation of the groups everyone else does? Is this really a rational approach? Why don't we go study where in the memespace we might find truly valuable perspectives and focus on those? Maybe they overlap with the popular kinds, maybe they don't, but can we really trust popular culture, and especially the standard political discourse, on this?)

Comment author: datadataeverywhere 17 September 2010 05:19:36AM *  1 point [-]

If you read my comment, you would have seen that I explicitly assume that we are not under-represented among deaf or gay people.

I smell a whiff of that weird American memplex [...] you know the one that for example uses the word minority to describe women.

If less than 4% of us are women, I am quite willing to call that a minority. Would you prefer me to call them an excluded group?

but God for starters isn't among them

I specifically brought up atheists as a group that we should expect to over-represent. I'm also not hunting for equal-representation among countries, since education obviously ought to make a difference.

There are some differences in aptitude, psychology and interests that ensure that compsci and mathematics, at least at the higher levels will remain disproportionately male

That seems like it ought to get many more boos around here than mentioning the western world as the source of the scientific method. I ascribe differences in those to cultural influences; I don't claim that aptitude isn't a factor, but I don't believe it has been or can easily be measured given the large cultural factors we have.

age

This also doesn't bother me, for reasons similar to yours. As a friend of mine says, "we'll get gay rights by outliving the homophobes".

why do you only bemoan the under-representation of groups everyone else does?

Which groups should I pay more attention to? This is a serious question, since I haven't thought too much about it. I neglect non-neurotypicals because they are overrepresented in my field, so I tend to expect them amongst similar groups.

I wasn't actually intending to bemoan anything with my initial question, I was just curious. I was also shocked when I found out that this is dramatically less diverse than I thought, and less than any other large group I've felt a sort of membership in, but I don't feel like it needs to be demonized for that. I certainly wasn't trying to do that.

Comment author: wedrifid 18 September 2010 06:50:07PM *  2 points [-]

That seems like it ought to get many more boos around here than mentioning the western world as the source of the scientific method. I ascribe differences in those to cultural influences;

Given new evidence from the ongoing discussion I retract my earlier concession. I have the impression that the bottom line preceded the reasoning.

Comment author: datadataeverywhere 18 September 2010 10:22:41PM *  2 points [-]

I expected your statement to get more boos for the same reason that you expected my premise in the other discussion to be assumed because of moral rather than evidence-based reasons. That is, I am used to other members of your species (I very much like that phrasing) taking very strong and sudden positions condemning suggestions of inherent inequality between the sexes, regardless of whether they have a rational basis. I was not trying to boo your statement myself.

That said, I feel like I have legitimate reasons to oppose suggestions that women are inherently weaker in mathematics and related fields. I mentioned one immediately below the passage you quoted. If you insist on supporting that view, I ask that you start doing so by citing evidence, and then we can begin the debate from there. At minimum, I feel like if you are claiming women to be inherently inferior, the burden of proof lies with you.

Edit: fixed typo

Comment author: Will_Newsome 19 September 2010 05:56:35AM 4 points [-]

Mathematical ability is most remarked on at the far right of the bell curve. It is very possible (and there's lots of evidence to support the argument) that women simply have lower variance in mathematical ability. The average is the same. Whether or not 'lower variance' implies 'inherently weaker' is another argument, but it's a silly one.

I'm much too lazy to cite the data, but a quick Duck Duck Go search or maybe Google Scholar search could probably find it. An overview with good references is here.
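For what it's worth, the tail arithmetic behind the variance claim is easy to sketch: with equal means, even a modest difference in spread produces a large ratio of representation at a far cutoff. A toy calculation with made-up numbers (normal distributions, standard deviations 1.0 vs 0.9, cutoff four standard deviations out; none of these figures come from real data):

```python
import math

def normal_sf(x, sigma):
    """P(X > x) for X ~ Normal(0, sigma), via the complementary error function."""
    return 0.5 * math.erfc(x / (sigma * math.sqrt(2)))

# Equal means, modestly different spreads (illustrative numbers only).
cutoff = 4.0                            # "far right of the bell curve"
tail_wide = normal_sf(cutoff, 1.0)      # sd = 1.0
tail_narrow = normal_sf(cutoff, 0.9)    # sd = 0.9, i.e. 10% lower spread
print(tail_wide / tail_narrow)          # ratio of the two tail populations
```

With these illustrative parameters the wider distribution has on the order of seven times as many people beyond the cutoff, despite identical means; the exact ratio depends entirely on the assumed distributions, which is why the shape question below matters.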

Comment author: [deleted] 19 September 2010 11:25:06PM 3 points [-]

Is mathematical ability a bell curve?

My own anecdotal experience has been that women are rare in elite math environments, but don't perform worse than the men. That would be consistent with a fat-tailed rather than normal distribution, and also with higher computed variance among women.

Also anecdotal, but it seems that when people come from an education system that privileges math (like Europe or Asia as opposed to the US) the proportion of women who pursue math is higher. In other words, when you can get as much social status by being a poli sci major as a math major, women tend not to do math, but when math is very clearly ranked as the "top" or "most competitive" option throughout most of your educational life, women are much more likely to pursue it.

Comment author: Will_Newsome 20 September 2010 12:06:59AM 4 points [-]

Is mathematical ability a bell curve?

I have no idea; sorry, saying so was bad epistemic hygiene. I thought I'd heard something like that but people often say bell curve when they mean any sort of bell-like distribution.

Also anecdotal, but it seems that when people come from an education system that privileges math (like Europe or Asia as opposed to the US) the proportion of women who pursue math is higher.

I'm left confused as to how to update on this information... I don't know how large such an effect is, nor what the original literature on gender difference says, which means that I don't really know what I'm talking about, and that's not a good place to be. I'll make sure to do more research before making such claims in the future.

Comment author: datadataeverywhere 19 September 2010 06:34:32PM 2 points [-]

I'm not claiming that there aren't systematic differences in position or shape of the distribution of ability. What I'm claiming is that no one has sufficiently proved that these differences are inherent.

I can think of a few plausible non-genetic influences that could reduce variance, but even if none of those come into play, there must be others that are also possibilities. Do you see why I'm placing the burden of proof on you to show that differences are biologically inherent, but also why I believe that this is such a difficult task?

Comment author: wedrifid 19 September 2010 07:03:49PM 1 point [-]

Do you see why I'm placing the burden of proof on you to show that differences are biologically inherent

Either because you don't understand how bayesian evidence works or because you think the question is social political rather than epistemic.

, but also why I believe that this is such a difficult task?

That was the point of making the demand.

You cannot change reality by declaring that other people have 'burdens of proof'. "Everything is cultural" is not a privileged hypothesis.

Comment author: Perplexed 19 September 2010 07:24:33PM 1 point [-]

Do you see why I'm placing the burden of proof on you to show that differences are biologically inherent

Either because you don't understand how bayesian evidence works or because you think the question is social political rather than epistemic.

It might have been marginally more productive to answer "No, I don't see. Would you explain?" But, rather than attempting to other-optimize, I will simply present that request to datadataeverywhere. Why is the placement of "burden" important? With this supplementary question: Do you know of evidence strongly suggesting that different cultural norms might significantly alter the predominant position of the male sex in academic mathematics?

... but also why I believe that this is such a difficult task?

I can certainly see this as a difficult task. For example, we can imagine that fictional rational::Harry Potter and Hermione were both taught as children that it is ok to be smart, but that only Hermione was instructed not to be obnoxiously smart. This dynamic, by itself, would be enough to strongly suppress the numbers of women to rise to the highest levels in math.

But producing convincing evidence in this area is not an impossible task. For example, we can empirically assess the impact of the above mechanism by comparing the number of bright and very bright men and women who come from different cultural backgrounds.

Rather than simply demanding that your interlocutor show his evidence first, why not go ahead and show yours?

Comment author: datadataeverywhere 20 September 2010 02:47:31AM 1 point [-]

But producing convincing evidence in this area is not an impossible task. For example, we can empirically assess the impact of the above mechanism by comparing the number of bright and very bright men and women who come from different cultural backgrounds.

I agree, and this was what I meant. Distinguishing between nature and nurture, as wedrifid put it, is a difficult but not impossible task.

Why is the placement of "burden" important? With this supplementary question: Do you know of evidence strongly suggesting that different cultural norms might significantly alter the predominant position of the male sex in academic mathematics?

I hope I answered both of these in my comment to wedrifid below. Thank you for bothering to take my question at face value (as a question that requests a response), instead of deciding to answer it with a pointless insult.

Comment deleted 19 September 2010 11:10:06PM *  [-]
Comment author: wedrifid 19 September 2010 04:43:18AM 4 points [-]

If you insist on supporting that view

Absolutely not. In general people overestimate the importance of 'intrinsic talent' on anything. The primary heritable component of success in just about anything is motivation. Either g or height comes second depending on the field.

Comment author: datadataeverywhere 19 September 2010 05:13:42AM 2 points [-]

I agree. I think it is quite obvious that ability is always somewhat heritable (otherwise we could raise our pets as humans), but this effect is usually minimal enough to not be evident behind the screen of either random or environmental differences. I think this applies to motivation as well!

And that was really what my claim was; anyone who claims that women are inherently less able in mathematics has to prove that any measurable effect is distinguishable from and not caused by cultural factors that propel fewer women to have interest in mathematics.

Comment author: wedrifid 19 September 2010 05:20:18AM 1 point [-]

I think this applies to motivation as well!

It doesn't. (Unfortunately.)

Comment author: datadataeverywhere 19 September 2010 05:29:06AM *  1 point [-]

Am I misunderstanding, or are you claiming that motivation is purely an inherited trait? I can't possibly agree with that, and I think even simple experiments are enough to disprove that claim.

Comment author: wedrifid 19 September 2010 08:42:57AM 3 points [-]

Am I misunderstanding, or are you claiming that motivation is purely an inherited trait?

Misunderstanding. Expanding the context slightly:

I agree. I think it is quite obvious that ability is always somewhat heritable (otherwise we could raise our pets as humans), but this effect is usually minimal enough to not be evident behind the screen of either random or environmental differences. I think this applies to motivation as well!

It doesn't. (Unfortunately.)

When it comes to motivation the differences between people are not trivial. When it comes to the particular instance of difference between the sexes, there are powerful differences in motivating influences. Most human motives are related to sexual signalling and gaining social status. The optimal actions to achieve these goals are significantly different for males and females, which is reflected in which things are the most motivating. It most definitely should not be assumed that motivational differences are purely cultural - and it would be astonishing if they were.

Comment author: [deleted] 18 September 2010 06:40:01PM *  2 points [-]

I neglect non-neurotypicals because they are overrepresented in my field, so I tend to expect them amongst similar groups.

How do you know non-neurotypicals aren't over- or under-represented on Less Wrong, the same way you claim to know that the groups whose absence you bemoan are under-represented relative to your field?

Is it just because being neurotypical is harder to measure and define? I concede that measuring who is a woman or a man, or who is considered black and who is considered asian, is in the average case easier than measuring who is neurotypical. But when it comes to definition, those concepts seem to be in the same order of magnitude of fuzziness as being neurotypical (sex a bit less, race a bit more).

Also, previously you established that you don't want to compare Less Wrong's diversity to the entire population of the world. I'm going to tentatively assume that you also accept that academic background will affect whether people can grasp, or are interested in learning, certain key concepts needed to participate.

My question now is: why don't we crunch the numbers instead of people yelling "too many!", "too few!" or "just right!"? We know from which countries and in what numbers visitors come, we know the educational distributions in most of them, and we know how large a fraction of this group is proficient enough in English to participate meaningfully on Less Wrong.

This is ignoring the fact that the only data we have on sex or race is a simple self reported poll and our general impression.

But if we crunch the numbers and the probability densities end up looking pretty similar given the best data we can find, why shouldn't the burden of proving that we are indeed wasting potential on Less Wrong fall on the one proposing policy or action to improve our odds of progressing towards becoming more rational? And if we are promoting our members' values, even when they aren't neutral or positive towards reaching our objectives, why don't we spell them out, as long as they truly are common? I'm certain there are a few - perhaps the value of life and existence (though these have been questioned and debated here too), or perhaps some utilitarian principles.

But how do we know any position people take would really reflect their values and wouldn't just be status signalling? Heck, many people whose professed values include (or don't include) a certain inherent "goodness" to existence probably profess them for signalling reasons and would quickly change their minds in a different situation!

Not even mentioning the general effect of the mindkiller.

But like I have stated before, there are certainly many spaces where we can optimize the stated goal by outreach. This is why I think this debate should continue but with a slightly different spirit. More in line with, to paraphrase you:

Which groups should we pay more attention to? This is a serious question, since we haven't thought too much about it.

Comment author: wedrifid 18 September 2010 07:09:52PM 0 points [-]

will affect how many people can grasp ev

Typo in a link?

Comment author: [deleted] 18 September 2010 07:24:10PM *  1 point [-]

I changed the first draft midway when I was still attempting to abbreviate it. I've edited and reformulated the sentence, it should make sense now.

Comment author: [deleted] 18 September 2010 06:39:42PM *  2 points [-]

I ascribe differences in those to cultural influences; I don't claim that aptitude isn't a factor, but I don't believe it has been or can easily be measured given the large cultural factors we have.

But if we can't measure the cultural factors and account for them, why presume a blank-slate approach? Especially since there is sexual dimorphism in the nervous and endocrine systems themselves.

I think you got stuck on the aptitude point. To elaborate: considering that humans aren't a very sexually dimorphic species (there are near relatives that are less so, however - example: gibbons), I'm pretty sure the mean g (if such a thing exists) of men and women is probably about the same. There are, however, other aspects of succeeding at compsci or math than general intelligence.

Assuming that men and women carrying exactly the same memes will respond on average identically to identical situations is an extraordinary claim. I'm struggling to come up with an evolutionary model that would square this with what is known (for example, the greater historical reproductive success of the average woman vs. the average man, which we can read from the distribution of genes). If I were presented with empirical evidence, then this would just be too bad for the models; but in the absence of meaningful measurement (by your account), why not assign greater probability to the outcome predicted by the same models that work so well when tested on other empirical claims?

I would venture to state that this case is especially strong for preferences.

And if you are trying to fine-tune the situations and memes for each gender so as to balance this, how can one demonstrate that this isn't a step away from, rather than toward, improving Pareto efficiency? And if it's not, why proceed with it?

Also to admit a personal bias I just aesthetically prefer equal treatment whenever pragmatic concerns don't trump it.

Comment author: lmnop 18 September 2010 07:41:09PM *  4 points [-]

But if we can't measure the cultural factors and account for them

We can't directly measure them, but we can get an idea of how large they are and how they work.

For example, the gender difference in empathic abilities. While women will score higher on empathy on self report tests, the difference is much smaller on direct tests of ability, and often nonexistent on tests of ability where it isn't stated to the participant that it's empathy being tested. And then there's the motivation of seeming empathetic. One of the best empathy tests I've read about is Ickes', which worked like this: two participants meet in a room and have a brief conversation, which is taped. Then they go into separate rooms and the tape is played back to them twice. The first time, they jot down the times at which they remember feeling various emotions. The second time, they jot down the times at which they think their partner is feeling an emotion, and what it is. Then the records are compared, and each participant receives an accuracy score. When the test is run like this, there is no difference in ability between men and women. However, a difference emerges when another factor is added: each participant is asked to write a "confidence level" for each prediction they make. In that procedure, women score better, presumably because their desire to appear empathetic (write down higher confidence levels) causes them to put more effort into the task. But where do desires to appear a certain way come from? At least partly from cultural factors that dictate how each gender is supposed to appear. This is probably the same reason why women are overconfident in self reporting their empathic abilities relative to men.

The same applies to math. Among women and men with the same math ability as scored on tests, women will rate their own abilities much lower than the men do. Since people do what they think they'll be good at, this will likely affect how much time these people spend on math in future, and the future abilities they acquire.

And then there's priming. Asian American women do better on math tests when primed with their race (by filling in a "race" bubble at the top of the test) than when primed with their gender (by filling in a "sex" bubble). More subtly, priming affects people's implicit attitudes towards gender-stereotyped domains too. People are often primed about their gender in real life, each time affecting their actions a little, which over time will add up to significant differences in the paths they choose in life in addition to that which is caused by innate gender differences. Right now we don't have enough information to say how much is caused by each, but I don't see why we can't make more headway into this in the future.

Comment author: [deleted] 18 September 2010 05:51:43PM *  2 points [-]

If less than 4% of us are women, I am quite willing to call that a minority. Would you prefer me to call them an excluded group

I'm talking about the Western memplex whose members use the word minority when describing women in general society, even though they represent a clear numerical majority.

I was suspicious that you used the word minority in that sense rather than the more clearly defined sense of being a numerical minority.

Sometimes when talking about groups we can avoid discussing which meaning of the word we are employing.

Example: Discussing the repression of the Mayan minority in Mexico.

While other times we can't do this.

Example: Discussing the history and current relationship between the Arab upper class minority and slavery in Mauritania.

This (age) also doesn't bother me, for reasons similar to yours.

Ah, apologies - I see I carried it over from here:

How diverse is Less Wrong? I am under the impression that we disproportionately consist of 20-35 year old white males, more disproportionately on some axes than on others.

You explicitly state later that you are particularly interested in this axis of diversity

However, if we are predominately white males, why are we?

Perhaps this would be more manageable if we looked at each of the axes of variability that you raise independently, as much as possible? Again, this is what previously confused me about "groups we usually consider adding diversity" - are there certain groups that are inherently associated with the word diversity? Are we using the word diversity to mean something like "proportionate representation of certain kinds of people in all groups"? Or are we using the word diversity in line with "infinite diversity in infinite combinations", where if you create a mix of 1 part people A and 4 parts people B, and have it coexist and cooperate with another mix of 2 parts people A and 3 parts people B, where previously all groups were of the first kind, you create a kind of metadiversity (using the word diversity in its politically charged meaning)?

I specifically brought up atheists as a group that we should expect to over-represent. I'm also not hunting for equal-representation among countries, since education obviously ought to make a difference.

Then why are you hunting for equal representation on LW between different groups united in a space as arbitrary as one defined by borders?

mentioning the western world as the source of the scientific method.

While many important components of the modern scientific method did originate among scholars in Persia and Iraq in the medieval era, its development over the past 700 years has been disproportionately seen in Europe and later its colonies. I would argue its adoption was part of the reason for the later (let's say the last 300 years) technological superiority of the West.

Edit: I wrote up quite a long wall of text. I'm just going to split it into a few posts so as to make it more readable, as well as to give me a better sense of what is getting up- or downvoted based on its merit or lack thereof.

Comment author: timtyler 09 September 2010 08:12:51PM *  8 points [-]

How diverse is Less Wrong?

You may want to check the survey results.

Comment author: Relsqui 16 September 2010 09:38:28PM 1 point [-]

Thank you; that was one of the things I'd come to this thread to ask about.

Comment author: datadataeverywhere 09 September 2010 09:19:55PM 2 points [-]

Thank you very much. I looked for but failed to find this when I went to write my post. I had intended to start with actual numbers, assuming that someone had previously asked the question. The rest is interesting as well.

Comment author: NancyLebovitz 09 September 2010 04:41:08PM *  6 points [-]

I've been thinking that there are parallels between building FAI and Talmud-- it's an effort to manage an extremely dangerous, uncommunicative entity through deduction. (An FAI may be communicative to some extent. An FAI which hasn't been built yet doesn't communicate.)

Being an atheist doesn't eliminate cultural influence. Survey for atheists: which God do you especially not believe in?

I was talking about FAI with Gene Treadwell, who's black. He was quite concerned that the FAI would be sentient, but owned and controlled.

This doesn't mean that either Eliezer or Gene are wrong (or right for that matter), but it suggests to me that culture gives defaults which might be strong attractors. [1]

He recommended recruiting Japanese members, since they're more apt to like and trust robots.

I don't know about explaining ourselves, but we may need more angles on the problem just to be able to do the work.

[1] See also Timothy Leary's S.M.I.2L.E. - Space Migration, Increased Intelligence, Life Extension. Robert Anton Wilson said that was a match for Catholic hopes of going to heaven, being transfigured, and living forever.

Comment author: [deleted] 16 September 2010 06:50:35PM *  3 points [-]

He recommended recruiting Japanese members, since they're more apt to like and trust robots.

He has a very good point. I was surprised more Japanese or Koreans hadn't made their way to Less Wrong. This was my motivation for first proposing that we recruit translators for Japanese and Chinese and begin working towards the goal of making at least the sequences available in many languages.

Not being a native speaker of English proved a significant barrier for me in some respects. The first noticeable one was spelling; I solved that problem by outsourcing this part of the system known as Konkvistador to the browser. ;) Other more insidious forms of miscommunication and cultural difficulties persist.

Comment author: Wei_Dai 18 September 2010 07:48:05PM 3 points [-]

I'm not sure that it's a language thing. I think many (most?) college-educated Japanese, Koreans, and Chinese can read and write in English. We also seem to have more Russian LWers than Japanese, Koreans, and Chinese combined.

According to a page gwern linked to in another branch of the thread, among those who got 5 on AP Physics C in 2008, 62.0% were White and 28.3% were Asian. But according to the LW survey, only 3.8% of respondents were Asian.

Maybe there is something about Asian cultures that makes them less overtly interested in rationality, but I don't have any good ideas what it might be.

Comment author: Vladimir_Nesov 19 September 2010 08:06:56AM 1 point [-]

I'm not sure that it's a language thing. I think many (most?) college-educated Japanese, Koreans, and Chinese can read and write in English. We also seem to have more Russian LWers than Japanese, Koreans, and Chinese combined.

All LW users display near-native control of English, which won't be as universal, and typically requires years-long consumption of English content. The English-speaking world is the default source of non-Russian content for Russians, but that might not be the case for native Asians (what's your impression?)

Comment author: Wei_Dai 20 September 2010 06:33:38PM 2 points [-]

My impression is that for most native Asians, the English-speaking world is also their default source of non-native-language content. I have some relatives in China, and to the extent they do consume non-Chinese content, they consume English content. None of them consume enough of it to obtain near-native control of English though.

I'm curious, what kind of English content did you consume before you came across OB/LW? How typical do you think that level of consumption is in Russia?

Comment author: Perplexed 16 September 2010 06:55:55PM 1 point [-]

Unfortunately, browser spell checkers usually can't help you to spell your own name correctly. ;) That is one advantage to my choice of nym.

Comment author: wedrifid 16 September 2010 07:10:38PM 0 points [-]

Unfortunately, browser spell checkers usually can't help you to spell your own name correctly.

Right click, add to dictionary. If that doesn't work then get a better browser.

Comment author: [deleted] 16 September 2010 07:46:16PM *  0 points [-]

Ehm, you do realize he was making a humorous remark about "Konkvistador" being my user name, right?

Edit:

Actually it was more about Konkivstador not being your name.

Well, it's all clearly Alicorn's fault. ;)

Comment author: Perplexed 16 September 2010 09:39:41PM *  1 point [-]

Actually it was more about Konkivstador not being your name.

Comment author: Perplexed 09 September 2010 04:35:54PM 4 points [-]

I generally agree with your assessment. But I think there may be more East and South Asians than you think, more 36-80s and more 15-19s too. I have no reason to think we are underrepresented in gays or in deaf people.

My general impression is that women are not made welcome here - the level of overt sexism is incredibly high for a community that tends to frown on chest-beating. But perhaps the women should speak for themselves on that subject. Or not. Discussions on this subject tend to be uncomfortable. Sometimes it seems that the only good they do is to flush some of the more egregious sexists out of the closet.

Comment author: timtyler 09 September 2010 08:09:57PM *  2 points [-]

But perhaps the women should speak for themselves on that subject.

We have already had quite a lot of that.

Comment author: Perplexed 09 September 2010 08:44:56PM 2 points [-]

OMG! A whole top-level-posting. And not much more than a year ago. I didn't know. Well, that shows that you guys (and gals) have said all that could possibly need to be said regarding that subject. ;)

But thx for the link.

Comment author: timtyler 09 September 2010 08:48:13PM *  1 point [-]

It does have about 100 pages of comments. Consider also the "links to followup posts" in line 4 of that article. It all seemed to go on forever - but maybe that was just me.

Comment author: Perplexed 09 September 2010 08:54:39PM 2 points [-]

Ok. Well, it is on my reading list now. Again, thx.

Comment author: Emile 09 September 2010 04:25:55PM 2 points [-]

There's nothing about being white, or female, or hispanic, or deaf, or gay that prevents one from being a rationalist.

I may be wrong, but I don't expect the proportion of gays in LessWrong to be very different from the proportion in the population at large.

Comment author: thomblake 16 September 2010 08:00:03PM 5 points [-]

I may be wrong, but I don't expect the proportion of gays in LessWrong to be very different from the proportion in the population at large.

My vague impression is that the proportion of people here with sexual orientations that are not in the majority in the population is higher than that of such people in the population.

This is probably explained completely by Lw's tendency to attract <strike>weirdos</strike> people who are willing to question orthodoxy.

Comment author: [deleted] 18 September 2010 03:39:13PM 0 points [-]

For starters, we have quite a few people who practice polyamory.

Comment author: datadataeverywhere 09 September 2010 04:33:45PM 0 points [-]

It might matter whether or not one counts closeted gays. Either way, I was just throwing another potential partition into the argument. I also doubt that we differ significantly in our proportion of deaf people; the point is that being deaf is qualitatively different, yet shouldn't impair one's rational capabilities. Same for being female, black, or most of the groups that we think of as adding to diversity.

Comment author: [deleted] 16 September 2010 09:34:51PM *  2 points [-]

I am actually interested in, among other things, whether a lack of diversity is a functional impairment for a group. I feel strongly that it is, but I can't back up that claim with evidence strong enough to match my belief. For a group such as Less Wrong, I have to ask what we miss due to a lack of diversity.

Too little memetic diversity is clearly a bad thing, for the same reason that too little genetic variability is. However, how much and what kind are optimal depends on the environment.

Also, have you considered the possibility that diversity for you is not a means to an end but a value in itself? In that case, unless it conflicts with other values you consider more important, you don't need any justification for it. I'm quite honest with myself that I hope that post-singularity the universe will not be paperclipped by only the things that I and people like me (or humans in general, for that matter) value. I value a diverse universe.

Edit:

Same for being female, black, or most of the groups that we think of as adding to diversity.

I... uhm... see. At first I was very confused by all the far-reaching implications of this; however, thanks to keeping a few things in mind, I'm just going to ascribe this to you being from a different cultural background than mine.

Comment author: datadataeverywhere 17 September 2010 05:02:01AM 1 point [-]

Diversity is a value for me, but I'd like to believe that is more than simply an aesthetic value. Of course, if wishes were horses we'd all be eating steak.

Memetic diversity is one of the non-aesthetic arguments I can imagine, and my question is partially related to that. Genetic diversity is superfluous past a certain point, so it seems reasonable that the same might be true of memetic diversity. Where is that point relative to where Less Wrong sits?

Um, all I was saying was that women and black people are underrepresented here, but that ought not be explained away by the subject matter of Less Wrong. What does that have to do with my cultural background or the typical mind fallacy? What part of that do you disagree with?

Comment author: [deleted] 18 September 2010 04:36:07PM 3 points [-]

"Um, all I was saying was that women and black people are underrepresented here, but that ought not be explained away by the subject matter of Less Wrong. What does that have to do with my cultural background or the typical mind fallacy? What part of that do you disagree with?"

To get back to basics for a moment: we don't know that women and black people are underrepresented here. Usernames are anonymous. Even if we suspect they're underrepresented, we don't know by how much -- or whether they're underrepresented compared to the internet in general, or the geek cluster, or what.

Even assuming you want more demographic diversity on LW, it's not at all clear that the best way to get it is by doing something differently on LW itself.

Comment author: [deleted] 18 September 2010 07:14:24PM *  0 points [-]

You highlighted this point much better than I did.

Comment author: [deleted] 18 September 2010 04:05:22PM *  4 points [-]

Um, all I was saying was that women and black people are underrepresented here, but that ought not be explained away by the subject matter of Less Wrong. What does that have to do with my cultural background or the typical mind fallacy? What part of that do you disagree with?

Well I will try to elaborate.

Same for being female, black, or most of the groups that we think of as adding to diversity.

After I read this it struck me that you may value a much smaller space of diversity than I do, and that you probably value certain very particular kinds of diversity (race, gender, some types of culture) much more, or even perhaps to the exclusion of others (non-neurotypical, ideological, and especially values). I'm not saying you don't (I can't know this) or that you should. I at first assumed you thought the way you do because you came up with a system more or less similar to my own - an incredibly unlikely event - and that is why I scolded myself for employing the mind projection fallacy, while providing a link pointing out that this particular component is firmly integrated into the whole "stuff White people like" (for lack of a better word) culture that exists in the West, so anyone I encounter online with whom I share the desire for certain spaces of diversity is on average overwhelmingly more likely to get it from that memplex.

Also, while I'm certainly sympathetic to hoping one's values are practical, one needs to learn to live with the possibility that one's values are neutral, or even impractical, or perhaps in conflict with each other. I in principle support efforts to lower unnecessary barriers for people to join Less Wrong. But the OP doesn't make it explicit that this is about values - about you wanting other Less Wrongers to live by your values - and instead seems to communicate that it's about the optimal course for improving rationality.

You haven't done this. Your argument so far has been to simply go from:

"arbitrary designated group/blacks/women are capable of rationality, but are underrepresented on Lesswrong"

to

"Lesswrong needs to divert some (as much as needed?) efforts to correct this."

Why?

Like I said, lowering unnecessary barriers (actually, at this point you even have to make the case that they exist and that they aren't simply the result of the other factors I described in the post) won't repel the people who already find LW interesting, so it should in principle give us a more effective and healthy community.

However, what if this should prove insufficient? Divert resources to change the preferences of the designated under-represented groups? Add elements to Less Wrong that aren't strictly necessary to reach its stated objectives? Which is not to say we don't have such elements now; however, the ones we have now probably cater to the largest potential pool of people predisposed to find LW's goals interesting.

Comment author: datadataeverywhere 18 September 2010 10:13:14PM *  3 points [-]

I think I'm going to stop responding to this thread, because everyone seems to be assuming I'm meaning or asking something that I'm not. I'm obviously having some problems expressing myself, and I apologize for the confusion that I caused. Let me try once more to clarify my position and intentions:

I don't really care how diverse Less Wrong is. I was, however, curious how diverse the community is along various axes, and was interested in sparking a conversation along those lines. Vladimir's comment is exactly the kind of question I was trying to encourage, but instead I feel like I've been asked to defend criticism that I never thought I made in the first place.

I was never trying to say that there was something wrong with the way that Less Wrong is, or that we ought to do things to change our makeup. Maybe it would be good for us to, but that had nothing to do with my question. I was instead (trying, and apparently badly, to be) asking for people's opinions about whether or how our makeup along any partition --- the ones that I mentioned or others --- affects our ability to best solve the problems that we are interested in solving.

Comment author: Vladimir_M 18 September 2010 06:46:13PM *  8 points [-]

Konkvistador:

After I read this it struck me that you may value a much smaller space of diversity than I do, and that you probably value certain very particular kinds of diversity (race, gender, some types of culture) much more, or even perhaps to the exclusion of others (non-neurotypical, ideological, and especially values).

There is a fascinating question that I've asked many times in many different venues, and never received anything approaching a coherent answer. Namely, among all the possible criteria for categorizing people, which particular ones are supposed to have moral, political, and ideological relevance? In the Western world nowadays, there exists a near-consensus that when it comes to certain ways of categorizing humans, we should be concerned if significant inequality and lack of political and other representation is correlated with these categories, we should condemn discrimination on the basis of them, and we should value diversity as measured by them. But what exact principle determines which categories should be assigned such value, and which not?

I am sure that a complete and accurate answer to this question would open a floodgate of insight about the modern society. Yet out of all difficult questions I've ever discussed, this seems to be the hardest one to open a rational discussion about; the amount of sanctimoniousness and/or logical incoherence in the answers one typically gets is just staggering. One exception is several discussions I've read on Overcoming Bias, which at least asked the right questions, but unfortunately only scratched the surface in answering them.

Comment author: NancyLebovitz 21 September 2010 05:45:04PM 2 points [-]

That's intriguing. Would you care to mention some of the sorts of diversity which usually aren't on the radar?

Comment author: AdeleneDawner 18 September 2010 10:25:35PM 3 points [-]

I've spent some time thinking about this, and my conclusion is that, at least personally, what I value about diversity is the variety of worldviews that it leads to.

This does result in some rather interesting issues, though. For example, one of the major factors in the difference in worldview between dark-skinned Americans and light-skinned Americans is the existence of racism, both overt and institutional. Thus, if I consider diversity to be very valuable, it seems that I should support racism. I don't, though - instead, I consider that the relevant preferences of dark-skinned Americans take precedence over my own preference for diversity. (Similarly, left-handed people's preference for non-abusive writing education appropriately took precedence over the cultural preference for everyone to write with their right hands, and left-handedness is, to the best of my knowledge, no longer a significant source of diversity of worldview.)

That assumes coherence in the relevant group's preference, though, which isn't always the case. For example, among people with disabilities, there are two common views that are, given limited resources, significantly conflicting: The view that disabilities should be cured and that people with disabilities should strive to be (or appear to be) as normal as possible, and the view that disabilities should be accepted and that people with disabilities should be free to focus on personal goals rather than being expected to devote a significant amount of effort to mitigating or hiding their disabilities. In such cases, I support the preference that's more like the latter, though I do prefer to leave the option open for people with the first preference to pursue that on a personal level (meaning I'd support the preference 'I'd prefer to have my disability cured', but not 'I'd prefer for my young teen's disability to be treated even though they object', and I'm still thinking about the grey area in the middle where such things as 'I'd prefer for my baby's disability to be cured, given that it won't be able to be cured when they're older if it's not cured now, and given that if it's not cured I'm likely to be obligated to take care of them for the rest of my life' exist).

I think that's coherent, anyway, as far as it goes. I'm sure there are issues I haven't addressed, though.

Comment author: Vladimir_M 18 September 2010 11:34:07PM *  10 points [-]

With your first example, I think you're on to an important politically incorrect truth, namely that the existence of diverse worldviews requires a certain degree of separation, and "diversity" in the sense of every place and institution containing a representative mix of people can exist only if a uniform worldview is imposed on all of them.

Let me illustrate using a mundane and non-ideological example. I once read a story about a neighborhood populated mostly by blue-collar folks with a strong do-it-yourself ethos, many of whom liked to work on their cars in their driveways. At some point, however, the real estate trends led to an increasing number of white collar yuppie types moving in from a nearby fancier neighborhood, for whom this was a ghastly and disreputable sight. Eventually, they managed to pass a local ordinance banning mechanical work in front yards, to the great chagrin of the older residents.

Therefore, when these two sorts of people lived in separate places, there was on the whole a diversity of worldview with regards to this particular issue, but when they got mixed together, this led to a conflict situation that could only end up with one or another view being imposed on everyone. And since people's worldviews manifest themselves in all kinds of ways that necessarily create conflict in case of differences, this clearly has implications that give the present notion of "diversity" at least a slight Orwellian whiff.

Comment author: wedrifid 18 September 2010 06:59:54PM *  2 points [-]

Yet out of all difficult questions I've ever discussed, this seems to be the hardest one to open a rational discussion about; the amount of sanctimoniousness and/or logical incoherence in the answers one typically gets is just staggering.

My experience is similar. Even people that are usually extremely rational go loopy.

One exception are several discussions I've read on Overcoming Bias, which at least asked the right questions, but unfortunately only scratched the surface in answering them.

I seem to recall one post there that specifically targeted the issue. But you did ask "what basis should" while Robin was just asserting a controversial is.

Comment author: Vladimir_M 18 September 2010 07:08:39PM 2 points [-]

wedrifid:

But you did ask "what basis should" while Robin was just asserting a controversial is.

I probably didn't word my above comment very well. I am also asking only for an accurate description of the controversial "is."

The fact is that nearly all people attach great moral importance to these issues, and what I'd like (at least for start) is for them to state the "shoulds" they believe in clearly, comprehensively, and coherently, and to explain the exact principles with which they justify these "shoulds." My above stated questions should be understood in these terms.

Comment author: wedrifid 18 September 2010 07:18:44PM 1 point [-]

If you are sufficiently curious you could make a post here. People will be somewhat motivated to tone down the hysteria given that you will have pre-emptively shunned it.

Comment author: wedrifid 17 September 2010 06:00:54AM 1 point [-]

Um, all I was saying was that women and black people are underrepresented here, but that ought not be explained away by the subject matter of Less Wrong.

"Ought"? I say it 'ought' to be explained away by the subject matter of Less Wrong if and only if that is an accurate explanation. Truth isn't normative.

Comment author: datadataeverywhere 17 September 2010 06:20:06AM 2 points [-]

Is this a language issue? Am I using "ought" incorrectly? I'm claiming that the truth of the matter is that women are capable of rationality, and have a place here, so it would be wrong (in both an absolute and a moral sense) to claim that their lack of presence is due to this being a blog about rationality.

Perhaps I should weaken my statement to say "if women are as capable as men in rationality, their underrepresentation here ought not be explained away by the subject matter". I'm not sure whether I feel like I should or shouldn't apologize for taking the premise of that sentence as a given, but I did, hence my statement.

Comment author: wedrifid 17 September 2010 06:34:06AM 1 point [-]

Ahh, ok. That seems reasonable. I had got the impression that you had taken the premise for granted primarily because it would be objectionable if it was not true and the fact of the matter was an afterthought. Probably because that's the kind of reasoning I usually see from other people of your species.

I'm not going to comment either way about the premise except to say that it is inclination and not capability that is relevant here.

Comment author: cousin_it 09 September 2010 04:20:20PM *  6 points [-]

However, if we are predominately white males, why are we?

Ignoring the obviously political issue of "concern", it's fun to consider this question on a purely intellectual level. If you're a white male, why are you? Is the anthropic answer ("just because") sufficient? At what size of group does it cease to be sufficient? I don't know the actual answer. Some people think that asking "why am I me" is inherently meaningless, but for me personally, this doesn't dissolve the mystery.

Comment author: datadataeverywhere 09 September 2010 04:30:23PM 4 points [-]

The flippant answer is that a group size of 1 lacks statistical significance; at some group size, that ceases to be the case.

I asked not from a political perspective. In arguments about diversity, political correctness often dominates. I am actually interested in, among other things, whether a lack of diversity is a functional impairment for a group. I feel strongly that it is, but I can't back up that claim with evidence strong enough to match my belief. For a group such as Less Wrong, I have to ask what we miss due to a lack of diversity.

Comment author: cousin_it 09 September 2010 04:45:48PM *  5 points [-]

The flippant answer is that a group size of 1 lacks statistical significance; at some group size, that ceases to be the case.

The flippant answer to your answer is that you didn't pick LW randomly out of the set of all groups. The fact that you, a white male, consistently choose to join groups composed mostly of white males - and then inquire about diversity - could have any number of anthropic explanations from your perspective :-) In the end it seems to loop back into why are you, you again.

ETA: apparently datadataeverywhere is female.

Comment author: datadataeverywhere 09 September 2010 04:54:46PM 0 points [-]

No, I think that's a much less flippant answer :-)

Comment author: gwern 09 September 2010 04:05:23PM *  9 points [-]

This sounds like the same question as why are there so few top-notch women in STEM fields, why there are so few women listed in Human Accomplishment's indices*, why so few non-whites or non-Asians score 5 on AP Physics, why...

In other words, here be dragons.

* just Lady Murasaki, if you were curious. It would be very amusing to read a review of The Tale of Genji by Eliezer or a LWer. My own reaction by the end was horror.

Comment author: datadataeverywhere 09 September 2010 04:26:32PM 3 points [-]

That's absolutely true. I've worked for two US National Labs, and both were monocultures. At my first job, the only woman in my group (20 or so) was the administrative assistant. At my second, the numbers were better, but at both, there were literally no non-whites in my immediate area. The inability to hire non-citizens contributes to the problem---I worked for Microsoft as well, and all the non-whites were foreign citizens---but it's not as if there aren't any women in the US!

It is a nearly intractable problem, and I think I understand it fairly well, but I would very much like to hear the opinion of LWers. My employers have always been very eager to hire women and minorities, but the numbers coming out of computer science programs are abysmal. At Less Wrong, a B.S. or M.S. in a specific field is not a barrier to entry, so our numbers should be slightly better. On the other hand, I have no idea how to go about improving them.

The Tale of Genji has gone on my list of books to read. Thanks!

Comment author: gwern 09 September 2010 04:58:40PM *  6 points [-]

At Less Wrong, a B.S. or M.S. in a specific field is not a barrier to entry, so our numbers should be slightly better.

Yes, but we are even more extreme in some respects; many CS/philosophy/neurology/etc. majors reject the Strong AI Thesis (I've asked), while it is practically one of our dogmas.

The Tale of Genji has gone on my list of books to read. Thanks!

I realize that I was a bit of a tease there. It's somewhat off topic, but I'll include (some of) the hasty comments I wrote down immediately upon finishing:

The prevalence of poems & puns is quite remarkable. It is also remarkable how tired they all feel; in Genji, poetry has lost its magic and has simply become another stereotyped form of communication, as codified as a letter to the editor or small talk. I feel fortunate that my introductions to Japanese poetry have usually been small anthologies of the greatest poets; had I first encountered court poetry through Genji, I would have been disgusted by the mawkish sentimentality & repetition.

The gender dynamics are remarkable. Toward the end, one of the two then main characters becomes frustrated and casually has sex with a serving lady; it's mentioned that he liked sex with her better than with any of the other servants. Much earlier in Genji (it's a good thousand pages, remember), Genji simply rapes a woman, and the central female protagonist, Murasaki, is kidnapped as a girl and he marries her while she is still what we would consider a child. (I forget whether Genji sexually molests her before the pro forma marriage.) This may be a matter of non-relativistic moral appraisal, but I get the impression that in matters of sexual fidelity, rape, and children, Heian-era morals were not much different from my own, which makes the general impunity all the more remarkable. (This is the 'shining' Genji?) The double-standards are countless.

The power dynamics are equally remarkable. Essentially every speaking character is nobility, low or high, or Buddhist clergy (and very likely nobility anyway). The characters spend next to no time on 'work' like running the country, despite many main characters ranking high in the hierarchy and holding ministerial ranks; the Emperor in particular does nothing except party. All the households spend money like mad, and just expect their land-holdings to send in the cash. (It is a signal of their poverty that the Uji household ever even mentions how much less money is coming from their lands than before.) The Buddhist clergy are remarkably greedy & worldly; after the death of the father of the Uji household, the abbot of the monastery he favored sends the grief-stricken sisters a note - which I found remarkably crass - reminding them that he wants the customary gifts of valuable textiles.

The medicinal practices are utterly horrifying. They seem to consist, one and all, of the following algorithm: 'while sick, pay priests to chant.' If chanting doesn't work, hire more priests. (One remarkable freethinker suggests that a sick woman eat more food.) Chanting is, at least, not outright harmful like bloodletting, but it's still sickening to read through dozens of people dying amidst chanting. In comparison, the bizarre superstitions that guide many characters' activities (trapping them in their houses on inauspicious days) are practically unobjectionable.

Comment author: Perplexed 09 September 2010 03:26:45AM 1 point [-]

Wow! I just lost 50 points of karma in 15 minutes. I haven't made any top level posts, so it didn't happen there. I wonder where? I guess I already know why.

Comment author: Perplexed 12 September 2010 07:25:43PM 1 point [-]

And now my karma has jumped by more than 300 points! WTF? I'm pretty sure this time that someone went through my comments systematically upvoting. If that was someone's way of saying "thank you" ... well ... you are welcome, I guess. But isn't that a bit much?

Comment author: jacob_cannell 09 September 2010 07:04:04AM 1 point [-]

That happened to me three days ago or so after my last top level post. At the time said post was at -6 or so, and my karma was at 60+ something. Then, within a space of < 10 minutes, my karma dropped to zero (actually I think it went substantially negative). So what is interesting to me is the timing.

I refresh or click on links pretty quickly. It felt like my karma dropped by more than 50 points instantly (as if someone had dropped my karma in one hit), rather than someone or a number of people 'tracking me'.

However, I could be mistaken, and I'm not certain I wasn't away from my computer for 10 minutes or something. Is there some way for high karma people to adjust someone's karma? Seems like it would be useful for troll control.

Comment author: RobinZ 09 September 2010 03:49:14AM 3 points [-]

While katydee's story is possible (and probable, even), it is also possible that someone is catching up on their Less Wrong reading for a substantial recent period and issuing many votes (up and down) in that period. Some people read Less Wrong in bursts, and some of those are willing to lay down many downvotes in a row.

Comment author: katydee 09 September 2010 03:43:54AM 2 points [-]

It is possible that someone has gone through your old comments and systematically downvoted them-- I believe pjeby reported that happening to him at one point.

In the interest of full disclosure, I have downvoted you twice in the last half hour and upvoted you once. It's possible that fifty other people think like me, but if so you should have very negative karma on some posts and very positive karma on others, which doesn't appear to be the case.

Comment author: Perplexed 09 September 2010 03:55:27AM 2 points [-]

I think you are right about the systematic downvoting. I've noticed and not minded the downvotes on my recent controversial postings. No hard feelings. In fact, no real hard feelings toward whoever gave me the big hit - they are certainly within their rights and I am certainly currently being a bit of an obnoxious bastard.

Comment author: gwern 08 September 2010 01:07:52PM *  5 points [-]

Relevant to our akrasia articles:

If obese individuals have time-inconsistent preferences then commitment mechanisms, such as personal gambles, should help them restrain their short-term impulses and lose weight. Correspondence with the bettors confirms that this is their primary motivation. However, it appears that the bettors in our sample are not particularly skilled at choosing effective commitment mechanisms. Despite payoffs of as high as $7350, approximately 80% of people who spend money to bet on their own behaviour end up losing their bets.

http://www.marginalrevolution.com/marginalrevolution/2010/09/should-you-bet-on-your-own-ability-to-lose-weight.html

Comment author: Sniffnoy 08 September 2010 08:29:39PM 1 point [-]

I recall someone claiming here earlier that they could do anything if they bet they could, though I can't find it right now. Useful to have some more explicit evidence about that.

Comment author: JamesAndrix 08 September 2010 04:51:55AM 1 point [-]

Have there been any articles on what's wrong with the Turing test as a measure of personhood? (even in its least convenient form)

In short the problems I see are: False positives, false negatives, ignoring available information about the actual agent, and not reliably testing all the things that make personhood valuable.

Comment author: Larks 08 September 2010 06:25:11AM 3 points [-]

False positives, false negatives

This sounds pretty exhaustive.

Comment author: DSimon 07 September 2010 08:28:02PM *  13 points [-]

I'm interested in video game design and game design in general, and also in raising the rationality waterline. I'd like to combine these two interests: to create a rationality-focused game that is entertaining or interesting enough to become popular outside our clique, but that can also effectively teach a genuinely useful skill to players.

I imagine that it would consist of one or more problems which the player would have to be rational in some particular way to solve. The problem has to be:

  • Interesting: The prospect of having to tackle the problem should excite the player. Very abstract or dry problems would not work; very low-interaction problems wouldn't work either, even if cleverly presented (i.e. you could do Newcomb's problem as a game with plenty of lovely art and window dressing... but the game itself would still only be a single binary choice, which would quickly bore the player).

  • Dramatic in outcome: The difference between success and failure should be great. A problem in which being rational gets you 10 points but acting typically gets you 8 points would not work; the advantage of applying rationality needs to be very noticeable.

  • Not rigged (or not obviously so): The player shouldn't have the feeling that the game is designed to directly reward rationality (even though it is, in a sense). The player should think that they are solving a general problem with rationality as their asset.

  • Not allegorical: I don't want to raise any likely mind-killing associations in the player's mind, like politics or religion. The problem they are solving should be allegorical to real world problems, but to a general class of problems, not to any specific problems that will raise hackles and defeat the educational purpose of the game.

  • Surprising: The rationality technique being taught should not be immediately obvious to an untrained player. A typical first session should involve the player first trying an irrational method, seeing how it fails, and then eventually working their way up to a rational method that works.

A lot of the rationality-related games that people bring up fail some of these criteria. Zendo, for example, is not "dramatic in outcome" enough for my taste. Avoiding confirmation bias and understanding something about experimental design makes one a better Zendo player... but in my experience not as much as just developing a quick eye for pattern recognition and being able to read the master's actions.

Anyone here have any suggestions for possible game designs?

Comment author: Oscar_Cunningham 09 September 2010 02:16:32PM 0 points [-]

One idea I'd like to suggest would be a game where the effectiveness of the items a player has changes randomly hour by hour. Maybe an MMO with players competing against each other, so that they can communicate information about which items are effective. Introduce new items with weird effects every so often so that players have to keep an eye on their long term strategy as well.

Comment author: DSimon 09 September 2010 02:57:55PM 1 point [-]

I think a major problem with that is that most players would simply rely upon the word on the street to tell them what was currently effective, rather than performing experiments themselves. Furthermore, changes in only "effectiveness" would probably be too easy to discover using a "cookbook" of experiments (see the NetHack discussion in this thread).

Comment author: Oscar_Cunningham 09 September 2010 03:30:33PM 1 point [-]

I'm thinking that the parameters should change just quickly enough to stop consensus forming (maybe it could be driven by negative feedback, so that once enough people are playing one strategy it becomes ineffective). Make using a cookbook expensive. Winning should be difficult, and only just the right combination will succeed.
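[Editorial aside: the negative-feedback mechanic described above - item effectiveness drifting randomly, with popular items weakening so no stable consensus can form - can be sketched in a few lines of Python. All names, numbers, and the exact update rule here are hypothetical illustrations, not anything from the comments.]

```python
import random

# Sketch of the proposed mechanic: each game tick, every item's
# effectiveness drifts randomly, and items that a large share of
# players are using get pushed down (negative feedback), so word
# on the street about "the best item" goes stale quickly.

def update_effectiveness(effectiveness, usage_counts, drift=0.05, feedback=0.5):
    """Return new effectiveness values after one game tick.

    effectiveness: dict item -> current effectiveness in [0, 1]
    usage_counts:  dict item -> how many players used the item this tick
    """
    total_uses = sum(usage_counts.values()) or 1
    new_values = {}
    for item, value in effectiveness.items():
        share = usage_counts.get(item, 0) / total_uses
        value += random.uniform(-drift, drift)   # random drift each tick
        value -= feedback * share * value        # popular items weaken
        new_values[item] = min(1.0, max(0.0, value))  # clamp to [0, 1]
    return new_values

# One tick: nearly everyone uses the sword, so it should tend to weaken,
# while the rarely-used dagger only drifts.
items = {"sword": 0.9, "staff": 0.5, "dagger": 0.4}
uses = {"sword": 80, "staff": 15, "dagger": 5}
print(update_effectiveness(items, uses))
```

Tuning `feedback` against `drift` would control how fast a winning strategy burns itself out, which is the knob that decides whether a cookbook of known-good experiments stays profitable.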

Comment author: taryneast 23 March 2011 07:32:21AM 0 points [-]

Hmmm - changing things frequently means you'll have some negative knock-on effects. You'll be penalising anybody that doesn't game as often - e.g. people with a life. You stand a chance of alienating a large percentage of the audience, which is not a good idea.

Comment author: DSimon 09 September 2010 06:18:33PM *  1 point [-]

I think this makes sense, but can you go into more detail about this:

Make using a cookbook expensive.

I didn't mean a cookbook as an in-game item (I'm not sure if that's what you were implying...), I meant the term to mean a set of well-known experiments which can simply be re-run every time new results are required. If the game can be reduced to that state, then a lot of its value as a rationality teaching tool (and also as an interesting game, to me at least) is lost. How can we force the player to have to come up with new ideas for experiments, and see some of those ideas fail in subtle ways that require insight to understand?

My tendency is to want to solve this problem by just making a short game, so that there's no need to figure out how to create a whole new, interesting experimental space for each session. This would be problematic in an MMO, where replayability is expected (though there have been some interesting exceptions, like Uru).

Comment author: Oscar_Cunningham 09 September 2010 07:56:12PM 2 points [-]

Ah, I meant: "Make each item valuable enough that using several just to work out how effective each one is would be a fatal mistake" Instead you would have to keep track of how effective each one was, or watch the other players for hints.

Comment author: khafra 09 September 2010 11:01:37AM *  5 points [-]

I'm not sure if transformice counts as a rationalist game, but it appears to be a bunch of multiplayer coordination problems, and the results seem to support ciphergoth's conjecture on intelligence levels.

Comment author: Emile 09 September 2010 12:03:46PM 2 points [-]

Transformice is awesome :D A game hasn't made me laugh that much for a long time.

And it's about interesting, human things, like crowd behaviour and trusting the "leader" and being thrust in a position of responsibility without really knowing what to do ... oh, and everybody dying in funny ways.

Comment author: Emile 08 September 2010 10:29:11PM 8 points [-]

Note also the Wiki page, with links to previous threads (I just discovered it, and I don't think I had noticed the previous threads. This one seems better!)

One interesting game topic could be building an AI. Make it look like a nice and cutesy adventure game, with possibly some little puzzles, but once you flip the switch, if you didn't get absolutely everything exactly right, the universe is tiled with paperclips/tiny smiley faces/tiny copies of Eliezer Yudkowsky. That's more about SIAI propaganda than rationality though.

One interesting thing would be to exploit the conventions of video games but make actual winning require seeing through those conventions. For example, have a score, and certain actions give you points, with nice shiny feedback and satisfying "shling!" sounds, while some actions are vitally important but not rewarded by any feedback.

For example (to stick with the "build an AI" example), say you can hire scientists, and the scientists' profile page lists plenty of impressive certifications (stats like "experiment design", "analysis", "public speaking", etc.), and some filler text about what they did their thesis on and boring stuff like that (think: stats get big icons and sit at the top, while the filler text looks like boring background text). And once you've hired the scientists, you get various bonuses (money, prestige points, experiments), but the only one of those factors that's of any importance at the end of the game is whether the scientist is "not stupid", and the only way to tell that is from various tell-tale signs of "stupid" in the "boring" filler text - for example, also having a degree in theology, or having published a paper on homeopathy ... stuff that would indeed be a bad sign for a scientist, but that nothing in the game ever tells you is bad.

So basically the idea would be that the rules of the game you're really playing wouldn't be the ones you would think at first glance, which is a pretty good metaphor for real life too.

It needs to be well-designed enough so that it's not "guessing the programmer's password", but that should be possible.

Making a game around experiment design would be interesting too - have some kind of physics / chemistry / biology system that obeys some rules (mostly about transformations, not some "real" physics with motion and collisions etc.), have game mechanics that allow you to do something like experimentation, and have a general context (the feedbacks you get, what other characters say, what you can buy) that points towards a slightly wrong understanding of reality. This is bouncing off Silas' ideas, things that people say are good for you may not really be so, etc.

Here again, you can exploit the conventions of video games to mislead the player. For example, red creatures like eating red things, blue creatures like eating blue things, etc. - but the rule doesn't always hold.

Comment author: PeerInfinity 09 September 2010 05:23:22PM 2 points [-]

"once you flip the switch, if you didn't get absolutely everything exactly right, the universe is tiled with paperclips/tiny smiley faces/tiny copies of Eliezer Yudkowsky."

See also: The Friendly AI Critical Failure Table

And I think all of the other suggestions you made in this comment would make an awesome game! :D

Comment author: Emile 09 September 2010 06:00:20PM 0 points [-]

Ooh, I had forgotten about that table - GURPS Friendly AI is also of interest.

Comment author: Emile 08 September 2010 11:21:35PM 1 point [-]

Riffing off my weird biology / chemistry thing: a game based on the breeding of weird creatures, by humans freshly arrived on the planet (add some dimensional travel if you want to justify weird chemistry - I'm thinking of Tryslmaistan).

The catch is (spoiler warning!), the humans got the wrong rules for creature breeding, and some plantcrystalthingy they think is the creatures' food is actually part of their reproduction cycle, where some essential "genetic" information passes.

And most of the things that look like in-game help and tutorials are actually wrong, and based on a model that's more complicated than the real one (it's just a model that's closer to earth biology).

Comment author: DSimon 08 September 2010 10:55:28PM *  4 points [-]

Here again, you can exploit the conventions of video games to mislead the player.

I think this is a great idea. Gamers know lots of things about video games, and they know them very thoroughly. They're used to games that follow these conventions, and they're also (lately) used to games that deliberately avert or meta-comment on these conventions for effect (e.g. Achievement Unlocked), but there aren't too many games I know of that set up convincingly normal conventions only to reveal that the player's understanding is flawed.

Eternal Darkness did a few things in this area. For example, if your character's sanity level was low, you the player might start having unexpected troubles with the interface, i.e. the game would refuse to save on the grounds that "It's not safe to save here", the game would pretend that it was just a demo of the full game, the game would try to convince you that you accidentally muted the television (though the screaming sound effects would still continue), and so on. It's too bad that those effects, fun as they were, were (a) very strongly telegraphed beforehand, and (b) used only for momentary hallucinations, not to indicate that the original understanding the player had was actually the incorrect one.

Comment author: Raemon 09 September 2010 02:39:21AM 17 points [-]

The problem is that, simply put, such games generally fail on the "fun" meter.

There is a game called "The Void," which begins with the player dying and going to a limbo like place ("The Void"). The game basically consists of you learning the rules of the Void and figuring out how to survive. At first it looks like a first person shooter, but if you play it as a first person shooter you will lose. Then it sort of looks like an RPG. If you play it as an RPG you will also lose. Then you realize it's a horror game. Which is true. But knowing that doesn't actually help you to win. What you eventually have to realize is that it's a First Person Resource Management game. Like, you're playing StarCraft from first person as a worker unit. Sort of.

The world has a very limited resource (Colour) and you must harvest, invest and utilize Colour to solve all your problems. If you waste any, you will probably die, but you won't realize that for hours after you made the initial mistake.

Every NPC in the game will tell you things about how the world works, and every one of those NPCs (including your initial tutorial) is lying to you about at least one thing.

The game is filled with awesome flavor, and a lot of awesome mechanics. (Specifically mechanics I had imagined independently and wanted to make my own game regarding). It looked to me like one of the coolest sounding games ever. And it was amazingly NOT FUN AT ALL for the first four hours of play. I stuck with it anyway, if for no other reason than to figure out how a game with such awesome ideas could turn out so badly. Eventually I learned how to play, and while it never became fun it did become beautiful and poignant and it's now one of my favorite games ever. But most people do not stick with something they don't like for four hours.

Toying with players' expectations sounds cool to the people who understand how the toying works, but is rarely fun for the players themselves. I don't think that's an insurmountable obstacle, but if you're going to attempt to do this, you need to really fathom how hard it is to work around. Most games telegraph everything for a reason.

Comment author: Emile 09 September 2010 02:52:35PM 1 point [-]

Huh, sounds very interesting! So my awesome game concept would give rise to a lame game, eh?

*updates*

I hadn't heard of that game, I might try it out. I'm actually surprised a game like that was made and commercially published.

Comment author: NihilCredo 25 September 2010 10:29:33PM *  2 points [-]

It was made by a Russian developer which is better known for its previous effort, Pathologic, a somewhat more classical first-person adventure game (albeit very weird and beautiful, with artistic echoes from Brecht to Dostoevskij), but with a similar problem of being murderously hard and deceptive - starving to death is quite common. Nevertheless, in Russia Pathologic had acceptable sales and excellent critical reviews, which is why Ice-Pick Lodge could go on with a second project.

Comment author: Raemon 09 September 2010 11:19:22PM *  4 points [-]

It's a good game, just with a very narrow target audience. (This site is probably a good place to find players who will get something out of it, since you have higher than average percentages of people willing to take a lot of time to think about and explore a cerebral game).

Some specific lessons I'd draw from that game and apply here:

  1. Don't penalize failure too hard. The Void's single biggest issue (for me) is that even when you know what you're doing you'll need to experiment and every failure ends with death (often hours after the failure). I reached a point where every time I made even a minor failure I immediately loaded a saved game. If the purpose is to experiment, build the experimentation into the game so you can try again without much penalty (or make the penalty something that is merely psychological instead of an actual hampering of your ability to play the game.)

  2. Don't expect players to figure things out without help. There's a difference between a game that teaches people to be rational and a game that simply causes non-rational people to quit in frustration. Whenever there's a rational technique you want people to use, spell it out. Clearly. Over and over (because they'll miss it the first time).

The Void actually spells out everything as best they can, but the game still drives players away because the mechanics are simply unlike any other game out there. Most games rely on an extensive vocabulary of skills that players have built up over years, and thus each instruction only needs to be repeated once to remind you of what you're supposed to be doing. The Void repeats instructions maybe once or twice, and it simply isn't enough to clarify what's actually going on. (The thing where NPCs lie to you isn't even relevant till the second half of the game. By the time you get to that part you've either accepted how weird the game is or you've quit already).

My sense is that the best approach would be to start with a relatively normal (mechanics-wise) game, and then have NPCs that each encourage specific applications of rationality, but each of which has a rather narrow mindset and so may give bad advice for specific situations. But your "main" friend continuously reminds you to notice when you are confused, and consider which of your assumptions may be wrong. (Your main friend will eventually turn out to be wrong/lying/unhelpful about something, but only the once and only towards the end when you've built up the skills necessary to figure it out).

Huh, sounds very interesting! So my awesome game concept would give rise to a lame game, eh?

This was my experience with the Void exactly. Basically all the mechanics and flavors were things I had come up with on my own that I wanted to make games out of, and I'm really glad I played the Void first because I might have wasted a huge chunk of time making a really bad game if I hadn't gotten to learn from their mistakes.

Comment author: Perplexed 07 September 2010 10:44:57PM 3 points [-]

Dramatic in outcome:

One way to achieve this is to make it a level-based puzzle game. Solve the puzzle suboptimally, and you don't get to move on. Of course, that means that you may need special-purpose programming at each level. On the other hand, you can release levels 1-5 as freeware, levels 6-20 as Product 1.0, and levels 21-30 as Product 2.0.

Not allegorical:

The puzzles I am thinking of are in the field of game theory, so the strategies will include things like not cooperating (because you don't need to in this case), making and following through on threats, and similar "immoral" actions. Some people might object on ethical or political grounds. I don't really know how to answer except to point out that at least it is not a first-person shooter.

Surprising

Game theory includes many surprising lessons - particularly things like the handicap principle, voluntary surrender of power, rational threats, and mechanism design. Coalition games are particularly counter-intuitive, but, with experience, intuitively understandable.

But you can even teach some rationality lessons before getting into games proper. Learn to recognize individuals, for example. Not all cat-creatures you encounter are the same character. You can do several problems involving probabilities and inference before the second player ever shows up.

Comment author: SilasBarta 07 September 2010 10:11:35PM 8 points [-]

Here's an idea I've had for a while: Make it seem, at first, like a regular RPG, but here's the kicker -- the mystical, magic potions don't actually do anything distinguishable from chance.

(For example, you might have some herb combination that "restores HP", but whenever you use it, you strangely lose HP that more than cancels what it gave you. If you think this would be too obvious, rot13: In the game Earthbound, bar vgrz lbh trg vf gur Pnfrl Wbarf ong, naq vgf fgngf fnl gung vg'f ernyyl cbjreshy, ohg vg pna gnxr lbh n ybat gvzr gb ernyvmr gung vg uvgf fb eneryl gb or hfryrff.)
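A player who logs HP before and after each use could still catch such a secretly net-negative item statistically. A minimal Python sketch, with entirely made-up numbers (the herb, the visible gain, and the hidden drain are all hypothetical):

```python
import random

def use_healing_herb(hp, rng):
    """Hypothetical 'healing' herb: the visible gain is more than cancelled
    by a hidden drain, so the expected net effect is negative."""
    visible_gain = rng.randint(5, 15)   # what the player sees restored
    hidden_drain = rng.randint(10, 20)  # what secretly happens afterward
    return hp + visible_gain - hidden_drain

rng = random.Random(42)
deltas = [use_healing_herb(100, rng) - 100 for _ in range(1000)]
mean_delta = sum(deltas) / len(deltas)
# Averaged over many uses the herb is clearly harmful, even though any
# single use can look like a small gain.
```

The point is that the deception survives casual play (single uses are noisy) but not deliberate record-keeping.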

Set it in an environment like 17th-century England where you have access to the chemicals and astronomical observations they did (but give them fake names to avoid tipping off users, e.g., metallia instead of mercury/quicksilver), and are in the presence of a lot of thinkers working off of astrological and alchemical theories. Some would suggest stupid experiments ("extract aurum from urine -- they're both yellow!") while others would have better ideas.

To advance, you have to figure out the laws governing these things (which would be isomorphic to real science) and put this knowledge to practical use. The insights that had to be made back then are far removed from the clean scientific laws we have now, so it would be tough.

It would take a lot of work to e.g. make it fun to discover how to use stars to navigate, but I'm sure it could be done.

Comment author: CronoDAS 08 September 2010 10:00:19PM *  4 points [-]

To advance, you have to figure out the laws governing these things (which would be isomorphic to real science) and put this knowledge to practical use. The insights that had to be made back then are far removed from the clean scientific laws we have now, so it would be tough.

Or you could just go look up the correct answers on gamefaqs.com.

Comment author: JGWeissman 08 September 2010 10:06:46PM 2 points [-]

So the game should generate different sets of fake names each time it is run, and have some variance in the forms of clues and which NPCs give them.
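One way such per-run randomization might look, as a minimal Python sketch (the labels and effects are invented for illustration):

```python
import random

# Hypothetical labels and effects, invented for illustration.
EFFECTS = ["heals", "poisons", "does nothing"]
LABELS = ["metallia draught", "aurum tonic", "lunar philtre"]

def assign_labels(seed):
    """Randomly pair each fake label with a hidden effect for one playthrough."""
    rng = random.Random(seed)
    effects = EFFECTS[:]
    rng.shuffle(effects)
    return dict(zip(LABELS, effects))

# Different seeds (i.e., different playthroughs) usually yield different
# label->effect tables, so one run's walkthrough is useless for the next.
run_a = assign_labels(1)
run_b = assign_labels(2)
```

Seeding from the save file keeps the assignment stable within a playthrough while varying it across players.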

Comment author: CronoDAS 08 September 2010 10:08:56PM 4 points [-]

Ever played Nethack? ;)

Comment author: JGWeissman 08 September 2010 10:15:07PM 0 points [-]

Yes, a little, but I never really got into it. As I recall, Nethack didn't do what I suggest so much as not tell you what certain things are until you magically identify them.

Comment author: DSimon 08 September 2010 10:36:04PM *  6 points [-]

Well, there are other ways in NetHack to identify things besides the "identify" spell (which itself must be identified anyways). You can:

  • Try it out on yourself. This is often definitive, but also often dangerous. Say you drink a potion: it might be a healing potion... or it might be poison... or it might be fruit juice. A 1/3 chance of existential failure for a given experiment is crappy odds; knowledge isn't that valuable.

  • Get an enemy to try it. Intelligent enemies will often know the identities of scrolls and potions you aren't yet familiar with. Leaving a scroll or potion on the ground and seeing what the next dwarf that passes by does with it can be informative.

  • Try it out on an enemy. Potions can be shattered over an enemy's head instead of being drunk; this is safer than drinking it yourself, though you may not notice the effects as readily, and it's annoyingly easy to miss and just waste the potion on the wall behind the monster.

  • Various other methods that can at least narrow down the identification: have your pet walk on it to see if it's cursed, offer to sell it to a shopkeep to get an idea of how valuable it is, dip things in unknown potions to see if some obvious effect (e.g. corrosion) occurs, scratch at the ground with unknown wands to see if sparks/flames are created and if so what kind, kick things to see if they are heavy or light, and so on and so on...

The reason NetHack isn't already the Ideal Experimental Method Game is because once you learn what the right experiments are, you can just use them repeatedly each game; the qualitative differences between magical items are always the same, and it's just a matter of rematching label to effect for each new session.
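That "cookbook" can be sketched as identification by elimination: each experiment's observable outcome rules out some candidate identities. A toy Python sketch with a made-up outcome table (not actual NetHack data):

```python
# Identification by elimination: each experiment's observable outcome
# rules out candidate identities. Outcome table is hypothetical.

CANDIDATES = {"healing", "poison", "fruit juice"}

OUTCOMES = {
    ("pet_walks_over", "healing"): "no reaction",
    ("pet_walks_over", "poison"): "pet hesitates",   # cursed items repel pets
    ("pet_walks_over", "fruit juice"): "no reaction",
    ("throw_at_monster", "healing"): "monster healed",
    ("throw_at_monster", "poison"): "monster weakened",
    ("throw_at_monster", "fruit juice"): "no effect",
}

def eliminate(candidates, experiment, observed):
    """Keep only the identities consistent with the observed outcome."""
    return {c for c in candidates if OUTCOMES[(experiment, c)] == observed}

remaining = eliminate(CANDIDATES, "pet_walks_over", "no reaction")
remaining = eliminate(remaining, "throw_at_monster", "no effect")
# remaining is now {"fruit juice"}
```

Once the outcome table is known, the optimal experiment sequence is fixed -- which is exactly why the experimenting stops being interesting on replays.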

On the other hand, for newbie players, where the experimental process might be exciting and novel... well, usually they're too busy experiencing Yet Another Silly Death to play scientist thoroughly. Heck, a lot of the early deaths will be directly due to un-clever experimentation, which discourages a scientific mindset.

Curiosity killed the cat... indirectly, with a shiny unlabeled Amulet of Strangulation.

And anyways, hardly anybody figures out the solutions to NetHack on their own. The game is just too punishing for that, and the spoiler files are too easily available online. (Any NetHack ascendants here who didn't ever look stuff up online?)

Comment author: CronoDAS 08 September 2010 11:10:08PM 1 point [-]

The reason NetHack isn't already the Ideal Experimental Method Game is because once you learn what the right experiments are, you can just use them repeatedly each game; the qualitative differences between magical items are always the same, and it's just a matter of rematching label to effect for each new session.

Yes. That's why

So the game should generate different sets of fake names each time it is run, and have some variance in the forms of clues and which NPCs give them.

isn't quite the perfect solution: you can still look up a "cookbook" set of experiments to distinguish between Potion That Works and Potion That Will Get You Killed.

Comment author: Raemon 10 September 2010 12:31:42AM 5 points [-]

To be fair, in real life, it's perfectly okay that once you determine the right set of experiments to run to analyze a particular phenomenon, you can usually use similar experiments to figure out similar phenomena. I'm less worried about infinite replay value and more worried about the game being fun the first time through.

Comment author: JGWeissman 08 September 2010 11:18:25PM 2 points [-]

Cookbook experiments will suffice if you are handed potions that may have a good effect or that may kill you. But if you have to figure out how to mix the potion yourself, this is much more difficult. Learning the cookbook experiments could be the equivalent of learning chemistry.

Comment author: Alicorn 08 September 2010 11:03:27PM 6 points [-]

This reminds me of something I did in a D&D game once. My character found three unidentified cauldronsful of potions, so she caught three rats and dribbled a little of each on a different rat. One rat died, one turned to stone, and one had no obvious effects. (She kept the last rat and named it Lucky.)

Comment author: CronoDAS 08 September 2010 11:07:53PM 0 points [-]

Did you try using the two lethal potions as weapons?

Comment author: Alicorn 08 September 2010 11:15:58PM 2 points [-]

I didn't get ahold of vials that would shatter on impact before the game fizzled out (a notorious play-by-post problem). I did at one time get to use Lucky as a weapon, though. Sadly, my character was not proficient with rats.

Comment author: humpolec 08 September 2010 12:21:47PM *  11 points [-]

For example, you might have some herb combination that "restores HP", but whenever you use it, you strangely lose HP that more than cancels what it gave you.

What if instead of being useless (by having an additional cancelling effect), magical potions etc. had no effect at all? If HP isn't explicitly stated, you can make the player feel like he's regaining health (e.g. by some visual cues), but in reality he'd die just as often.

Comment author: steven0461 07 September 2010 10:41:07PM *  6 points [-]

I think in many types of game there's an implicit convention that they're only going to be fun if you follow the obvious strategies on auto-pilot and don't optimize too much or try to behave in ways that would make sense in the real world, and breaking this convention without explicitly labeling the game as competitive or a rationality test will mostly just be annoying.

The idea of having a game resemble real-world science is a good one and not one that as far as I know has ever been done anywhere near as well as seems possible.

Comment author: SilasBarta 07 September 2010 11:37:48PM 1 point [-]

Good point. I guess the game's labeling system shouldn't deceive you like that, but it would need characters who promote non-functioning technology, after some warning that not everyone is reliable -- that these people aren't the tutorial.

Comment author: DSimon 08 September 2010 09:28:02PM *  6 points [-]

Best I think would be if the warning came implicitly as part of the game, and a little ways into it.

For example: The player sees one NPC Alex warn another NPC Joe that failing to drink the Potion of Feather Fall will mean he's at risk of falling off a ledge and dying. Joe accepts the advice and drinks it. Soon after, Joe accidentally falls off a ledge and dies. Alex attempts to rationalize this result away, and (as subtly as possible) shrugs off any attempts by the player to follow conversational paths that would encourage testing the potion.

Player hopefully then goes "Huh. I guess maybe I can't trust what NPCs say about potions" without feeling like the game has shoved the answer at them, or that the NPCs are unrealistically bad at figuring stuff out.

Comment author: SilasBarta 09 September 2010 12:48:14PM 1 point [-]

Exactly -- that's the kind of thing I had in mind: the player has to navigate through rationalizations and learn to throw out unreliable claims despite bold attempts to protect them from being proven wrong.

So is this game idea something feasible and which meets your criteria?

Comment author: DSimon 09 September 2010 02:55:33PM 3 points [-]

I think so, actually. When I start implementation, I'll probably use an Interactive Fiction engine as another person on this thread suggested, because (a) it makes implementation a lot easier and (b) I've enjoyed a lot of IF but I haven't ever made one of my own. That would imply removing a fair amount of the RPG-ness in your original suggestion, but the basic ideas would still stand. I'm also considering changing the setting to make it an alien world which just happens to be very much like 17th century England except filled with humorous Rubber Forehead Aliens; maybe the game could be called Standing On The Eyestalks Of Giants.

On the particular criteria:

  • Interesting: I think the setting and the (hopefully generated) buzz would build enough initial interest to carry the player through the first frustrating parts where things don't seem to work as they are used to. Once they get the idea that they're playing as something like an alien Newton, that ought to push up the interest curve again a fair amount.

  • Not (too) allegorical: Everybody loves making fun of alchemists. Now that I think of it, though, maybe I want to make sure the game is still allegorical enough to modern-day issues so that it doesn't encourage hindsight bias.

  • Dramatic/Surprising: IF has some advantages here in that there's an expectation already in place that effects will be described with sentences instead of raw HP numbers and the like. It should be possible to hit the balance where being rational and figuring things out gets the player significant benefits (Dramatic), but the broken theories being used by the alien alchemists and astrologists are convincing enough to fool the player at first into thinking certain issues are non-puzzles (Surprising).

  • Not rigged: Assuming the interface for modelling the game world's physics and doing experiments is sophisticated enough, this should prevent the feeling that the player can win by just finding the button marked "I Am Rational" and hitting it. However, I think this is the trickiest part programming-wise.

I'm going to look into IF programming a bit to figure out how implementable some of this stuff is. I won't and can't make promises regarding timescale or even completability, however: I have several other projects going right now which have to take priority.

Comment author: Mass_Driver 30 September 2010 05:02:03PM 0 points [-]

Let me know if you would like help with the writing, either in terms of brainstorming, mapping the flow, or even just copyediting.

Comment author: SilasBarta 09 September 2010 03:40:43PM *  1 point [-]

Thanks, I'm glad I was able to give you the kind of idea you were looking for, and that someone is going to try to implement this idea.

I'm also considering changing the setting to make it an alien world which just happens to be very much like 17th century England

Good -- that's what I was trying to get at. For example, you would want a completely different night sky; you don't want the gamer to be able to spot the Big Dipper (or Southern Cross for our Aussie friends) and then be able to use existing ephemeris data. The planet should have a different tilt, or perhaps be the moon of another planet, so the player can't just say, "LOL, I know the heliocentric model, my planet is orbiting the sun, problem solved!"

Different magnetic field too, so they can't just say, "lol, make a compass, it points north".

I'm skeptical, though, about how well text-based IF can accomplish this -- the text-only interface is really constraining, and would have to tell the user all of the salient elements explicitly. I would be glad to help on the project in any way I can, though I'm still learning complex programming myself.

Also, something to motivate the storyline: you need to come up with better cannonballs for the navy (i.e., you have to identify what increases a metal's yield energy), or come up with a way of detecting counterfeit coins.

Comment author: steven0461 07 September 2010 09:53:11PM *  3 points [-]

Text adventures seem suitable for this sort of thing, and are relatively easy to write. They're probably not as good for mass appeal, but might be OK for mass nerd appeal. For these purposes, though, I'm worried that rationality may be too much of a suitcase term, consisting of very different groups of subskills that go well with very different kinds of game.

Comment author: CronoDAS 10 September 2010 01:17:41AM 0 points [-]

Another thing that's relatively easy to create is a Neverwinter Nights module, but you're pretty much stuck with the D&D mechanics if you go that route.

Comment author: humpolec 07 September 2010 09:00:57PM 8 points [-]

RPGs (and roguelikes) can involve a lot of optimization/powergaming; the problem is that powergaming could be called rational already. You could

  • explicitly make optimization a part of game's storyline (as opposed to it being unnecessary (usually games want you to satisfice, not maximize) and in conflict with the story)
  • create some situations where the obvious rules-of-thumb (gather strongest items, etc.) don't apply - make the player shut up and multiply
  • create situations in which the real goal is not obvious (e.g. it seems like you should power up as always, but the best choice is to focus on something else)

Sorry if this isn't very fleshed-out, just a possible direction.

Comment author: khafra 07 September 2010 01:31:41PM 3 points [-]

The Science of Word Recognition, by a Microsoft researcher, contains tales of reasonably well done Science gone persistently awry, to the point that the discredited version is today the most popular one.

Comment author: Clippy 07 September 2010 02:19:16PM 2 points [-]

That's a really good article, the Microsoft humans really know their stuff.

Comment author: steven0461 07 September 2010 04:58:42AM *  7 points [-]

In the spirit of "the world is mad" and for practical use, NYT has an article titled Forget what you know about good study habits.

Comment author: Matt_Simpson 07 September 2010 03:57:41PM 1 point [-]

Something I learned myself that the article supported: taking tests increases retention.

Something I learned from the article: varying study location increases retention.

Comment author: MartinB 07 September 2010 04:27:34AM 1 point [-]

Did anyone here read Buckminster Fuller's Synergetics? And if so, did you understand it?

Comment author: timtyler 08 September 2010 07:23:52AM *  0 points [-]

Hefty quantities of Synergetics seem incomprehensible to me.

Fuller was trying to make himself into a mystical science guru - and Synergetics laid out his domain.

There is some worthwhile material in there - though you might be better off with more recent secondary sources.

Comment author: MartinB 22 October 2010 06:28:57AM 0 points [-]

But which sources? The reading of his that I understood I found amazing. And I can imagine that grasping Synergetics might be useful for my brain.

Recommendations for reading are always welcome.

Comment author: timtyler 22 October 2010 09:03:52AM 1 point [-]

It depends on what aspect you are interested in.

For example, I found this book pretty worthwhile:

"Light Structures - Structures of Light: The Art and Engineering of Tensile Architecture" Illustrated by the Work of Horst Berger.

...and here's one of my links pages: http://pleatedstructures.com/links/

Comment author: Risto_Saarelma 07 September 2010 06:07:47AM *  0 points [-]

Seconding this question.

I found Synergetics in the local library when I was in high school, was duly impressed by Arthur C. Clarke's endorsement on the cover, but didn't understand much at all about the book. I was too young to tell if the book was obvious math crankery or not back then, but the magnum opus style of Synergetics combined with it being pretty completely ignored nowadays makes it look a lot like an earlier example of the type of book Wolfram's A New Kind of Science turned out to be.

Still, I'm curious about what the big idea was supposed to be and what people who seriously read the book thought about it.

ETA: For the curious, the whole book is available online.

Comment author: utilitymonster 07 September 2010 01:07:06AM 1 point [-]

Question about Solomonoff induction: does anyone have anything good to say about how to associate programs with basic events/propositions/possible worlds?

Comment author: timtyler 08 September 2010 07:20:05AM 0 points [-]

Don't do that - instead associate programs with sensory input streams.

Comment author: utilitymonster 08 September 2010 09:09:48AM 0 points [-]

Ok, but how?

Comment author: timtyler 08 September 2010 09:31:10AM *  0 points [-]

A stream of sense data is essentially equivalent to a binary stream - the associated programs are the ones that output that stream.
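That association can be made concrete with a toy model. This sketch replaces a real universal machine with a deliberately tiny program class -- bit patterns repeated forever, each weighted 2^-length -- so it is only an illustration of the idea, not actual Solomonoff induction:

```python
from itertools import product

def repeats_to(pattern, n):
    """Output of the toy 'program' pattern: pattern repeated out to length n."""
    s = ""
    while len(s) < n:
        s += pattern
    return s[:n]

def predict_next(observed, max_len=8):
    """Toy Solomonoff-style predictor. Programs are bit patterns repeated
    forever, weighted 2^-length; only programs whose output begins with
    the observed stream contribute. Returns P(next bit = '1' | observed)."""
    w1 = total = 0.0
    for length in range(1, max_len + 1):
        for bits in product("01", repeat=length):
            pattern = "".join(bits)
            out = repeats_to(pattern, len(observed) + 1)
            if out[:-1] == observed:        # consistent with the stream so far
                weight = 2.0 ** -length
                total += weight
                if out[-1] == "1":
                    w1 += weight
    return w1 / total

# After seeing "010101", short consistent programs all predict "0" next,
# so the probability of "1" is small.
p = predict_next("010101")
```

The structure -- sum the prior weights of all programs consistent with the sensory stream, grouped by what they predict next -- is the same as in the real thing; only the program class is a caricature.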

Comment author: utilitymonster 08 September 2010 12:00:42PM *  0 points [-]

Still don't get it. Let's say cards are being put in front of my face, and all I'm getting is their color. I can reliably distinguish the colors listed here: http://www.webspresso.com/color.htm. How do I associate a sequence of cards with a string? It doesn't seem like there is any canonical way of doing this. Maybe it won't matter that much in the end, but are there better and worse ways of starting?
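For what it's worth, any injective encoding of the color sequence into bits will do, and by the invariance theorem switching encodings changes program lengths only by an additive constant. A minimal sketch of one such encoding (fixed-width binary indices over the palette; the palette here is a stand-in for the real color list):

```python
import math

def encode(cards, palette):
    """Encode a color sequence as a bit string via fixed-width binary
    indices over the (sorted) palette. Any injective encoding works;
    alternatives change program lengths only by a constant."""
    width = max(1, math.ceil(math.log2(len(palette))))
    index = {color: i for i, color in enumerate(sorted(palette))}
    return "".join(format(index[c], f"0{width}b") for c in cards)

palette = ["red", "green", "blue", "yellow"]
bits = encode(["red", "blue", "blue"], palette)
# sorted palette is [blue, green, red, yellow], so red=10, blue=00
```

So there are better and worse starting encodings in practice (shorter codes for likelier colors compress better), but asymptotically the choice washes out.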